10 Steps to Happiness with Scala and NetBeans IDE
By Geertjan-Oracle on Aug 17, 2013

- Get NetBeans IDE. Download the Java SE distribution of NetBeans IDE; that's the smallest distribution providing the basic toolset on top of which you will install the Scala plugin.

- Get Scala. Download Scala and set it up so that running Scala on the command line works. That's a nice way to check that you have downloaded and installed Scala correctly.

- Connect Scala to NetBeans IDE. Open the "etc/netbeans.conf" file in the installation directory of NetBeans IDE, and use "-J-Dscala.home" in "netbeans_default_options" to point to your Scala installation. For example, on Ubuntu, my "netbeans_default_options" in "etc/netbeans.conf" is now as follows:

  netbeans_default_options="--laf Nimbus -J-Dscala.home=/home/geertjan/scala/scala-2.10.2 -J-Dsun.java2d.dpiaware=true -J-Dsun.zip.disableMemoryMapping=true"

- Install the NetBeans Scala Plugin. Start NetBeans IDE, go to Tools | Plugins, and install all the Scala modules which, together, constitute the NetBeans Scala plugin.

- Verify Scala is Connected to NetBeans IDE. After installing all the modules above, go to Tools | Scala Platforms. Each of the tabs, "Classes", "Sources", and "Javadoc", should show some content, based on a combination of the Java platform and the Scala platform.

- Take a Look at the Project Templates. Go to the New Project wizard (Ctrl-Shift-N) and notice there are three new project templates in a new category appropriately named Scala.

- Import the Scala Samples into NetBeans IDE. Select "Scala Project with Existing Sources" and click Next. Name the new project "scala-samples" and place it anywhere you like on disk, e.g., in the NetBeans Projects folder. Click Next. Browse to the root folder of your Scala download, i.e., in my case, "/home/geertjan/scala/scala-2.10.2". Then click Next again. In the "Excludes" field, exclude all the folders that are not named "examples".
That is, we're creating a new NetBeans project based on the "examples" folder in the Scala distribution, so we don't want the non-examples folders in our project. Click Finish, and you now have a NetBeans Scala project consisting of all the samples in the Scala distribution.

- Fix Import Statements. Right-click and build the project and notice that some Scala packages are incorrectly declared. For example, the Duration class is in a different package than the one declared in some of the examples, so use the error annotations and hints to correct those occurrences.

- Define Run Configurations. I noticed that individual files can't be run, i.e., there is no right-click Run File. So go to the toolbar and create some run configurations; in each case, simply type the full path to the class you want to run.

- Run the Project. Finally, when you run the project with a main class defined as shown in the previous step, you'll see the result in the Output window.

Seems to me that the NetBeans Scala plugin is very mature. Syntax coloring, code completion, etc. Nice.

This is a Windows 8 issue. When I add "-J-Dscala.home=C:\Program Files (x86)\scala" to the 'netbeans_default_options' in the netbeans.conf file, NetBeans fails to load. I am able to get a version number from the command prompt, so I know Scala is installed. Do you have any experience with Scala on Windows? Thanks!
Posted by William Bradley Rouse on August 23, 2013 at 08:38 PM PDT #

-J-Dscala.home=C:\PROGRA~1\scala
Posted by Geertjan on August 24, 2013 at 09:29 AM PDT #

The Scala plugin does not seem so happy with NetBeans 7.4. Do you know of any plans to make it compatible?
Posted by guest on October 20, 2013 at 09:14 PM PDT #

Yes, Scala does not work with 7.4 :-(
Posted by guest on October 30, 2013 at 06:42 AM PDT #

7.4 was released when? Last week? So, give it some time, OK. Let the Scala developers work on updating the plugin. Give them some time.
Posted by Geertjan on October 30, 2013 at 06:59 AM PDT #

Haven't tried it yet, but:
Posted by guest on November 09, 2013 at 02:21 PM PST #

If you're having problems at step 5 on Windows, simply add the environment variable SCALA_HOME (obviously pointing to the Scala home directory) through advanced settings. That got me going.
Posted by guest on November 22, 2013 at 08:31 AM PST #

Thank you for making this easy-to-learn tutorial. I'm a Java programmer trying to learn Scala. So, I set up NetBeans + Scala in Windows 7. I went past step #5 and tried to create a Scala Application project. NetBeans then created a package with a default class. Inside the class was a method like so:

  def main(args: Array[String]): Unit = {
    println("Hello, world!")
  }

However, when I tried to run the file (Shift+F6), the IDE complained: Class "testscala.Main" does not have a main method. The project itself, though, did run (F6) well. I wonder what I missed there... Thanks again,
Posted by akkumaru on December 16, 2013 at 01:41 AM PST #

Thanks for sharing
Posted by Azuryy on December 25, 2013 at 08:20 PM PST #

What is the procedure for installing Scala (downloaded from the site) with NetBeans 8?
Posted by Aparna Basu on March 09, 2014 at 09:48 AM PDT #

I wanted to add that I would like to know the how-to for Wintel and Mac (please refer to my earlier post).
Posted by Aparna Basu on March 09, 2014 at 09:50 AM PDT #

Thank you, Geertjan, very useful post. Worked like a charm for me.
Posted by Mark Kerzner on March 12, 2014 at 11:31 AM PDT #

Has anyone done this with NetBeans 8.0+? I can point NetBeans to the installation folder just fine, but when I try to start a new project, the only template I see is the ScalaSbtProject. I have previously used this plugin on a different computer, and then there was at least a Scala Application template, if I remember correctly.
Posted by guest on February 13, 2015 at 03:56 AM PST #

Thank you, it almost works. My last problem is that there is no way of creating or opening a Scala project. I'm on NetBeans 8.0, have installed the Scala plugins for project support and linked NetBeans with the latest Scala platform, but I still can't create a project.
Posted by Baba the Dw@rf on February 21, 2015 at 12:41 PM PST #

I have tried every which way to install and run Scala on NetBeans 8.0.2 on Windows 7 with JDK 8 update 31. I have set my SCALA_HOME to a 2.10.5 distribution, as well as in the conf file. The platform shows this 2.10.5 distribution. I downloaded the plugins, but creating a new project errors off with a file not found: "sh". But the library dependency shows 2.11.4? Does the Scala plugin work with 2.11? I have reinstalled several times with various combinations. Any thoughts would be appreciated.
Posted by guest on March 27, 2015 at 01:15 PM PDT #
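Regarding akkumaru's "does not have a main method" error above: in Scala, a runnable entry point must live in an object (a singleton), not in a class, because only objects get a JVM-visible static main method. A minimal sketch (the package name testscala is taken from the commenter's error message):

```scala
package testscala

// Running a file requires a singleton object, not a class:
object Main {
  def main(args: Array[String]): Unit = {
    println("Hello, world!")
  }
}

// Equivalent shorthand: extending App supplies main for you.
object MainApp extends App {
  println("Hello, world!")
}
```

If the wizard generated a class rather than an object, changing the keyword is usually enough to make Run File (Shift+F6) work.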
https://blogs.oracle.com/geertjan/entry/10_steps_to_happiness_with
Dynamically invoking a static method with Reflection in .NET C#

Say you do not have access to a .NET assembly at compile time but you want to run code in it. It's possible to dynamically load an assembly and run code in it without early access. Here we'll see how to invoke a static method of a type in a referenced assembly. It is very similar to how you would invoke an instance-level method. Check out the following post for related topics:

Open Visual Studio 2012/2013 and create a new C# class library project called Domain. Add the following Customer class to it:

public class Customer
{
    private string _name;

    public Customer() : this("N/A")
    {
    }

    public Customer(string name)
    {
        _name = name;
    }

    public static int CallStaticMethod(int inputOne, int inputTwo)
    {
        return inputOne + inputTwo;
    }
}

Let's see how we can dynamically call the CallStaticMethod method and read its result:

string pathToDomain = @"C:\Studies\Reflection\Domain.dll";
Assembly domainAssembly = Assembly.LoadFrom(pathToDomain);
Type customerType = domainAssembly.GetType("Domain.Customer");
MethodInfo staticMethodInfo = customerType.GetMethod("CallStaticMethod");
int returnValue = Convert.ToInt32(staticMethodInfo.Invoke(null, new object[] { 3, 5 }));

You should obviously adjust the path to Domain.dll. The code to call a static method is almost the same as calling an instance-level one. The key difference is that we pass in null as the first parameter to Invoke. That parameter specifies which instance the method should be invoked on. As there's no instance here, we can skip the step of first invoking the constructor of Customer. 'returnValue' will be 8 as expected.

View all posts on Reflection here.

Great, Andras! How do I invoke a static method/property like this: an internal static class A has an internal static class B, and this last class B has a public property:

namespace DirectoryServices.Configuration
{
    internal static class ConfigManager
    {
        internal static class SecuritySettings
        {
            public static string UserName
        }
    }
}

Thanks!
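For the commenter's question, a sketch of one way to do it (the assembly path is hypothetical; the type and property names come from the comment). The key details are that nested types use '+' rather than '.' in their reflection name, and that a static property is read by passing null as the instance:

```csharp
using System;
using System.Reflection;

// Hypothetical path to the assembly containing the nested classes:
Assembly asm = Assembly.LoadFrom(@"C:\Studies\Reflection\DirectoryServices.dll");

// Nested types use '+' in their full name, not '.'.
// Assembly.GetType finds internal types by name as well.
Type settings = asm.GetType(
    "DirectoryServices.Configuration.ConfigManager+SecuritySettings");

// Ask for the public static property on the nested type.
PropertyInfo prop = settings.GetProperty(
    "UserName", BindingFlags.Public | BindingFlags.Static);

// Static property: pass null for the instance, as with Invoke above.
string userName = (string)prop.GetValue(null, null);
Console.WriteLine(userName);
```

This mirrors the static-method call in the post: the null argument plays the same role in PropertyInfo.GetValue as it does in MethodInfo.Invoke.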
https://dotnetcodr.com/2014/10/15/dynamically-invoking-a-static-method-with-reflection-in-net-c/?replytocom=82401
I am teaching myself C++ and wrote a program that implements the queue data structure. It runs correctly. I would like some feedback on how I can improve the program to make it run faster, use fewer lines of code, or be clearer to someone else. I haven't added my comments to the program, so that is one area. For now, I want to concentrate on improving efficiency. Thanks in advance.

#include "../../std_lib_facilities.h"

int top;
int act;
string s;
string names[6] = {"James", "John", "Jerrold", "Jennifer", "", ""};

int start()
{
    cout << "Enter the desired activity.\n"
         << "1 for listing the queue items.\n"
         << "2 for adding an item to the queue.\n"
         << "3 for removing an item from the queue.\n"
         << "4 to exit the program.\n" << endl;
    cin >> act;
    while (act != 1 && act != 2 && act != 3 && act != 4) {
        cout << "Please enter a choice 1 through 4.\n" << endl;
        cin >> act;
    }
    return act;
}

void list_items()
{
    for (int i = 0; i < 6; i++) {
        cout << i + 1 << ", " << names[i] << endl;
    }
    start();
}

void add_item()
{
    if (names[5] != "") {
        cout << "The queue is full.\n" << endl;
    }
    else {
        cout << "Enter the name to add to the queue.\n" << endl;
        cin >> s;
        int i = 0, top = 0;
        while (names[i] != "") {
            top = i + 1;
            i++;
        }
        names[top] = s;
        cout << "The name " << s << " has been successfully added to the queue.\n" << endl;
    }
    start();
}

void remove_item()
{
    if (names[0] == "") {
        cout << "The queue is empty.\n" << endl;
    }
    else {
        s = names[0];
        int i = 0;
        while (names[i] != "" && i < 5) {
            names[i] = names[i+1];
            top = i + 1;
            i++;
        }
        names[top] = "";
        cout << "The name " << s << " has been successfully removed from the queue.\n" << endl;
    }
    start();
}

int main()
{
    act = 0;
    start();
    while (act != 4) {
        if (act == 1) { list_items(); }
        if (act == 2) { add_item(); }
        if (act == 3) { remove_item(); }
        if (act == 4) { keep_window_open(); }
    }
    return 0;
}
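As one possible direction for the feedback requested above (a sketch, not a drop-in replacement): the standard library's std::deque already provides O(1) push at the back and pop at the front, which removes the fixed-size array, the empty-string sentinel entries, and the manual element shifting in remove_item:

```cpp
#include <deque>
#include <stdexcept>
#include <string>

// A thin queue wrapper over std::deque. No sentinel "" slots, no
// fixed capacity, and no hand-rolled shifting when removing.
class NameQueue {
public:
    void add(const std::string& name) { names_.push_back(name); }

    // Removes and returns the front of the queue.
    std::string remove() {
        if (names_.empty()) throw std::runtime_error("queue is empty");
        std::string front = names_.front();
        names_.pop_front();
        return front;
    }

    std::size_t size() const { return names_.size(); }

private:
    std::deque<std::string> names_;
};
```

This also avoids the global variables in the original; each operation works on the queue object it is given, which makes the code easier to test in isolation.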
https://www.daniweb.com/programming/software-development/threads/419446/constructive-comments-on-queue-program
[ ] Martin Sebor closed STDCXX-499. ------------------------------- Resolution: Won't Fix There doesn't seem to be a good way to resolve this issue. It also isn't important enough to spend time on. Closing as Won't Fix. > std::num_put inserts NUL thousand separator > ------------------------------------------- > > Key: STDCXX-499 > URL: > Project: C++ Standard Library > Issue Type: Bug > Components: 22. Localization > Affects Versions: 4.1.2, 4.1.3, 4.1.4 > Reporter: Martin Sebor > Assignee: Martin Sebor > Priority: Minor > Fix For: 4.2.2 > > Original Estimate: 1h > Time Spent: 5h > Remaining Estimate: 2h > > Moved from Rogue Wave Bugzilla: > -------- Original Message -------- > Subject: num_put and null-character thousand separator > Date: Tue, 11 Jan 2005 16:10:23 -0500 > From: Boris Gubenko <Boris.Gubenko@hp.com> > Reply-To: Boris Gubenko <Boris.Gubenko@hp.com> > Organization: Hewlett-Packard Co. > To: Martin Sebor <sebor@roguewave.com> > : > template <class _CharT, class _OutputIter /* = ostreambuf_iterator<_CharT> > */> > _TYPENAME num_put<_CharT, _OutputIter>::iter_type > num_put<_CharT, _OutputIter>:: > _C_put (iter_type __it, ios_base &__flags, char_type __fill, int __type, > const void *__pval) const > { > const numpunct<char_type> &__np = > _V3_USE_FACET (numpunct<char_type>, __flags.getloc ()); > // FIXME: adjust buffer dynamically as necessary > char __buf [_RWSTD_DBL_MAX_10_EXP]; > char *__pbuf = __buf; > const string __grouping = __np.grouping (); > const char *__grp = __grouping.c_str (); > const int __prec = __flags.precision (); > #if defined(__VMS) && defined(__DECCXX) && !defined(__DECFIXCXXL1730) > const char __nogrouping = _RWSTD_CHAR_MAX; > if (!__np.thousands_sep()) > __grp = &__nogrouping; > #endif > Here is the test: > cosf.zko.dec.com> setenv LANG fr_FR.ISO8859-1 > cosf.zko.dec.com> locale -k thousands_sep > cosf.zko.dec.com> cxx x.cxx && a.out > null character thousand_sep was not inserted > cosf.zko.dec.com> cxx x.cxx -D_RWSTD_USE_CONFIG 
-D_RWSTDDEBUG \ > -I/usr/cxx1/boris/CXXL_1886-2/stdlib-4.0/stdlib/include/ \ > -nocxxstd -L/usr/cxx1/boris/CXXL_1886-2/result/lib -lstd11s \ > && a.out > null character thousand_sep was inserted > cosf.zko.dec.com> > x.cxx > ----- > #ifndef __USE_STD_IOSTREAM > #define __USE_STD_IOSTREAM > #endif > #include <iostream> > #include <sstream> > #include <string> > #include <locale> > #include <locale.h> > #ifdef __linux > #define FRENCH_LOCALE "fr_FR" > #else > #define FRENCH_LOCALE "fr_FR.ISO8859-1" > #endif > using namespace std; > int main() > { > ostringstream os; > if (setlocale(LC_ALL,FRENCH_LOCALE)) > { > setlocale(LC_ALL,"C"); > os.imbue(locale(FRENCH_LOCALE)); > os << (double) 10000.1 << endl; > if ( (os.str())[2] == '\0' ) > cout << "null character thousand_sep was inserted" << endl; > else > cout << "null character thousand_sep was not inserted" << endl; > } > return 0; > } > ------- Additional Comments From sebor@roguewave.com 2005-01-11 14:50:44 ---- > -------- Original Message -------- > Subject: Re: num_put and null-character thousand separator > Date: Tue, 11 Jan 2005 15:50:06 -0700 > From: Martin Sebor <sebor@roguewave.com> > To: Boris Gubenko <Boris.Gubenko@hp.com> > References: <00f201c4f821$fa0b72c0$29001c10@americas.hpqcorp.net> > Boris Gubenko wrote: > > : > I don't think this fix would be quite correct in general. NUL is > a valid character that the locale library was specifically designed > to be able to insert and extract just like any other. In addition, > in the code below, operator==() need not be defined for the character > type. > > > ... > > Here is the test: > Thanks for the helpful test case. > My feeling is that this case points out a fundamental design > disconnect between the C and C++ locales. In C, NUL is not > an ordinary character -- it's a special character that terminates > strings. In addition, C formatted I/O is done in multibyte > characters. 
> In contrast, in C++, NUL is a character like any other
> and formatted I/O is always done in single chars (or wchar_t when
> char is not wide enough), but never in multibyte characters.
> In C, the thousand separator is a multibyte string so even if
> grouping is non-empty, inserting an empty string will be as good
> as inserting none at all. In C++ the separator is assumed to be
> a single character so there's no way to achieve the same effect.
> Instead, whether a thousand separator gets inserted or not is
> controlled by the grouping string.
> One way to fix this would be to set grouping to "" if thousands_sep
> is NUL, although that wouldn't be quite correct either, because numpunct
> can be used directly by user programs. I'll have to think about how
> to deal with this. In the meantime, I filed bug 1913 for this problem
> so that you can track it.
> Martin

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/stdcxx-issues/200902.mbox/%3C567438453.1234743539604.JavaMail.jira@brutus%3E
in reply to import list

I think it's mainly defined by pure convention, as intended by the module author's idea of the module's interface. It seems the module author's intention here was to let the argument resemble a command-line option switch. Basically,

    use Module qw(a b c d);

is translated to

    BEGIN {
        require Module;
        Module->import(qw(a b c d));
    }

(see use). In Module::import(), the author has the freedom to do anything with the given list (after removing the module's name from the parameter list), like using it as-is ...

    @list = qw(a b c d)

... or interpreting it as a flattened hash ...

    %pairs = (a => 'b', c => 'd')

... or as a parameter list ...

    my ($a, $b, $c, $d) = qw(a b c d)

... and so on. Then s/he can treat the arguments as needed: filtering, grouping, normalizing, etc. In this case (from `perldoc -l English`):

    # Grandfather $NAME import
    sub import {
        my $this = shift;  # 'English'
        my @list = grep { ! /^-no_match_vars$/ } @_;  # anything that is not '-no_match_vars'
        local $Exporter::ExportLevel = 1;
        if ( @_ == @list ) {
            ...
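To make the "author can do anything with the list" point concrete, here is a minimal sketch of a custom import. MyModule and the '-quiet' switch are hypothetical, invented for illustration; the filtering pattern is the same one English.pm uses for -no_match_vars:

```perl
package MyModule;
use strict;
use warnings;

sub import {
    my $class = shift;                       # module name, e.g. 'MyModule'
    my @list  = grep { $_ ne '-quiet' } @_;  # drop the switch, keep the rest
    if (@list == @_) {
        # '-quiet' was NOT passed: announce what we are importing
        print "importing: @list\n";
    }
    # ... export @list into the caller's namespace here, e.g. via Exporter ...
}

1;
```

Comparing the lengths of @_ and the filtered @list is how the module detects whether the switch was present, exactly as in the English.pm excerpt above.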
http://www.perlmonks.org/index.pl?node_id=978006
I have been learning Python for a month now and just finished coding my first Markov chain.

def markovmaker(filename):
    import random
    fhin = open(filename, 'r')
    worddict = {}
    newlist = []
    wordlimiter = 0
    for line in fhin:
        wordlist = line.split()
        for word in wordlist[:-1]:
            pos = wordlist.index(word)
            if word in worddict:
                if wordlist[pos+1] not in worddict[word]:
                    worddict[word].append(wordlist[pos+1])
                else:
                    pass
            else:
                worddict[word] = [wordlist[pos+1]]
    first_word = random.choice(worddict.keys())
    newlist.append(first_word)
    while wordlimiter < 10:
        next_word = random.choice(worddict[first_word])
        newlist.append(next_word)
        first_word = next_word
        wordlimiter += 1
    print ' '.join(newlist)

Sometimes the script works fine. But, other times, it displays a key error.

Traceback (most recent call last):
  File "<pyshell#36>", line 1, in <module>
    markovmaker('D:/rgumnt.txt')
  File "<pyshell#35>", line 20, in markovmaker
    next_word = random.choice(worddict[first_word])
KeyError: 'still,'

The word is 'still,' here but it changes each time. Can someone please tell me what I am doing wrong? Also, is there a way I can simplify the code?
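The KeyError happens because a word that only ever appears at the end of a line is recorded as a successor of other words but never becomes a key in worddict, so the next lookup fails. A sketch of one fix (hypothetical helper functions, not the original poster's code): use dict.get and stop the chain when the current word has no successors. It also uses enumerate instead of wordlist.index(word), which always returns the first occurrence of a repeated word:

```python
import random

def build_chain(text):
    """Map each word to the list of distinct words that follow it."""
    worddict = {}
    for line in text.splitlines():
        words = line.split()
        # enumerate gives the true position; wordlist.index(word) would
        # return the FIRST occurrence of a repeated word
        for pos, word in enumerate(words[:-1]):
            followers = worddict.setdefault(word, [])
            if words[pos + 1] not in followers:
                followers.append(words[pos + 1])
    return worddict

def generate(worddict, length=10):
    word = random.choice(list(worddict.keys()))
    out = [word]
    for _ in range(length):
        followers = worddict.get(word)  # None if word was only ever line-final
        if not followers:
            break                       # end the chain instead of raising KeyError
        word = random.choice(followers)
        out.append(word)
    return ' '.join(out)
```

Another option at that break point is to restart from a fresh random key instead of stopping, depending on the behavior you want.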
https://www.daniweb.com/programming/software-development/threads/438830/help-with-markov-chain
Frequently Asked Questions

The frequently asked questions listed below assume you have some basic knowledge of PyNGL, and know how to create plots using PyNGL.

General questions
- How do I change the default NCGM/PostScript/PDF file name to something else?
- How do I set up my PyNGL script so that it uses a resource file?
- How do I change all of my fonts to be the same font?
- How do I get access to special characters, like Greek symbols?
- Can I access PostScript fonts in my PyNGL script?
- How do I retrieve an environment variable in PyNGL?
- How can I change the text function code to something other than the default?
- My text strings in my plots either have garbage in them or they are not getting fully displayed. What's the problem?
- How do I call a C or Fortran function or procedure from PyNGL?
- How do I change the aspect ratio or size of my plot?
- How do I get multiple plots on a page?
- I've got multiple contour plots on one page and want to create one labelbar that represents all of them. How do I do this?
- What's a good way to generate a color map from within PyNGL?
- Can I set color resources by color name rather than color index values?
- Can I change color index values or color maps in mid-frame?
- How do I get my PostScript/PDF output to cover most of an 8-1/2" x 11" page?
- When I try to create a plot, I'm getting an error message about workspace reallocation exceeding maximum size. How can I fix this?

Reading/writing data files and accessing variables and attributes
- How do I read/write netCDF, HDF, GRIB, or CCM History Tape files?
- How do I read/write ASCII files?
- How do I deal with NaN's (not a number) in my data?

Data processing questions
- How do I set a missing value for my data?
- How do I determine if my data contains any missing values?
- Are there any interpolation functions in the PyNGL module?

Plotting questions
- How do I tell PyNGL not to plot missing values in my contour or XY plot?
- How do I get log scaling in an XY or contour plot?
- How do I fill my contour levels with various shading patterns and/or solid colors?
- How do I change the attributes of a curve, or curves, in an XyPlot?
- How do I overlay a contour plot (or a vector/streamline plot) on a map?
- How do I label the latitude/longitude lines on my map?
- Can I improve the resolution of my map outlines?
- If I am creating a color contour or vector plot, how do I select my colors such that they are spread across my whole color map?
- How do I get multiple scales on my XY or contour plot?
- How can I have a double quote (") appear as part of a text string in PyNGL?
- How can I have a new line appear as part of a text string in PyNGL?
- How can I draw to an X11 window and use keyboard input for plotting control without having to do a mouse click?
- How come I get different line thicknesses on different output devices using the same thickness value?

Post-processing questions
- How do I convert my PyNGL graphics file to another format, like GIF or PNG, to put on the web?
- How do I convert my multi-frame NCGM file to individual PNG files?
- How do I convert my multi-frame PostScript file to individual PNG files?
- If I obscure a PyNGL X11 window with another application, the window doesn't refresh when I bring it back to the front.

Documentation questions
- Where can I see a list of all the PyNGL resources?
- Where can I see a list of all the PyNGL functions?
- Where can I see some examples of how to use PyNGL?

General questions

- How do I change the default NCGM/PostScript/PDF file name to something else?

  Just change the second argument in the call to Ngl.open_wks. Here's an example of changing the NCGM file name to "plot1.ncgm":

    import Ngl
    #
    # Create an NCGM Workstation and change the name of the
    # metafile to "plot1.ncgm".
    #
    wks = Ngl.open_wks("ncgm","plot1")
    mapid = Ngl.map(wks)
    Ngl.end()

- How do I set up my PyNGL script so that it uses a resource file?
  See the resource files page for full details on resource files. If you call Ngl.open_wks to open a workstation, then the string in the second argument allows you to name a resource file to read in. For example, if you have the following PyNGL script:

    import Ngl
    # Create an X Workstation object.
    wks = Ngl.open_wks("x11","title_app")
    # Draw a text string
    Ngl.text_ndc(wks,"Hello, World",0.5,0.5)
    Ngl.frame(wks)
    Ngl.end()

  then it will look for a resource file called "title_app.res". If you then have the following lines in your "title_app.res" file:

    *txFontColor : Red
    *txFont : helvetica-bold
    *txFontHeightF : 0.06

  you should see a large, red "Hello, World" string in the center of the frame using the helvetica-bold font.

- How do I change all of my fonts to be the same font?

  First set up your PyNGL script to use a resource file (see the question on resource files above), and then in the resource file include the following line:

    *Font : helvetica

  to change all fonts to helvetica (use whatever font you want, of course). Any fonts that you are explicitly setting in your PyNGL script will override this resource file setting. For an example, see the PyNGL script "ngl08p.py", along with its resource file "ngl08p.res" and its output. Try changing "times-roman" in the "ngl08p.res" file to other fonts to see the results. Use either a font name or a font index as listed in the font table. If you always want to use this font in all of your PyNGL scripts, then put the above line in a file called ".hluresfile" in your home directory. If PyNGL sees a ".hluresfile" in your home directory, it will always load it before running any PyNGL scripts.

- How do I get access to special characters, like Greek symbols?

  You first need to find the font that your character exists in by browsing the font tables. Clicking on any one of the font names, like "math_symbols", will show you all the characters for that particular font.
  Once you find the character you desire, note the font table index ("math_symbols" is index 18, for example) and the corresponding character, and use a function code in your text string to switch to that font (for example, "~F18~" switches to font 18 and "~F~" switches back to the default font).

- Can I access PostScript fonts in my PyNGL script?

  No. When you use PyNGL font number 4, you will get the stroked simplex_roman font, no matter what kind of workstation you are going to, even a PostScript workstation. If you change "4" to "Helvetica," then you will get PyNGL font 21, i.e. a Helvetica font that uses polygons to produce the characters and not direct access to a PostScript font. There is some confusion about the availability of PostScript fonts, because they are accessible from low-level NCAR Graphics, as per the discussion on PS vs. GKS fonts. For NCAR GKS fonts numbered 1 and -2 through -34, the PostScript drivers access the standard 13 PostScript fonts (oblique and bold-oblique are supported, i.e. font numbers -23, -24, -27, -28, -31, -32). For fonts -2 through -20 (the Hershey fonts), a mapping is done to equivalent PostScript fonts when possible. If a character is not available (like the script fonts or road map signs), then one would have to use Plotchar. We hope to add the ability to access PostScript fonts from PyNGL in the future.

- How do I retrieve an environment variable in PyNGL?

  Use the environ dictionary of the Python os module:

    path = os.environ["PATH"]

- How can I change the text function code to something other than the default?

  If you are using a standard implementation of PyNGL, the default function code is a tilde: "~". To change the function code to another character when using the Ngl.text, Ngl.text_ndc, or Ngl.add_text procedures, set the txFuncCode resource:

    txresources.txFuncCode = "^"

  If you want your text function code permanently changed for every single text resource you ever use in all of your PyNGL scripts, then set the global resource "TextFuncCode", which can only be set in a resource file.
  This resource is best set in the file "~/.hluresfile" as follows:

    *TextFuncCode : ^

  The ".hluresfile" file must be in your home directory in order for PyNGL to see it and load it.

- My text strings in my plots either have garbage in them or they are not getting fully displayed. What's the problem?

  You most likely have a tilde character ("~") in your string. By default, if PyNGL sees a tilde in your string, it's expecting the next character to be a special "function code." To get around this, you need to tell PyNGL that you don't want the tilde character to represent the start of a function code. See the previous question on how to change your text function code.

- How do I call a C or Fortran function or procedure from PyNGL?

  For extending with C or C++, see Extending and Embedding the Python Interpreter. This page describes how to write modules in C or C++ to extend the Python interpreter with new modules. Those modules can define new functions but also new object types and their methods. It also describes how to embed the Python interpreter in another application, for use as an extension language. See also SWIG. For extending with Fortran, see f2py.

- How do I change the aspect ratio or size of my plot?

  You can use the viewport resources vpWidthF and vpHeightF. The viewport coordinates are normalized; that is, they take values from 0.0 to 1.0 (inclusive). For example, if you want your plot to be twice as wide as it is high, then you might set vpWidthF to 0.8 and vpHeightF to 0.4. Please note that in order to change the aspect ratio of a map projection, you must also set the resource mpShapeMode to "FreeAspect". For an example of changing the aspect ratio of a cylindrical equidistant map, see the PyNGL script "map1.py" and its output (frame 1 and frame 2).

- How do I get multiple plots on a page?

  If your plots are all the same size, then you can use the PyNGL procedure Ngl.panel, which takes as input the workstation you want to draw the plots on, the list of plots you want to put on one page, an array describing the layout of the plots, and an optional list of resources.
  For an example of using Ngl.panel, see the script "panel1.py" and its output (frame 1 and frame 2).

- I've got multiple contour plots on one page and want to create one labelbar that represents all of them. How do I do this?

  See the answer above to the question "How do I get multiple plots on a page?" The example there shows how to get the desired label bar.

- What's a good way to generate a color map from within PyNGL?

  First see the section "Creating your own color map using HSV values" for an introduction to the HSV color wheel and how it helps you create a color map. Then, have a look at the color3 example and its output (frame 1, frame 2, frame 3). The function Ngl.hsvrgb is in the Ngl module. You can use the PyNGL function Ngl.draw_colormap to draw the current color map associated with the workstation. For an example, see the PyNGL "color1.py" script and frame 1 and frame 2 of its output.

- Can I set color resources by color name rather than color index values?

  See the "Creating your own color map using named colors and RGB values" section.

- Can I change color index values or color maps in mid-frame?

  Yes. You can use Ngl.set_color to redefine all color index values except for index 0. The logic in the following example illustrates this:

    import numpy,Ngl

    res = Ngl.Resources()

    #
    # Draws cells, fitting them into a box with lower left corner (xll,yll)
    # and upper right corner (xur,yur) with nx cells in x and ny cells in y.
    # For each call each cell is colored using increasing indices in the
    # current color map *starting with color index 2*.
    #
    def draw_cells(wks,xll,yll,xur,yur,nx,ny):
        xinc = (xur-xll)/nx
        yinc = (yur-yll)/ny
        for j in xrange(ny):
            plly = yll + j*yinc
            pury = plly + yinc
            for i in xrange(nx):
                pllx = xll + i*xinc
                purx = pllx + xinc
                x = numpy.array([pllx,purx,purx,pllx,pllx])
                y = numpy.array([plly,plly,pury,pury,plly])
                res.gsFillColor = i+j*nx+2
                Ngl.polygon_ndc(wks,x,y,res)

    #
    # Define colors "black", "red", "green", "blue" in order
    # starting with color index 2.
    #
    cmap = numpy.array([[0.,0.,0.],[1.,1.,1.],[0.,0.,0.], \
                        [1.,0.,0.],[0.,1.,0.],[0.,0.,1.]])
    rlist = Ngl.Resources()
    rlist.wkColorMap = cmap

    wks_type = "ps"
    wks = Ngl.open_wks(wks_type,"cell",rlist)  # Open a PostScript workstation.

    #
    # Draw four color boxes in the lower half of the plot
    # using the cmap.
    #
    draw_cells(wks, 0., 0., 1.0, 0.5, 2, 2)

    #
    # Redefine colors for indices 3-5.
    #
    Ngl.set_color(wks, 3, 0., 1., 1.)  # cyan
    Ngl.set_color(wks, 4, 1., 0., 1.)  # magenta
    Ngl.set_color(wks, 5, 1., 1., 0.)  # yellow

    #
    # Draw color boxes using the redefined color map.
    #
    draw_cells(wks, 0., 0.5, 1.0, 1.0, 2, 2)

    Ngl.frame(wks)
    Ngl.end()

- How do I get my PostScript/PDF output to cover most of an 8-1/2" x 11" page?

  PyNGL does this for you automatically, since the special resource nglMaximize is set to True by default. If you want to control the positioning of a plot in PS/PDF output, when you create your PostScript or PDF workstation use the resources wkDeviceLowerX, wkDeviceLowerY, wkDeviceUpperX, and wkDeviceUpperY. Note: For the above resources to take effect, you must set nglMaximize to False, otherwise your plot will fill the page. Note that it is possible to have negative values for these resources.

- When I try to create a plot, I'm getting an error message about workspace reallocation exceeding maximum size. How can I fix this?

  If the error message says a workspace reallocation would exceed the maximum size, you'll need to bump up your workspace size. First, you need to retrieve the default workspace, and then you can set the "wsMaximumSize" resource. To do this, add the following code right after you create a workstation:

    ws_id = Ngl.get_workspace_id()
    rlist = Ngl.Resources()
    rlist.wsMaximumSize = 33554432
    Ngl.set_values(ws_id,rlist)

  You can permanently bump up the workspace size by setting wsMaximumSize in your ~/.hluresfile:

    *wsMaximumSize : 33554432

  If you are drawing a contour plot, you might instead try switching to raster contouring, by setting the resource cnFillMode to "RasterFill".
Reading/writing data files and accessing variables and attributes
- How do I read/write netCDF, HDF, GRIB, or CCM History Tape files?
You can use the Nio module, which is bundled with PyNGL. Nio is a Python module that allows read and/or write access to a variety of data formats using an interface modelled on netCDF. netCDF is a self-documenting and network-transparent data format - see the netCDF User Guide for details.
- How do I read/write ASCII files?
Strictly to support the examples, a very basic ASCII read function, Ngl.asciiread, was written as part of the PyNGL module. Of course, all of the Python read functions are also available.
- How do I deal with NaN's (not a number) in my data?
You can use numpy to change the NaN values to missing values:

import numpy as N
var1 = N.where(N.isnan(var[:]),var._FillValue[0],var[:])

Data processing questions
- How do I set a missing value for my data?
PyNGL is designed as a graphics library and, as such, it has minimal data processing functionality. Such functionality will be a part of a future PyNPL module. PyNGL does no automatic setting of missing values. If your data contain missing values, then you will have to specifically set the appropriate resources. For examples of setting vector field and scalar field missing values, see example 6; for an example of setting coordinate array missing values, see example 10.
- How do I determine if my data contains any missing values?
You will have to check the source data. For example, if you are using data from a netCDF file read in with the Nio module, then you can check if a given variable has a _FillValue or a missing_value attribute. See example 6.
- Are there any interpolation functions in the PyNGL module?
A few interpolation functions have been included in the PyNGL module, primarily in support of the examples. The 1-dimensional tension spline interpolator Ngl.ftcurv is used in example 11.
The 1-dimensional tension spline interpolator for periodic data Ngl.ftcurvp is used in example 7. The 1-dimensional integral calculator for periodic data Ngl.ftcurvpi is used in example 7. The 2-dimensional interpolator for random data Ngl.natgrid is used in example 8.
Plotting questions
- How do I tell PyNGL not to plot missing values in my contour or XY plot?
For contour data, use the ScalarField resource sfMissingValueV. For an example, see example 6. For XY data, use the resource caXMissingV or caYMissingV. For an example, see example 10.
- How do I get log scaling in an XY or contour plot?
For an XY plot, set the XyPlot resources xyXStyle and/or xyYStyle to "Log", depending on which axes you want to be in log scaling. For an example, see the script "xy1.py" and the output (frame 1). For a contour example, see "contour1.py" and its output (frame 1 with default linear/linear scaling and frame 2 with linear/log scaling). Log scaling for irregularly spaced data is currently not implemented.
- "contour2.py" and the output.
- How do I change the attributes of a curve, or curves, in an XyPlot?
You should be able to set resources for an XyPlot just as you would for other plot objects. For an example, see the script "multi_y.py" and its first frame. An XyPlot can have multiple curves. For an example of changing attributes for multiple curves, see the script "color2.py" and the output.
- How do I overlay a contour plot (or a vector/streamline plot) on a map?
For contours over a map, see example 5 and example 9. For vectors, see example 6. Streamlines over maps can be handled in a similar way using Ngl.streamline_map.
- How do I get labeled latitude/longitude tickmarks on my map?
If your projection is rectangular, then you should get lat/lon tickmarks by default. If you want to turn these off, then you can set the resource pmTickMarkDisplayMode to "Never". For an example that has lat/lon tickmarks with a contour plot overlaid on it, see "chkbay.py" and its output.
For an example that turns these tickmarks off, see "ctnccl.py" and its output.
- Can I improve the resolution of my map outlines?
Yes, by using the mpDataBaseVersion resource. The default is "LowRes". If you change this to "MediumRes" or "HighRes", you will improve the resolution of your map outlines. For an example comparing the three levels of resolution, see the script coast1.py and frame 1 (low resolution), frame 2 (medium resolution), and frame 3 (high resolution).
Note: the "HighRes" resource value is for coastal outlines only, and you must download the RANGS (Regionally Accessible Nested Global Shorelines) database, developed by Rainer Feistel from Wessel and Smith's GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database. For more information on how and where to install the database, read the short section on high-resolution coastlines.
- If I am creating a color contour or vector plot, how do I select my colors such that they are spread across my whole color map?
This is the default when using PyNGL (NCL users note that this is different from the default setting in that package). The colors in the current color table are spread evenly across the contour levels. If you want to turn this feature off, set the resource nglSpreadColors to False. In that case the colors in the color table will be assigned to the contour levels by matching color indices with contour intervals. For an example of where the default value of nglSpreadColors (True) takes effect, see the fourth frame of ngl02p.py. For an example of setting this resource to False, see the first frame of ngl08p.py. If you want to reverse the colors, then you can set nglSpreadColorStart to a color index towards the end of the color table, and nglSpreadColorEnd to a color index near the beginning of the color table. For an example, see seam.py and its first frame.
- How do I get multiple scales on my XY or contour plot?
Set one of the tm*Mode resources to Explicit (depending on which axis you want to label separately) and then use the tm*Values and tm*Labels resources to get the labels that you want at whatever tick mark values you want them. For an example, see "multi_y.py" and its third frame. For an example of different scales on a contour plot, see the script "ngl11p.py" and its first frame.
- How can I have a double quote (") appear as part of a text string in PyNGL?
Enclose the string in single quotes. For example:

import Ngl
wks = Ngl.open_wks("x11","test")
txres = Ngl.Resources()
txres.txFontHeightF = 0.03
Ngl.text_ndc(wks,'"title"',0.5,0.5,txres)
Ngl.frame(wks)
Ngl.end()

- How can I have a new line as part of a text string in PyNGL?
You can use the text function code that represents a carriage return ("C"):

import Ngl
wks = Ngl.open_wks("x11","test")
txres = Ngl.Resources()
txres.txFontHeightF = 0.03
Ngl.text_ndc(wks,"title1~C~title2~C~title3~C~title4",0.5,0.5,txres)
Ngl.frame(wks)
Ngl.end()

Note: The above example uses the tilde ("~") for the text function code. This is the default if you are using a standard implementation of PyNGL. See the question on changing the text function code for more information.
- How can I draw to an X11 window and use keyboard input for plotting control without having to do a mouse click?
See the example in the documentation for Ngl.clear_workstation.
- The example thickness.py can be run to get an idea what various lines drawn with different thickness specifications will look like on different devices. In that example you will need to change the output device if you are interested in results other than for PostScript.
Post-processing questions
- How do I convert my PyNGL graphics file to another format, like GIF or PNG, to put on the web?
We used to recommend using a script called "ncgm2gif" for converting NCGMs to GIF files. However, "ncgm2gif" has become a bit outdated, and there are better methods for converting to other raster formats.
Our recommendation is to start with a PostScript (PS) or Encapsulated PostScript (EPS) file, and then use "convert" from the ImageMagick suite of tools to convert the PS file to another format like GIF, PNG, or even MPEG. You can get the ImageMagick suite from the ImageMagick website; click on the "download" link on the left side and you will be presented with lots of options for download locations. Mac users can download it using "fink install imagemagick". You must have fink installed first. Once you have the tools built and installed, you can convert your PS or PDF files to other formats using:

convert -trim file.ps file.png
convert -trim file.pdf file.png

We like to use the "-trim" option to remove unwanted white space around the image. This cuts it pretty close, so if you put it in a PowerPoint presentation, it is better to use a pure white background so the cropped image doesn't look funny. Documentation on "convert" can be found on the ImageMagick website. There are lots of options available with convert that, according to their web page, allow you to "resize an image, blur, crop, despeckle, dither, draw on, flip, join, re-sample, and much more."
- How do I convert my multi-frame NCGM file to individual PNG files?
Download the Python script ncgm2png. Executing that script without any arguments will produce usage information. To work, this script requires that you have convert from the ImageMagick package installed, as well as ctrans and med from the PyNGL distribution.
- How do I convert my multi-frame PostScript file to individual PNG files?
Download the Python script ps2png. Executing that script without any arguments will produce usage information. To work, this script requires that you have convert from the ImageMagick package installed, as well as psplit from the PyNGL distribution.
- If I obscure a PyNGL X11 window with another application, the window doesn't refresh.
This is possibly due to an X11 backing store problem.
If you run the command:

xdpyinfo | grep backing

and the output is similar to:

options: backing-store NO, save-unders NO

then you will need to enable backing store in your X server configuration. There are different ways to do this, depending on what kind of system you're on. On a Linux system, try inserting the line:

Option "backingstore"

in the Device section of your X server configuration file (normally /etc/X11/XF86Config or /etc/X11/XF86Config-4). Alternatively, you can run your X server with the option "+bs" to obtain the same result without editing your configuration file. Restart your X server, then run the xdpyinfo command again to verify that backing store is now enabled.
On a PC running Hummingbird, run Exceed's "xconfig" program. In the xconfig menu, double-click on "Performance". Make sure "Save Unders" is checked, set "Maximum Backing Store" to "Always", set "Default Backing Store" to "When Mapped", and set "Minimum Backing Store" to "When Mapped".
Documentation questions
- Where can I see a list of all the PyNGL resources?
Go to the PyNGL resource listings.
- Where can I see a list of all the PyNGL functions?
Go to the function listings, available alphabetically or by category.
- Where can I see some examples of how to use PyNGL?
Go to the PyNGL example scripts.
http://www.pyngl.ucar.edu/FAQ/
#include <stdint.h>
#include <rte_compat.h>
#include <rte_common.h>
#include <rte_meter.h>

Go to the source code of this file.

RTE Generic Traffic Metering and Policing API

This interface provides the ability to configure the traffic metering and policing (MTR) in a generic way. The processing done for each input packet hitting a MTR object is:
A) Traffic metering: The packet is assigned a color (the meter output color), based on the previous history of the flow.
B) Policing: There is a separate policer action configured for each meter output color, which can:
a) Drop the packet.
b) Keep the same packet color: the policer output color matches the meter output color (essentially a no-op action).
c) Recolor the packet: the policer output color is different than the meter output color. The policer output color is the output color of the packet, which is set in the packet meta-data (i.e. struct rte_mbuf::sched::color).
C) Statistics: The set of counters maintained for each MTR object is configurable and subject to the implementation support. This set includes the number of packets and bytes dropped or passed for each output color.
Once successfully created, an MTR object is linked to one or several flows through the meter action of the flow API.
A) Whether an MTR object is private to a flow or potentially shared by several flows has to be specified at creation time.
B) Several meter actions can be potentially registered for the same flow.
Definition in file rte_mtr.h.

Verbose error types. Most of them provide the type of the object referenced by struct rte_mtr_error::cause. Definition at line 369 of file rte_mtr.h.

MTR capabilities get

Meter profile add
Create a new meter profile with ID set to meter_profile_id. The new profile is used to create one or several MTR objects.

Meter profile delete
Delete an existing meter profile. This operation fails when there is currently at least one user (i.e. MTR object) of this profile.
MTR object create
Create a new MTR object for the current port. This object is run as part of the associated flow action for traffic metering and policing.

MTR object destroy
Delete an existing MTR object. This operation fails when there is currently at least one user (i.e. flow) of this MTR object.

MTR object meter disable
Disable the meter of an existing MTR object. In disabled state, the meter of the current MTR object works in pass-through mode, meaning that for each input packet the meter output color is always the same as the input color. In particular, when the meter of the current MTR object is configured in color blind mode, the input color is always green, so the meter output color is also always green. Note that the policer and the statistics of the current MTR object are working as usual while the meter is disabled. No action is taken and this function returns successfully when the meter of the current MTR object is already disabled.

MTR object meter enable
Enable the meter of an existing MTR object. If the MTR object has its meter already enabled, then no action is taken and this function returns successfully.

MTR object meter profile update

MTR object DSCP table update

MTR object policer actions update

MTR object enabled statistics counters update

MTR object statistics counters read
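Among the marker algorithms commonly used behind such meter profiles is the single-rate three-color marker (srTCM, RFC 2697). The following is a language-agnostic illustration in Python of the color-blind srTCM coloring logic described in the metering stage above; it is a simplified sketch, not DPDK code, and the class name SrTcm is ours:

```python
GREEN, YELLOW, RED = "green", "yellow", "red"

class SrTcm:
    """Color-blind single-rate three-color marker (RFC 2697) sketch.

    cir: committed information rate, bytes/second
    cbs: committed burst size, bytes (bucket C)
    ebs: excess burst size, bytes (bucket E)
    """

    def __init__(self, cir, cbs, ebs, now=0.0):
        self.cir, self.cbs, self.ebs = cir, cbs, ebs
        self.tc, self.te = cbs, ebs  # both buckets start full
        self.last = now

    def _refill(self, now):
        # Tokens accrue at CIR into bucket C; overflow spills into E.
        self.tc += (now - self.last) * self.cir
        self.last = now
        if self.tc > self.cbs:
            self.te = min(self.ebs, self.te + (self.tc - self.cbs))
            self.tc = self.cbs

    def color(self, pkt_len, now):
        """Assign an output color to a packet of pkt_len bytes."""
        self._refill(now)
        if self.tc >= pkt_len:
            self.tc -= pkt_len
            return GREEN
        if self.te >= pkt_len:
            self.te -= pkt_len
            return YELLOW
        return RED

meter = SrTcm(cir=1000, cbs=1500, ebs=1500)
print(meter.color(1500, 0.0))  # green  (bucket C pays)
print(meter.color(1500, 0.0))  # yellow (bucket E pays)
print(meter.color(100, 0.0))   # red    (both buckets empty)
print(meter.color(1000, 1.0))  # green  (C refilled by CIR * 1 s)
```

A policer stage would then map each color to a drop, keep, or recolor action, in the spirit of the per-color policer actions described above.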
https://doc.dpdk.org/api-19.11/rte__mtr_8h.html
IRC log of xproc on 2011-01-06 Timestamps are in UTC. 15:57:38 [RRSAgent] RRSAgent has joined #xproc 15:57:38 [RRSAgent] logging to 15:57:40 [Norm] zakim, this will be xproc 15:57:40 [Zakim] ok, Norm; I see XML_PMWG()11:00AM scheduled to start in 3 minutes 15:57:46 [Norm] Meeting: XML Processing Model WG 15:57:48 [Norm] Date: 6 January 2011 15:57:50 [Norm] Agenda: 15:57:52 [Norm] Meeting: 186 15:57:54 [Norm] Chair: Norm 15:57:56 [Norm] Scribe: Norm 15:57:58 [Norm] ScribeNick: Norm 15:58:30 [ht] ht has joined #xproc 15:59:57 [Zakim] XML_PMWG()11:00AM has now started 16:00:08 [Zakim] +??P3 16:00:13 [ht] zakim, ? is me 16:00:13 [Zakim] +ht; got it 16:00:15 [Zakim] +Norm 16:00:56 [PGrosso] PGrosso has joined #xproc 16:01:06 [Vojtech] Vojtech has joined #xproc 16:01:12 [Zakim] +MoZ 16:01:27 [Zakim] +[ArborText] 16:01:59 [Zakim] +Jeroen 16:02:07 [Vojtech] Zakim, Jeroen is Vojtech 16:02:07 [Zakim] +Vojtech; got it 16:02:30 [alexmilowski] alexmilowski has joined #xproc 16:03:10 [Zakim] +Alex_Milows 16:03:38 [Norm] zakim, who's on the phone? 16:03:38 [Zakim] On the phone I see Norm, ht, MoZ, PGrosso, Vojtech, Alex_Milows 16:03:46 [Norm] Present: Norm, Henry, Mohamed, Paul, Vojtech, Alex 16:04:07 [Norm] Topic: Accept this agenda? 16:04:07 [Norm] -> 16:04:12 [Norm] Accepted. 16:04:18 [Norm] Topic: Accept minutes from the previous meeting? 16:04:18 [Norm] -> 16:04:26 [Norm] Accepted. 16:05:09 [Norm] Topic: Next meeting: telcon, 13 Jan 2011? 16:05:35 [Norm] Regrets from Mohamed; possible regrets from Norm; Henry to chair if we don't skip the meeting. 16:05:44 [Norm] Topic: Review of the template note 16:06:05 [Norm] -> 16:06:07 [MoZ] 16:07:09 [Norm] Norm points to Mohamed's comments: 16:07:26 [Norm] Norm: Anyone think I got the rules for parsing "{" and "}" wrong? 16:07:28 [Norm] 16:10:21 [Norm] Mohamed proposes renaming p:in-scope-names to p:set-in-scope-names 16:10:26 [Norm] Norm: I'm not moved. 
16:11:13 [Norm] Vojtech: We also have p:value-available() to check if an option is set; so maybe values would be better in the name. 16:11:37 [Norm] Norm: Any other comments? 16:12:18 [Norm] Mohamed: I'm persuaded the the verb question isn't relevant here. 16:12:50 [Norm] Norm: I'm not sure I like values better, but I won't lie down in the road over the name. 16:13:01 [Norm] Vojtech: No, p:in-scope-names is ok with me? 16:13:04 [Norm] s/me?/me./ 16:13:13 [Norm] Norm: Anyone else? 16:13:14 [Norm] None heard. 16:13:26 [Norm] Norm: I propose to leave the name unchanged. Any objections? 16:13:40 [Norm] Accepted. 16:14:14 [Norm] Norm: Now on to p:document-template; Mohamed proposes instead p:template-document and points out, in particular, that p:document-template would be another step starting "p:document", so makes completion harder. 16:15:16 [Norm] Norm: I'm sort of moved. I'm not thrilled with p:parameterize-document, but p:template-document works. 16:15:21 [Norm] Vojtech: What about just p:template? 16:15:25 [MoZ] +1 16:15:25 [Norm] Henry: I have to say I like that... 16:16:15 [Norm] Norm: I can't think of any problem with p:template. Anyone prefer *not* to name it p:template? 16:16:59 [Norm] Norm: I think the proposal is to rename p:document-template to simply p:template 16:17:25 [Norm] Accepted. 16:18:03 [Norm] Norm: The rest of Mohamed's note observes that the error links are broken and we don't have any examples. 16:18:49 [Norm] Mohamed: The declaration of the steps aren't the same as the declarations in XProc; the background color is missing. 16:19:10 [Norm] ACTION: Norm to produce a new draft. 16:22:28 [Norm] Mohamed: what about the error namespace? 16:22:39 [MoZ] Zakim, mute me 16:22:39 [Zakim] MoZ should now be muted 16:23:45 [Norm] Vojtech: Yes, don't we encourage users to use our error namespace? 16:24:02 [Norm] Norm: That was specifically for err:XD0030, I think, not the errors namespace. 16:25:27 [Norm] Vojtech: Or maybe it was the xproc-step namespace? 
16:25:31 [Norm] Norm: Yes, that rings a bell. 16:25:40 [Norm] Brief searching doesn't turn up the relevant prose from the spec. 16:26:18 [Norm] Norm: So where are we? 16:26:44 [Norm] Vojtech: Saying we don't allow the error namespace for custom errors is what I'd like, but I think that would be a breaking change. 16:27:03 [Norm] Henry: Yes, but if users are doing that, they're already in danger of walking on each other. 16:27:50 [Norm] Henry: Given that we didn't publish a policy for that little symbol space, people use it at their own risk. 16:29:25 [Norm] Norm: Yes, I'm with Henry, if you started with XC0067 for your private errors, you've made an interesting design choice, but the consequences are small. 16:29:40 [Norm] Vojtech: Perhaps we could say that we discourage users from using the err: namespace? 16:29:40 [Zakim] -ht 16:29:59 [Norm] Vojtech: And perhaps something similar for the XProc step namespace? 16:30:04 [Norm] Norm: I'd be ok with that. 16:31:17 [Zakim] +??P0 16:31:30 [ht] zakim, ? is me 16:31:30 [Zakim] +ht; got it 16:31:50 [alexmilowski] gotta go take Max to school... 16:31:59 [Zakim] -Alex_Milows 16:32:38 [Norm] Norm: I think the proposal is to add a note of the form "Users are discouraged from using the error namespace..." 16:32:56 [Norm] Accepted. 16:33:06 [MoZ] Zakim, unmute me 16:33:06 [Zakim] MoZ should no longer be muted 16:33:30 [MoZ] Zakim, mute me 16:33:30 [Zakim] MoZ should now be muted 16:33:57 [Zakim] -ht 16:34:01 [Norm] Norm: How about we do this New Orlean's style? I'll publish a draft this week. If no one objects in email next week, I'll send it off to be published as an official WG note. 16:34:26 [ht] +1 16:34:34 [Norm] Accepted. 16:34:59 [Norm] Topic: Review of comments on the processor profiles document 16:35:07 [Norm] -> 16:35:07 [Zakim] +[IPcaller] 16:35:24 [Norm] -> 16:35:42 [Norm] Norm: There aren't any new comments. 16:35:48 [Norm] Henry: I haven't looked at it. 
16:36:37 [Norm] Norm: I think all we need to do is close the loop with David Lee that we're not comfortable adding more profiles 16:36:57 [Norm] Henry: What about Vojtech's comment? 16:37:05 [Norm] Vojtech: I think it's obvious that we expect a namespace aware processor. 16:37:21 [Norm] Norm: I think that is what we meant, but if it's not clear... 16:37:22 [Zakim] -ht 16:37:45 [ht] ht has joined #xproc 16:37:46 [Norm] Vojtech: We refer to the term "namespace well-formed document", I think that naturally assumes a namespace aware processor. 16:38:04 [ht] Yes, that's what I was looking for 16:38:14 [Norm] Norm: I think you're right. Namespace well-formed is absolutely definitive, I think. 16:38:26 [Norm] Norm: So we can close your issue without change? 16:38:29 [Norm] Vojtech: Yes, I think so. 16:39:05 [Zakim] +??P12 16:39:10 [ht] zakim, ? is me 16:39:10 [Zakim] +ht; got it 16:39:47 [Norm] ACTION: Henry to close the loop with David Lee to get his assent to not add new profiles. 16:40:15 [Norm] Norm: If that works out, then I think we should begin the process of getting this published as a PR. 16:40:29 [Norm] Topic: Definition of an XProc processor 16:40:57 [Norm] -> 16:41:07 [Norm] Norm: Vojtech made a proposal that I liked. 16:41:28 [Norm] Norm: I'll draft an erratum to add that definition to the spec. 16:41:51 [Norm] Norm: Any other business? 16:42:13 [MoZ] Zakim, unmute me 16:42:13 [Zakim] MoZ should no longer be muted 16:42:27 [Norm] We've got stuff we can do in email, I propose that we *don't* meet next week. 16:42:35 [Norm] Next meeting is 20 January. Any objections? 16:42:46 [Norm] None heard. 16:42:55 [Norm] Norm: Any regrets for 20 January? 16:43:11 [Norm] None heard. 16:43:23 [Norm] Adjourned. 
16:43:23 [Zakim] -Norm 16:43:25 [Zakim] -PGrosso 16:43:26 [Zakim] -Vojtech 16:43:27 [Norm] rrsagent, set logs world-visible 16:43:29 [Zakim] -MoZ 16:43:29 [Norm] rrsagent, draft minutes 16:43:29 [RRSAgent] I have made the request to generate Norm 16:43:37 [Zakim] -ht 16:43:39 [Zakim] XML_PMWG()11:00AM has ended 16:43:41 [Zakim] Attendees were ht, Norm, MoZ, PGrosso, Vojtech, Alex_Milows, [IPcaller] 16:43:52 [PGrosso] PGrosso has left #xproc 18:51:01 [Zakim] Zakim has left #xproc 18:55:43 [ht] ht has left #xproc 19:01:28 [MoZ] MoZ has joined #xproc 20:29:48 [MoZ] MoZ has joined #xproc 22:09:29 [Norm] Norm has joined #xproc 23:37:54 [Norm] Norm has joined #xproc
http://www.w3.org/2011/01/06-xproc-irc
Description:
------------
Hello! The number of my site users has recently increased and I have faced the following bug: the mssql_connect function sometimes fails to connect to MSSQL. It works fine for a few hours, then suddenly fails, and the work is resumed in a few minutes. Here is the relevant part of my DB class (CSql):

function connect_db($server, $user, $password)
{
    $this->dbc = mssql_connect($server, $user, $password);
    if (!$this->dbc)
        return false;
    return true;
}

The warnings in the Apache log are:

[Thu Feb 21 20:50:58 2008] [error] [client 82.200.***.***] PHP Warning: mssql_connect() [<a href='function.mssql-connect'>function.mssql-connect</a>]: Unable to connect to server: ****** in ******.php on line 22

When I reload the page on the site, the connection sometimes succeeds, sometimes fails.
Web-server configuration: Windows 2000 Server (SP4) + Apache 2 + PHP 5.2.5. Peak usage of CPUs is no more than 70%, RAM usage no more than 50%. There is a program on this server that works with MSSQL through ADO, and it never has connection problems. There are no connection problems from the LAN either.
SQL Server 2000 (SP4) is working on a dedicated server (Windows 2000 Server (SP4)) which is connected to the web server by gigabit LAN. I have tested the network adapters; they work fine. CPU usage is no more than 60%, RAM usage no more than 70%.
I have already:
1) replaced ntwdblib.dll in the PHP directory with a new version. Also, I have installed MDAC 2.81.
2) moved SQL Server to a new, fast server.
3) tried to use mssql_connect or mssql_pconnect.
4) tried to use the server name, or the server IP and port, as the first parameter for mssql_connect.
5) tried to connect through TCP/IP or Named Pipes.
6) When I cycled connection attempts, it succeeded in connecting almost every time (usually on the second or third attempt):

$db = new CSql;
$db_failed = 1;
while (!$db->connect_db(DBHost, DBLogin, DBPassw))
{
    if ($db_failed >= 5) break;
    sleep(2);
    $db_failed++;
}
if (!$db->dbc) die("DB connection failed!");

7) I checked the network connection by using sockets or ODBC when mssql_connect had failed. It worked fine!

if (!fsockopen("192.168.0.3", 1433, $errno, $errstr, 10))
    WrLog("C:\\sqlerr.txt", "Err1 (($errstr ($errno)))");
if (!odbc_connect("sqlsrv", DBLogin, DBPassw))
    WrLog("C:\\sqlerr.txt", "Err2");

8) There are no connection limits in SQL Server or php.ini:

[MSSQL]
mssql.max_persistent = -1
mssql.max_links = -1

I examined the Apache logs a few months ago, when MSSQL worked on the same server as Apache. There were no messages about connection problems.

Well, I found two solutions.
1) I use ADO to connect to MSSQL. ADO is slower (up to 2 times!) than the mssql_* functions, but there are no connection problems.
2) I set a connection timeout in code:

function getmt()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}

$time_st = getmt();
$db = new CSql;
while (!$db->connect_db($Host, $Name, $Login, $Passw))
    if (round(getmt()-$time_st, 0) > 60) break;
if (!$db->dbc) die("Connection failed!");

It is too hard to use the odbc_* functions, because there are a lot of bugs with the TEXT field type: I must put it in the last position in the query and use CONVERT(varbinary, ...), otherwise I get ODBC errors. I hope that the mssql_* and odbc_* problems will be solved in future PHP releases.
I think that this problem occurs due to usage of the old DB-Lib for connecting to MSSQL. There is another bug: you can't fetch a varchar of more than 256-character length (you have to convert it to TEXT). It's a pity, but the best way to work with MSSQL now is using ADO (no connection/long varchar/Unicode problems).
I am running three Windows XP SP2 boxes as developer workstations, all PHP 5.2.5, installed into c:\php to avoid the space in "Program Files", and the MS SQL connection works perfectly on one of the machines, but not the other two. All three boxes are on the same LAN, can connect to the database using MS SQL Studio, and all have IIS 5.1. I have copied the PHP installation from the working PC to the others; no difference. This happens whether the connection is to a remote or local SQL server.

Have you tried editing mssql.max_procs in php.ini?

mssql.max_procs = -1

Hi, the same problem is detected here (connection "rarely" successful with mssql_connect, with a MSSQL server under quite heavy load). It happens only with PHP on Windows, not on Linux (FreeTDS). But for some reason I needed to connect from PHP/Windows, so I have used the "ADO workaround", as previously suggested by alfa77. At first, I didn't understand that workaround very well, so here are some details:
Do not use the ADOdb engine 'mssql' because it will still use mssql_connect(). Instead, use 'ado_mssql' which uses COM objects; that makes all the difference. Here is a basic database functions lib:

function db_open($db_host, $db_login, $db_pass, $db_name)
{
    $db = NewADOConnection('ado_mssql');
    $dsn = "PROVIDER=MSDASQL;DRIVER={SQL Server};" .
           "SERVER=".$db_host.";DATABASE=".$db_name.";UID=".$db_login.";PWD=".$db_pass.";";
    $db->Connect($dsn);
    return $db;
}

function db_query($db, $query)
{
    return $db->Execute($query);
}

function db_fetch_assoc($res)
{
    $obj = $res->FetchNextObj();
    return get_object_vars($obj);
}

function db_close($db)
{
    $db->Close();
}

Same problem here. Apache 2.2.10, PHP 5.2.5 as a module, OS Win2003 SP1 + all critical updates. I even tried upgrading to PHP 5.2.8, but it still appears.

I too have suffered this same problem for the last couple of years on all of the 5.x releases of PHP. It works great for days, then has problems for a few days, then works great. 1-3 page reloads often solve the problem, but not always.
Unfortunately, support for this extension has been discontinued. I wonder if this is one of the reasons why? I would like to try using Microsoft's SQLSRV driver, but converting an entire website from mssql to sqlsrv is going to be a lot of work!

We're experiencing the same problem with Microsoft's SQLSRV driver. So it might be that it's not just a driver problem.

Related to Bug #35217.

The problem is due to a connection limit. It would be good if you close the connection once you're done with the SQL part. Use mssql_free_statement($resourceid) and close the connection to the DB on every page of your site.

I think that this bug is not related to a connection limit. I have closed each connection at the end of the script (although it is not necessary, according to the documentation):
----------------------------
mssql_close
<...>
Note that this isn't usually necessary, as non-persistent open links are automatically closed at the end of the script's execution.
----------------------------
Also, exec sp_who2 did not show a lot of connections (no more than 200). I have solved this problem by using another extension (odbtp as a gateway on a Windows server; PHP is hosted now on UNIX) - everything is all right now.

You should really move to 5.3+ and use
https://bugs.php.net/bug.php?id=44300
cavcalc is a program for computing optical cavity parameters.

Project description

A command line program and python module for computing parameters (and plots of these parameters) associated with linear, Fabry-Perot optical cavities.
- Find the documentation at:
- Follow the latest changes:

Installing the release version

To install the latest release version of cavcalc:

pip install --upgrade cavcalc

Example usage

For details on available arguments run cavcalc -h on the command line. Some examples follow on how to use cavcalc.

Computing single parameters

You can ask for, e.g., the beam size on the mirrors of a symmetric cavity given its length and stability factor (g) with:

cavcalc w -L 4000 -g 0.83

This would result in an output of:

Given [SYMMETRIC CAVITY]:
    Cavity length = 4000.0 m
    Wavelength of beam = 1064 nm
    Stability g-factor of cavity = 0.83

Computed:
    Radius of beam at mirrors = 5.732098477230927 cm

Units for both inputs and outputs can also be specified:

cavcalc w -u mm -L 10km -gouy 145deg

This requests the beam radius (in mm) on the mirrors of a symmetric cavity of length 10km given that the round-trip Gouy phase is 145 degrees, resulting in the following output:

Given [SYMMETRIC CAVITY]:
    Cavity length = 10.0 km
    Wavelength of beam = 1064 nm
    Round-trip Gouy phase = 145.0 deg

Computed:
    Radius of beam at mirrors = 59.59174828941794 mm

Computing all available parameters

A compute target of all is the default choice, which is used to calculate all parameters that can be determined from the arguments specified.
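The numbers reported above can be cross-checked by hand from the standard Gaussian-beam relations for a symmetric cavity: the beam radius on the mirrors satisfies w^2 = lambda*L / (pi*sqrt(1 - g)), where g is the overall cavity g-factor, and the round-trip Gouy phase psi is related to g by g = cos^2(psi/2). The sketch below is an independent check using only these textbook relations; the helper names are ours, not part of cavcalc's API:

```python
import math

def mirror_spot_size(L, g, wavelength=1064e-9):
    """Beam radius (m) on the mirrors of a symmetric cavity of length
    L (m) with overall stability factor g = g1*g2 (per-mirror gs = sqrt(g))."""
    return math.sqrt(wavelength * L / (math.pi * math.sqrt(1.0 - g)))

def g_from_roundtrip_gouy(psi_deg):
    """Overall g-factor from the round-trip Gouy phase in degrees."""
    return math.cos(math.radians(psi_deg) / 2.0) ** 2

# cavcalc w -L 4000 -g 0.83         ->  5.732... cm
print(100 * mirror_spot_size(4000, 0.83))
# cavcalc w -u mm -L 10km -gouy 145deg  ->  59.59... mm
print(1e3 * mirror_spot_size(10e3, g_from_roundtrip_gouy(145.0)))
```

Both values reproduce the cavcalc outputs shown above.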
For example, using aLIGO parameters,

cavcalc -L 4km -Rc1 1934 -Rc2 2245 -T1 0.014 -L1 37.5e-6 -T2 5e-6 -L2 37.5e-6

gives the following output:

Given [ASYMMETRIC CAVITY]:
    Cavity length = 4.0 km
    Wavelength of beam = 1064 nm
    Reflectivity of ITM = 0.9859625
    Reflectivity of ETM = 0.9999574999999999
    Radius of curvature of ITM = 1934.0 m
    Radius of curvature of ETM = 2245.0 m

Computed:
    FSR = 37474.05725 Hz
    Finesse = 443.11699254426594
    FWHM = 84.56921734107604 Hz
    Pole frequency = 42.28460867053802 Hz
    Eigenmode = (-1837.2153886417173+421.68018375440016j)
    Radius of beam at ITM = 5.342106643304925 cm
    Radius of beam at ETM = 6.244807988323089 cm
    Radius of beam at waist = 1.1950538458990878 cm
    Position of beam waist (from first cavity mirror) = 1837.2153886417168 m
    Round-trip Gouy phase = 312.0813565565169 degrees
    Stability g-factor of ITM = -1.0682523267838677
    Stability g-factor of ETM = -0.7817371937639199
    Stability g-factor of cavity = 0.8350925761717987
    Mode separation frequency = 4988.072188176179 Hz

Units of output

The default behaviour for the output parameter units is to grab the relevant parameter type option under the [units] header of the cavcalc.ini configuration file. When installing cavcalc, this file is written to a new cavcalc/ directory within your config directory (i.e. ~/.config/cavcalc/cavcalc.ini under Unix systems). See the comments in this file for details on the options available for the output units of each parameter type.

cavcalc attempts to read a cavcalc.ini config file from several locations in this fixed order:
- Firstly from the current working directory; if that fails, then
- next it tries to read from $XDG_CONFIG_HOME/.cavcalc/ (or %APPDATA%/cavcalc/ on Windows); if that also fails, then
- the final read attempt is from within the package source directory itself.
If a successful read occurs at any of these steps, then cavcalc will use the configuration defined by that file for the rest of the session; it will not try to read from any of the subsequent locations as well. Note that if you specify a `-u` argument when running cavcalc, this takes priority over the options in the config file (as we saw in the example above).

### Evaluating parameters over data ranges

Parameters can be computed over ranges of data using:

- the data range syntax:
  - `-<param_name> "linspace(start, stop, num) [<units>]"`,
  - `-<param_name> "range(start, stop, stepsize) [<units>]"`,
  - `-<param_name> "start stop num [<units>]"` (a shorthand version of the linspace command),
- or data from an input file with `-<param_name> file.dat`.

An example of using a range could be:

```
cavcalc w -L "1 10 100 km" -g 0.9 --plot
```

This results in a plot (see below) showing how the beam radius at the mirrors of a symmetric cavity varies as the cavity length goes from 1 km to 10 km over 100 data points, with a fixed cavity stability factor g = 0.9.

Alternatively, one could use a file of data, e.g.:

```
cavcalc gouy -L 10km -w beam_radii.txt --plot --saveplot symmcav_gouy_vs_ws.png
```

This computes the round-trip Gouy phase (in degrees) of a symmetric cavity of length 10 km using beam radii data stored in a file `beam_radii.txt`, and plots the results (see below). Note also that you can save the resulting figure using the `--saveplot <filename>` syntax, as seen in the above command.

### Image/density plots

Two arguments can be specified as data ranges (or files of data) in order to produce density plots of the target parameter. For example:

```
cavcalc w -L "1 10 100 km" -gouy "20 120 100 deg" --plot
```

computes the radius of the beam on the mirrors of a symmetric cavity against both the cavity length and the round-trip Gouy phase. This results in the plot shown below. A matplotlib-compliant colour map can be specified when making an image plot using the `--cmap <name>` option.
For example, the following command gives the plot shown below:

```
cavcalc w0 -L 10km -g1 "-2 2 200" -g2 "-2 2 200" --plot --cmap nipy_spectral
```

### Finding conditions in a data range

Using the `--find <condition>` argument, one can prompt cavcalc to print the value(s) at which the given condition is satisfied when doing a data range computation. Taking an example from above, we can find the closest value of the round-trip Gouy phase at which the radius of the beam is 11 cm. The result is printed to the terminal and marked on the plot (see below). The command to perform such a computation is:

```
cavcalc gouy -L 10km -w "5.8 15 1000 cm" --plot --find "x=11"
```

### A note on g-factors

Stability (g) factors are split into four different parameters, both for implementation purposes and to make it clearer which argument is being used and whether the resulting cavity computations are for a symmetric or an asymmetric cavity. These arguments are:

- `-gs`: The symmetric, singular stability factor. This represents the individual g-factors of both cavity mirrors. Use this to define a symmetric cavity where the overall cavity g-factor is then simply `g = gs * gs`.
- `-g`: The overall cavity stability factor. This is the product of the individual g-factors of the cavity mirrors. Use this to define a symmetric cavity where the individual g-factors of both mirrors are then `gs = sqrt(g)`.
- `-g1`: The stability factor of the first cavity mirror. Use this to define an asymmetric cavity along with the argument `-g2`, such that the overall cavity g-factor is then `g = g1 * g2`.
- `-g2`: The stability factor of the second cavity mirror. Use this to define an asymmetric cavity along with the argument `-g1`, such that the overall cavity g-factor is then `g = g1 * g2`.

### Using cavcalc programmatically

Whilst cavcalc is primarily a command line tool, it can also be used just as easily from within Python in a more "programmatic" way.
The recommended way to do this is via the single-function interface provided by `cavcalc.calculate`. This function works similarly to the command line interface: a target can be specified along with a variable number of keyword arguments corresponding to physical parameters. It returns a `cavcalc.Output` object, which has a number of properties and methods for accessing the results and plotting them against the parameters provided.

For example, the following script will compute all available targets from the cavity length and mirror radii of curvature provided:

```python
import cavcalc as cc

# target = "all" is the default behaviour.
# Parameters can be given as single values, an array of values, or a tuple
# where the first element is as before and the second element is a valid
# string representing the units of the parameter.
out = cc.calculate(L=(4, 'km'), Rc1=1934, Rc2=2245)

# We can get a dictionary of all the computed results...
computed = out.get()

# ... or just a single one if we want.
w0 = out['w0']

# out can also be printed, displaying results in the same way as the
# command line tool.
print(out)
```
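The g-factor relationships described above can also be checked by hand. The sketch below assumes the standard Fabry-Perot definition g_i = 1 − L/Rc_i (not stated explicitly in this README, but consistent with the values cavcalc printed for the aLIGO example earlier); plain Python then reproduces those printed g-factors:

```python
# Assumed standard Fabry-Perot stability factors: g_i = 1 - L / Rc_i.
# Values below are the aLIGO parameters from the earlier example.
L = 4000.0                  # cavity length in metres (4 km)
Rc1, Rc2 = 1934.0, 2245.0   # mirror radii of curvature in metres

g1 = 1 - L / Rc1            # g-factor of ITM
g2 = 1 - L / Rc2            # g-factor of ETM
g = g1 * g2                 # overall cavity g-factor

print(g1, g2, g)
```

The printed values match the "Stability g-factor" lines of the earlier cavcalc output, including the overall cavity factor of about 0.835.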
What is HDFS?

HDFS stands for Hadoop Distributed File System. It is a sub-project of Hadoop. HDFS lets you connect nodes contained within clusters over which data files are distributed, while remaining fault-tolerant. You can then access and store the data files as one seamless file system.

HDFS has many goals, among which:

- Fault tolerance and automatic recovery
- Data access via MapReduce streaming
- A simple and robust coherency model
- Portability across heterogeneous operating systems and hardware
- Scalability to reliably store and process large amounts of data
- …

HDFS is written in Java, so any machine supporting Java can run HDFS.

How can you access data stored in HDFS?

- Using the native Java API
- Using a C wrapper
- Using a web-browser interface

What is HDFS made of?

HDFS is an interconnected cluster of nodes where files and directories reside. HDFS splits input data into blocks of 64 or 128 MB and stores them on computers called DataNodes. An HDFS cluster consists of:

- A single node, known as the NameNode, that manages the file system namespace and regulates client access to files. It manages namespace operations like opening, closing, and renaming files and directories. The name node also maps data blocks to data nodes, which handle read and write requests from HDFS clients.
- DataNodes, which create, delete, and replicate data blocks according to instructions from the governing name node. Data nodes continuously loop, asking the name node for instructions.

The file system is similar to most other existing file systems: you can create, rename, relocate, and remove files, and put them in directories. HDFS also supports third-party file systems such as CloudStore and Amazon Simple Storage Service (S3).

Data Storage Reliability

Remember that HDFS needs to be reliable even when failures occur within:

- name nodes,
- data nodes,
- or network partitions.
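As an illustrative sketch (plain Python, not Hadoop code), the block-splitting and placement idea described above looks roughly like this — a file is cut into fixed-size blocks and each block is assigned to a datanode (here simply round-robin; real HDFS placement also accounts for racks and replication):

```python
# Toy model of HDFS block splitting and placement.
BLOCK_SIZE = 128                    # bytes here; HDFS uses 64 or 128 MB
datanodes = ["datanode1", "datanode2", "datanode3"]

data = bytes(300)                   # a 300-byte stand-in for a file

# Split the payload into fixed-size blocks (last block may be smaller).
blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

# Assign blocks to datanodes round-robin.
placement = {f"block-{n}": datanodes[n % len(datanodes)]
             for n in range(len(blocks))}

print(len(blocks), placement)
```

With a 300-byte "file" and 128-byte blocks, this yields three blocks (128, 128 and 44 bytes) spread across the three datanodes.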
How do we detect that a node is failing? We use a heartbeat process: a small message sent periodically by each node to the name node. As long as the message is received, the node is known to be up. Once we stop receiving the message, we know that the node is down. HDFS also guarantees transaction and data integrity processes.

Conclusion: I hope this high-level overview was clear and helpful. I'd be happy to answer any question you might have in the comments section.
I've been playing around with the different classes available in Gpiozero and I've created over thirty little programs to test the inputs and outputs. All worked fine with no problems at all. However, I get an unexpected error when using the 'active_state' parameter in the InputDevice class. According to the documentation at: ... input.html the class is defined thus:

class gpiozero.InputDevice(pin, *, pull_up=False, active_state=None, pin_factory=None)

The relevant parameters are:

pull_up (bool or None) – If True, the pin will be pulled high with an internal resistor. If False (the default), the pin will be pulled low. If None, the pin will be floating. As gpiozero cannot automatically guess the active state when not pulling the pin, the active_state parameter must be passed.

active_state (bool or None) – If True, when the hardware pin state is HIGH, the software pin is HIGH. If False, the input polarity is reversed: when the hardware pin state is HIGH, the software pin state is LOW. Use this parameter to set the active state of the underlying pin when configuring it as not pulled (when pull_up is None). When pull_up is True or False, the active state is automatically set to the proper value.

Clearly, if pull_up is set to None (for example, when interfacing to digital devices which don't require a pull-up or pull-down), it is necessary to set active_state to True or False in order to define the active state. To test this, I created the following simple program:

```
from gpiozero import InputDevice

input = InputDevice(pin=4, pull_up=None, active_state=True)
input.wait_for_active()
print("input is active")
```

When I run the program I get the following error:

```
TypeError: __init__() got an unexpected keyword argument 'active_state'
```

Incidentally, I get exactly the same error if I set 'active_state' to False.

Can you guys give me a sanity check? Am I doing something stupid?

Cheers, Tony.
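One generic way to debug this kind of TypeError is to check whether the class you actually have installed accepts the keyword, before calling it. The sketch below uses inspect.signature with a hypothetical stand-in class (OldInputDevice is made up for illustration — it mimics a version of InputDevice that predates the active_state keyword, which would produce exactly this error):

```python
import inspect

# Hypothetical stand-in for an older InputDevice whose __init__ does not
# accept active_state; passing that keyword to it raises TypeError.
class OldInputDevice:
    def __init__(self, pin, pull_up=False, pin_factory=None):
        self.pin = pin

params = inspect.signature(OldInputDevice.__init__).parameters
print("active_state" in params)  # False -> this version would raise TypeError
```

Running the same check against the real gpiozero.InputDevice on the Pi would reveal whether the installed library matches the documentation being read.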
The latest release of Rust, version 1.30, extends procedural macros by allowing them to define new attributes and function-like macros. Additionally, it streamlines the Rust module system by making it more consistent and straightforward.

Rust 1.30 introduces two new types of procedural macros, "attribute-like procedural macros" and "function-like procedural macros". Procedural macros are the foundation of Rust metaprogramming and enable the manipulation of a program's syntax tree. In this respect, procedural macros are much more powerful than declarative macros, which provide a mechanism to define a shorthand for more complex code based on pattern matching.

Attribute-like procedural macros are similar to existing derive macros, but are more flexible in that they allow you to create new attributes, and they may also be applied to functions in addition to structs and enums. For example, an attribute macro could enable the specification of a route attribute to define HTTP routing:

```rust
// use of the route procedural macro
#[route(GET, "/")]
fn index() {
    // ...
}

// procedural macro defining route
#[proc_macro_attribute]
pub fn route(attr: TokenStream, item: TokenStream) -> TokenStream {
    // attr receives the GET, "/" part of the macro
    // item receives fn index() { ... }
}
```

Similarly, function-like procedural macros allow you to define macros that look like functions, e.g.:

```rust
// parse an SQL statement
let sql = sql!(SELECT * FROM posts WHERE id=1);

#[proc_macro]
pub fn sql(input: TokenStream) -> TokenStream {
    // ...
}
```

In both examples, TokenStream represents the syntax tree the attribute is applied to or the attribute/function definition. The route/sql function converts the received syntax tree into a new syntax tree which is returned to the caller, i.e., generating new code to execute.

Rust 1.30 also brings a few changes to the use statement to improve developer experience when using the Rust module system.
Firstly, use can now bring a macro definition into scope, thus making the macro_use annotation obsolete:

```rust
// old:
#[macro_use]
extern crate serde_json;

// new:
extern crate serde_json;
use serde_json::json;
```

Additionally, external crates are now more resilient to functions being moved across the module hierarchy: all references to a namespace are checked against all extern crate directives included in the module prelude, and the one that matches, if any, is used. Previously, you had to explicitly use extern inside of a module or use the ::extern_name syntax, as shown in the following example:

```rust
extern crate serde_json;

fn main() {
    let json = serde_json::from_str("..."); // OK
}

mod foo {
    // to use serde_json in this module you explicitly use it
    use serde_json;

    fn bar() {
        let json = serde_json::from_str("...");
    }

    fn baz() {
        // alternatively, you fully qualify the external module name
        let json = ::serde_json::from_str("...");
    }
}
```

Finally, use is now more consistent in the way it interprets module paths. You can now use the crate keyword to indicate that you would like the module path to start at your crate root. Prior to 1.30, this was the default for module paths in use statements, but paths referring to items directly would start at the local path:

```rust
mod foo {
    pub fn bar() {
        // ...
    }
}

mod baz {
    pub fn qux() {
        // old
        ::foo::bar();
        // does not work, which is different than with `use`:
        // foo::bar();

        // new
        crate::foo::bar();
    }
}
```
More changes brought by Rust 1.30 are the following:

- You can now use keywords as identifiers by prefixing them with r#, e.g. r#for. This change is mostly motivated by the fact that Rust 2018 will introduce new keywords, so a mechanism must be available to keep existing code that uses those keywords as variable or function names working.
- It is now possible to build applications, not just libraries, that do not use the standard library with no_std. Previously, you could only build libraries with no_std, due to the impossibility of defining a panic handler.

You can update your Rust distribution using `$ rustup update stable`. For full details of Rust 1.30, do not miss the release notes.
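The raw-identifier syntax from the first bullet can be sketched with a minimal, self-contained example (match and for are keywords, so without the r# prefix neither line would compile):

```rust
// `match` and `for` are Rust keywords; r# lets them be used as identifiers,
// which keeps pre-2018 code working when new keywords are introduced.
fn raw_ident_demo() -> i32 {
    let r#match = 40;
    let r#for = 2;
    r#match + r#for
}

fn main() {
    println!("{}", raw_ident_demo());
}
```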
We are sharing our experience with Apache Hadoop installation on Linux-based machines (multi-node). Here we will also share our experience with various troubleshooting steps, and we will make updates in the future.

User creation and other configuration steps:

- We start by adding a dedicated Hadoop system user on each cluster node.

```
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
```

- Next we configure SSH (Secure Shell) on all the cluster nodes to enable secure data communication.

```
user@node1:~$ su - hduser
hduser@node1:~$ ssh-keygen -t rsa -P ""
```

The output will be something like the following:

```
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@ubuntu
.....
```

- Next we need to enable SSH access to the local machine with this newly created key:

```
hduser@node1:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
```

Repeat the above steps on all the cluster nodes and test by executing the following statement:

```
hduser@node1:~$ ssh localhost
```

This step is also needed to save the local machine's host key fingerprint to the hduser user's known_hosts file.

Next we need to edit the /etc/hosts file, in which we put the IPs and names of each system in the cluster. In our scenario we have one master (with IP 192.168.0.100) and one slave (with IP 192.168.0.101):

```
$ sudo vi /etc/hosts
```

and we put the values into the hosts file as key-value pairs:

```
192.168.0.100 master
192.168.0.101 slave
```

- Providing SSH access

The hduser user on the master node must be able to connect:

- to its own user account on the master via `ssh master` (in this context, not necessarily `ssh localhost`), and
- to the hduser account of the slave(s) via a password-less SSH login.
So we distribute the SSH public key of hduser@master to all its slaves (in our case we have only one slave; if you have more, execute the following statement for each, changing the machine name, i.e. slave, slave1, slave2):

```
hduser@master:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave
```

Try connecting master to master and master to slave(s) and check that everything is fine.

Configuring Hadoop

- Let us edit conf/masters (only on the master node) and enter master into the file. Doing this, we have told Hadoop to start the NameNode and SecondaryNameNode of our multi-node cluster on this machine. The primary NameNode and the JobTracker will always be on the machine where we run bin/start-dfs.sh and bin/start-mapred.sh.

- Let us now edit conf/slaves (only on the master node) with:

```
master
slave
```

This means that we try to run a datanode process on the master machine as well, where the namenode is also running. We can leave master out of the slaves file if we have enough machines to act as datanodes at our disposal. If we have more slaves, then add one host per line, like the following:

```
master
slave
slave2
slave3
```

and so on.

Let us now edit two important files (on all the nodes in our cluster):

- conf/core-site.xml
- conf/hdfs-site.xml

1) conf/core-site.xml

We have to change the fs.default.name parameter, which specifies the NameNode host and port (in our case this is the master machine):

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
</property>
<!-- [Other XML values] -->
```

Create a directory into which Hadoop will store its data:

```
$ mkdir /app/hadoop
```

We have to ensure the directory is writeable by any user:

```
$ chmod 777 /app/hadoop
```

Modify core-site.xml once again to add the following property:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop</value>
</property>
```

2) conf/hdfs-site.xml

We have to change the dfs.replication parameter, which specifies the default block replication. It defines how many machines a single file's blocks should be replicated to before the file becomes available.
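Putting the two core-site.xml edits from this walkthrough together, the file would look roughly like the sketch below (a minimal version; any other properties your distribution ships with stay alongside these):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- NameNode host and port (the master machine in this walkthrough) -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
  <!-- Base directory for Hadoop's local data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop</value>
  </property>
</configuration>
```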
If we set this to a value higher than the number of available slave nodes (more precisely, the number of DataNodes), we will start seeing a lot of "(Zero targets found, forbidden1.size=1)" type errors in the log files. The default value of dfs.replication is 3. However, as we have only two nodes available (in our scenario), we set dfs.replication to 2:

```xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<!-- [Other XML values] -->
```

- Let us format the HDFS file system via the NameNode. Run the following command on master:

```
bin/hadoop namenode -format
```

- Let us start the multi-node cluster. Run the command (in our case on the machine named master):

```
bin/start-dfs.sh
```

Checking Hadoop Status

- After everything has started, run the jps command on all the nodes to see whether everything is running well. On the master node the desired output will be:

```
$ jps
14799 NameNode
15314 Jps
14880 DataNode
14977 SecondaryNameNode
```

On the slave(s):

```
$ jps
15314 Jps
14880 DataNode
```

Of course, the process IDs will vary from machine to machine.

Troubleshooting

It is possible that the DataNode does not start on all of our nodes. If at this point we see in logs/hadoop-hduser-datanode-.log on the affected nodes the exception:

```
java.io.IOException: Incompatible namespaceIDs
```

then we need to do the following:

- Stop the full cluster, i.e. both the MapReduce and HDFS layers.
- Delete the data directory on the problematic DataNode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml. In our case, the relevant directory is /app/hadoop/tmp/dfs/data.
- Reformat the NameNode. All HDFS data will be lost during the format process.
- Restart the cluster.

Or we can manually update the namespaceID of the problematic DataNodes:

- Stop the problematic DataNode(s).
- Edit the value of namespaceID in ${dfs.data.dir}/current/VERSION to match the corresponding value of the current NameNode in ${dfs.name.dir}/current/VERSION.
- Restart the fixed DataNode(s).
In Running Map-Reduce Job in Apache Hadoop (Multinode Cluster), we will share our experience of running a Map Reduce job as per the Apache Hadoop example.
07 April 2010 16:30 [Source: ICIS news]

LONDON (ICIS news)--European polypropylene (PP) buyers are confirming increases of €100/tonne ($133/tonne) for April sales on restricted availability, after already paying 20% more for their product in 2010, several said on Wednesday.

"I've never seen anything like this in my 15 years in the business," said one buyer.

PP prices had already risen by €200/tonne in 2010, and now a third hike, which some producers reported to be as high as €130/tonne, was being foisted onto the market, buyers said.

"Our net homopolymer injection prices are around €1,200/tonne [FD (free delivered) NWE (northwest Europe)].

"Product is very, very tight. We can ask almost what we want," said another.

"It is very difficult to survive," said the buyer. "Last week I thought I was going to have to close a line because I didn't have the product. And there is no way we can get these increases back."

Another large buyer confirmed paying an increase for April, of €100-110/tonne, depending on supplier. Other buyers were trying to hold the increase at €70/tonne, but as the month progressed, freely negotiated sales were looking less likely to be done at such an increase. Propylene-based contracts would automatically be settled at plus €70/tonne, the increase seen in the April monthly propylene contract, agreed sources.

PP inventories had been low for many months. Buyers expected to be offered material from new capacities.

Producers' inventories were also low, mainly due to restraints upstream, as low refinery runs and cuts at crackers, introduced initially to keep ethylene from being oversupplied, also affected propylene output.

One major PP producer acknowledged that the situation was difficult for converters, but said that margins were unworkably low: "Pricing is not the issue at the moment. We need margin on top of propylene. €30/tonne [increase above propylene costs] is realistic and defendable.
Internally we have endless discussion over allocation."

Another producer said: "I've been accused of exploiting the situation and profiteering, among other things."

Market sources began to ponder when the current tight situation would ease. Several producers hinted at yet more increases in May, when they expected the tightness to continue. Others kept an eye on Asia, looking for opportunities to import product.

"One thing is sure," said a trader. "When prices start coming down, the buyers will be looking for blood."

PP producers in Europe include LyondellBasell, Borealis, SABIC, Total Petrochemicals, Dow Chemical, Repsol, INEOS Olefins and Polymers, Polychim and Domo.
Odoo Help

This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

Create openerp user through python code

Hi guys,

How to create an OpenERP user through Python code? I have created a custom menu, and there I need to create an OpenERP user. How can we override the create method for this? I referred to the link below:

```python
def create(self, cr, uid, vals, context=None):
    user_obj = self.pool.get('res.users')
    vals_user = {
        'name': vals.get('name'),
        'login': default_login,
        # other required fields
    }
    user_obj.create(cr, uid, vals_user, context)
    result = super(hr_employee, self).create(cr, uid, vals, context=context)
    return result
```

I didn't understand "'login': default_login". How do I do this? Are there any other methods?

Thanks,

Hi,

The suggestion in the link should work fine. You just need to specify the required fields:

```python
vals_user = {
    'name': 'value_of_name',
    'login': 'login_password_value_eg_123',
    # ... and so on
}
```

You can check the required fields of the user here ...

Hi all,

Thank you, it worked. I was getting an 'Integrity error', and it is now solved:

```python
def create(self, cr, uid, vals, context=None):
    user_obj = self.pool.get('res.users')
    vals_user = {
        'name': vals.get('name'),
        'login': vals.get('login'),
        'password': vals.get('password'),
    }
    user_obj.create(cr, uid, vals_user, context)
    result = super(users_profile, self).create(cr, uid, vals, context=context)
    return result
```
Sockets and the socket API are used to send messages across a network. They provide a form of inter-process communication (IPC). The network can be a logical, local network to the computer, or one that's physically connected to an external network, with its own connections to other networks. The obvious example is the Internet, which you connect to via your ISP.

By the end of this tutorial, you'll understand how to use the main functions and methods in Python's socket module to write your own client-server applications. This includes showing you how to use a custom class to send messages and data between endpoints that you can build upon and utilize for your own applications.

The examples in this tutorial use Python 3.6. You can find the source code on GitHub.

Networking and sockets are large subjects. Literal volumes have been written about them. If you're new to sockets or networking, it's completely normal if you feel overwhelmed with all of the terms and pieces. I know I did!

Don't be discouraged though. I've written this tutorial for you. As we do with Python, we can learn a little bit at a time. Use your browser's bookmark feature and come back when you're ready for the next section.

Let's get started!

Socket API Overview

Python's socket module provides an interface to the Berkeley sockets API. This is the module that we'll use and discuss in this tutorial. The primary socket API functions and methods in this module are:

- socket()
- bind()
- listen()
- accept()
- connect()
- connect_ex()
- send()
- recv()

Python provides a convenient and consistent API that maps directly to these system calls, their C counterparts. We'll look at how these are used together in the next section.

As part of its standard library, Python also has classes that make using these low-level socket functions easier. Although it's not covered in this tutorial, see the socketserver module, a framework for network servers.
There are also many modules available that implement higher-level Internet protocols like HTTP and SMTP. For an overview, see Internet Protocols and Support.

TCP Sockets

As you'll see shortly, we'll create a socket object using socket.socket() and specify the socket type as socket.SOCK_STREAM. When you do that, the default protocol that's used is the Transmission Control Protocol (TCP). This is a good default and probably what you want.

Why should you use TCP? The Transmission Control Protocol (TCP):

- Is reliable: packets dropped in the network are detected and retransmitted by the sender.
- Has in-order data delivery: data is read by your application in the order it was written by the sender.

In contrast, User Datagram Protocol (UDP) sockets created with socket.SOCK_DGRAM aren't reliable, and data read by the receiver can be out-of-order from the sender's writes. Why is this important? Networks are a best-effort delivery system: there's no guarantee that your data will reach its destination, or arrive in the order it was sent, so TCP relieves your application of having to handle this itself.

In the diagram below, let's look at the sequence of socket API calls and data flow for TCP:

The left-hand column represents the server. On the right-hand side is the client.

Starting in the top left-hand column, note the API calls the server makes to set up a "listening" socket: socket(), bind(), listen(), accept(). In the middle is the round-trip section, where data is exchanged between the client and server using calls to send() and recv(). At the bottom, the client and server close() their respective sockets.

Echo Client and Server

Now that you've seen an overview of the socket API and how the client and server communicate, let's create our first client and server. We'll begin with a simple implementation. The server will simply echo whatever it receives back to the client.
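As a contrast with the SOCK_STREAM examples that follow, here's a minimal SOCK_DGRAM (UDP) round trip condensed into a single script (not part of the tutorial's own code): two sockets on the loopback interface exchange one datagram — note there's no connection setup, just sendto() and recvfrom():

```python
import socket

# A UDP "receiver" bound to the loopback interface; port 0 lets the OS
# pick any free port, which we then read back with getsockname().
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# A UDP "sender": no connect() needed, each datagram carries the address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"Hello, UDP", addr)

data, peer = receiver.recvfrom(1024)
print("Received", repr(data), "from", peer)

sender.close()
receiver.close()
```

On the loopback interface this datagram will arrive, but on a real network UDP gives no such guarantee — which is exactly the trade-off described above.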
Echo Server Here’s the server, echo-server.py: #!/usr/bin/env python3 import socket HOST = '127.0.0.1' # Standard loopback interface address (localhost) PORT = 65432 # Port to listen on (non-privileged ports are > 1023) with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.bind((HOST, PORT)) s.listen() conn, addr = s.accept() with conn: print('Connected by', addr) while True: data = conn.recv(1024) if not data: break conn.sendall(data) Note: Don’t worry about understanding everything above right now. There’s a lot going on in these few lines of code. This is just a starting point so you can see a basic server in action. There’s a reference section at the end of this tutorial that has more information and links to additional resources. I’ll link to these and other resources throughout the tutorial. Let’s walk through each API call and see what’s happening. socket.socket() creates a socket object that supports the context manager type, so you can use it in a with statement. There’s no need to call s.close(): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: pass # Use the socket object without calling s.close(). The arguments passed to socket() specify the address family and socket type. AF_INET is the Internet address family for IPv4. SOCK_STREAM is the socket type for TCP, the protocol that will be used to transport our messages in the network. bind() is used to associate the socket with a specific network interface and port number: HOST = '127.0.0.1' # Standard loopback interface address (localhost) PORT = 65432 # Port to listen on (non-privileged ports are > 1023) # ... s.bind((HOST, PORT)) The values passed to bind() depend on the address family of the socket. In this example, we’re using socket.AF_INET (IPv4). So it expects a 2-tuple: (host, port). host can be a hostname, IP address, or empty string. If an IP address is used, host should be an IPv4-formatted address string. 
The IP address 127.0.0.1 is the standard IPv4 address for the loopback interface, so only processes on the host will be able to connect to the server. If you pass an empty string, the server will accept connections on all available IPv4 interfaces.

port should be an integer from 1-65535 (0 is reserved). It's the TCP port number to accept connections on from clients. Some systems may require superuser privileges if the port is < 1024.

A note on using hostnames with bind(): I'll discuss this more later in Using Hostnames, but it's worth mentioning here. For now, just understand that when using a hostname, you could see different results depending on what's returned from the name resolution process. It could be anything. The first time you run your application, it might be the address 10.1.2.3. The next time, it's a different address, 192.168.0.1. The third time, it could be 172.16.7.8, and so on.

Continuing with the server example, listen() enables a server to accept() connections. It makes it a "listening" socket:

```python
s.listen()
conn, addr = s.accept()
```

listen() has a backlog parameter. It specifies the number of unaccepted connections that the system will allow before refusing new connections. Starting in Python 3.5, it's optional. If not specified, a default backlog value is chosen.

If your server receives a lot of connection requests simultaneously, increasing the backlog value may help by setting the maximum length of the queue for pending connections. The maximum value is system dependent. For example, on Linux, see /proc/sys/net/core/somaxconn.

accept() blocks and waits for an incoming connection. When a client connects, it returns a new socket object representing the connection and a tuple holding the address of the client. The tuple will contain (host, port) for IPv4 connections or (host, port, flowinfo, scopeid) for IPv6. See Socket Address Families in the reference section for details on the tuple values.
One thing that’s imperative to understand is that we now have a new socket object from accept(). This is important since it’s the socket that you’ll use to communicate with the client. It’s distinct from the listening socket that the server is using to accept new connections: conn, addr = s.accept() with conn: print('Connected by', addr) while True: data = conn.recv(1024) if not data: break conn.sendall(data) After getting the client socket object conn from accept(), an infinite while loop is used to loop over blocking calls to conn.recv(). This reads whatever data the client sends and echoes it back using conn.sendall(). If conn.recv() returns an empty bytes object, b'', then the client closed the connection and the loop is terminated. The with statement is used with conn to automatically close the socket at the end of the block. Echo Client Now let’s look at the client, echo-client.py: #!/usr/bin/env python3 import socket HOST = '127.0.0.1' # The server's hostname or IP address PORT = 65432 # The port used by the server with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.connect((HOST, PORT)) s.sendall(b'Hello, world') data = s.recv(1024) print('Received', repr(data)) In comparison to the server, the client is pretty simple. It creates a socket object, connects to the server and calls s.sendall() to send its message. Lastly, it calls s.recv() to read the server’s reply and then prints it. Running the Echo Client and Server Let’s run the client and server to see how they behave and inspect what’s happening. Note: If you’re having trouble getting the examples or your own code to run from the command line, read How Do I Make My Own Command-Line Commands Using Python? If you’re on Windows, check the Python Windows FAQ. Open a terminal or command prompt, navigate to the directory that contains your scripts, and run the server: $ ./echo-server.py Your terminal will appear to hang. 
That's because the server is blocked (suspended) in a call:

```python
conn, addr = s.accept()
```

It's waiting for a client connection. Now open another terminal window or command prompt and run the client:

```
$ ./echo-client.py
Received b'Hello, world'
```

In the server window, you should see:

```
$ ./echo-server.py
Connected by ('127.0.0.1', 64623)
```

In the output above, the server printed the addr tuple returned from s.accept(). This is the client's IP address and TCP port number. The port number, 64623, will most likely be different when you run it on your machine.

Viewing Socket State

To see the current state of sockets on your host, use netstat. It's available by default on macOS, Linux, and Windows.

Here's the netstat output from macOS after starting the server:

```
$ netstat -an
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  127.0.0.1.65432        *.*                    LISTEN
```

Notice that Local Address is 127.0.0.1.65432. If echo-server.py had used HOST = '' instead of HOST = '127.0.0.1', netstat would show this:

```
$ netstat -an
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  *.65432                *.*                    LISTEN
```

Local Address is *.65432, which means all available host interfaces that support the address family will be used to accept incoming connections. In this example, in the call to socket(), socket.AF_INET was used (IPv4). You can see this in the Proto column: tcp4.

I've trimmed the output above to show the echo server only. You'll likely see much more output, depending on the system you're running it on. The things to notice are the columns Proto, Local Address, and (state). In the last example above, netstat shows the echo server is using an IPv4 TCP socket (tcp4), on port 65432 on all interfaces (*.65432), and it's in the listening state (LISTEN).

Another way to see this, along with additional helpful information, is to use lsof (list open files).
It's available by default on macOS and can be installed on Linux using your package manager, if it's not already:

```
$ lsof -i -n
COMMAND     PID   USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
Python    67982 nathan    3u  IPv4 0xecf272      0t0  TCP *:65432 (LISTEN)
```

lsof gives you the COMMAND, PID (process id), and USER (user id) of open Internet sockets when used with the -i option. Above is the echo server process.

netstat and lsof have a lot of options available and differ depending on the OS you're running them on. Check the man page or documentation for both. They're definitely worth spending a little time with and getting to know. You'll be rewarded. On macOS and Linux, use man netstat and man lsof. For Windows, use netstat /?.

Here's a common error you'll see when a connection attempt is made to a port with no listening socket:

```
$ ./echo-client.py
Traceback (most recent call last):
  File "./echo-client.py", line 9, in <module>
    s.connect((HOST, PORT))
ConnectionRefusedError: [Errno 61] Connection refused
```

Either the specified port number is wrong or the server isn't running. Or maybe there's a firewall in the path that's blocking the connection, which can be easy to forget about. You may also see the error Connection timed out. Get a firewall rule added that allows the client to connect to the TCP port!

There's a list of common errors in the reference section.

Communication Breakdown

Let's take a closer look at how the client and server communicated with each other:

When using the loopback interface (IPv4 address 127.0.0.1 or IPv6 address ::1), data never leaves the host or touches the external network. In the diagram above, the loopback interface is contained inside the host. This represents the internal nature of the loopback interface and that connections and data that transit it are local to the host.
This is why you'll also hear the loopback interface and IP address 127.0.0.1 or ::1 referred to as "localhost." Applications use the loopback interface to communicate with other processes running on the host and for security and isolation from the external network. Since it's internal and accessible only from within the host, it's not exposed.

You can see this in action if you have an application server that uses its own private database. If it's not a database used by other servers, it's probably configured to listen for connections on the loopback interface only. If this is the case, other hosts on the network can't connect to it.

When you use an IP address other than 127.0.0.1 or ::1 in your applications, it's probably bound to an Ethernet interface that's connected to an external network. This is your gateway to other hosts outside of your "localhost" kingdom:

Be careful out there. It's a nasty, cruel world. Be sure to read the section Using Hostnames before venturing from the safe confines of "localhost." There's a security note that applies even if you're not using hostnames and using IP addresses only.

Handling Multiple Connections

The echo server definitely has its limitations. The biggest one is that it serves only one client and then exits. The echo client has this limitation too, but there's an additional problem. When the client makes the following call, it's possible that s.recv() will return only one byte, b'H' from b'Hello, world':

```python
data = s.recv(1024)
```

The bufsize argument of 1024 used above is the maximum amount of data to be received at once. It doesn't mean that recv() will return 1024 bytes.

send() also behaves this way. send() returns the number of bytes sent, which may be less than the size of the data passed in.
You're responsible for checking this and calling send() as many times as needed to send all of the data:

"Applications are responsible for checking that all data has been sent; if only some of the data was transmitted, the application needs to attempt delivery of the remaining data." (Source)

We avoided having to do this by using sendall():

"Unlike send(), this method continues to send data from bytes until either all data has been sent or an error occurs. None is returned on success." (Source)

We have two problems at this point:

- How do we handle multiple connections concurrently?
- We need to call send() and recv() until all data is sent or received.

What do we do? There are many approaches to concurrency. More recently, a popular approach is to use Asynchronous I/O. asyncio was introduced into the standard library in Python 3.4. The traditional choice is to use threads.

The trouble with concurrency is it's hard to get right. There are many subtleties to consider and guard against. All it takes is for one of these to manifest itself and your application may suddenly fail in not-so-subtle ways.

I don't say this to scare you away from learning and using concurrent programming. If your application needs to scale, it's a necessity if you want to use more than one processor or one core. However, for this tutorial, we'll use something that's more traditional than threads and easier to reason about. We're going to use the granddaddy of system calls: select().

select() allows you to check for I/O completion on more than one socket. So you can call select() to see which sockets have I/O ready for reading and/or writing. But this is Python, so there's more. We're going to use the selectors module in the standard library so the most efficient implementation is used, regardless of the operating system we happen to be running on:

"This module allows high-level and efficient I/O multiplexing, built upon the select module primitives.
Users are encouraged to use this module instead, unless they want precise control over the OS-level primitives used." (Source)

Even though, by using select(), we're not able to run concurrently, depending on your workload, this approach may still be plenty fast. It depends on what your application needs to do when it services a request and the number of clients it needs to support.

asyncio uses single-threaded cooperative multitasking and an event loop to manage tasks. With select(), we'll be writing our own version of an event loop, albeit more simply and synchronously. When using multiple threads, even though you have concurrency, we currently have to use the GIL with CPython and PyPy. This effectively limits the amount of work we can do in parallel anyway.

I say all of this to explain that using select() may be a perfectly fine choice. Don't feel like you have to use asyncio, threads, or the latest asynchronous library. Typically, in a network application, your application is I/O bound: it could be waiting on the local network, endpoints on the other side of the network, on a disk, and so forth.

If you're getting requests from clients that initiate CPU bound work, look at the concurrent.futures module. It contains the class ProcessPoolExecutor that uses a pool of processes to execute calls asynchronously. If you use multiple processes, the operating system is able to schedule your Python code to run in parallel on multiple processors or cores, without the GIL. For ideas and inspiration, see the PyCon talk John Reese - Thinking Outside the GIL with AsyncIO and Multiprocessing - PyCon 2018.

In the next section, we'll look at examples of a server and client that address these problems. They use select() to handle multiple connections simultaneously and call send() and recv() as many times as needed.
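Before moving on, here's what that send() loop responsibility looks like in practice, a minimal sketch of what sendall() does for you. The helper name send_all and the FakeSocket stand-in are my own inventions, used only to make the example runnable without a network:

```python
def send_all(sock, data):
    """Keep calling send() until every byte of data has been handed off."""
    total_sent = 0
    while total_sent < len(data):
        sent = sock.send(data[total_sent:])
        if sent == 0:
            raise ConnectionError('socket connection broken')
        total_sent += sent
    return total_sent


class FakeSocket:
    """Stand-in for a real socket that accepts at most 3 bytes per send()."""
    def __init__(self):
        self.sent = b''

    def send(self, data):
        chunk = data[:3]
        self.sent += chunk
        return len(chunk)


fake = FakeSocket()
n = send_all(fake, b'Hello, world')
print(n, fake.sent)  # all 12 bytes delivered, 3 at a time
```

A real socket would block or raise instead of truncating like FakeSocket, but the loop structure is the same: keep sending the unsent tail until nothing remains.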
Multi-Connection Client and Server

In the next two sections, we'll create a server and client that handles multiple connections using a selector object created from the selectors module.

Multi-Connection Server

First, let's look at the multi-connection server, multiconn-server.py. Here's the first part that sets up the listening socket:

```python
import selectors

sel = selectors.DefaultSelector()
# ...
lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.bind((host, port))
lsock.listen()
print('listening on', (host, port))
lsock.setblocking(False)
sel.register(lsock, selectors.EVENT_READ, data=None)
```

The biggest difference between this server and the echo server is the call to lsock.setblocking(False) to configure the socket in non-blocking mode. Calls made to this socket will no longer block. When it's used with sel.select(), as you'll see below, we can wait for events on one or more sockets and then read and write data when it's ready.

sel.register() registers the socket to be monitored with sel.select() for the events you're interested in. For the listening socket, we want read events: selectors.EVENT_READ.

data is used to store whatever arbitrary data you'd like along with the socket. It's returned when select() returns. We'll use data to keep track of what's been sent and received on the socket.

Next is the event loop:

```python
import selectors

sel = selectors.DefaultSelector()
# ...
while True:
    events = sel.select(timeout=None)
    for key, mask in events:
        if key.data is None:
            accept_wrapper(key.fileobj)
        else:
            service_connection(key, mask)
```

sel.select(timeout=None) blocks until there are sockets ready for I/O. It returns a list of (key, events) tuples, one for each socket. key is a SelectorKey namedtuple that contains a fileobj attribute. key.fileobj is the socket object, and mask is an event mask of the operations that are ready.

If key.data is None, then we know it's from the listening socket and we need to accept() the connection.
We'll call our own accept() wrapper function to get the new socket object and register it with the selector. We'll look at it in a moment.

If key.data is not None, then we know it's a client socket that's already been accepted, and we need to service it. service_connection() is then called and passed key and mask, which contains everything we need to operate on the socket.

Let's look at what our accept_wrapper() function does:

```python
def accept_wrapper(sock):
    conn, addr = sock.accept()  # Should be ready to read
    print('accepted connection from', addr)
    conn.setblocking(False)
    data = types.SimpleNamespace(addr=addr, inb=b'', outb=b'')
    events = selectors.EVENT_READ | selectors.EVENT_WRITE
    sel.register(conn, events, data=data)
```

Since the listening socket was registered for the event selectors.EVENT_READ, it should be ready to read. We call sock.accept() and then immediately call conn.setblocking(False) to put the socket in non-blocking mode.

Remember, this is the main objective in this version of the server since we don't want it to block. If it blocks, then the entire server is stalled until it returns. Which means other sockets are left waiting. This is the dreaded "hang" state that you don't want your server to be in.

Next, we create an object to hold the data we want included along with the socket using the class types.SimpleNamespace. Since we want to know when the client connection is ready for reading and writing, both of those events are set using the following:

```python
events = selectors.EVENT_READ | selectors.EVENT_WRITE
```

The events mask, socket, and data objects are then passed to sel.register().
Now let's look at service_connection() to see how a client connection is handled when it's ready:

```python
def service_connection(key, mask):
    sock = key.fileobj
    data = key.data
    if mask & selectors.EVENT_READ:
        recv_data = sock.recv(1024)  # Should be ready to read
        if recv_data:
            data.outb += recv_data
        else:
            print('closing connection to', data.addr)
            sel.unregister(sock)
            sock.close()
    if mask & selectors.EVENT_WRITE:
        if data.outb:
            print('echoing', repr(data.outb), 'to', data.addr)
            sent = sock.send(data.outb)  # Should be ready to write
            data.outb = data.outb[sent:]
```

This is the heart of the simple multi-connection server. key is the namedtuple returned from select() that contains the socket object (fileobj) and data object. mask contains the events that are ready.

If the socket is ready for reading, then mask & selectors.EVENT_READ is true, and sock.recv() is called. Any data that's read is appended to data.outb so it can be sent later.

Note the else: block if no data is received:

```python
if recv_data:
    data.outb += recv_data
else:
    print('closing connection to', data.addr)
    sel.unregister(sock)
    sock.close()
```

This means that the client has closed their socket, so the server should too. But don't forget to first call sel.unregister() so it's no longer monitored by select().

When the socket is ready for writing, which should always be the case for a healthy socket, any received data stored in data.outb is echoed to the client using sock.send(). The bytes sent are then removed from the send buffer:

```python
data.outb = data.outb[sent:]
```

Multi-Connection Client

Now let's look at the multi-connection client, multiconn-client.py.
It's very similar to the server, but instead of listening for connections, it starts by initiating connections via start_connections():

```python
messages = [b'Message 1 from client.', b'Message 2 from client.']


def start_connections(host, port, num_conns):
    server_addr = (host, port)
    for i in range(0, num_conns):
        connid = i + 1
        print('starting connection', connid, 'to', server_addr)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setblocking(False)
        sock.connect_ex(server_addr)
        events = selectors.EVENT_READ | selectors.EVENT_WRITE
        data = types.SimpleNamespace(connid=connid,
                                     msg_total=sum(len(m) for m in messages),
                                     recv_total=0,
                                     messages=list(messages),
                                     outb=b'')
        sel.register(sock, events, data=data)
```

num_conns is read from the command-line, which is the number of connections to create to the server. Just like the server, each socket is set to non-blocking mode.

connect_ex() is used instead of connect() since connect() would immediately raise a BlockingIOError exception. connect_ex() initially returns an error indicator, errno.EINPROGRESS, instead of raising an exception while the connection is in progress. Once the connection is completed, the socket is ready for reading and writing and is returned as such by select().

After the socket is set up, the data we want stored with the socket is created using the class types.SimpleNamespace. The messages the client will send to the server are copied using list(messages) since each connection will call socket.send() and modify the list. Everything needed to keep track of what the client needs to send, has sent and received, and the total number of bytes in the messages is stored in the object data.

Let's look at service_connection().
It's fundamentally the same as the server:

```python
def service_connection(key, mask):
    sock = key.fileobj
    data = key.data
    if mask & selectors.EVENT_READ:
        recv_data = sock.recv(1024)  # Should be ready to read
        if recv_data:
            print('received', repr(recv_data), 'from connection',
                  data.connid)
            data.recv_total += len(recv_data)
        if not recv_data or data.recv_total == data.msg_total:
            print('closing connection', data.connid)
            sel.unregister(sock)
            sock.close()
    if mask & selectors.EVENT_WRITE:
        if not data.outb and data.messages:
            data.outb = data.messages.pop(0)
        if data.outb:
            print('sending', repr(data.outb), 'to connection', data.connid)
            sent = sock.send(data.outb)  # Should be ready to write
            data.outb = data.outb[sent:]
```

There's one important difference. It keeps track of the number of bytes it's received from the server so it can close its side of the connection. When the server detects this, it closes its side of the connection too.

Note that by doing this, the server depends on the client being well-behaved: the server expects the client to close its side of the connection when it's done sending messages. If the client doesn't close, the server will leave the connection open. In a real application, you may want to guard against this in your server and prevent client connections from accumulating if they don't send a request after a certain amount of time.

Running the Multi-Connection Client and Server

Now let's run multiconn-server.py and multiconn-client.py. They both use command-line arguments. You can run them without arguments to see the options.
For the server, pass a host and port number:

```
$ ./multiconn-server.py
usage: ./multiconn-server.py <host> <port>
```

For the client, also pass the number of connections to create to the server, num_connections:

```
$ ./multiconn-client.py
usage: ./multiconn-client.py <host> <port> <num_connections>
```

Below is the server output when listening on the loopback interface on port 65432:

```
$ ./multiconn-server.py 127.0.0.1 65432
listening on ('127.0.0.1', 65432)
accepted connection from ('127.0.0.1', 61354)
accepted connection from ('127.0.0.1', 61355)
echoing b'Message 1 from client.Message 2 from client.' to ('127.0.0.1', 61354)
echoing b'Message 1 from client.Message 2 from client.' to ('127.0.0.1', 61355)
closing connection to ('127.0.0.1', 61354)
closing connection to ('127.0.0.1', 61355)
```

Below is the client output when it creates two connections to the server above:

```
$ ./multiconn-client.py 127.0.0.1 65432 2
starting connection 1 to ('127.0.0.1', 65432)
starting connection 2 to ('127.0.0.1', 65432)
sending b'Message 1 from client.' to connection 1
sending b'Message 2 from client.' to connection 1
sending b'Message 1 from client.' to connection 2
sending b'Message 2 from client.' to connection 2
received b'Message 1 from client.Message 2 from client.' from connection 1
closing connection 1
received b'Message 1 from client.Message 2 from client.' from connection 2
closing connection 2
```

Application Client and Server

The multi-connection client and server example is definitely an improvement compared with where we started. However, let's take one more step and address the shortcomings of the previous "multiconn" example in a final implementation: the application client and server.

We want a client and server that handles errors appropriately so other connections aren't affected. Obviously, our client or server shouldn't come crashing down in a ball of fury if an exception isn't caught. This is something we haven't discussed up until now.
I've intentionally left out error handling for brevity and clarity in the examples. Now that you're familiar with the basic API, non-blocking sockets, and select(), we can add some error handling and discuss the "elephant in the room" that I've kept hidden from you behind that large curtain over there. Yes, I'm talking about the custom class I mentioned way back in the introduction. I knew you wouldn't forget.

First, let's address the errors:

"All errors raise exceptions. The normal exceptions for invalid argument types and out-of-memory conditions can be raised; starting from Python 3.3, errors related to socket or address semantics raise OSError or one of its subclasses." (Source)

We need to catch OSError. Another thing I haven't mentioned in relation to errors is timeouts. You'll see them discussed in many places in the documentation. Timeouts happen and are a "normal" error. Hosts and routers are rebooted, switch ports go bad, cables go bad, cables get unplugged, you name it. You should be prepared for these and other errors and handle them in your code.

What about the "elephant in the room?" As hinted by the socket type socket.SOCK_STREAM, when using TCP, you're reading from a continuous stream of bytes. It's like reading from a file on disk, but instead you're reading bytes from the network. However, unlike reading a file, there's no f.seek(). In other words, you can't reposition the socket pointer, if there was one, and move randomly around the data reading whatever, whenever you'd like.

When bytes arrive at your socket, there are network buffers involved. Once you've read them, they need to be saved somewhere. Calling recv() again reads the next stream of bytes available from the socket.

What this means is that you'll be reading from the socket in chunks. You need to call recv() and save the data in a buffer until you've read enough bytes to have a complete message that makes sense to your application.
It's up to you to define and keep track of where the message boundaries are. As far as the TCP socket is concerned, it's just sending and receiving raw bytes to and from the network. It knows nothing about what those raw bytes mean.

This brings us to defining an application-layer protocol. What's an application-layer protocol? Put simply, your application will send and receive messages. These messages are your application's protocol. In other words, the length and format you choose for these messages define the semantics and behavior of your application. This is directly related to what I explained in the previous paragraph regarding reading bytes from the socket. When you're reading bytes with recv(), you need to keep up with how many bytes were read and figure out where the message boundaries are.

How is this done? One way is to always send fixed-length messages. If they're always the same size, then it's easy. When you've read that number of bytes into a buffer, then you know you have one complete message. However, using fixed-length messages is inefficient for small messages where you'd need to use padding to fill them out. Also, you're still left with the problem of what to do about data that doesn't fit into one message.

In this tutorial, we'll take a generic approach. An approach that's used by many protocols, including HTTP. We'll prefix messages with a header that includes the content length as well as any other fields we need. By doing this, we'll only need to keep up with the header. Once we've read the header, we can process it to determine the length of the message's content and then read that number of bytes to consume it.

We'll implement this by creating a custom class that can send and receive messages that contain text or binary data. You can improve and extend it for your own applications. The most important thing is that you'll be able to see an example of how this is done.
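To make the length-prefix idea concrete before diving into the full class, here's a toy sketch of packing and unpacking a length-prefixed message. The function names and exact header layout are my own simplification for illustration, not the tutorial's actual Message protocol:

```python
import json
import struct


def pack_message(content, content_type='text/json'):
    """Prefix content with a 2-byte length and a UTF-8 JSON header."""
    header = {'content-type': content_type, 'content-length': len(content)}
    header_bytes = json.dumps(header).encode('utf-8')
    # '>H': unsigned 2-byte integer in network (big-endian) byte order
    return struct.pack('>H', len(header_bytes)) + header_bytes + content


def unpack_message(raw):
    """Reverse pack_message(): read the prefix, then header, then content."""
    header_len = struct.unpack('>H', raw[:2])[0]
    header = json.loads(raw[2:2 + header_len].decode('utf-8'))
    start = 2 + header_len
    content = raw[start:start + header['content-length']]
    return header, content


wire = pack_message(b'Hello, world')
header, content = unpack_message(wire)
print(header, content)
```

The receiver never has to guess where a message ends: the fixed 2-byte prefix tells it how many header bytes to read, and the header's content-length tells it how many content bytes follow.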
I need to mention something regarding sockets and bytes that may affect you. As we talked about earlier, when sending and receiving data via sockets, you're sending and receiving raw bytes.

If you receive data and want to use it in a context where it's interpreted as multiple bytes, for example a 4-byte integer, you'll need to take into account that it could be in a format that's not native to your machine's CPU. The client or server on the other end could have a CPU that uses a different byte order than your own. If this is the case, you'll need to convert it to your host's native byte order before using it. This byte order is referred to as a CPU's endianness. See Byte Endianness in the reference section for details.

We'll avoid this issue by taking advantage of Unicode for our message header and using the encoding UTF-8. Since UTF-8 uses an 8-bit encoding, there are no byte ordering issues. You can find an explanation in Python's Encodings and Unicode documentation. Note that this applies to the text header only. We'll use an explicit type and encoding defined in the header for the content that's being sent, the message payload. This will allow us to transfer any data we'd like (text or binary), in any format.

You can easily determine the byte order of your machine by using sys.byteorder. For example, on my Intel laptop, this happens:

```
$ python3 -c 'import sys; print(repr(sys.byteorder))'
'little'
```

If I run this in a virtual machine that emulates a big-endian CPU (PowerPC), then this happens:

```
$ python3 -c 'import sys; print(repr(sys.byteorder))'
'big'
```

In this example application, our application-layer protocol defines the header as Unicode text with a UTF-8 encoding. For the actual content in the message, the message payload, you'll still have to swap the byte order manually if needed. This will depend on your application and whether or not it needs to process multi-byte binary data from a machine with a different endianness.
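As a small illustration of that manual conversion (the byte values here are arbitrary examples of mine), a 4-byte integer received off the wire in network byte order can be decoded the same way on any host, regardless of its native endianness:

```python
import struct
import sys

raw = b'\x00\x00\x01\x02'  # 4 bytes as received from the socket, big-endian

# struct: '>' forces network (big-endian) interpretation on any host
value = struct.unpack('>I', raw)[0]

# int.from_bytes does the same conversion without struct
assert value == int.from_bytes(raw, 'big') == 258

print(sys.byteorder, value)
```

Reading the same bytes with the host's native order would give a different number on little-endian machines, which is exactly the bug that explicit byte-order handling prevents.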
You can help your client or server implement binary support by adding additional headers and using them to pass parameters, similar to HTTP. Don't worry if this doesn't make sense yet. In the next section, you'll see how all of this works and fits together.

Application Protocol Header

Let's fully define the protocol header. The protocol header is:

- Variable-length text
- Unicode with the encoding UTF-8
- A Python dictionary serialized using JSON

The required headers, or sub-headers, in the protocol header's dictionary are as follows:

These headers inform the receiver about the content in the payload of the message. This allows you to send arbitrary data while providing enough information so the content can be decoded and interpreted correctly by the receiver. Since the headers are in a dictionary, it's easy to add additional headers by inserting key/value pairs as needed.

Sending an Application Message

There's still a bit of a problem. We have a variable-length header, which is nice and flexible, but how do you know the length of the header when reading it with recv()?

When we previously talked about using recv() and message boundaries, I mentioned that fixed-length headers can be inefficient. That's true, but we're going to use a small, 2-byte, fixed-length header to prefix the JSON header that contains its length.

You can think of this as a hybrid approach to sending messages. In effect, we're bootstrapping the message receive process by sending the length of the header first. This makes it easy for our receiver to deconstruct the message.

To give you a better idea of the message format, let's look at a message in its entirety:

A message starts with a fixed-length header of 2 bytes that's an integer in network byte order. This is the length of the next header, the variable-length JSON header. Once we've read 2 bytes with recv(), then we know we can process the 2 bytes as an integer and then read that number of bytes before decoding the UTF-8 JSON header.
The JSON header contains a dictionary of additional headers. One of those is content-length, which is the number of bytes of the message's content (not including the JSON header). Once we've called recv() and read content-length bytes, we've reached a message boundary and read an entire message.

Application Message Class

Finally, the payoff! Let's look at the Message class and see how it's used with select() when read and write events happen on the socket.

For this example application, I had to come up with an idea for what types of messages the client and server would use. We're far beyond toy echo clients and servers at this point. To keep things simple and still demonstrate how things would work in a real application, I created an application protocol that implements a basic search feature. The client sends a search request and the server does a lookup for a match. If the request sent by the client isn't recognized as a search, the server assumes it's a binary request and returns a binary response.

After reading the following sections, running the examples, and experimenting with the code, you'll see how things work. You can then use the Message class as a starting point and modify it for your own use.

We're really not that far off from the "multiconn" client and server example. The event loop code stays the same in app-client.py and app-server.py. What I've done is move the message code into a class named Message and added methods to support reading, writing, and processing of the headers and content.

This is a great example for using a class. As we discussed before and you'll see below, working with sockets involves keeping state. By using a class, we keep all of the state, data, and code bundled together in an organized unit. An instance of the class is created for each socket in the client and server when a connection is started or accepted.

The class is mostly the same for both the client and the server for the wrapper and utility methods.
They start with an underscore, like Message._json_encode(). These methods simplify working with the class. They help other methods by allowing them to stay shorter and support the DRY principle.

The server's Message class works in essentially the same way as the client's and vice-versa. The difference being that the client initiates the connection and sends a request message, followed by processing the server's response message. Conversely, the server waits for a connection, processes the client's request message, and then sends a response message. It looks like this:

Here's the file and code layout:

Message Entry Point

I'd like to discuss how the Message class works by first mentioning an aspect of its design that wasn't immediately obvious to me. Only after refactoring it at least five times did I arrive at what it is currently. Why? Managing state.

After a Message object is created, it's associated with a socket that's monitored for events using selector.register():

```python
message = libserver.Message(sel, conn, addr)
sel.register(conn, selectors.EVENT_READ, data=message)
```

Note: Some of the code examples in this section are from the server's main script and Message class, but this section and discussion applies equally to the client as well. I'll show and explain the client's version when it differs.

When events are ready on the socket, they're returned by selector.select(). We can then get a reference back to the message object using the data attribute on the key object and call a method on it:

```python
while True:
    events = sel.select(timeout=None)
    for key, mask in events:
        # ...
        message = key.data
        message.process_events(mask)
```

Looking at the event loop above, you'll see that sel.select() is in the driver's seat. It's blocking, waiting at the top of the loop for events. It's responsible for waking up when read and write events are ready to be processed on the socket. Which means, indirectly, it's also responsible for calling the method process_events().
This is what I mean when I say the method process_events() is the entry point. Let's see what the process_events() method does:

```python
def process_events(self, mask):
    if mask & selectors.EVENT_READ:
        self.read()
    if mask & selectors.EVENT_WRITE:
        self.write()
```

That's good: process_events() is simple. It can only do two things: call read() and write().

This brings us back to managing state. After a few refactorings, I decided that if another method depended on state variables having a certain value, then they would only be called from read() and write(). This keeps the logic as simple as possible as events come in on the socket for processing.

This may seem obvious, but the first few iterations of the class were a mix of some methods that checked the current state and, depending on their value, called other methods to process data outside read() or write(). In the end, this proved too complex to manage and keep up with.

You should definitely modify the class to suit your own needs so it works best for you, but I'd recommend that you keep the state checks and the calls to methods that depend on that state to the read() and write() methods if possible.

Let's look at read(). This is the server's version, but the client's is the same. It just uses a different method name, process_response() instead of process_request():

```python
def read(self):
    self._read()

    if self._jsonheader_len is None:
        self.process_protoheader()

    if self._jsonheader_len is not None:
        if self.jsonheader is None:
            self.process_jsonheader()

    if self.jsonheader:
        if self.request is None:
            self.process_request()
```

The _read() method is called first. It calls socket.recv() to read data from the socket and store it in a receive buffer.

Remember that when socket.recv() is called, all of the data that makes up a complete message may not have arrived yet. socket.recv() may need to be called again. This is why there are state checks for each part of the message before calling the appropriate method to process it.
Before a method processes its part of the message, it first checks to make sure enough bytes have been read into the receive buffer. If there are, it processes its respective bytes, removes them from the buffer, and writes its output to a variable that’s used by the next processing stage. Since there are three components to a message, there are three state checks and process method calls: process_protoheader(), process_jsonheader(), and process_request().

Next, let’s look at write(). This is the server’s version:

def write(self):
    if self.request:
        if not self.response_created:
            self.create_response()
    self._write()

write() checks first for a request. If one exists and a response hasn’t been created, create_response() is called. create_response() sets the state variable response_created and writes the response to the send buffer.

The _write() method calls socket.send() if there’s data in the send buffer.

Remember that when socket.send() is called, all of the data in the send buffer may not have been queued for transmission. The network buffers for the socket may be full, and socket.send() may need to be called again. This is why there are state checks. create_response() should only be called once, but it’s expected that _write() will need to be called multiple times.

The client version of write() is similar:

def write(self):
    if not self._request_queued:
        self.queue_request()
    self._write()
    if self._request_queued:
        if not self._send_buffer:
            # Set selector to listen for read events, we're done writing.
            self._set_selector_events_mask('r')

Since the client initiates a connection to the server and sends a request first, the state variable _request_queued is checked. If a request hasn’t been queued, it calls queue_request(). queue_request() creates the request and writes it to the send buffer. It also sets the state variable _request_queued so it’s only called once.

Just like the server, _write() calls socket.send() if there’s data in the send buffer.
The notable difference in the client’s version of write() is the last check to see if the request has been queued. This will be explained more in the section Client Main Script, but the reason for this is to tell selector.select() to stop monitoring the socket for write events. If the request has been queued and the send buffer is empty, then we’re done writing and we’re only interested in read events. There’s no reason to be notified that the socket is writable. I’ll wrap up this section by leaving you with one thought. The main purpose of this section was to explain that selector.select() is calling into the Message class via the method process_events() and to describe how state is managed. This is important because process_events() will be called many times over the life of the connection. Therefore, make sure that any methods that should only be called once are either checking a state variable themselves, or the state variable set by the method is checked by the caller. Server Main Script In the server’s main script app-server.py, arguments are read from the command line that specify the interface and port to listen on: $ ./app-server.py usage: ./app-server.py <host> <port> For example, to listen on the loopback interface on port 65432, enter: $ ./app-server.py 127.0.0.1 65432 listening on ('127.0.0.1', 65432) Use an empty string for <host> to listen on all interfaces. After creating the socket, a call is made to socket.setsockopt() with the option socket.SO_REUSEADDR: # Avoid bind() exception: OSError: [Errno 48] Address already in use lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) Setting this socket option avoids the error Address already in use. You’ll see this when starting the server and a previously used TCP socket on the same port has connections in the TIME_WAIT state. For example, if the server actively closed a connection, it will remain in the TIME_WAIT state for two minutes or more, depending on the operating system. 
If you try to start the server again before the TIME_WAIT state expires, you’ll get an OSError exception of Address already in use. This is a safeguard to make sure that any delayed packets in the network aren’t delivered to the wrong application.

The event loop catches any errors so the server can stay up and continue to run:

while True:
    events = sel.select(timeout=None)
    for key, mask in events:
        if key.data is None:
            accept_wrapper(key.fileobj)
        else:
            message = key.data
            try:
                message.process_events(mask)
            except Exception:
                print('main: error: exception for',
                      f'{message.addr}:\n{traceback.format_exc()}')
                message.close()

When a client connection is accepted, a Message object is created:

def accept_wrapper(sock):
    conn, addr = sock.accept()  # Should be ready to read
    print('accepted connection from', addr)
    conn.setblocking(False)
    message = libserver.Message(sel, conn, addr)
    sel.register(conn, selectors.EVENT_READ, data=message)

The Message object is associated with the socket in the call to sel.register() and is initially set to be monitored for read events only. Once the request has been read, we’ll modify it to listen for write events only.

An advantage of taking this approach in the server is that in most cases, when a socket is healthy and there are no network issues, it will always be writable. If we told sel.register() to also monitor EVENT_WRITE, the event loop would immediately wake up and notify us that this is the case. However, at this point, there’s no reason to wake up and call send() on the socket. There’s no response to send since a request hasn’t been processed yet. This would consume and waste valuable CPU cycles.

Server Message Class

In the section Message Entry Point, we looked at how the Message object was called into action when socket events were ready via process_events(). Now let’s look at what happens as data is read on the socket and a component, or piece, of the message is ready to be processed by the server.
The server’s message class is in libserver.py. You can find the source code on GitHub. The methods appear in the class in the order in which processing takes place for a message. When the server has read at least 2 bytes, the fixed-length header can be processed: def process_protoheader(self): hdrlen = 2 if len(self._recv_buffer) >= hdrlen: self._jsonheader_len = struct.unpack('>H', self._recv_buffer[:hdrlen])[0] self._recv_buffer = self._recv_buffer[hdrlen:] The fixed-length header is a 2-byte integer in network (big-endian) byte order that contains the length of the JSON header. struct.unpack() is used to read the value, decode it, and store it in self._jsonheader_len. After processing the piece of the message it’s responsible for, process_protoheader() removes it from the receive buffer. Just like the fixed-length header, when there’s enough data in the receive buffer to contain the JSON header, it can be processed as well: def process_jsonheader(self): hdrlen = self._jsonheader_len if len(self._recv_buffer) >= hdrlen: self.jsonheader = self._json_decode(self._recv_buffer[:hdrlen], 'utf-8') self._recv_buffer = self._recv_buffer[hdrlen:] for reqhdr in ('byteorder', 'content-length', 'content-type', 'content-encoding'): if reqhdr not in self.jsonheader: raise ValueError(f'Missing required header "{reqhdr}".') The method self._json_decode() is called to decode and deserialize the JSON header into a dictionary. Since the JSON header is defined as Unicode with a UTF-8 encoding, utf-8 is hardcoded in the call. The result is saved to self.jsonheader. After processing the piece of the message it’s responsible for, process_jsonheader() removes it from the receive buffer. Next is the actual content, or payload, of the message. It’s described by the JSON header in self.jsonheader. 
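To see the fixed-length header in isolation, here's a quick round trip of the 2-byte value with struct (the length 72 is just an arbitrary example):

```python
import struct

# Pack a JSON header length into the 2-byte fixed-length header using
# network (big-endian) byte order, then unpack it on the receiving side.
jsonheader_len = 72
fixed_header = struct.pack('>H', jsonheader_len)
print(fixed_header)                          # b'\x00H'
print(struct.unpack('>H', fixed_header)[0])  # 72
```

The format '>H' means an unsigned 2-byte integer in big-endian order, which caps the JSON header at 65,535 bytes.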
When content-length bytes are available in the receive buffer, the request can be processed: def process_request(self): content_len = self.jsonheader['content-length'] if not len(self._recv_buffer) >= content_len: return data = self._recv_buffer[:content_len] self._recv_buffer = self._recv_buffer[content_len:] if self.jsonheader['content-type'] == 'text/json': encoding = self.jsonheader['content-encoding'] self.request = self._json_decode(data, encoding) print('received request', repr(self.request), 'from', self.addr) else: # Binary or unknown content-type self.request = data print(f'received {self.jsonheader["content-type"]} request from', self.addr) # Set selector to listen for write events, we're done reading. self._set_selector_events_mask('w') After saving the message content to the data variable, process_request() removes it from the receive buffer. Then, if the content type is JSON, it decodes and deserializes it. If it’s not, for this example application, it assumes it’s a binary request and simply prints the content type. The last thing process_request() does is modify the selector to monitor write events only. In the server’s main script, app-server.py, the socket is initially set to monitor read events only. Now that the request has been fully processed, we’re no longer interested in reading. A response can now be created and written to the socket. When the socket is writable, create_response() is called from write(): def create_response(self): if self.jsonheader['content-type'] == 'text/json': response = self._create_response_json_content() else: # Binary or unknown content-type response = self._create_response_binary_content() message = self._create_message(**response) self.response_created = True self._send_buffer += message A response is created by calling other methods, depending on the content type. In this example application, a simple dictionary lookup is done for JSON requests when action == 'search'. 
You can define other methods for your own applications that get called here. After creating the response message, the state variable self.response_created is set so write() doesn’t call create_response() again. Finally, the response is appended to the send buffer. This is seen by and sent via _write(). One tricky bit to figure out was how to close the connection after the response is written. I put the call to close() in the method _write(): def _write(self): if self._send_buffer: print('sending', repr(self._send_buffer), 'to', self.addr) try: # Should be ready to write sent = self.sock.send(self._send_buffer) except BlockingIOError: # Resource temporarily unavailable (errno EWOULDBLOCK) pass else: self._send_buffer = self._send_buffer[sent:] # Close when the buffer is drained. The response has been sent. if sent and not self._send_buffer: self.close() Although it’s somewhat “hidden,” I think it’s an acceptable trade-off given that the Message class only handles one message per connection. After the response is written, there’s nothing left for the server to do. It’s completed its work. 
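Putting the pieces together, the message that create_response() hands to the send buffer is assembled by _create_message(). The version below is a simplified sketch modeled on the tutorial's approach, so treat the details as illustrative rather than a verbatim copy:

```python
import json
import struct
import sys

def _create_message(*, content_bytes, content_type, content_encoding):
    # Build the variable-length JSON header describing the payload.
    jsonheader = {
        'byteorder': sys.byteorder,
        'content-type': content_type,
        'content-encoding': content_encoding,
        'content-length': len(content_bytes),
    }
    jsonheader_bytes = json.dumps(jsonheader).encode('utf-8')
    # Prefix it with the 2-byte fixed-length header, then the payload.
    message_hdr = struct.pack('>H', len(jsonheader_bytes))
    return message_hdr + jsonheader_bytes + content_bytes

response = _create_message(
    content_bytes=b'{"result": "ok"}',
    content_type='text/json',
    content_encoding='utf-8',
)
```

The result is exactly the three-part wire format the read side takes apart: fixed-length header, JSON header, content.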
Client Main Script In the client’s main script app-client.py, arguments are read from the command line and used to create requests and start connections to the server: $ ./app-client.py usage: ./app-client.py <host> <port> <action> <value> Here’s an example: $ ./app-client.py 127.0.0.1 65432 search needle After creating a dictionary representing the request from the command-line arguments, the host, port, and request dictionary are passed to start_connection(): def start_connection(host, port, request): addr = (host, port) print('starting connection to', addr) sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.setblocking(False) sock.connect_ex(addr) events = selectors.EVENT_READ | selectors.EVENT_WRITE message = libclient.Message(sel, sock, addr, request) sel.register(sock, events, data=message) A socket is created for the server connection as well as a Message object using the request dictionary. Like the server, the Message object is associated with the socket in the call to sel.register(). However, for the client, the socket is initially set to be monitored for both read and write events. Once the request has been written, we’ll modify it to listen for read events only. This approach gives us the same advantage as the server: not wasting CPU cycles. After the request has been sent, we’re no longer interested in write events, so there’s no reason to wake up and process them. Client Message Class In the section Message Entry Point, we looked at how the message object was called into action when socket events were ready via process_events(). Now let’s look at what happens after data is read and written on the socket and a message is ready to be processed by the client. The client’s message class is in libclient.py. You can find the source code on GitHub. The methods appear in the class in the order in which processing takes place for a message. 
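Before walking through the class methods, it helps to see the shape of the request dictionary that gets handed to start_connection(). Here's a sketch modeled on the tutorial's app-client.py (the exact helper code may differ):

```python
def create_request(action, value):
    # JSON requests carry a dictionary; any other action is treated as
    # a custom binary request, mirroring the server's content-type check.
    if action == 'search':
        return dict(
            type='text/json',
            encoding='utf-8',
            content=dict(action=action, value=value),
        )
    else:
        return dict(
            type='binary/custom-client-binary-type',
            encoding='binary',
            content=bytes(action + value, encoding='utf-8'),
        )

request = create_request('search', 'morpheus')
```

The type and encoding keys end up in the JSON header, while content becomes the message payload.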
The first task for the client is to queue the request: def queue_request(self): content = self.request['content'] content_type = self.request['type'] content_encoding = self.request['encoding'] if content_type == 'text/json': req = { 'content_bytes': self._json_encode(content, content_encoding), 'content_type': content_type, 'content_encoding': content_encoding } else: req = { 'content_bytes': content, 'content_type': content_type, 'content_encoding': content_encoding } message = self._create_message(**req) self._send_buffer += message self._request_queued = True The dictionaries used to create the request, depending on what was passed on the command line, are in the client’s main script, app-client.py. The request dictionary is passed as an argument to the class when a Message object is created. The request message is created and appended to the send buffer, which is then seen by and sent via _write(). The state variable self._request_queued is set so queue_request() isn’t called again. After the request has been sent, the client waits for a response from the server. The methods for reading and processing a message in the client are the same as the server. As response data is read from the socket, the process header methods are called: process_protoheader() and process_jsonheader(). The difference is in the naming of the final process methods and the fact that they’re processing a response, not creating one: process_response(), _process_response_json_content(), and _process_response_binary_content(). Last, but certainly not least, is the final call for process_response(): def process_response(self): # ... # Close when response has been processed self.close() Message Class Wrapup I’ll conclude the Message class discussion by mentioning a couple of things that are important to notice with a few of the supporting methods. 
Any exceptions raised by the class are caught by the main script in its except clause: try: message.process_events(mask) except Exception: print('main: error: exception for', f'{message.addr}:\n{traceback.format_exc()}') message.close() Note the last line: message.close(). This is a really important line, for more than one reason! Not only does it make sure that the socket is closed, but message.close() also removes the socket from being monitored by select(). This greatly simplifies the code in the class and reduces complexity. If there’s an exception or we explicitly raise one ourselves, we know close() will take care of the cleanup. The methods Message._read() and Message._write() also contain something interesting: def _read(self): try: # Should be ready to read data = self.sock.recv(4096) except BlockingIOError: # Resource temporarily unavailable (errno EWOULDBLOCK) pass else: if data: self._recv_buffer += data else: raise RuntimeError('Peer closed.') Note the except line: except BlockingIOError:. _write() has one too. These lines are important because they catch a temporary error and skip over it using pass. The temporary error is when the socket would block, for example if it’s waiting on the network or the other end of the connection (its peer). By catching and skipping over the exception with pass, select() will eventually call us again, and we’ll get another chance to read or write the data. Running the Application Client and Server After all of this hard work, let’s have some fun and run some searches! In these examples, I’ll run the server so it listens on all interfaces by passing an empty string for the host argument. This will allow me to run the client and connect from a virtual machine that’s on another network. It emulates a big-endian PowerPC machine. First, let’s start the server: $ ./app-server.py '' 65432 listening on ('', 65432) Now let’s run the client and enter a search. 
Let’s see if we can find him:

$ ./app-client.py 10.0.1.1 65432 search morpheus
starting connection to ('10.0.1.1', 65432)
sending b'\x00d{"byteorder": "big", "content-type": "text/json", "content-encoding": "utf-8", "content-length": 41}{"action": "search", "value": "morpheus"}' to ('10.0.1.1', 65432)
received response {'result': 'Follow the white rabbit. 🐰'} from ('10.0.1.1', 65432)
got result: Follow the white rabbit. 🐰
closing connection to ('10.0.1.1', 65432)

My terminal is running a shell that’s using a text encoding of Unicode (UTF-8), so the output above prints nicely with emojis.

Let’s see if we can find the puppies:

$ ./app-client.py 10.0.1.1 65432 search 🐶
starting connection to ('10.0.1.1', 65432)
sending b'\x00d{"byteorder": "big", "content-type": "text/json", "content-encoding": "utf-8", "content-length": 37}{"action": "search", "value": "\xf0\x9f\x90\xb6"}' to ('10.0.1.1', 65432)
received response {'result': '🐾 Playing ball! 🏐'} from ('10.0.1.1', 65432)
got result: 🐾 Playing ball! 🏐
closing connection to ('10.0.1.1', 65432)

Notice the byte string sent over the network for the request in the sending line. It’s easier to see if you look for the bytes printed in hex that represent the puppy emoji: \xf0\x9f\x90\xb6. I was able to enter the emoji for the search since my terminal is using Unicode with the encoding UTF-8. This demonstrates that we’re sending raw bytes over the network and they need to be decoded by the receiver to be interpreted correctly. This is why we went to all of the trouble to create a header that contains the content type and encoding.

Here’s the server output from both client connections above:
\xf0\x9f\x90\xb0"}' to ('10.0.2.2', 55340) closing connection to ('10.0.2.2', 55340) accepted connection from ('10.0.2.2', 55338) received request {'action': 'search', 'value': '🐶'} from ('10.0.2.2', 55338) sending b'\x00g{"byteorder": "little", "content-type": "text/json", "content-encoding": "utf-8", "content-length": 37}{"result": "\xf0\x9f\x90\xbe Playing ball! \xf0\x9f\x8f\x90"}' to ('10.0.2.2', 55338) closing connection to ('10.0.2.2', 55338) Look at the sending line to see the bytes that were written to the client’s socket. This is the server’s response message. You can also test sending binary requests to the server if the action argument is anything other than search: $ ./app-client.py 10.0.1.1 65432 binary 😃 starting connection to ('10.0.1.1', 65432) sending b'\x00|{"byteorder": "big", "content-type": "binary/custom-client-binary-type", "content-encoding": "binary", "content-length": 10}binary\xf0\x9f\x98\x83' to ('10.0.1.1', 65432) received binary/custom-server-binary-type response from ('10.0.1.1', 65432) got response: b'First 10 bytes of request: binary\xf0\x9f\x98\x83' closing connection to ('10.0.1.1', 65432) Since the request’s content-type is not text/json, the server treats it as a custom binary type and doesn’t perform JSON decoding. It simply prints the content-type and returns the first 10 bytes to the client: $ ./app-server.py '' 65432 listening on ('', 65432) accepted connection from ('10.0.2.2', 55320) received binary/custom-client-binary-type request from ('10.0.2.2', 55320) sending b'\x00\x7f{"byteorder": "little", "content-type": "binary/custom-server-binary-type", "content-encoding": "binary", "content-length": 37}First 10 bytes of request: binary\xf0\x9f\x98\x83' to ('10.0.2.2', 55320) closing connection to ('10.0.2.2', 55320) Troubleshooting Inevitably, something won’t work, and you’ll be wondering what to do. Don’t worry, it happens to all of us. 
Hopefully, with the help of this tutorial, your debugger, and your favorite search engine, you’ll be able to get going again with the source code.

If not, your first stop should be Python’s socket module documentation. Make sure you read all of the documentation for each function or method you’re calling. Also, read through the Reference section for ideas. In particular, check the Errors section.

Sometimes, it’s not all about the source code. The source code might be correct, and the problem lies with the other host, the client, or the server. Or it could be the network, for example, a router, firewall, or some other networking device that’s playing man-in-the-middle. For these types of issues, additional tools are essential. Below are a few tools and utilities that might help or at least provide some clues.

ping

ping will check if a host is alive and connected to the network by sending an ICMP echo request. It communicates directly with the operating system’s TCP/IP protocol stack, so it works independently from any application running on the host.

Below is an example of running ping on macOS:

$ ping -c 3 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.058 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.164 ms

--- 127.0.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.058/0.129/0.165/0.050 ms

Note the statistics at the end of the output. This can be helpful when you’re trying to discover intermittent connectivity problems. For example, is there any packet loss? How much latency is there (see the round-trip times)?

If there’s a firewall between you and the other host, a ping’s echo request may not be allowed. Some firewall administrators implement policies that enforce this. The idea is that they don’t want their hosts to be discoverable.
If this is the case and you have firewall rules added to allow the hosts to communicate, make sure that the rules also allow ICMP to pass between them. ICMP is the protocol used by ping, but it’s also the protocol TCP and other lower-level protocols use to communicate error messages. If you’re experiencing strange behavior or slow connections, this could be the reason.

ICMP messages are identified by type and code. To give you an idea of the important information they carry, here are a few: echo request (type 8) and echo reply (type 0), used by ping; destination unreachable (type 3), including code 4, fragmentation needed; and time exceeded (type 11).

See the article Path MTU Discovery for information regarding fragmentation and ICMP messages. This is an example of something that can cause the strange behavior I mentioned previously.

netstat

In the section Viewing Socket State, we looked at how netstat can be used to display information about sockets and their current state. This utility is available on macOS, Linux, and Windows.

I didn’t mention the columns Recv-Q and Send-Q in the example output. These columns show you the number of bytes that are held in network buffers that are queued for transmission or receipt, but for some reason haven’t been read or written by the remote or local application. In other words, the bytes are waiting in network buffers in the operating system’s queues.

One reason could be that the application is CPU bound or is otherwise unable to call socket.recv() or socket.send() and process the bytes. Or there could be network issues affecting communications, like congestion or failing network hardware or cabling.

To demonstrate this and see how much data I could send before seeing an error, I wrote a test client that connects to a test server and repeatedly calls socket.send(). The test server never calls socket.recv(). It just accepts the connection. This causes the network buffers on the server to fill, which eventually raises an error on the client.

First, I started the server:

$ ./app-server-test.py 127.0.0.1 65432
listening on ('127.0.0.1', 65432)

Then I ran the client.
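The test client itself isn't shown in the tutorial, but its core loop can be sketched like this (a hypothetical reconstruction; the function name and details are mine):

```python
import socket

def flood(host, port):
    # Repeatedly send on a non-blocking socket until the kernel's send
    # and receive buffers fill and send() raises BlockingIOError
    # (errno EWOULDBLOCK), because the peer never calls recv().
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    sock.setblocking(False)
    total = 0
    try:
        while True:
            total += sock.send(b'x' * 4096)
    except BlockingIOError as e:
        print(f'error: socket.send() blocking io exception: {e!r}')
    finally:
        sock.close()
    return total
```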
Let’s see what the error is: $ ./app-client-test.py 127.0.0.1 65432 binary test error: socket.send() blocking io exception for ('127.0.0.1', 65432): BlockingIOError(35, 'Resource temporarily unavailable') Here’s the netstat output while the client and server were still running, with the client printing out the error message above multiple times: $ netstat -an | grep 65432 Proto Recv-Q Send-Q Local Address Foreign Address (state) tcp4 408300 0 127.0.0.1.65432 127.0.0.1.53225 ESTABLISHED tcp4 0 269868 127.0.0.1.53225 127.0.0.1.65432 ESTABLISHED tcp4 0 0 127.0.0.1.65432 *.* LISTEN The first entry is the server ( Local Address has port 65432): Proto Recv-Q Send-Q Local Address Foreign Address (state) tcp4 408300 0 127.0.0.1.65432 127.0.0.1.53225 ESTABLISHED Notice the Recv-Q: 408300. The second entry is the client ( Foreign Address has port 65432): Proto Recv-Q Send-Q Local Address Foreign Address (state) tcp4 0 269868 127.0.0.1.53225 127.0.0.1.65432 ESTABLISHED Notice the Send-Q: 269868. The client sure was trying to write bytes, but the server wasn’t reading them. This caused the server’s network buffer queue to fill on the receive side and the client’s network buffer queue to fill on the send side. Windows If you work with Windows, there’s a suite of utilities that you should definitely check out if you haven’t already: Windows Sysinternals. One of them is TCPView.exe. TCPView is a graphical netstat for Windows. In addition to addresses, port numbers, and socket state, it will show you running totals for the number of packets and bytes, sent and received. Like the Unix utility lsof, you also get the process name and ID. Check the menus for other display options. Wireshark Sometimes you need to see what’s happening on the wire. Forget about what the application log says or what the value is that’s being returned from a library call. You want to see what’s actually being sent or received on the network. 
Just like debuggers, when you need to see it, there’s no substitute. Wireshark is a network protocol analyzer and traffic capture application that runs on macOS, Linux, and Windows, among others. There’s a GUI version named wireshark, and also a terminal, text-based version named tshark. Running a traffic capture is a great way to watch how an application behaves on the network and gather evidence about what it sends and receives, and how often and how much. You’ll also be able to see when a client or server closes or aborts a connection or stops responding. This information can be extremely helpful when you’re troubleshooting. There are many good tutorials and other resources on the web that will walk you through the basics of using Wireshark and TShark. Here’s an example of a traffic capture using Wireshark on the loopback interface: Here’s the same example shown above using tshark: $ tshark -i lo0 'tcp port 65432' Capturing on 'Loopback' 1 0.000000 127.0.0.1 → 127.0.0.1 TCP 68 53942 → 65432 [SYN] Seq=0 Win=65535 Len=0 MSS=16344 WS=32 TSval=940533635 TSecr=0 SACK_PERM=1 2 0.000057 127.0.0.1 → 127.0.0.1 TCP 68 65432 → 53942 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=16344 WS=32 TSval=940533635 TSecr=940533635 SACK_PERM=1 3 0.000068 127.0.0.1 → 127.0.0.1 TCP 56 53942 → 65432 [ACK] Seq=1 Ack=1 Win=408288 Len=0 TSval=940533635 TSecr=940533635 4 0.000075 127.0.0.1 → 127.0.0.1 TCP 56 [TCP Window Update] 65432 → 53942 [ACK] Seq=1 Ack=1 Win=408288 Len=0 TSval=940533635 TSecr=940533635 5 0.000216 127.0.0.1 → 127.0.0.1 TCP 202 53942 → 65432 [PSH, ACK] Seq=1 Ack=1 Win=408288 Len=146 TSval=940533635 TSecr=940533635 6 0.000234 127.0.0.1 → 127.0.0.1 TCP 56 65432 → 53942 [ACK] Seq=1 Ack=147 Win=408128 Len=0 TSval=940533635 TSecr=940533635 7 0.000627 127.0.0.1 → 127.0.0.1 TCP 204 65432 → 53942 [PSH, ACK] Seq=1 Ack=147 Win=408128 Len=148 TSval=940533635 TSecr=940533635 8 0.000649 127.0.0.1 → 127.0.0.1 TCP 56 53942 → 65432 [ACK] Seq=147 Ack=149 Win=408128 Len=0 TSval=940533635 
TSecr=940533635
9 0.000668 127.0.0.1 → 127.0.0.1 TCP 56 65432 → 53942 [FIN, ACK] Seq=149 Ack=147 Win=408128 Len=0 TSval=940533635 TSecr=940533635
10 0.000682 127.0.0.1 → 127.0.0.1 TCP 56 53942 → 65432 [ACK] Seq=147 Ack=150 Win=408128 Len=0 TSval=940533635 TSecr=940533635
11 0.000687 127.0.0.1 → 127.0.0.1 TCP 56 [TCP Dup ACK 6#1] 65432 → 53942 [ACK] Seq=150 Ack=147 Win=408128 Len=0 TSval=940533635 TSecr=940533635
12 0.000848 127.0.0.1 → 127.0.0.1 TCP 56 53942 → 65432 [FIN, ACK] Seq=147 Ack=150 Win=408128 Len=0 TSval=940533635 TSecr=940533635
13 0.001004 127.0.0.1 → 127.0.0.1 TCP 56 65432 → 53942 [ACK] Seq=150 Ack=148 Win=408128 Len=0 TSval=940533635 TSecr=940533635
^C13 packets captured

Reference

This section serves as a general reference with additional information and links to external resources.

Python Documentation

- Python’s socket module
- Python’s Socket Programming HOWTO

Errors

The following is from Python’s socket module documentation:

“All errors raise exceptions. The normal exceptions for invalid argument types and out-of-memory conditions can be raised; starting from Python 3.3, errors related to socket or address semantics raise OSError or one of its subclasses.” (Source)

Here are some common errors you’ll probably encounter when working with sockets: BlockingIOError, ConnectionResetError, ConnectionRefusedError, ConnectionAbortedError, and TimeoutError, all of which are subclasses of OSError.

Socket Address Families

socket.AF_INET and socket.AF_INET6 represent the address and protocol families used for the first argument to socket.socket(). APIs that use an address expect it to be in a certain format, depending on whether the socket was created with socket.AF_INET or socket.AF_INET6.

See Python’s socket module documentation regarding the host value of the address tuple, and Python’s Socket families documentation for more information.

I’ve used IPv4 sockets in this tutorial, but if your network supports it, try testing and using IPv6 if possible. One way to support this easily is by using the function socket.getaddrinfo().
It translates the host and port arguments into a sequence of 5-tuples that contains all of the necessary arguments for creating a socket connected to that service. socket.getaddrinfo() will understand and interpret passed-in IPv6 addresses and hostnames that resolve to IPv6 addresses, in addition to IPv4.

The following example returns address information for a TCP connection to example.org on port 80:

>>> socket.getaddrinfo("example.org", 80, proto=socket.IPPROTO_TCP)
[(<AddressFamily.AF_INET6: 10>, <SocketType.SOCK_STREAM: 1>,
 6, '', ('2606:2800:220:1:248:1893:25c8:1946', 80, 0, 0)),
 (<AddressFamily.AF_INET: 2>, <SocketType.SOCK_STREAM: 1>,
 6, '', ('93.184.216.34', 80))]

Results may differ on your system if IPv6 isn’t enabled. The values returned above can be used by passing them to socket.socket() and socket.connect(). There’s a client and server example in the Example section of Python’s socket module documentation.

Using Hostnames

This section is about using a hostname, like “localhost,” to reach the loopback interface, where name resolution still comes into play. This is in contrast to the typical scenario of a client using a hostname to connect to a server that’s resolved by DNS.

The standard convention for the name “localhost” is for it to resolve to 127.0.0.1 or ::1, the loopback interface. This will more than likely be the case for you on your system, but maybe not. It depends on how your system is configured for name resolution. As with all things IT, there are always exceptions, and there are no guarantees that using the name “localhost” will connect to the loopback interface.

For example, on Linux, see man nsswitch.conf, the Name Service Switch configuration file. Another place to check on macOS and Linux is the file /etc/hosts. On Windows, see C:\Windows\System32\drivers\etc\hosts. The hosts file contains a static table of name to address mappings in a simple text format. DNS is another piece of the puzzle altogether.
Interestingly enough, as of this writing (June 2018), there’s an RFC draft, Let ‘localhost’ be localhost, that discusses the conventions, assumptions, and security around using the name “localhost.”

What’s important to understand is that when you use hostnames in your application, the returned address(es) could literally be anything. Don’t make assumptions regarding a name if you have a security-sensitive application. Depending on your application and environment, this may or may not be a concern for you.

Note: Security precautions and best practices still apply, even if your application isn’t “security-sensitive.” If your application accesses the network, it should be secured and maintained. This means, at a minimum:

- System software updates and security patches are applied regularly, including Python. Are you using any third-party libraries? If so, make sure those are checked and updated too.
- If possible, use a dedicated or host-based firewall to restrict connections to trusted systems only.
- What DNS servers are configured? Do you trust them and their administrators?
- Make sure that request data is sanitized and validated as much as possible prior to calling other code that processes it. Use (fuzz) tests for this and run them regularly.

Regardless of whether or not you’re using hostnames, if your application needs to support secure connections (encryption and authentication), you’ll probably want to look into using TLS. This is its own separate topic and beyond the scope of this tutorial. See Python’s ssl module documentation to get started. This is the same protocol that your web browser uses to connect securely to web sites.

With interfaces, IP addresses, and name resolution to consider, there are many variables. What should you do? Here are some recommendations that you can use if you don’t have a network application review process: For clients or servers, if you need to authenticate the host you’re connecting to, look into using TLS.
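As a starting point, here's a minimal sketch of a TLS client connection using the ssl module's recommended helper. The hostname is just an example; consult the ssl documentation before relying on this in production:

```python
import socket
import ssl

def open_tls_connection(hostname, port=443):
    # create_default_context() enables certificate and hostname
    # verification by default, which is what authenticates the server.
    context = ssl.create_default_context()
    sock = socket.create_connection((hostname, port))
    return context.wrap_socket(sock, server_hostname=hostname)

# Usage (requires network access):
#     with open_tls_connection('www.python.org') as ssock:
#         print(ssock.version())  # e.g. 'TLSv1.3'
```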
Blocking Calls

A socket function or method that temporarily suspends your application is a blocking call. For example, accept(), connect(), send(), and recv() "block." They don't return immediately. Blocking calls have to wait on system calls (I/O) to complete before they can return a value. So you, the caller, are blocked until they're done or a timeout or other error occurs.

Blocking socket calls can be set to non-blocking mode so they return immediately. If you do this, you'll need to at least refactor or redesign your application to handle the socket operation when it's ready. Since the call returns immediately, data may not be ready. The callee is waiting on the network and hasn't had time to complete its work. If this is the case, the current status is the errno value socket.EWOULDBLOCK. Non-blocking mode is supported with setblocking(). By default, sockets are always created in blocking mode. See Notes on socket timeouts for a description of the three modes.

Closing Connections

An interesting thing to note with TCP is that it's completely legal for the client or server to close their side of the connection while the other side remains open. This is referred to as a "half-open" connection. It's the application's decision whether or not this is desirable. In general, it's not. In this state, the side that has closed its end of the connection can no longer send data. It can only receive it.

I'm not advocating that you take this approach, but as an example, HTTP uses a header named "Connection" that's used to standardize how applications should close or persist open connections. For details, see section 6.3 in RFC 7230, Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing.

When designing and writing your application and its application-layer protocol, it's a good idea to go ahead and work out how you expect connections to be closed. Sometimes this is obvious and simple, or it's something that can take some initial prototyping and testing.
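A minimal sketch of a "half-open" exchange (my own illustration, not from the tutorial), using socket.socketpair() so it runs without touching the network: one side closes its sending direction with shutdown() but can still receive.

```python
import socket

def half_close_demo():
    """Show that shutdown(SHUT_WR) closes one direction only."""
    a, b = socket.socketpair()
    a.sendall(b"last words")
    a.shutdown(socket.SHUT_WR)   # a can no longer send, but can still recv
    data = b.recv(1024)          # b still receives what a sent first
    b.sendall(b"reply")          # b's sending direction is still open
    reply = a.recv(1024)         # a can still receive after its half-close
    assert b.recv(1024) == b""   # b now sees EOF from a's half-close
    a.close()
    b.close()
    return data, reply
```

Note the asymmetry: close() tears down the whole socket, while shutdown(socket.SHUT_WR) only signals "no more data from me," which is how protocols implement an orderly end-of-request while still reading the response.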
It depends on the application and how the message loop is processed with its expected data. Just make sure that sockets are always closed in a timely manner after they complete their work.

Byte Endianness

See Wikipedia's article on endianness for details on how different CPUs store byte orderings in memory. When interpreting individual bytes, this isn't a problem. However, when handling multiple bytes that are read and processed as a single value, for example a 4-byte integer, the byte order needs to be reversed if you're communicating with a machine that uses a different endianness.

Byte order is also important for text strings that are represented as multi-byte sequences, like Unicode. Unless you're always using "true," strict ASCII and control the client and server implementations, you're probably better off using Unicode with an encoding like UTF-8 or one that supports a byte order mark (BOM).

It's important to explicitly define the encoding used in your application-layer protocol. You can do this by mandating that all text is UTF-8 or using a "content-encoding" header that specifies the encoding. This prevents your application from having to detect the encoding, which you should avoid if possible. This becomes problematic when there's data involved that's stored in files or a database and there's no metadata available that specifies its encoding. When the data is transferred to another endpoint, it will have to try to detect the encoding. For a discussion, see the section of Wikipedia's Unicode article that references RFC 3629: UTF-8, a transformation format of ISO 10646.

The takeaway from this is to always store the encoding used for data that's handled by your application if it can vary. In other words, try to somehow store the encoding as metadata if it's not always UTF-8 or some other encoding with a BOM. Then you can send that encoding in a header along with the data to tell the receiver what it is.
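One way to sketch that idea (a made-up wire format for illustration only, not any standard): prefix each message with a small header that names the payload's encoding, and send the header's own length as a 2-byte big-endian integer so both sides agree on byte order.

```python
import json

def pack_message(text, encoding="utf-8"):
    """Encode text and prepend a length-prefixed JSON header describing it."""
    payload = text.encode(encoding)
    header = json.dumps(
        {"content-encoding": encoding, "content-length": len(payload)}
    ).encode("utf-8")  # the header itself is always UTF-8 by convention
    # 2-byte big-endian ("network order") prefix gives the header's length.
    return len(header).to_bytes(2, "big") + header + payload

def unpack_message(data):
    """Reverse pack_message(): read the header, then decode the payload."""
    header_len = int.from_bytes(data[:2], "big")
    header = json.loads(data[2:2 + header_len].decode("utf-8"))
    start = 2 + header_len
    payload = data[start:start + header["content-length"]]
    return payload.decode(header["content-encoding"])
```

The receiver never has to guess the encoding: it reads the fixed-order length prefix, parses the header, and decodes exactly the bytes the header promised.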
The byte ordering used in TCP/IP is big-endian and is referred to as network order. Network order is used to represent integers in lower layers of the protocol stack, like IP addresses and port numbers. Python's socket module includes functions that convert integers to and from network and host byte order: socket.ntohl(), socket.ntohs(), socket.htonl(), and socket.htons(). You can also use the struct module to pack and unpack binary data using format strings:

import struct
network_byteorder_int = struct.pack('>H', 256)
python_int = struct.unpack('>H', network_byteorder_int)[0]

Conclusion

We covered a lot of ground in this tutorial. Networking and sockets are large subjects. If you're new to networking or sockets, don't be discouraged by all of the terms and acronyms. There are a lot of pieces to become familiar with in order to understand how everything works together. However, just like Python, it will start to make more sense as you get to know the individual pieces and spend more time with them.

We looked at the low-level socket API in Python's socket module and saw how it can be used to create client-server applications. We also created our own custom class and used it as an application-layer protocol to exchange messages and data between endpoints. You can use this class and build upon it to learn from and to help make creating your own socket applications easier and faster. You can find the source code on GitHub.

Congratulations on making it to the end! You are now well on your way to using sockets in your own applications. I hope this tutorial has given you the information, examples, and inspiration needed to start you on your sockets development journey.
https://realpython.com/python-sockets/
I know I'm going to end up feeling really stupid, but I give up on this one. I have the following code. First off, I know about the endless loop; I'm trying to get one thing working at a time.

Code:
#include <fstream>
#include <iostream>

int main(void)
{
    std::ifstream file;
    char temp[1024];
    file.open("20110803.txt");
    while (1)
    {
        file.getline(temp, 1024);
        std::cout << temp << std::endl;
    }
    file.close();
    return 0;
}

The text file is ~1.5 GB and when I run the program, only about the first 4.8 MB is shown before it shows nothing but newlines, that is, it does not get all the way through the file. What the hell is going on here?
http://cboard.cprogramming.com/cplusplus-programming/145086-reading-lines-file-printable-thread.html
VeriSign Sued Over SiteFinder Service 403

dmehus writes "It was only a matter of time, the pundits said, and they were right. Popular Enterprises, LLC, an Orlando, Florida-based cybersquatting so-called 'search services' company, has filed a lawsuit in Orlando federal court against VeriSign, Inc. over VeriSign's controversial SiteFinder 'service.' While Popular Enterprises has had a dodgy history of buying up thousands of expired domain names and redirecting them to its Netster.com commercial "search services" site, the lawsuit is most likely a good thing, as it provides one more avenue to pursue in getting VeriSign to terminate SiteFinder. According to the lawsuit, the company alleges antitrust violations, unfair competition, and violations of the Deceptive and Unfair Trade Practices Act. It asks the court to order VeriSign to put a halt to the service. VeriSign spokesperson Brian O'Shaughnessy said the company has not yet seen the lawsuit and that it doesn't comment on pending litigation."

Arrrrrr! (Score:4, Funny)

Re:Arrrrrr! (Score:4, Funny)
Talk like a pirate day [talklikeapirate.com]

Nice tactic. (Score:5, Informative)
Arguing that they get for free what other companies must pay for is probably one of the easier arguments to win, since it proves itself nearly by definition. I applaud the jackass who pays to abuse typos. At least they've finally proven their worth.

I dunno about that. (Score:4, Insightful)

Re:Nice tactic. (Score:5, Informative)
[petitiononline.com]

comparison (Score:5, Funny)
- most hated company on the internet
- most stupid business moves
- most obvious 'shoot self in foot' maneuver

I expect that slashdot would implode if SCO sued Verisign for this maneuver. Do you cheer because one of them will lose? Or groan because one will win?

Re:comparison (Score:4, Funny)
Cheer, because both of them would be wasting money on a lawsuit. Cry, if that lawsuit provided precedent for either of their parasitic business models.
Re:comparison (Score:3, Funny)

Suggestion: broken ribbon protest (Score:4, Interesting)
Y'know those "ribbon" logos people used to put on their webpages as a sign of protest? Well, here's my suggestion: every protester should use a "broken ribbon" logo on their webpage that's pointed to a random nonexistent URL, e.g. random.nonexistent.site.com. e.g. img src="" height=1 width=1 (Leaving the angle brackets out because Slashdot's engine sucks; it's too stupid to treat plain old text as plain old text.) You should use a random img url but it doesn't have to change much if at all. The height and width should be set to 1 so that if some idiot tries to push an offensive image, it doesn't get seen by the person viewing your webpage. You could in theory construct a broken ribbon logo with an html table of different 1x1 imgs (all different URLs). A 16 by 16 pixel icon could be 64 requests to nonexistent domains (drawing the ribbon), with the rest pointing to a single background 1x1 image. Then if Verisign figures out a cheap way to deal with all the SYN packets heading their direction and still redirect users to a webpage, they'll have solved the "defend against DDOS SYN flood" problem. Some people say there's no technical solution to this problem. But add enough people and this might work. Slashdot and a few other popular sites could do this too.

Pet Peeve (Score:5, Interesting)

Re:Pet Peeve (Score:5, Funny)
A great minor evil. That's a new one on me.

Re:Pet Peeve (Score:4, Informative)

The pool (Score:5, Funny)

"Unfair advantage"? (Score:5, Interesting)
I guess people will figure that the end justifies the means, but the argument still seems a little distasteful.

Re:"Unfair advantage"? (Score:2, Informative)
No, I think their complaint is that Verisign is in charge of baking the pies in the first place... it's hard to develop market share for your product if users are diverted upstream.

Re:"Unfair advantage"? (Score:5, Insightful)
This is clearly abuse of monopoly.
Re:"Unfair advantage"? (Score:5, Insightful)
Or, put another way, Mountain View would be perfectly satisfied if the result of the lawsuit was that Verisign allowed other cybersquatters to grab mistyped domains for free also, creating a huge happy cybersquatting family. Somehow I don't think the rest of us would be quite as delighted, though.

Re:"Unfair advantage"? (Score:5, Insightful)

Re:"Unfair advantage"? (Score:5, Interesting)
What normally happens is this: People do a request for a site, e.g. intranet.internal.foo.org. The external DNS servers fail in that they don't come back with an answer, and then the client continues through its list of DNS servers until it gets to the internal servers, where it gets an answer. What's happening now is that they ARE getting a good answer from the external servers, and the client is trying to connect to the 64.x.x.x address of Sitesearch. Now in most organisations the client isn't able to connect to that box (because it's firewalled or whatever else), so it isn't a problem for VeriSign. However, it is a problem for the organisation, as the clients who are trying to work are being given IP addresses for internal servers that are incorrect. I have had to change dial-up settings on a few clients and change others over to using static IPs at the moment until a better solution comes around. Or even better, till VeriSign stop doing this. Berny

Re:"Unfair advantage"? (Score:4, Interesting)

They screwed up resellerratings.com (Score:2)

Most ISPs have blocked it (Score:4, Informative)
Please reply to this and list names of fellow anti-VeriSign ISPs if your ISP has blocked this new "feature" as well. Thanks! I will enjoy analyzing this data.

Re:Most ISPs have blocked it (Score:5, Informative)
If you work in an ISP or other network infrastructure company, you know first-hand the degree of astonishment and rage that Verisign's move elicited; the fallout (spam filtration, security, network monitoring, etc.) goes far beyond HTTP.
I don't think any of us slept much that night ... it only took a few hours to restore normal DNS behaviour; the remaining ten or so I spent in shock with my jaw scraping the floor. I've dealt with Verisign before (try getting decent documentation on the cybercash application library!) and knew they were greedy and stupid, but I wasn't counting on raw, unfettered eeeeeevil.

Re:Most ISPs have blocked it (Score:5, Funny)
That's part of VeriSign's new "Shock and Jaw" campaign.

Re:Most ISPs have blocked it (Score:3, Informative)
Re:Most ISPs have blocked it (Score:5, Informative)
Re:Most ISPs have blocked it (Score:3, Informative)
Re:Most ISPs have blocked it (Score:3, Informative)
Re:Most ISPs have blocked it (Score:3, Informative)

According to this... (Score:3, Interesting)
Verisign has hired Omniture to collect info on what people misspell. While the website may seem clean and useful, it may not be, depending on what your take on privacy is.

Re:Most ISPs have blocked it (Score:3, Interesting)

and the IETF now has an Internet-Draft (Score:5, Informative)

So this means.... (Score:2)
We're on the side of the plaintiff? It's a bad sign if you're cheering this on. Yes, VeriSign is completely wrong here, but the other party isn't to be lauded, either. It's kinda like Carrot Top fighting Regis Philbin. Although Regis doesn't suddenly appear when I make a wrong turn.

Is it possible Verisign's move will be irrelevant? (Score:5, Interesting)
At the rate things are going, in a couple weeks, no one will be able to get to their search engine site at all, whether they want to or not. Someone probably deserves recompense for the hassle, but it's looking like the Internet has proven resilient to even this "high level" attack.

Re:Is it possible Verisign's move will be irreleva (Score:5, Insightful)
At what cost?
Routers are working harder, code has been introduced into core servers that has no technical reason to exist, and an IP address, or possibly a sizeable range of IP addresses, is now blacklisted worldwide. Those IPs won't be usable for anything anymore, or at least until we see widespread adoption of IPv6. *cough* What the Internet doesn't need is to become even less of an end-to-end transport, less reliable. And we did it to ourselves.

Re:Is it possible Verisign's move will be irreleva (Score:5, Insightful)
But at the same time, if you take a step back, the rapid mobilization of the response to this is VERY impressive, and the rate at which the Internet is reconfiguring itself to get rid of the trouble is quite amazing. Remember, three days ago, people were moaning about how this would be a disaster, DNS would be broken, spam filters would be rendered impotent, etc etc. I'm just saying that, objectively, if you look at this sort of like a body repelling a bacterial attack, the rate at which it's been countered is quite amazing, and shows how well the Internet is fundamentally put together.

Re:Is it possible Verisign's move will be irreleva (Score:3, Insightful)

Re:Is it possible Verisign's move will be irreleva (Score:3, Interesting)
Well, not really. Just that no A records can reliably point into those blocks now, since the "quick fix" that tons of people used just blocked a few subnets owned by verisign. Of course, verisign has bunches of subnets where they can point this thing, and that quick fix is going to expire

Try this in I.E. (Score:2, Funny)

Cross Site Scripting Bug (Score:4, Informative)

don't u love these spokespeople (Score:4, Funny)

Re:don't u love these spokespeople (Score:3, Funny)
Unless you are a SCO spokesperson, then the story would go a little like this:

another annoying 'feature' of sitefinder (Score:5, Interesting)
Verisign Sucks. They always have and always will.

Re:another annoying 'feature' of sitefinder (Score:3, Interesting)
Agreed.
I realized this when I got a phone call two weeks after I registered my first domain asking if I needed their 'services' for hosting. Of course, the sales pitch made it sound like my domain would not work without their services. I realized this again when I got a letter in the mail telling me to renew a domain b/c it was about to expire. What's the big deal, you say? The domain wasn't registered with them, but they made it sound like if I didn't se

Hello, Pot? This is Kettle! (Score:4, Insightful)

Excellent; battle of the twits (Score:3, Insightful)
Also, users of course do not get a 404 when a domain doesn't exist. The domain freakin' doesn't exist, so the DNS lookup itself fails (it should get NXDOMAIN) and the browser reports an error in domain resolution. But this is nice; I want to see all these leeches in the cybersquatting and "World Wide Web" enhancement business pitted against each other.

Don't badmouth Netster too bad (Score:5, Informative)
Timeline:
1997 or so: I registered tylereaves.com, mainly for use in e-mail
2000: I let the domain lapse, not really using it, and tired of paying $40 a year or so for it (Hey, registering was expensive in '97!)
200?: Netster becomes the owner of tylereaves.com
2003: I nicely ask for it back.
2003: I get my domain back. They didn't even charge me the transfer fees.

Re:Don't badmouth Netster too bad (Score:3, Interesting)
There's a notice on one of their policy pages that they'll give a domain to anyone with a valid claim (Trademark, etc). I e-mailed the provided address, stating that A: I was the original registrant and B: It's my name. They got back to me in under 24 hours to arrange the transfer.

Someone at Network Solutions responded to me. (Score:5, Interesting)
The drone informed me in a form letter that VeriSign's practices were "well within the guidelines" established by the document Domain Name System Wildcards in Top-Level Domain Zones [verisign.com].
After deconstructing this, we are left with: VeriSign is within the guidelines of the document VeriSign wrote on the matter. Uhm... Re:Someone at Network Solutions responded to me. (Score:5, Insightful) Notice that they only address HTTP and SMTP in the guidelines. I guess there really aren't any other protocols worth speaking of. (https maybe? Hmm - I wonder what happens there) Technical defense against hijacked domains (Score:5, Informative) This is a good time to look at Bob Frankston's dotDNS proposal [circleid.com] for a layer of reliable but meaningless domain names. dotDNS lookups can be made self-verifiable using public-key signatures, but without the costly chain of trust required by DNSSEC methods. The validity of a dotDNS binding can be verified easily by the querier, without relying at all on the server that provided the putative binding. dotDNS does not solve the whole problem, since any layer that translates from humanly meaningful names to dotDNS names is still vulnerable to hijacking. But the reliable and verifiable name bindings in dotDNS will make it *much* easier to switch name-resolution services when we are dissatisfied with their policies. dotDNS is a cheap and immediately deployable positive step toward fixing the DNS mess, requiring no approval by any central agency. It's time for a visionary sponsor to step forward and just do it. I'm not surprised... (Score:5, Funny) Re:I'm not surprised... (Score:4, Insightful) Re:I'm not surprised... 
(Score:3, Informative)
basically there is a point in the code where the cgi parameter url is assi

Electronic Communication Privacy Act (Score:5, Interesting)
wherein, "intercept" means the aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device; The ECPA also provides that "In a civil action under this section, appropriate relief includes--(1) such preliminary and other equitable or declaratory relief as may be appropriate; (2) damages under subsection (c); and (3) a reasonable attorney's fee and other litigation costs reasonably incurred."

Seems like a good case can be made that emails to mistyped addresses are being intercepted by Verisign. Certainly, the emails were not intended to be sent to Verisign, and they appear to be collecting some information from the email (the from address).

When will people learn? (Score:4, Insightful)

Verisign delusional (Score:5, Interesting)
So they are attributing a slashdotting, and a lot of media interest, to people being positive about the service. I haven't seen one article, comment, or anything that was even remotely positive. What are these guys on? He also claims they are fully compliant with every RFC. I don't see how this is possible, unless they have found some loophole.

Re:Verisign delusional (Score:3, Insightful)
As far as the RFCs go, maybe the internet architects never thought of this abuse.

I'd have less of a problem (Score:4, Interesting)
would list as a suggested site. But it doesn't. It lists a number of domains that are off by quite a few letters more than 1. If it were at least making an intelligent attempt at getting the user where they wanted to go, it could be argued that it is at least useful. Microsoft's search that comes up when you get a DNS error on some domain names is excellent about getting you where you actually wanted to go.
Verisign either gives a half-assed attempt at correcting the user or deliberately ignores domains that aren't registered through them. Despite the fact they get money regardless of who you register through. Now we just need a credible plaintiff. Preferably a class action suit to maximize damages. Ben

Right... (Score:5, Funny)
Then when the press questions the astronomers on how their orbital calculations could have been so wrong, the astronomers (being the clever guys they are) will say, "but our calculations were right!" and then erupt in maniacal laughter. I for one welcome our new...[looks up at the sky]...never mind, I didn't start to say anything. Nope, nothing at all.

Null space needs to remain null (Score:5, Insightful)
The fact that ICANN didn't block this move is further evidence that this organization is totally useless and political. Along the same vein, I disagree with MS's misleading implementation of the IP-not-found error page to redirect users to their proprietary search engine. The Internet community should rally against any entity that seeks to appropriate undefined address space for its own gain. If Verisign is allowed to do this, what we're likely to see is each major ISP and browser manufacturer follow suit and hijack undefined space to promote their own systems. Imagine if you dialed a wrong number on the telephone and you got an advertisement for the phone company. What if local broadcasters bombarded all the unused frequency spectrum with their own promotions? This has less to do with Verisign than it does with protecting the sanctity of null space. It makes me wonder if someone has a patent on silence yet?

Re:Null space needs to remain null (Score:5, Informative)
No, there's too much prior art, but John Cage has a copyright on 4'33" of it. KFG

Re:Null space needs to remain null (Score:3, Funny)
Damn, that's deep.

Note to self (Score:3, Funny)
SCO (need you ask?)
Verisign (screwed by em long before this)
SBC (for not blocking Verisign)
Microsoft (ya just gotta)
RIAA (You don't sue your customers. Solve the problem!)
Sun (for the abomination called Java)
Gray Davis (because he DOES suck)
Cruz Bustamante (Don't give him a CHANCE to suck)
Note to self: Get more RAM for Notes to self

Funny Stuff (Score:3, Funny)
"We didn't find" "There is no web site at this address." Only in a perfect world...

sitefinder can't find verisignsucks.com (Score:5, Interesting)

I don't agree (Score:3, Interesting)

Copy of the Lawsuit and More Details (Score:3, Informative)
home.businesswire.com/portal/site/google/index.js
Copy of lawsuit: search.netster.com/about/lawsuit.asp [netster.com]
Sorry, I forgot to include these links in my submission. Post away! Cheers, Doug

Dear VeriSign, Thanks for the spam. (Score:5, Interesting)
Someone a few months ago mentioned to me that Sendmail has a feature where, upon receiving mail, it will check the domain of the sender. If the domain does not exist, it has a forged From: header and is obviously spam. Thanks to Verisign's efforts to piss me off, every DNS query on a nonexistent
Since this "service" has been implemented, I've gone from 7-8 spams a day to 30-35. Thanks a lot, assholes.

Re:Dear VeriSign, Thanks for the spam. (Score:4, Funny)
To: abuse@verisign.com
From:
Dear DNS administrators, The mail server I am administering is experiencing a problem with spam. I have not getten check_rcpt rule checks in the now returning an A record, even though they are not registered domains. Please correct this error in your servers. Thank you,

Terms of Use (Score:4, Interesting)
2. You may have accessed the VeriSign Service(s) by initiating a query to our DNS resolution service for a nonexistent domain name.
14. By using the service(s) provided by VeriSign under these Terms of Use, you acknowledge that you have read and agree to be bound by all terms and conditions herein and documents incorporated by reference.
I'm not sure how they came up with the fact that I, the end user, made a query to their DNS server. In fact, I did not. My ISP may be using their services, but that in no way establishes a relationship between myself and Verisign; I personally have no legal relationship with Verisign whatsoever. IMO, unless you're querying Verisign directly, their terms of use cannot possibly apply, which means that they apply to almost no one. I would challenge them to show any log that shows my IP address accessing their service. If they can't, then I did not in fact access their service. And what's worse is the implication that I can be bound by "Terms of Use" that I have never seen, based on the assumption that I made the query, when in fact the query was made to a DNS server at my ISP (and again, I don't really care how my ISP handles that request as long as it sends me the requested info).

Journalists should not write tech stories (Score:5, Funny)
Typically, Internet users are shown a generic "404 -- cannot be found" page when a Web address does not exist. Sooooo, if the web server can't be found, who's sending the HTTP 404 response (which incidentally means that a file on a server doesn't exist...)?

Why could they accomplish surprise? (Score:3, Interesting)
The backlash against VS should have started BEFORE they went through with this decision -- and that backlash should have been OVERWHELMING, as in, every sysadmin with DNS should have been complaining, ISPs should have been filing motions for restraining orders, and ICANN should have been ready to pull the gTLD contract once and for all.

Alexa (Score:3, Informative)

maybe it's time to give DNS back to the public? (Score:5, Insightful)

Interesting point... (Score:4, Interesting)
(Try,, etc

Lies in Verisign's terms of use (Score:5, Interesting)
You may have accessed the VeriSign Service(s) by initiating a query to our DNS resolution service for a nonexistent domain name.
We are unable to resolve such queries through the DNS resolution service.
They are, and they do. They resolve such queries to 64.94.110.11.

You need to reject their terms of use! (Score:5, Interesting)
I have informed them that if they cannot stop providing me with this service (for which I do not accept their terms, and by which I cannot be bound), then they will have to contact me to negotiate a new set of terms to which I do agree. I would imagine that if every user that is upset by this new 'service' was to do the same, then Verisign would have to do 'something' about it.

Official Verisign Response (Score:4, Funny)
Official Verisign Response [verisign.com]

How come no one complains about other TLDs? (Score:3, Insightful)

Hey- they stole my domain! (Score:3, Interesting)

how to call Verisign and complain (Score:4, Informative)
+1 703-742-0914 (worldwide)
+1 888-642-9675 (toll free US/Canada)
When you call, select:
* 1 (purchase a product or renew an existing product)
* then 7 (all other questions)
I recommend that you be patient with the Verisign rep that answers the phone. That person may not fully understand the issue / problem, and they are unlikely to personally be responsible for the Verisign decision. Remember that you are objecting to what Verisign as a company is doing. Don't yell at the rep. Be polite but firm. Ask Verisign to stop the wildcarding now. Explain why what they are doing is wrong (such as being unable to determine if an email message is being sent from a bogus / non-existent domain because thisdomaindoesnotexist.com resolves to 64.94.110.11). If you do business with Verisign now, tell them that you will switch vendors unless Verisign stops this practice in X weeks. (fill in the X) You might want to leave your phone number and request a callback. Anonymous complaints do not go as far. If you are in the US, you might want to contact your local member of congress and object to what Verisign is doing.
Let Verisign know that you are doing this when you call. Yes, they might flush your complaint down Re:how to call Verisign and complain (Score:3, Interesting) I called them just now and basically said the stuff above. I own a few domain names bought from them, and will be transferring them to another provider. When I told them why, they read off a script that told me why their service was so great. Here's their answers and my responses: "Before, the user would get an unhelpful error message. Now, users always know where to go!" "That's good on paper, but the problem is that DNS is an inappropriate area to conduct that redirection. Yahoo or Google.com are w how to complain about Verisign to ICANN (Score:5, Informative) you can file a complaint about Verisign to ICANN by using their: Good has come out of this for me. (Score:3, Funny) Now I can charge my clients for setting up a DNS server on their local networks on any spare crap machine they have lying around, making their networks more resilient to ISP DNS outages and crap like this. Now I have every excuse I might need to move all my clients name registrations to another registrar ASAP, and all the reason I need to not use VeriSign, or be plagued by their idiot customer service ever again! Thank you Verisign, for teaching me how to laugh about love...again. Re:I've never understood (Score:3, Informative) There's nothing wrong about cybersquatting, but it's Just Not Right(TM). Re:I've never understood (Score:3, Informative) Owning a domain you don't use (Score:5, Informative) But why? There's no real market in domain names any more. Verisign tried to make one. GreatDomains used to have thousands of listings, and you'd see things like "Asked: $25,000. Bid: $20." Now Verisign only has "premium domains" on GreatDomains, ones like "record.com". There are only 66 domains for sale, and few sales. Re:Owning a domain you don't use (Score:3, Funny) Aah those were the days. 
Some idiot for example paid me $1500 for a T0P10.COM domain... probably the buyer didn't understand that the second O is a zero.

not quite (Score:3, Informative)
Not quite. Owning a domain is a separate issue from DNS. Owning a domain means you have an entry in a domain registry. It does not mean you have a DNS entry. Owning a domain means you have paid your money and signed up and that you have the right to have your domain added to the DNS. A lame delegation is something different. A lame delegation is when there are NS records that exist in the DNS, but they point to the address

This isn't cybersquatting. (Score:5, Insightful)
[Not that I'm surprised... the first sign that things like this were going to happen was when IE started replacing webserver error messages with their own if they decided your error message wasn't big enough, and replacing 'server not found' with links to their search engine]
So well, your 40 acres comparison falls through, as it's more the equivalent of someone saying 'all this is mine until someone else buys it' and then, after you buy your plot, they still claim the area that you haven't built on yet, even though you have the deed to it.

Re:I've never understood (Score:4, Informative)
First, a history lesson. '40 Acres and a Mule' wasn't a pioneer issue. It is true that during the western rushes, various federal lands were put up for auction or claim by pioneers. The lands were not, however, specified to be 40 acres, but varied in size based on the territory and the specific land grant. For that matter, according to one of my HS Social Studies teachers (a dozen years ago), there were still federal lands for claim in parts of Alaska. That teacher was known to embellish the truth, so I won't put any veracity statement with that. '40 acres and a mule' were reparations for slaves in the south. They were instituted by a Northern (Union) general during the aftermath of the civil war, and were later reversed by a presidential executive order.
So, in short, your parallel falls a little short. If ICANN were to pass a ruling granting johnny-come-latelies names from vast corporate pools, that would be comparable.

So, what's wrong with cybersquatting: Well, with the federal land grants, if you occupied and developed the federal lands for a specified period of time, they became yours. You could sell or otherwise use them as you wished. Here, one type of cybersquatter takes a developed item (debatably property) and uses its good will and value for an interest contrary to the original owner's. That would be a violation of the land grants, so that's one point where your analogy fails. The other type of cybersquatter (who speculates on names or misspellings) is also abusing the good will of the originator, but may be a valid comparison. It is, however, annoying to get redirected away from what you wanted because of a typo, and from the other side, a squatter who takes an otherwise useful resource and makes it near-useless is neither providing a valid service nor generating good will.

Re:Homesteading (Score:5, Insightful)

Cybersquatters do no such thing. There's a difference between registering coffee.com to build a coffee site and registering it to resell later. Cybersquatters are more akin to ticket scalpers than to homesteaders.

Re:SWAT comes back to mind ... (Score:2)

Dr. Evil quite prominently pronounced his "R"s

Re:what the fuck? (Score:5, Insightful)

Wake up. If you want to find a site, you use Google. If you want to go to a non-existent one, you should damn well be told there's nothing there.

Re:what the fuck? (Score:3, Informative)

Yes, thank you Ayn Rand. And how do you give them competition? Ask them to relinquish control of their root servers and institute yours in their place? Or maybe start a whole new internet? Yeah, that's going to work. Let's face it. Verisign broke the rules (ie: RFCs) which were des

Re:what the fuck?
(Score:4, Insightful)

You are missing the whole reason everyone is so upset. Verisign DOESN'T HAVE the rights. They DO NOT OWN the .com or .net domains. They have entered an agreement with ICANN under which they are the designated people who ADMINISTER the domains. They are being financially compensated to provide a service related to .com and .net; this does not mean they own them!! Think about this distinction. If you'd like an analogy, think of mutual funds. Mutual funds are owned by shareholders; however, they pay a fund administrator to manage them. The administrator has the power to make all kinds of changes, but this does NOT mean he owns the mutual fund! If the administrator decided he was going to manipulate the direction of the mutual fund to maximize his own personal income instead of the fund's income, he'd be taken down faster than you can say "Martha Stewart".

Re:what the fuck? (Score:4, Insightful)

The .uk TLDs are run by Nominet, a not-for-profit organisation that allows anyone to register as a registrar. They manage the .uk namespace but have no commercial interest in it. Given that VeriSign have now demonstrated that they can't be trusted not to take advantage of their position for commercial gain, a similar organisation to Nominet should be set up to manage the .com and .net domains.

Re:Give them a break, they just want more hit (Score:5, Funny)

Re:BIND patch available to block site finder (Score:3, Informative)

The bug is that NS lookups for non-cached domains fail.

nslookup
set type=ns
geek.com

Fails if not already cached by named.

nslookup
geek.com
set type=ns
geek.com

Always works
https://tech.slashdot.org/story/03/09/19/039214/verisign-sued-over-sitefinder-service
Out of the many modules available in python, one of them is the math module. Using this math module, we can access several mathematical functions. This module consists of functions such as logarithmic functions, trigonometric functions, representation functions, etc., as well as mathematical constants. In this article, we shall be looking into one such mathematical constant – the python e constant.

What are mathematical constants

By using the math module in python, we can access mathematical constants from the module. We can use these constants for carrying out mathematical operations and defining formulas in python. The values returned by these constants are equal to their values as defined in mathematics. The mathematical constants in python's math module are:

- math.e
- math.pi
- math.tau
- math.inf
- math.nan

Note: Here, math.inf and math.nan are available for python versions 3.5 onwards, and math.tau is available for python versions 3.6 onwards.

About Python Math e

The math e is a mathematical constant in python. It returns the value of Euler's number, which is an irrational number approximately equal to 2.71828. Euler's number is used in natural logarithms, calculus, trigonometry, differential equations, etc.

The syntax for accessing Euler's number from the math module is:

math.e

The return value of the math.e constant is a floating-point value of Euler's number.

Examples of Python Math e

Let us take a python code to understand the math e constant. First, we shall import the math module. And then, we will print the return value of the math.e constant.

import math
print(math.e)

The output of the constant is:

2.718281828459045

Alternatively, we can also use the math e constant like this:

from math import e
print(e)

It will generate the same output.

2.718281828459045

We can also use the math constant to create formulas. Suppose we want to define the function f(x) = (e^x – 1)/x; then we can use the math e constant to achieve that.
We shall define a user-defined function f() which takes a single argument, the value of 'x' from the formula. Here, math.e shall represent 'e'.

import math
def f(x):
    return ((math.e ** x) - 1)/x
print(f(2))

Output:

3.1945280494653248

Python e Constant Using math.exp()

The exp() is a function present in the math module which is used to output the exponential power of Euler's number. The syntax is:

math.exp(n)

It takes one argument 'n', which can be any positive or negative number. The argument is taken as the power to which Euler's number has to be raised. As the output, it returns the python e constant raised to the given power (e^n).

import math
print(math.exp(1))

The output will be e ^ 1. It is the same as math.e.

2.718281828459045

Python e Constant Using numpy.exp()

Similar to the exp() function in the math module, we also have an exp() function in the numpy library. The syntax of the exp() function in numpy is:

numpy.exp(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])

It accepts an array as the argument. The output of the function is an n-dimensional array that consists of the calculated values. The array values are taken as the powers of the python e constant.

First, we shall import the numpy library in python. Then, we shall take an array containing a single element – 1. Then, we shall pass that array as an argument to the numpy.exp() function and print the result.

import numpy as np
array = np.array(1)
print(np.exp(array))

The output is:

2.718281828459045

We can also print multiple powers of the python e constant at the same time. For that, we shall add multiple values to the array as the powers of the constant.

import numpy as np
array = np.array([1,2,3,4,5,6,7,8,9,10])
print(np.exp(array))

Output:

[2.71828183e+00 7.38905610e+00 2.00855369e+01 5.45981500e+01
 1.48413159e+02 4.03428793e+02 1.09663316e+03 2.98095799e+03
 8.10308393e+03 2.20264658e+04]

Plotting math e

The graph of math e, when plotted, is an exponentially increasing graph.
We shall try to plot the first 10 powers of Euler's number using the math e constant. For that, we shall have to import the matplotlib library along with the math module. Then, with a for loop, we shall generate the powers of Euler's number by raising math.e to successive integers and store them in a list. Then we shall plot that list using the plot() function.

import math
import matplotlib.pyplot as plt
list1 = []
for i in range(0,10):
    list1.append(math.e ** i)
plt.plot(list1)
plt.show()

Output:

As seen, it is an exponentially increasing graph.

Using Constant e as Base for log

The main application of Euler's number is in the natural logarithm. We can use the constant e as the base for the log. For that, we shall make use of the log() function present in the math module. We pass the value whose log has to be calculated as the first argument and the base as the second argument.

import math
print(math.log(2,math.e))

Output:

0.6931471805599453

That was all about the python e constant. If you have any questions in mind, let us know in the comments below. Until next time, Keep Learning!
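A small footnote to the log section above: when math.log is called without a second argument, it already computes the natural logarithm (base e), so passing math.e explicitly is optional. A quick check using only the standard library:

```python
import math

# math.log with one argument uses base e (the natural logarithm),
# so both calls below return the same value.
print(math.log(2))  # -> 0.6931471805599453

# Compare the explicit-base and default forms.
print(math.isclose(math.log(2, math.e), math.log(2)))  # -> True
```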
https://www.pythonpool.com/python-e-constant/
import "gopkg.in/kubernetes/kubernetes.v0/pkg/client/record"

Package record has all client logic for recording and reporting events.

doc.go event.go events_cache.go fake.go

type EventBroadcaster interface {
    // StartEventWatcher starts sending events received from this EventBroadcaster to the given
    // event handler function. The return value can be ignored or used to stop recording, if
    // desired.
    StartEventWatcher(eventHandler func(*api.Event)) watch.Interface

    // StartRecordingToSink starts sending events received from this EventBroadcaster to the given
    // sink. The return value can be ignored or used to stop recording, if desired.
    StartRecordingToSink(sink EventSink) watch.Interface

    // StartLogging starts sending events received from this EventBroadcaster to the given logging
    // function. The return value can be ignored or used to stop recording, if desired.
    StartLogging(logf func(format string, args ...interface{})) watch.Interface

    // NewRecorder returns an EventRecorder that records events with the given event source.
    NewRecorder(source api.EventSource) EventRecorder
}

EventBroadcaster knows how to receive events and send them to any EventSink, watcher, or log.

func NewBroadcaster() EventBroadcaster

Creates a new event broadcaster.

type EventRecorder interface {
    // Event constructs an event from the given information and puts it in the queue for sending.
    // 'object' is the object this event is about. Event will make a reference-- or you may also
    // pass a reference to the object directly.
    // 'reason' is the reason this event is generated. 'reason' should be short and unique; it will
    // be used to automate handling of events, so imagine people writing switch statements to
    // handle them. You want to make that easy.
    // 'message' is intended to be human readable.
    //
    // The resulting event will be created in the same namespace as the reference object.
    Event(object runtime.Object, reason, message string)

    // Eventf is just like Event, but with Sprintf for the message field.
    Eventf(object runtime.Object, reason, messageFmt string, args ...interface{})

    // PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field.
    PastEventf(object runtime.Object, timestamp util.Time, reason, messageFmt string, args ...interface{})
}

EventRecorder knows how to record events on behalf of an EventSource.

type EventSink interface {
    Create(event *api.Event) (*api.Event, error)
    Update(event *api.Event) (*api.Event, error)
}

EventSink knows how to store events (client.Client implements it.) EventSink must respect the namespace that will be embedded in 'event'. It is assumed that EventSink will return the same sorts of errors as pkg/client's REST client.

FakeRecorder is used as a fake during tests.

func (f *FakeRecorder) Event(object runtime.Object, reason, message string)
func (f *FakeRecorder) Eventf(object runtime.Object, reason, messageFmt string, args ...interface{})
func (f *FakeRecorder) PastEventf(object runtime.Object, timestamp util.Time, reason, messageFmt string, args ...interface{})

Package record imports 12 packages. Updated 2016-07-25.
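To make the fan-out relationship between the broadcaster and recorder concrete, here is a self-contained sketch of the pattern the interfaces above describe. This is not the package's actual implementation: Event, Broadcaster and Recorder below are simplified stand-ins (no sinks, watch.Interface or namespaces), kept only to show how events recorded via a Recorder reach every registered watcher.

```go
package main

import "fmt"

// Event is a simplified stand-in for api.Event (illustration only).
type Event struct {
	Reason, Message string
}

// Broadcaster fans recorded events out to registered handlers,
// mimicking what EventBroadcaster.StartEventWatcher enables.
type Broadcaster struct {
	handlers []func(*Event)
}

// StartEventWatcher registers a handler that receives every recorded event.
func (b *Broadcaster) StartEventWatcher(h func(*Event)) {
	b.handlers = append(b.handlers, h)
}

// Recorder records events on behalf of a source, like EventRecorder.
type Recorder struct{ b *Broadcaster }

// NewRecorder returns a Recorder bound to this Broadcaster.
func (b *Broadcaster) NewRecorder() *Recorder { return &Recorder{b: b} }

// Event constructs an event and hands it to every registered watcher.
func (r *Recorder) Event(reason, message string) {
	e := &Event{Reason: reason, Message: message}
	for _, h := range r.b.handlers {
		h(e)
	}
}

func main() {
	b := &Broadcaster{}
	b.StartEventWatcher(func(e *Event) {
		fmt.Printf("%s: %s\n", e.Reason, e.Message)
	})
	b.NewRecorder().Event("Started", "container started")
}
```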
https://godoc.org/gopkg.in/kubernetes/kubernetes.v0/pkg/client/record
I recently heard of the name change to ADEP (surprisingly I haven't heard more about it), but how is ADEP 10.0 different from LiveCycle Designer 10.0 (which I have)?

Scott: I don't think you have LC Designer 10.0. The previous designer to ship was Designer ES2 (9.0) that went out with Acrobat X. Unless you happen to have a beta version of the ADEP designer that shipped before re-branding? John

Oops, you're right, I only have the LiveCycle that came with Acrobat 10.0. So does that mean there is an upgrade available?

Scott: Yes, there's a new Designer. Trial download is available at: enjoy John

Hi John, I'm struggling through my first macro, and have come across a difference in the JavaScript as run in the Reader runtime and as a macro in Designer. Is this possible? Try

function dbg(msg) {
    designer.println(msg.toString().replace(/[\n]/gm, "\r\n") + "\r\n");
}

var xmlstring = '<?xml version="1.0" encoding="UTF-8"?>\
<root xmlns:dd="">\
<parentBodySupportAttachment>\
<attachment status="" xmlns="">\
<name dd:minOccur="0" dd:nullType="exclude"/>\
</attachment>\
</parentBodySupportAttachment>\
<otherSupportAttachment dd:minOccur="0">\
<attachment status="" xmlns="">\
<name dd:minOccur="0" dd:nullType="exclude"/>\
</attachment>\
</otherSupportAttachment>\
</root>';

var e4x = new XML(xmlstring.replace(/^<\?xml.*\?>/, ""));
dbg("xml=" + xmlstring);
dbg("e4x=" + e4x);

Run in Designer, the namespace on the second attachment element is removed, so you end up with;

But if I try the same code in Reader then it works fine. I did try not using e4x, but was having trouble generating nodes with the appropriate namespace. Would you expect the following to work?

xfa.datasets.createNode("dataValue", "nullType", "")

Seems to create a node but not with the correct namespace/prefix. Bruce:

Just as a background, Designer does have a different JavaScript processor than Reader.
Designer uses ExtendScript (more details here: ) I did a quick check, and it does look like there is a bug in the E4X processing in ExtendScript, resulting in your namespace getting dropped. I wouldn't expect the createNode method to work for you because that API is not rich enough to specify the namespace prefix. I think your best bet is to manipulate the XML as a string and use loadXML/saveXML to read/write it:

xfa.datasets.dataDescription.loadXML('<root …');

If you want to continue to use E4X, you could try working with smaller fragments. good luck, John

Is there any more documentation for callExternalFunction than what is on the Adobe web site? All the documentation states is: "When this function is called, it passes an interface that the DLL file uses to call back into Designer to retrieve the current selection, and to get or set dialog box variables." What is the definition of this interface? How do I ensure a DLL is compatible with it?

Dan: A sample would help. It's on my "to do" list. I can't answer the question about the interface (yet). But here's some more detail on the rest:

When script calls callExternalFunction, the DLL name specified (which must not include path elements, only a filename such as "foo.dll") is loaded from the directory containing the currently executing script file, and GetProcAddress is used to find the function identified by the sFunctionName parameter. A sample call would be:

designer.callExternalFunction("DesignerExtension", "ShowMyDialog", "user data here");

This would load a DLL called DesignerExtension.dll from the same directory that the plugin is installed in, look up a function called ShowMyDialog, and call it, passing the "user data here" string as a wide character string (const wchar_t *). The function can return a string, which will be returned from the designer.callExternalFunction call. The string is returned as an HGLOBAL containing a wide character string.
Here's an example of a function in a DLL which shows the passed-in string in a dialog, and returns "yes" or "no" based on which option the user clicks:

extern "C" __declspec(dllexport) HGLOBAL _cdecl ShowMyDialog(HWND hwndParent, const wchar_t *pszArgument)
{
    int nResult = ::MessageBox(hwndParent, pszArgument, L"DLL Function Sample", MB_YESNO);

    // Allocate some memory for the result string
    HGLOBAL hMem = GlobalAlloc(0, 64);
    if (!hMem)
        return 0;
    wchar_t *pMem = (wchar_t *)GlobalLock(hMem);
    wcscpy_s(pMem, 30, nResult == IDYES ? L"yes" : L"no");
    ::GlobalUnlock(hMem);
    return hMem;
}

Hope this helps John

@Dan: Probably it is documented in the ExtendScript documentation. You should check the Adobe Bridge SDK docs. I can see some great uses for the external extension mechanism, like connecting to a source code repository or diffing two source files! Is there some kind of C++ SDK for Designer like there is e.g. for InDesign?

Maciej: Unfortunately, there is no published SDK for Designer. The use-cases you're looking at are probably not yet possible with the current macro capability, but it would be great to hear what enhancement requests you need to build them. John
http://blogs.adobe.com/formfeed/2011/08/macros-in-adep-designer-10-0.html
DocTests in separate file not working

Hi, I'm having issues getting the DocTests to run as a separate file using the built-in DocTests feature (creating a name.doctest file and pressing play). The DocTest file works when calling existing packages, but if I do the following:

New top-level folder called "HelloWorld"
New script called "helloworld.py" with the following code:

def get_hello(name):
    return "Hello " + name

Then create a "helloworld.doctest" with the following:

>>> import helloworld
>>> helloworld.get_hello("Richard")
'Hello Richard'

Then run the DocTest by pressing play (pressing play using the default template passes), I get the following error:

ImportError: No module named 'helloworld'
NameError: name 'helloworld' is not defined

If I add the DocTest into the docstring of the method, running the DocTest works on that script:

def get_hello(name):
    """
    >>> get_hello("Richard")
    'Hello Richard'
    """
    return "Hello " + name

But I want to remove clutter and have the tests in a separate file using this feature. Is there something I'm missing? Any help will be most appreciated.

IIRC, the issue is that doctests don't get sys.path modified the way real scripts do. You should be able to add sys.path.append(yourpathname) and it should work. Unfortunately, __file__ is also not defined, so you cannot easily just use relative paths.

Thanks JonB. The testing of the get_hello function works. However, the test fails on the line of the import helloworld. It says Got: (the info of sys.path). This does not stop me testing my code now; however, it just has one failure due to this. Any thoughts? Thanks again.

>>> import sys; _path = '/private/var/mobile/Containers/Shared/AppGroup/.../Pythonista3/Documents/HelloWorld'; sys.path.append(_path)
>>> import helloworld
>>> helloworld.get_hello("Richard")
'Hello Richard'

I created a DocTest file for the actual module I want to test and weirdly I don't get the import issue...
I don't see any difference except the module I'm testing is two folder layers deep ( .../Apps/AppName/appname.pyrather than .../HelloWorld/helloworld.py). But as it's fully working with the module I actually want to test, that's fine for me for now. If you just ran the .py file, the folder will already be in sys.path.
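For anyone wanting to verify the workaround outside Pythonista: the sketch below recreates the thread's helloworld module and .doctest file in a temporary folder (the module and function names come from the thread; the temp-folder layout is my own) and runs the file with the standard doctest module. The .doctest file appends its own folder to sys.path first, exactly as JonB suggested.

```python
import doctest
import os
import tempfile

# Recreate the thread's setup in a temporary folder.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "helloworld.py"), "w") as f:
    f.write('def get_hello(name):\n    return "Hello " + name\n')

# The .doctest file itself extends sys.path, since standalone doctest
# files do not get the script folder added automatically.
doctest_text = (
    ">>> import sys; sys.path.append({path!r})\n"
    ">>> import helloworld\n"
    '>>> helloworld.get_hello("Richard")\n'
    "'Hello Richard'\n"
).format(path=folder)

with open(os.path.join(folder, "helloworld.doctest"), "w") as f:
    f.write(doctest_text)

results = doctest.testfile(
    os.path.join(folder, "helloworld.doctest"), module_relative=False
)
print(results)  # TestResults(failed=0, attempted=3)
```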
https://forum.omz-software.com/topic/3455/doctests-in-seperate-file-not-working
Opened 9 years ago Closed 5 years ago #3110 closed Bug (wontfix) ManyToMany filter_interface widget renders incorrectly in IE 6 when the field is in a fieldset with class collapse Description When I put a ManyToManyField with filter_interface=True inside a fieldset with class "collapse", it renders the left (available) portion of the widget correctly, but the right (chosen) part renders as a normal select widget with all currently chosen values as options. This causes the problem that when the object is saved, only the first (currently selected) existing chosen option is saved as the value for that field. Interestingly, if I add a new value to the chosen side (pick one on the left, then click the right arrow button), it seems to cause the right side of the widget to reevaluate itself and render correctly. I'm only seeing this when the widget is within a collapsed fieldset. I'm using the 0.95 release. Attachments (1) Change History (16) Changed 9 years ago by Adam Endicott <leftwing17@…> comment:1 Changed 9 years ago by Simon G. <dev@…> - Resolution set to invalid - Status changed from new to closed Can you confirm if this is still a problem on a current checkout? comment:2 Changed 9 years ago by gwilson - Resolution invalid deleted - Status changed from closed to reopened I don't think we should close tickets just because they haven't been confirmed. comment:3 Changed 9 years ago by Adam Endicott <leftwing17@…> Unfortunately this is pretty difficult for me to test on my current setup, I don't have a windows machine that can access anything I can set up for a quick test. I spent a few minutes trying to set something up testing on trunk, and didn't get it going. I can confirm that in Django 0.96 it fails as described above in IE6, and works as it should in IE 7 (as well as Firefox). Hopefully somebody with a windows setup can test it in trunk. 
This should be all the model needed to test it: from django.db import models class Foo(models.Model): name = models.CharField(maxlength=10) class Thing(models.Model): name = models.CharField(maxlength=10) foos = models.ManyToManyField(Foo, filter_interface=True) class Admin: fields = ((None, {'fields': ('name',)}), ('Foos', {'classes': 'collapse', 'fields': ('foos',)}),) comment:4 Changed 9 years ago by Adam Endicott <leftwing17@…> - Cc leftwing17@… added comment:5 Changed 9 years ago by ludo@… I can confirm this bug is still present with a trunk checkout. comment:6 Changed 9 years ago by ubernostrum - Owner changed from nobody to xian - Status changed from reopened to new - Triage Stage changed from Unreviewed to Accepted - Version changed from 0.95 to newforms-admin Reassigning to xian since he's doing newforms-admin JS stuff. comment:7 Changed 8 years ago by Karen Tracey <kmtracey@…> - Keywords nfa-someday added Problem reported against old admin, should not block merge of newforms-admin. comment:8 Changed 8 years ago by garcia_marc comment:9 Changed 8 years ago by Karen Tracey <kmtracey@…> comment:10 Changed 8 years ago by Karen Tracey <kmtracey@…> comment:11 Changed 8 years ago by Alex Karen, dumb question, did you mean that it is confirmed working in IE7, or hasn't been checked? comment:12 Changed 8 years ago by Karen Tracey <kmtracey@…> Sorry, I see my comment could be interpreted either way. I did try IE7 and cannot recreate the problem there. It seems to affect only IE6. comment:13 Changed 5 years ago by lrekucki - Severity changed from normal to Normal - Type changed from defect to Bug comment:14 Changed 5 years ago by julien - UI/UX set comment:15 Changed 5 years ago by oinopion - Easy pickings unset - Resolution set to wontfix - Status changed from new to closed If it's IE6 only, then it's wontfix, as we don't support IE6 anymore in admin. screenshot example of the problem
https://code.djangoproject.com/ticket/3110
IaC during reading the article.

Infrastructure as bash history

Let us imagine that you are onboarding on a project and you hear something like: "We use the Infrastructure as Code approach". Unfortunately, sometimes it means Infrastructure as bash history or Documentation as bash history. It is almost a real situation; for example, Denis Lysenko described it in his talk How to replace infrastructure and stop worrying (RU). Denis shared the story of how to convert bash history into an upscale infrastructure.

Let us check the definition of source code: a text listing of commands to be compiled or assembled into an executable computer program. If we want, we can present Infrastructure as bash history as code. It is text and it is a list of commands; it describes how a server was configured. Moreover, it is:

- Reproducible: you can take the bash history, execute the commands and probably get a working infrastructure.
- Versioned: you know who logged in, when, and what was done.

Unfortunately, if you lose the server you will be able to do nothing, because there is no bash history any more: you lost it with the server. What is to be done?

Infrastructure as Code

On the one hand, this abnormal case, Infrastructure as bash history, can be presented as Infrastructure as Code; but on the other hand, if you want to do something more complex than a LAMP server, you have to manage, maintain and modify the code. Let us chat about parallels between Infrastructure as Code development and software development.

D.R.Y.

We were developing an SDS (software-defined storage). The SDS consists of a custom OS distribution, upscale servers and a lot of business logic; as a result, it has to use real hardware. There was a recurring sub-task: install the SDS. Before publishing a new release we had to install it and check it. At first glance it is a very simple task:

- SSH to the host and run a command.
- SCP a file.
- Modify a config.
- Run a service.
- ...
- PROFIT!
I believed that "Make CM, not bash" was a good point of view; however, there was no complex logic, so bash was a pretty good and reasonable choice. Time was ticking and we were facing different requests to create new installations in slightly different configurations. We were SSHing into installations, running the commands to install all the needed software, editing the configuration files with scripts, and finally configuring the SDS via its HTTP REST API. After all that, the installation was configured and working. This was pretty common practice, but there were a lot of bash scripts. Unfortunately, each script was like a little snowflake, depending on who had copy-pasted it. It was also a real pain when we were creating or recreating an installation.

I hope you have got the main idea: at this stage we had to constantly tweak the scripts' logic until the service was ok. There is a solution for that. It is D.R.Y.

There is the D.R.Y. (Don't Repeat Yourself) approach. The main idea is to reuse already existing code. It sounds extremely simple. In our case, D.R.Y. meant: split configs and scripts.

S.O.L.I.D. for CFM

The project was growing; as a result, we decided to use Ansible. There were reasons for that:

- Bash should not contain complex logic.
- We had some amount of expertise in Ansible.

There was an amount of business logic inside the Ansible code. There is an approach for organizing source code during the software development process. It is called S.O.L.I.D. From my point of view, we can re-use S.O.L.I.D. for Infrastructure as Code. Let me explain step by step.

The Single Responsibility Principle

A class should only have a single responsibility, that is, only changes to one part of the software's specification should be able to affect the specification of the class.

You should not create spaghetti code inside your infrastructure code. Your infrastructure should be made from simple, predictable bricks.
In other words, it might be a good idea to split an immense Ansible playbook into independent Ansible roles. It will be easier to maintain.

The Open-Closed Principle

Software entities… should be open for extension, but closed for modification.

In the beginning, we were deploying the SDS to virtual machines; a bit later we added deployment to bare-metal servers. It was as easy as pie for us, because we just added an implementation for the bare-metal-specific parts without modifying the SDS installation logic.

The Liskov Substitution Principle

Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.

Let us take a broader look. S.O.L.I.D. can be used in CFM in general; that was just not the right project to show it, so I would like to describe another one. It is an out-of-the-box enterprise solution; it supports different databases, application servers and integration interfaces with third-party systems. I am going to use this example for describing the rest of S.O.L.I.D.

For example, inside our infrastructure team there is an agreement: if you deploy the ibm java role or oracle java or openjdk, you will have an executable java binary. We need this because the top-level Ansible roles depend on it. It also allows us to swap the java implementation without modifying the application installation logic. Unfortunately, there is no syntax sugar for that in Ansible playbooks, so you must keep it in mind while developing Ansible roles.

The Interface Segregation Principle

Many client-specific interfaces are better than one general-purpose interface.

In the beginning, we were putting the application installation logic into a single playbook; we were trying to cover all cases and cutting edges. We faced the issue that it is hard to maintain, so we changed our approach. We understood that a client needs an interface from us (i.e. https at port 443) and we were able to combine our Ansible roles for each specific environment.
The Dependency Inversion Principle

One should "depend upon abstractions, [not] concretions."

- High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
- Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.

I would like to describe this principle via an anti-pattern:

- There was a customer with a private cloud.
- We were requesting VMs in the cloud.
- Our deployment logic depended on which hypervisor a VM was located on.

In other words, we were not able to reuse our IaC in another cloud, because the top-level deployment logic depended on the lower-level implementation. Please, don't do it.

Interaction

Infrastructure is not only code, it is also about interaction: code <-> DevOps, DevOps <-> DevOps, IaC <-> people.

Bus factor

Let us imagine there is a DevOps engineer, John. John knows everything about your infrastructure. If John got hit by a bus, what would happen? Unfortunately, it is almost a real case. Sometimes things happen. If it happens and you have not shared knowledge about the IaC and the infrastructure among your team members, you will face a lot of unpredictable and awkward consequences. There are some approaches for dealing with that. Let us chat about them.

Pair DevOpsing

It is like pair programming. In other words, there are two DevOps engineers and they use a single laptop/keyboard for configuring infrastructure: configuring a server, creating an Ansible role, etc. It sounds great; however, it did not work for us. There were some custom cases where it partially worked:

- Onboarding: a mentor and a new person get a real task from the backlog and work together, transferring knowledge from the mentor to the new person.
- Incident call: during troubleshooting, there is a group of engineers looking for a solution. The key point is that there is a person who leads the incident. That person shares their screen and ideas.
Other people follow along carefully, picking up bash tricks, mistakes, log-parsing techniques, etc.

Code Review

From my point of view, code review is one of the most efficient ways to share knowledge about your infrastructure inside a team. How does it work?

- There is a repository which contains your infrastructure description.
- Everyone makes their changes in a dedicated branch.
- During a merge request, you are able to review the delta of changes to your infrastructure.

The most interesting thing is that we rotated the reviewer. It means that every couple of days we elected a new reviewer, and that reviewer looked through all merge requests. As a result, theoretically, every person had to touch a new part of the infrastructure and gained average knowledge about our infrastructure in general.

Code Style

Time went on, and we sometimes argued during review because the reviewer and the committer might use different code styles: 2 spaces or 4, camelCase or snake_case. We addressed it, however, it was not a picnic:

- The first idea was to recommend using linters. Everyone had their own development environment: IDE, OS… it was tricky to sync and unify everything.
- The idea evolved into a Slack bot. After each commit, the bot checked the source code and pushed messages into Slack with a list of problems. Unfortunately, in the vast majority of cases, there were no changes after the messages.

Green Build Master

Next, the most painful step was to restrict commits into master for everyone: only via merge requests, and the tests have to pass. It is called Green Build Master. In other words, you are 100% sure that you can deploy your infrastructure from the master branch. It is a pretty common practice in software development:

- There is a repository which contains your infrastructure description.
- Everyone makes their changes in a dedicated branch.
- For each branch, we run tests.
- You are not able to merge into the master branch if tests are failing.
It was a tough decision, but as a result there was no more arguing about code style during review, and the amount of smelly code decreased.

IaC Testing

Besides code style checking, you are able to check that you can deploy or recreate your infrastructure in a sandbox. What is that for? It is a sophisticated question, and I would like to share a story instead of an answer. There was a custom auto-scaler for AWS written in PowerShell. The auto-scaler did not check the edge cases of its input parameters; as a result, it created tons of virtual machines, and the customer was unhappy. It is an awkward situation; fortunately, it is possible to catch such problems at the earliest stages. On the one hand, it is possible to test the script and the infrastructure, but on the other hand, you are increasing the amount of code and making the infrastructure more complex. However, the real reason under the hood is that you are putting your knowledge about the infrastructure into tests: you are describing how things should work together.

IaC Testing Pyramid

IaC Testing: Static Analysis

You could create the whole infrastructure from scratch for each commit, but usually there are some obstacles:

- The price is stratospheric.
- It requires a lot of time.

Fortunately, there are some tricks. You should have a lot of simple, rapid, primitive tests at the foundation of your pyramid.

Bash is tricky

Let us take a look at an extremely simple example. I would like to create a backup script:

- Get all files from the current directory.
- Copy the files into another directory with a modified name.

The first idea is:

    for i in * ; do
        cp $i /some/path/$i.bak
    done

Pretty good. However, what if a filename contains a space? We are clever guys; we use quotes:

    for i in * ; do
        cp "$i" "/some/path/$i.bak"
    done

Are we finished? Nope! What if the directory is empty? Globbing fails in this case:

    find . -type f -exec mv -v {} dst/{}.bak \;

Have we finished? Not yet… We forgot that a filename might contain a \n character:

    touch x
    mv x "$(printf "foo\nbar")"
    find . -type f -print0 | xargs -0 mv -t /path/to/target-dir

Static analysis tools

You can catch some issues from the previous example via Shellcheck. There are a lot of tools like that; they are called linters, and you can find the most suitable one for your IDE, stack, and environment.

IaC Testing: Unit Tests

As you can see, linters cannot catch everything; they can only warn about likely problems. If we continue the parallels between software development and Infrastructure as Code, we should mention unit tests. There are a lot of unit test frameworks like shunit, JUnit, RSpec, and pytest. But have you ever heard about unit tests for Ansible, Chef, Saltstack, or CFEngine? When we were talking about S.O.L.I.D. for CFM, I mentioned that our infrastructure should be made from simple bricks/modules. Now the time has come:

- Split the infrastructure into simple bricks/modules, e.g. Ansible roles.
- Create an environment, e.g. a Docker container or a VM.
- Apply one simple brick/module to the environment.
- Check whether everything is OK.
- ...
- PROFIT!

IaC Testing: Unit Testing tools

What is a test for CFM and your infrastructure? You can just run a script, or you can use a production-ready solution. Let us take a look at testinfra; I would like to check that the users test1 and test2 exist and that they are part of the sshusers group:

    def test_default_users(host):
        users = ['test1', 'test2']
        for login in users:
            assert host.user(login).exists
            assert 'sshusers' in host.user(login).groups

What is the best solution? There is no single answer to that question; however, I created a heat map and compared changes in these projects during 2018-2019.

IaC Testing frameworks

After that, you may face the question: how do you run it all together? On the one hand, you can do everything on your own if you have enough great engineers; on the other hand, you can use open-source, production-ready solutions. I created a heat map and compared changes in these projects during 2018-2019.

Molecule vs.
Testkitchen

In the beginning, we tried to test Ansible roles via Testkitchen inside Hyper-V:

- Create VMs.
- Apply Ansible roles.
- Run Inspec.

It took 40-70 minutes for 25-35 Ansible roles. That was too long for us. The next step was to use Jenkins/Docker/Ansible/Molecule. The idea is approximately the same:

- Lint Ansible playbooks.
- Lint Ansible roles.
- Run a Docker container.
- Apply Ansible roles.
- Run testinfra.
- Check idempotency.

Linting for 40 roles and testing for ten of them took about 15 minutes.

What is the best solution? On the one hand, I do not want to be the final authority, but on the other hand, I would like to share my point of view. There is no silver bullet; however, in the case of Ansible, Molecule is a more suitable solution than Testkitchen.

IaC Testing: Integration Tests

On the next level of the IaC testing pyramid, there are integration tests. Integration tests for infrastructure look like unit tests:

- Split the infrastructure into simple bricks/modules, e.g. Ansible roles.
- Create an environment, e.g. a Docker container or a VM.
- Apply a combination of simple bricks/modules to the environment.
- Check whether everything is OK.
- ...
- PROFIT!

In other words, during unit tests we check one simple module (e.g. an Ansible role, a Python script, an Ansible module, etc.) of the infrastructure, but in the case of integration tests we check the whole server configuration.

IaC Testing: End to End Tests

At the top of the IaC testing pyramid, there are end-to-end tests. In this case, we do not check a dedicated server, script, or module of our infrastructure; we check that the whole infrastructure works together properly. Unfortunately, there is no out-of-the-box solution for that, or at least I have not heard of one (please flag me if you know of any). Usually, people reinvent the wheel, because there is real demand for end-to-end tests for infrastructure. So I would like to share my experience; I hope it will be useful for somebody. First of all, I would like to describe the context.
It is an out-of-the-box enterprise solution; it supports different databases, application servers, and integration interfaces with third-party systems. Usually, our clients are immense enterprises with completely different environments. We have knowledge of the different environment combinations, and we store it as different docker-compose files. There is also a mapping between docker-compose files and tests; we store it as Jenkins jobs. This scheme had been working for quite a long period of time, until, during our OpenShift research, we tried to migrate it to OpenShift. We used approximately the same containers (D.R.Y. again) and changed only the surrounding environment. We continued researching and found APB (Ansible Playbook Bundle). The main idea is that you pack everything needed into a container and run the container inside OpenShift. It means that you have a reproducible and testable solution. Everything was fine until we faced one more issue: we had to maintain a heterogeneous infrastructure for testing environments. As a result, we store our knowledge of how to create the infrastructure and run the tests in Jenkins jobs.

Conclusion

Infrastructure as Code is a combination of:

- Code.
- People interaction.
- Infrastructure testing.

- This is a cross-post from my personal blog.
- Lessons Learned From Writing Over 300,000 Lines of Infrastructure Code (& text version).
- Integrating Infrastructure as Code into a Continuous Delivery Pipeline.

Source:
https://techplanet.today/post/lessons-learned-from-testiting-over-200-000-lines-of-infrastructure-code
Why is Kubernetes getting so popular?

At the time of this article, Kubernetes is about six years old, and over the last two years it has risen in popularity to consistently be one of the most loved platforms. This year, it comes in as the number three most loved platform. If you haven’t heard about Kubernetes yet, it’s a platform that allows you to run and orchestrate container workloads. Containers began as a Linux kernel process isolation construct, encompassing cgroups from 2007 and namespaces from 2002. Containers became more of a thing when LXC became available in 2008, and Google developed its own internal "run everything in containers" mechanism called Borg. Fast forward to 2013, and Docker was released and completely popularized containers for the masses. At the time, Mesos was the primary tool for orchestrating containers; however, it wasn’t as widely adopted. Kubernetes was released in 2015 and quickly became the de facto container orchestration standard. To try to understand the popularity of Kubernetes, let’s consider some questions. When was the last time developers could agree on the way to deploy production applications? How many developers do you know who run tools as is, out of the box? How many cloud operations engineers today don’t understand how applications work? We’ll explore the answers in this article.

Infrastructure as YAML

Coming from the world of Puppet and Chef, one of the big shifts with Kubernetes has been the move from infrastructure as code towards infrastructure as data, specifically YAML. All the resources in Kubernetes, which include Pods, Configurations, Deployments, Volumes, etc., can simply be expressed in a YAML file.
For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: site
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80

This representation makes it easier for DevOps or site reliability engineers to fully express their workloads without the need to write code in a programming language like Python, Ruby, or JavaScript. Other benefits of having your infrastructure as data include:

- GitOps or Git Operations Version Control. With this approach, you can keep all your Kubernetes YAML files under git repositories, which allows you to know precisely when a change was made, who made the change, and what exactly changed. This leads to more transparency across the organization and improves efficiency by avoiding ambiguity as to where members need to go to find what they need. At the same time, it can make it easier to automatically make changes to Kubernetes resources by just merging a pull request.
- Scalability. Having resources defined as YAML makes it super easy for cluster operators to change one or two numbers in a Kubernetes resource to change the scaling behavior. Kubernetes has Horizontal Pod Autoscalers to help you identify a minimum and a maximum number of pods a specific deployment would need to have to be able to handle low and high traffic times. For example, if you are running a deployment that may need more capacity because traffic suddenly increases, you could change maxReplicas from 10 to a higher value.
- Security and Controls. YAML is a great way to validate what and how things get deployed in Kubernetes. For example, one of the significant concerns when it comes to security is whether your workloads are running as a non-root user. We can make use of tools like conftest, a YAML/JSON validator, together with the Open Policy Agent, a policy validator, to check that the SecurityContext of your workloads doesn’t allow a container to run as root.
For that, users can use a simple Open Policy Agent rego policy like this:

    package main

    deny[msg] {
      input.kind = "Deployment"
      not input.spec.template.spec.securityContext.runAsNonRoot = true
      msg = "Containers must not run as root"
    }

- Cloud Provider Integrations. One of the major trends in the tech industry is to run workloads in the public cloud providers. With the help of the cloud-provider component, Kubernetes allows every cluster to integrate with the cloud provider it’s running on. For example, if a user is running an application in Kubernetes in AWS and wants that application to be accessible through a service, the cloud provider helps automatically create a LoadBalancer service that will automatically provision an Amazon Elastic Load Balancer to forward the traffic to the application pods.

Extensibility

Kubernetes is very extensible, and developers love that. There is a set of existing resources like Pods, Deployments, StatefulSets, Secrets, ConfigMaps, etc. However, users and developers can add more resources in the form of Custom Resource Definitions. For example, if we’d like to define a CronTab resource, we could do it with a Custom Resource Definition. We can then create a CronTab resource with something like this:

    apiVersion: "my.org/v1"
    kind: CronTab
    metadata:
      name: my-cron-object
    spec:
      cronSpec: "* * * * */5"
      image: my-cron-image
      replicas: 5

Another form of Kubernetes extensibility is the ability for developers to write their own Operators, a specific process running in a Kubernetes cluster that follows the control loop pattern. An Operator allows users to automate the management of CRDs (custom resource definitions) by talking to the Kubernetes API. The community has several tools that allow developers to create their own Operators. One of those tools is the Operator Framework and its Operator SDK. The SDK provides a skeleton for developers to get started creating an operator very quickly.
For example, you can get started on its command line with something like this:

    $ operator-sdk new my-operator --repo github.com/myuser/my-operator

Which creates the whole boilerplate for your operator, including APIs and a controller, like this:

    $ operator-sdk add api --api-version=myapp.com/v1alpha1 --kind=MyAppService
    $ operator-sdk add controller --api-version=myapp.com/v1alpha1 --kind=MyAppService

And finally, build and push the operator to your container registry:

    $ operator-sdk build your.container.registry/youruser/myapp-operator

If developers need to have even more control, they can modify the boilerplate code in the Golang files. For example, to modify the specifics of the controller, they can make changes to the controller.go file. Another project, KUDO, allows you to create operators by just using declarative YAML files. For example, an operator for Apache Kafka would be defined with something like this, and it allows users to install a Kafka cluster on top of Kubernetes with a couple of commands:

    $ kubectl kudo install zookeeper
    $ kubectl kudo install kafka

Then tune it as needed.

Innovation

Over the last few years, Kubernetes has had major releases every three or four months, which means that every year there are three or four major releases. The number of new features being introduced hasn’t slowed, evidenced by over 30 different additions and changes in its last release. Furthermore, the contributions don’t show signs of slowing down even during these difficult times, as indicated by the Kubernetes project's GitHub activity. The new features allow cluster operators more flexibility when running a variety of different workloads. Software engineers also love having more control when deploying their applications directly to production environments.

Community

Another big aspect of Kubernetes' popularity is its strong community. For starters, Kubernetes was donated to a vendor-neutral home in 2015 as it hit version 1.0: the Cloud Native Computing Foundation.
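Before moving on, the control loop pattern that operators follow can be sketched in plain Python. This is an illustrative model only: the dict-based "resources" and the reconcile logic are invented for this sketch and are not the Operator SDK API, which in reality talks to the Kubernetes API server.

```python
# Illustrative sketch of an operator's control loop: compare the
# desired state (a custom resource's spec) with the observed state,
# and emit actions until the two converge. All names are invented.
def reconcile(desired: dict, observed: dict) -> list:
    actions = []
    want = desired.get("replicas", 0)
    have = observed.get("replicas", 0)
    if want > have:
        actions += [("create-pod", desired["image"])] * (want - have)
    elif want < have:
        actions += [("delete-pod", desired["image"])] * (have - want)
    return actions  # an empty list means the state has converged

def control_loop(desired: dict, observed: dict) -> dict:
    # Each iteration applies the reconcile decisions to the observed
    # state; a real operator would call the Kubernetes API instead.
    while True:
        actions = reconcile(desired, observed)
        if not actions:
            return observed
        for action, _image in actions:
            delta = 1 if action == "create-pod" else -1
            observed["replicas"] = observed.get("replicas", 0) + delta

spec = {"image": "my-cron-image", "replicas": 5}
status = control_loop(spec, {"replicas": 2})
```

The loop keeps nudging the observed state toward the spec, which is the same idea that makes Kubernetes resources declarative: you state what you want, and a controller works out how to get there.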
There is also a wide range of community SIGs (special interest groups) that target different areas in Kubernetes as the project moves forward. They continuously add new features and make it even more user friendly. The Cloud Native Foundation also organizes CloudNativeCon/KubeCon, which, as of this writing, is the largest ever open-source event in the world. The event, normally held up to three times a year, gathers thousands of technologists and professionals who want to improve Kubernetes and its ecosystem, as well as make use of some of the new features released every three months. Furthermore, the Cloud Native Foundation has a Technical Oversight Committee that, together with its SIGs, looks at the foundation's new and existing projects in the cloud-native ecosystem. Most of the projects help enhance the value proposition of Kubernetes. Finally, I believe that Kubernetes would not have the success that it does without the conscious effort by the community to be inclusive to each other and welcoming to any newcomers.

Future

One of the main challenges developers face in the future is how to focus more on the details of the code rather than the infrastructure that code runs on. For that, serverless is emerging as one of the leading architectural paradigms to address that challenge. There are already very advanced frameworks such as Knative and OpenFaaS that use Kubernetes to abstract the infrastructure from the developer. We’ve shown a brief peek at Kubernetes in this article, but this is just the tip of the iceberg. There are many more resources, features, and configurations users can leverage. We will continue to see new open-source projects and technologies that enhance or evolve Kubernetes, and as we mentioned, the contributions and the community aren’t going anywhere.

Comments

Good article – accurate.
One thing to note is that many seem to believe K’s provides higher "availability" of a system. This simply is not true in that, barring applications crashing randomly out of memory (bugs)… a system (globally) does not "discover" working K-clusters without more work. That is, K is not going to solve:

– a client cannot access the cluster (you need many clusters, "deep" health checks, DNS resolution support, etc.);
– unit-of-order (UOO), at-least-once delivery, and other transaction contracts for a system of many clusters across regions (earlier point).

Otherwise, it does nicely for local cluster management and can be pushed to provide heuristic auto-scaling (though not without more magic).

I was excited when I first met Kubernetes, but today I just think it is very complex, difficult-to-manage infrastructure, and kind of obsolete. To be honest, I am working right now with serverless and AWS, and the whole of Kubernetes now looks like crap to me. Let me name a few things. Operators – you get something not so reliable and not so good if you are looking at Operators for database clusters compared to Amazon RDS. It is very complex and difficult to set up; in Amazon RDS, you set up and manage an RDS cluster in just 3 clicks. Development. Let me tell you one thing: serverless is the future. I just do not see many reasons why I should take care of container images. Why should I care about the OS? Let me tell you the next thing – networking. In Kubernetes, it is extremely complex. If you use AWS and serverless, you just define your VPC and gateway, and that's it. I just do not think that Kubernetes is the future anymore. You will end up paying big bucks for managing your infrastructure on Kubernetes anyway; you will end up with obsolete Docker containers and very complex setups. Kubernetes is simply, definitely, not the future.
You can build a castle with it, but you still have to manage the LEGO bricks yourself, and this will not only slow you down a lot in the future (you are going to find out that 90% of the work is just supporting the current infrastructure, not implementing user requests), but it will also be costly. The reason why serverless is not used more is that with Kubernetes you can take old crappy code and crappy development and give it a fresh coat of paint to make it look good. You can take 20-year-old code and package it in Kubernetes. That's the greatest thing about Kubernetes. You cannot do that with serverless. Serverless is a completely new paradigm, with no backwards compatibility. That's the largest problem with Kubernetes: it is an old paradigm painted with a fresh new color.

I agree with your comment. This is also why I think serverless is much cheaper for new applications.

"Server-less" just means you don't run the server. There are many negatives, such as extreme vendor lock-in and more "magic" in the hosting layer. But there is still a host, running on a server, that requires a control structure. To put it more bluntly, these toys aren't meant for developers like you.

Tom, you've somehow taken Martin's comment as a personal attack. No need to discourage his ability as a developer. And if you are going to play that game, a sharp developer will tell you to keep a simple architecture. "For every complex problem there is an answer that is clear, simple, and wrong." – H. L. Mencken

So disparaging as it might be, this comment has some merit. I started programming when I was 12. That was, oh, over 30 years ago, and I STILL program every day, writing applications for Fortune 500 companies, startups, etc. Suffice to say, what I have learned is that it is so easy to just set up a web page and have no idea how it actually works (HTTP et al.). I'm a big AWS fan and think K's is overblown and agree – overly complex for what it does – but this whole 'serverless' stuff is nonsense for anything but the most trivial of applications.
These are some of my opinions about this, would love to chat more!

– Serverless is a different paradigm: yes.
– Kubernetes can get quite complex: yes.
– AWS has over 100 people working on Lambda: yes. This hides the complexity, but it's no simple task.
– AWS has hundreds of engineers working on other services to support Lambda (S3, SQS, API Gateway, Fargate, etc.): yes.
– Lambda is cheaper: it depends. If, for example, you are serving millions of API requests 24x7, spinning up a Lambda function can get even more expensive. If you are doing a few one-off asynchronous operations, it can save a lot of money since it's pay as you go.

All of these technologies have their place. I never understand why so many conclude that technology-A is obsolete and technology-B is the new thing that will completely eliminate tech-A from existence. Decades of technology innovation should have taught us that this is not generally how it plays out. Mainframes still exist, and still have a solid (if more narrow) use case. On-prem virtualization still has a place. So does IaaS. So does PaaS. So do containers. So does orchestration. Server-less also has a place. Many of these technologies will continue to co-exist for years, until one of the other technologies can do the overlapping jobs more easily or more cost-effectively than some of the others. (Or better marketing comes into play for one tech, or the right org jumps on the bandwagon of a specific tech.) "Old" is not automatically bad. "New" is not automatically good. It is best to ensure that you are using technology that makes sense for your business needs, without closing your eyes to everything else that doesn't fit those needs right now, but might in the near future. It never makes sense to turn technology into religion. Today's new is tomorrow's old. Ask the "mini computer" folks. -ASB
Server-less IS a software engineering concept: Kubebuilder() is worth mentioning for creating operators. The link to the insights survey should be You absolutely miss the point that editing YAML-files is absolutely cumbersome. One never knows whether a value turns into a string (1.1 vs. 1.1.1) or where this snippet was, you came across last week. Managing a sample project with two or 17 pods that run a simple container per pod are easy to maintain. Make it an Istio-config in a mesh of 100+ pods from more than 10 teams with about 5 programming languages and sufficient scripting/ node/ edge computing/ static content and you’re up and running in >3months. This article is about how Kubernetes is where it is right now. Yes, YAML is a pain point that many have realized, it can be cumbersome when implemented at scale. It’s unclear where the industry will go forward but there are a few projects that are trying to address some of the problems: – – – – etc I would love to chat more! The reason is you write such stories and start to believe in yourself. Oh no now even others write about it too, we must be on something. We don’t use it as it wouldn’t improve our business no need for it. This is the only sensible approach. Regrettably, it’s very much the exception. Please fix the the broken link to dev.insights I have a problem: The link above is not accessible. Ooof. Sorry about that, fixed. A nice article, but for anyone arriving here thinking they should immediately get on the K-bandwagon, there are opposing articles available, and before moving to kubernetes, one should definitely consider security (which Kubernetes has had significant problems with in the past). Also, consider cost – in most cloud providers (for example), running a Kubernetes cluster is pretty expensive – and far more expensive than running a handful of containers in their container services (eg. ECS, App Engine, etc). 
You still get to write infrastructure as code there – it's just that you're writing Terraform instead of Kubernetes. It won't earn you as much "cred" when you're geeking out with your pals, but you'll have more money to buy the beers. IMHO, Kubernetes is the "big data" of containers/infrastructure – it's pretty good if you know what you're doing. However, it turns out that most people using it don't really need it and could have used something else for a fraction of the cost. Caveat Architect 😉

Kubernetes has, hand in hand with Docker, become the latest fad – companies want Kubernetes because "KuBErnETeS gOoD". Azure & AWS have been doing the same thing (but better) since long before this pile of crap became the hype, and will be doing it long after it's as dead as Google Wave. I only hope that happens soon.

Why are you so pessimistic, dude? Why do you think it's a fad? Or are you just sitting in your corner, where the world stays the same as you know it and nothing changes, hoping your knowledge will still be valuable 10 years from now? Things are changing and evolving; that's tech. Azure was created far after Kubernetes, and they are doing a good job in some areas, but not all of them. AWS has very good products, but again, "very good" products, not a good platform. I'm sorry, I used AWS and Azure; they are crap. Kubernetes will stay around until, and probably well after, you are gone.

Shows how clueless you are! Kubernetes is the future!

In the beginning, being on Google+ and exposed to the daily hype flow of Google+, I was of course interested in k8s. I waited, set up my own 1-node cluster, tried to go deeper, but it's just too complex. I'm not stupid, but I'm no IQ superstar either. The only people who would actually benefit from k8s are people like Kelsey Hightower, aka rockstar devs. His job is promoting k8s. When he does it, everything looks simple and easy.
When you try that, you're met with landmine after landmine, stuff that has never been mentioned, like requiring shared storage. k8s is an absurd waste of resources. Run your own database installation for every project? k8s is for rich people, for big companies. For people who can dedicate a team to just k8s. I'm not one of those. For me, even Docker is too much additional effort. I'm also too poor for AWS, GCE, Azure, and other public clouds. I have 2 servers with 32GB RAM each that cost 100€/month combined, and each has 1gbit/s flat traffic. If I have a project that takes off, I put it on a separate server in a VM so I can move hosts every few years. You guessed it, I have only 1 successful project that justifies paying an additional 50€/month for a server. Sometimes I wish I worked in one of those rich companies so I could be a hipster too. But I'm not, and I need to be cost effective and time effective. k8s is neither.

Sorry, Darko (and everyone else), I don't know if I can ask here, and it certainly has nothing to do with the Kubernetes subject, but… you said that you "have 2 servers with 32GB RAM each that cost 100€/month combined, and each has 1gbit/s flat traffic". I'd really need something like that, since I'm paying more or less the same for a single server with worse specs. Can you tell me where you have those servers, please? Thank you very much.

Your comment is a reflection of your choice of career. Comparing simple servers to k8s is like comparing a car to a dog; yeah, maybe they have something in common (they can both move), but they are only remotely similar to each other. There is a place and a tool for each task. If k8s is not for you, then fine, but coming out with such a useless comment toward people, companies, and communities working on delivering something bigger than what a single person could comprehend is remarkable.
I wish we lived in a world where every person in tech could just shut the f**** up, do their job, and stop lashing out at other people's efforts.

Technologies like Rancher, GKE, AKS, etc. are more likely to become mainstream, hiding the complexity of the control plane and even various data plane quirks, making it all more developer friendly. Kubernetes is still the best way to orchestrate hundreds of microservices. Combine it with managed Kubernetes from cloud providers and you have a technology which is here to stay for some time, IMHO. My 2 cents.

A word of blogger advice: why not balance information like this with the drawbacks and pain points of using Kubernetes? Why not give some context on where using Kubernetes is useful versus where it's overkill? This is overkill for a lot of organizations, and fanboy, hand-waving literature like this will confuse the less critical developers into thinking they need to use this.

I think Kubernetes is a great tool that permits most people and enterprises (very small to very large) to have a high-availability infrastructure at very low cost (the cost depends on many factors, not only the software side). But k8s implies dedicated know-how: a DevOps engineer who spends some time configuring and maintaining this infrastructure. Anyone can use k8s on AWS, Azure, Google… and maybe many others, including VPS or bare metal. It all depends on the specific project and the long-term vision of the enterprise, project, and so on. In a previous project, some years ago, I replaced an entire infrastructure with Kubernetes, with 70 microservices including databases (not so many, OK… 🙂 ), with many benefits: scalability, observability, real-time infra changes, cost reduction, more control, and far fewer pains. The other side is that all developers and sysadmins (DevOps) must change their point of view: all services can fail at any time and must be replaceable in a few seconds. For me: everything can fail, and we must know that.
Serverless, as far as I know, is very useful for specific tasks, but any serverless system is very specific; k8s aims to be a standard for infrastructures. If you have a VPS, what happens if a service is under a DDoS attack? If you use serverless, how can you install and manage a CMS? Any technology has pros and cons, and the choice depends on many factors: people, passions, know-how 🙂

Lots of people get heated about Kubernetes taking over the spotlight; honestly, it's not taking over the spotlight. Serverless computing will still remain a good solution to start projects in the cloud; however, it suits only very specific workloads with standard dependencies. Containers and Kubernetes are a lot more flexible in terms of workloads: not only can you run your APIs in there, but you can also crunch data with Dask or Spark. You can also host your front-end servers, as well as your backend. That is not to say you have to replace everything you have on your stack; you can continue to use hosted databases and serverless APIs. Kubernetes is not the future for everything, but it is certainly the future of hybrid cloud computing.

Kubernetes is not my solution of choice for deployments. It's great for monolithic, permanent applications, but for serverless/lambdas and atomic/functional microservices I have yet to encounter an application where it resolves more complexity than it adds.
https://stackoverflow.blog/2020/05/29/why-kubernetes-getting-so-popular/
#1 : Way to download images rather than having to screenshot #2 : Code with syntax hilighting #3 : I know I'm of by one, but a coding font maybe?5 - - - looking at my uncle's webpage because he wants to switch hosts. The logo has a white background, we want to change that. Uncle: "It's not supposed to look like this" Me: "the jpg file type is not transparent, you have to make something like a png." U: *pulls up word* "hmm... I don't have that font on this laptop..." M: *slowly loses sanity*1 - The default font for the Bulma CSS framework is triggering my OCD. JUST WHY ARE THEY NOT ALIGNED FFS?!11 - I feel sorry for myself as i thought Ariana Grande was a font *shame* *shame* *shame* *voices from game of thrones in my head now* - "Ok can you make the background color black and the font green?" "Sure" "Okay now a 3D WebGL game but build from scratch" - - Just received a client who wants an updated wordpress theme, Thank God he came to me. I feel sorry that he had to run a site with this font. Going to the site for the first time gave me goosebumps.12 - - A german blogger i occasionally read wrote something about finding the correct programming font for the personal liking and linked this: Seems kinda fun. I am currently using Fira Code but "Cousin" looks kinda interesting. Wanted to share3 - - - - - - <<< prank victim today. Swapped font so it would not display special characters and changed characters in my unit tests here and there... Took me 40mins with headphones on, before the thought that I'm not at fault occurred... Once you forget to lock your machine when going lunch..2 - For people still struggling to find the perfect font for their favorite editor, look at what I found: Test them before introducing bloat to your font directory.6 -. - I can't tell what bothers me the most about this ad... The font size in the IDE, the random mix of unrelated computer equipment, the amazing opportunity to "work for free", or the mirrored displays13 - Fuck you, discord. 
Fuck you for not using a monospaced font for the code block on mobile. Renders my beautiful function to align stuff useless on mobile...7 - - Ubuntu mono font is such a delight to use as a code font. Changed all my IDE / Atom / Notepad++ fonts to use that as default now. :-) - - So after two hours of debugging I get to know that Chrome doesn't differentiate between font-weight 100-500 unless on a Mac, and IE does, but IE doesn't support the <picture > tag 😶10 - CSS quick maffs: Using viewport units to define font size but sometimes it's too small? Instead of font-size: 10vw; use font-size: calc(10vw + 20px); This will make sure that font size is AT LEAST 20 px no matter the viewport width. Treat the resulting font size like a function of viewport width and feel free to experiment with it. With calc in that case you can achieve the best typeface responsiveness possible.13 - - *Designs front-end sends it to my boss* Boss: Looks good. No changes needed. *Hosts the design* Boss: Ah, well these icons need to be different maybe and this font is too boring, try something else. *Cries internally*4 - You will realize that your life is fucked up when you write ' <i class="fa fa-laptop"></i>' in devRant instead of using emoji(💻).3 - - - - - - What the fuck happened to laravel docs, what fuckwad thought the only docs across the entire internet that are properly readable, need its shit fucked up and made into borderless, bold fontweight, shit font dogshit.13 - - FFFUUUU!!! Damn Windows april update! After a LOT of problems with drivers, bluetooth, etc. it even partially corrupted the font of a program, the console shows a list of data from a medical image database so i had a micro heart attack when i first saw this tinking the database was corrupted (i was checking out a problem)! I bet it's the "smart" font re-sizing!!!5 - FRIDAY MADNESS: As I was so busy coding, one colleague was taking a break and distracts me as he's done with his task. As he approached, I snobbed. 
Him: Dude, did you know that there's a generator for all the images in sprite? Me: really? How? Him: spritegen.website-performance.org. What's cool about it is that the html and css are already generated just like in font awesome. For example, that i tag... Me: cool. I wonder dude, why would they use i tag when it makes the text italicize, right? Him: right. Probably because its used for icons also because icons starts with letter i. Me: LOL. Him: LOL. - - - - - Futurism.com Please fix your fonts craze, on top of all the mixing, you have the hardest in-article font to read of any I can recall right now2 - - I HATE NETBEANS. Why the fuck is it's interface so out of date. I just had to increase its font size and took me half an hour just to find the option to do that.20 - just arriving at devRant and seeing you guys use one of my favorite font ... let's create an account ;-)5 - I want a font-awesome icon for devRant. Would be nice for mine and others' sites. Any designers around to submit as a suggestion for next version (or for other icon libraries)?5 - - Downloaded SQL assigment Scroll fown to find the tables and the data A table with the worst font to use1 - - Kinda curious if there are any devs who don't use a monospace font when coding and what drove you to do such a horrible inhumane thing?7 - developing add-ons for Casio calculators is definitely the best experience. No syntax or error highlighting. Average failed builds between successful builds: 12 🤔 I won't mention the default font for the code editors in there is Arial... - - - At first I wasn't crazy about it,. but I think I'm hooked now. I am a big fan of fira mono. I switched my xterm to use it, and it's really distinct and helpful. I highly recommend giving it a try. - - WHAT THE FUCK???? UX level: 9000 I do like dark themes in many places. But IMO this is just too much And the font..? 
Is this registration for witch-hunt or satanists' party?17 - - When I signed up for the technical college and heard that we would have media tech classes, I was expecting frontend stuff like HTML, CSS and JS, not how to make folders and change the font in notepad. Fuck.1 - A facebook cover photo inspired by this rant: !rant I used this font:... and finally, I used this psd (gimp should be able to use it: - Has anyone here paid for a font? I'm thinking about dropping $200.00 for the Operator Mono font, I use Fira Code but that cursive typeface is SOOO FANCY.9 - Burn I hell whoever designed this font. I just spend 20 minutes trying to figure out what special thing this for-loop does if it's just from 0 - 12 - - I'm looking for a kind of "lorem ipsum" but not for text, for code. Some kind of random code generator with configurable language to test code rendering, compare programming fonts, etc. Any suggestions?9 - - - *slamms door open* *screams as loud as he can* "FREE FONT DOWNLOAD" Wait what? *screaming even louder* "FREE SATORI SANS FONT DOWNLOAD IT BELOW" Wtf stop screaming.1 - I was wondering why it had a duck when I use python, was thinking maybe it is a font issue .. which I ignored for months. Today I say the tongue.2 - - Google's new Material design, with more curvy elements with broad borders, woth cocky font ...... is UGLY as FUCK 12 year kid at work11 - Not fired, but shot by my college, if I create a ticket that our software-ui isn't rendered correctly with font size 721 - - Anyone have the devrant font file .ttf Is it open to the public? I saw one in github but i cannot believe whether it os legit or fake1 - - When the WYSIWYG editor needs to go back to school for coding. <span style="font-weight: bold;"><br></span> How is that even useful!?1 - Apparently did Microsoft released their own font called Cascadie Code... 
I am still pretty keen on Fira Code, although i see myself browsing this page upon occassion - Schrodingers font size: When you follow the directions given in the style guide, but it still doesn't match the mockups. The chosen font size can be considered both wrong and right until examined by a member of the design team. - "I want you to make a font, it needs to be heavy, but not too heavy. Like it should be able to float on water. So, bold enough to get a person's attention but not screaming" ...ok2 - - - - - - - So, apparently, content entry is front end development. It's all right there on our old site, just copy and paste it ... Complete with millions of annoying span and font tags you used.2 - Had a teacher that taught the intro to programming course by youtube videos. He wanted us to build the same app he was making in the videos as the assignment. He kept his code snippets private and expected us all to type it all out. He was on a 1080p resolution desktop... but his videos were 360p resolution with 8pt font..... - I love italics in combination with font ligatures! Looks so great! What do you think? Italics yay or nay?16 - New toy for frontend devs: OpenType 1.8 Variable Fonts. 1 font file to rule them all. Manipulate on the fly fluidly the font weight via css and javascript. - - - Down with Helvetica code blocks in devRant! When you write something inside `these quotes` in Telegram, it gets rendered in a monospace font.1 - What's your favorite terminal font? I'm on the lookout. I've gone through Ubuntu mono, fira code and fira mono, and I'm currently on jetbrains mono. They're all lovely, but I know there's a universe of fonts out there, and I'd like to know what others are using.19 - - - I decided to try a new mono font in my editor, this is a relatively new font called IBM Plex. I can hear the sounds of a 1401 crunching through the punch cards while the printing out curlies that scream THIS IS SERIOUS BUSINESS. 
Mmmmm....I like it.6 - Developing an app in Unity, gotta add some icons, my boss tells me unironically: - "hey, use font-awesome!" Yeah, right, like I can use HTML tags into unity or go check the specific code for the specific symbol, are you out of your mind?7 - Just checked my college's website and every thesis has to use times new roman. But thats a proprietary font?? I mean they do provide student licenses for windows but why the fuck is using a proprietary font a requirement?11 - I must be some kind of retard to think that a fallback font would actually handle the characters not handled by the previous fonts. I hate configuring fonts so fucking much4 - - Funny how fucken emojis, which came into applications (web or even native) by completely overusing them on mobile, don't really work on mobile.1 - - - Hey guys, this isn't a rant. I just really want to know what is the font style used in devRant's logo?2 - - - Big military company is using their custom font for everything. Is there a way to find documents that use a specific font? All hail to the corporate identity!4 - FontAwesome just made my day... PiedPiper from Silicon Valley is listed as one of the websites that uses their tech! + PiedPiper is fucking awesome and so is their website ()2 - !rant Thought you guys might be interested in this new-ish (compared to say... Futura) programming font, Iosevka. It's configurable and can be compiled from source but also has pretty good defaults. Really enjoyed using it these past few weeks.2 - - - - - Installing Fedora Workstation as a dual booting system Found this, only me thinks that's really close font to Comfortaa? (That used on devRant)2 - I had a developer put the css font declaration on every css class, instead of the body... 
The site used just one font 🤔😂1 - Found an article where two fonts were combined to achieve an awesome looking effect for vscode, The fonts were Fira Code + Operator Mono Tried to download Operator Mono but...14 - Surprised noone has yet preached the amazingness of this font here. - You know you have been coding with CSS Preprocessor for far too long when you typed this in regular stylesheet and wondering why it doesn't work... #container { .wrapper { text-align: center; } } Why the font doesn't align to center!!! @&$#+%*^2 - - - When you can't decide on a font I'm using Zooper widgets, tasker and my Python assistant. I want a clean looking font, but I can't decide1 -.4 - - !rant, just asking the wise devRant community for advice: What is a good laptop that works well with linux? Thanks in advance :)11 - - - - I love unicode-table.com for what it does, but this does not seem right... - So i'm making a menu for my friend. He shows me a menu he made on his iPad, all in Chalkboard SE (identical to Comic Sans), lined up using tabs and spaces, and asked for the same font. I'm not joking.2 - What terminal font and om-my-zsh theme are you using? I'm using Inconsolata font and bullet-train theme 😎3 - - designer sent over a mockup that uses Illustrator's missing font color as button/brand color. why??3 - - - I just had to spend 10 minutes of my life going through 674 lines of HTML code and deleting every font tag, with there being at least one font tag on every line.2 - - - Two identical websites. Both have identical files, settings, and contents. Both have identical style.css files. One has H1-H6 headings that display in the "Rye" font as I've specified. The other's "Rye" font is completely AWOL. I'm just getting the site default. But the sites are exact clones of each other?!?!?!5 - - Coding font of choice? I want to use Inconsolata but the warm embrace of Menlo is too much to resist. Oh, and maybe I should clear out some fonts...6 -. 
- - I can't find a good theme for my jetbrains ides. Can you please recommend some color scheme, font or theme?4 - Any programming font suggestions? I want to use Fira Code for its ligature but Sublime don't have support for ligature, I tried it working in VSCode but it's slow af compared to Sublime11 - Sometimes I just can't be arsed to write static_cast<> () and go crazy and use a c cast. What a risk taker I am.1 - Holy shit font rendering is a pain in the ass.. Why are there no proper guides for harfbuzz and pango out there? - - What's your favorite monospace font for use in a terminal emulator? I enjoy Fira Mono, myself, but if you're using something you like, I would love to know what it is and why you like it.7 - Why is it that everything looks so ugly in Ubuntu? By everything, I mean the IDEs (Eclipse/Intellij), editors (sublime/vs code) and even the web pages. They look more clean and pleasing in Windows or Mac. Is there a extension or plugin that'll make things look "pleasing"? Sure, I can edit the font to be anything I want in vs code, but it is only for the editor. The sidebar and the menu still is in default system font (I don't like Ubuntu font)4 - - Last work day before the new year and I was trying to make the manager understand that I couldn't say the max number of characters of a line because the font wasn't monospaced 😓1 - How to do SEO in the easy way On a white background write the keywords many times and make it color: white Use a tiny font-size and make the text unselectable You're welcome1 - Loving FiraCode! Amazing how a simple editor font change can make a huge difference in the programming experience.1 - - - Is there good monospaced font for windows? specially android studio, i don't know who the hell is choose this shits for MS.. MS your fonts are sucks, believe me : - - I so f#!ing hate how "font-weight: bold" looks on mozilla (the bottom one) compare to the chrome. Chrome looks so modern and elegant >< . 
Or is this some compatibility properties that i got to add ?6 - - Want a simple but terrible annoying prank? Change the keyboard map from UTF8 to ASCII or vice versa and set the system font to something funky like a Greek or Cyrillic variant... :)1 - Please, before exporting anything in whatever editor you use, check if it is in UTF-8. Today I didn't knew why my new font wasn't working in certain places and I later discovered that more than 9000 characters were replaced by the replacement character... - - - - Asking for a friend. Anyone here know how to get code syntax highlighting in Photoshop? Screenshot and removing the background doesn’t help. It screws up the font of the code. 🙏🏻4 - I have an idea to request an icon to FontAwesome project then I went to their github project and found this:... It's over a year, hope FA team apply the update - It's just really unexpected for me, but I'm about to uninstall mx player. Font catch loading take toooo long and there is no way to solve it. Goodbye MX, you were a good player and you are not any more.7 - - - - Where has the Fira code font been all my life? It's epic and free. Nearly spent £150 on the operator font - Trying to improve my console experience with Windows, I had antergos with Sindragosa in background and blue console font before, now it's time for Doomguy + yellow font and I like it :P2 - - - ! Rant In our office I'm the only one who installed plugins on my sublime text and make sure i have a great font, nice theme. All of them just plain stock sublime text.4 - - What are your favorite fonts for your system? I use SFNS Display and Menlo for the terminal. Whats your flavor?4 - Has anybody ideas how to convince my friends that Comic Sans is bad? 
They are all using it on their phones!!!1 - (!rant && question) For the front-end devs out there, do you guys prefer using font-weight to change the weight of the font or do you guys prefer using separate names (Font Name Black, Font Name Bold, and so on)2 - - I really wanted font ligatures so I took the plunge and ditched Consolas for Iosevka. I didn't think I could love another font, but oh my! - We had a 65y old teacher who was mathematic, she didnt even know how to ctrl f, or to make the font of code bigger. Context: 1y software engineering bachelors degree. - - - That Moment when you write something in an editor that provides a font where some stuff lookes almost the same. When col1 becomes coll :|6 - I was using using mate, today I switched to antergos, the font does not look good as it did in ubuntu. Anything I am missing... - Does anyone know of a font, or something, where i can get brand logo icons in their original colors? Because i dont want to throw a whole folder of icons in my projects manually every time5 - Hey devrant, any chance we could get a setting for monospace font, or would that break too many things?2 - - - - so here's a rant/question So I'm having an issue with vertical-metrics in a font I'm using which results in mac aligning it in the middle and windows aligning it at the bottom. I tried re-converting it for webfont (fixing vertical metrics in fontsquirrel) but not luck so far, any ideas? PS vertical-align don't work either3 - - - That feel when you find that one font that both looks good and renders ponysay's ponies well ... yet the powerline glyphs are vertically offset T_T (It's Fira Mono for Powerline btw.)1 - - Been playing around with bootstrap and mvc, with custom fonts. Feedback on font. - Just discovered font ligatures in vs code with the new font Cascadia Code. I didn't know I could love coding more. - You guys, what commonly fontFamily you are using in making an android app? 
I'm not good in looking a good font, hoping all of you help me. Thanks very much.3 - I ban agency as one of the industry I avoid to work in, you know why? Because too much of manual work like changing the font size of this and that which I think waste too much of my time! - - - - - student here. just spent over an hour working on my final project trying to figure out why space padding "wouldn't work" in my strings... i wasn't using a monospaced font. *facepalm* - - Any way to increase tab font size in xcode 12? Or Apple only gave an option to increase navigator font size (well three options instead of customizable font size .... but at least I can now better read dir tree of my project lol)
https://devrant.com/search?term=font
CC-MAIN-2021-10
refinedweb
4,644
73.27
If you have been reading this blog, you'll know I collect sports cards. It's fun to share what I have with other collectors (by posting scans on sportscollectors.net, facebook, etc.). A few years ago I bought a Brother MFC-9130CW all in one printer/scanner that I use to do my scanning. I usually set it to scan documents in legal format so I can fit 9 cards on a scan (3 rows by 3 columns). And since I want to do this most efficiently, I generally save them as a 200dpi pdf file with multiple pages.

The requirements:

The process:

I like using Python for automating things, as it seems to have libraries for most things I want to do. So I figured it would be a good candidate for this project.

    # paths
    source_file = '/tmp/sample_cards_file.pdf'
    out_dir = '/tmp/process/'

    # page setup
    rows = 3
    cols = 3

    # defines where the last run of this left off on (first item will be 1.jpg if 0)
    starting_count = 0

    # spacing offsets
    top_offest = 0
    left_offset = 260
    bottom_offset = 0
    right_offset = 0
    spacer = 40
    vert_spacer = 0

    pip install pdf2image
    pip install poppler

    # I used this syntax instead of pip when running this in a jupyter notebook
    # conda install -c conda-forge pdf2image
    # conda install -c conda-forge poppler

    from pdf2image import convert_from_path

    # function that converts multipage pdf to individual
    # jpeg images. Function returns list of image paths.
    def convert_pdf_to_jpegs(pdf_path, out_dir):
        file_paths = []
        pages = convert_from_path(pdf_path, 500)
        page_count = 1
        for page in pages:
            image_path = "{}temp_page_{}.jpg".format(out_dir, str(page_count))
            # add to the list
            file_paths.append(image_path)
            # save converted file
            page.save(image_path, 'JPEG')
            page_count += 1
        return file_paths

    from PIL import Image

    def split_images_from_page(image_path, out_dir, row_count, col_count, start_number):
        # opens the image file
        Im = Image.open(image_path)
        # calculates height and width of image
        full_width = Im.width
        full_height = Im.height
        image_height = int((full_height - vert_spacer - top_offest - bottom_offset) / rows)
        image_width = int((full_width - spacer - left_offset - right_offset) / cols)
        image_count = 0
        row_current = 1
        # iterates through rows and columns
        while row_current <= row_count:
            col_current = 1
            while col_current <= col_count:
                # calculates the coordinates on the image to crop
                croppedIm = Im.crop((left_offset + ((col_current - 1) * image_width) + spacer,
                                     top_offest + ((row_current - 1) * image_height),
                                     min(left_offset + (col_current * image_width) + ((col_current - 1) * spacer), Im.width),
                                     top_offest + (row_current * image_height) + vert_spacer))
                # if you wanted to resize the image to 300 width and 420 height
                # croppedIm = croppedIm.resize((300, 420))
                # saves the image to specified directory
                croppedIm.save("{}/{}.jpg".format(out_dir, start_number + image_count))
                col_current += 1
                image_count += 1
            row_current += 1

    import os

    page_count = 1
    # convert pdf to jpeg file for each page
    file_paths = convert_pdf_to_jpegs(source_file, out_dir)
    # split each page into images
    for file_path in file_paths:
        number_start = starting_count + (page_count - 1) * (rows * cols) + 1
        split_images_from_page(file_path, out_dir, rows, cols, number_start)
        page_count += 1
    # clean up: delete jpeg files for each page
    for file_path in file_paths:
        os.remove(file_path)

Here is my ipython notebook with the code detailed above: pdf_scan_process.ipynb

Jay Grossman techie
/ entrepreneur that enjoys: 1) my kids + awesome wife 2) building software projects/products 3) digging for gold in data sets 4) my various day jobs 5) rooting for my Boston sports teams:
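The crop arithmetic in split_images_from_page can be sanity-checked without Pillow or a scanned PDF. Below is a small sketch that reproduces the same box calculation as pure arithmetic; grid_boxes is a hypothetical helper written for illustration (it is not from the post), using the post's default offsets and a page size of 1700x2800 pixels, roughly a 200dpi legal-size scan:

```python
# Sketch: compute the nine crop boxes for a 3x3 card grid using the
# same formulas as split_images_from_page in the post. grid_boxes is
# a hypothetical helper; the offsets are the blog's defaults.
def grid_boxes(full_width, full_height, rows=3, cols=3,
               left_offset=260, spacer=40, top_offset=0,
               bottom_offset=0, right_offset=0, vert_spacer=0):
    image_height = (full_height - vert_spacer - top_offset - bottom_offset) // rows
    image_width = (full_width - spacer - left_offset - right_offset) // cols
    boxes = []
    for r in range(rows):
        for c in range(cols):
            left = left_offset + c * image_width + spacer
            upper = top_offset + r * image_height
            # clamp the right edge to the page, as the post does with min(..., Im.width)
            right = min(left_offset + (c + 1) * image_width + c * spacer, full_width)
            lower = top_offset + (r + 1) * image_height + vert_spacer
            boxes.append((left, upper, right, lower))
    return boxes

boxes = grid_boxes(1700, 2800)  # ~200dpi legal-size page
print(len(boxes))  # 9 boxes, one per card
```

Printing the boxes before cropping real scans is a quick way to tune left_offset and spacer for a particular scanner.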
http://jaygrossman.com/post/2020/07/28/Automating-Image-clean-up-with-Python.aspx
CC-MAIN-2022-27
refinedweb
510
52.8
Part 2: Asynchronous Mapping

Asynchronous functions allow you to give different tasks to different members of the multiprocessing.Pool. However, giving functions one by one is not very efficient. It would be good to be able to combine mapping with asynchronous functions, i.e. be able to give different mapping tasks simultaneously to the pool of workers. Fortunately, Pool.map_async provides exactly that - an asynchronous parallel map. Create a new python script called asyncmap.py and copy into it

from multiprocessing import Pool, current_process
import contextlib
import time

def sum( (x, y) ):
    """Return the sum of the arguments"""
    print("Worker %s is processing sum(%d,%d)" \
          % (current_process().pid, x, y) )
    time.sleep(1)
    return x+y

def product( (x, y) ):
    """Return the product of the arguments"""
    print("Worker %s is processing product(%d,%d)" \
          % (current_process().pid, x, y) )
    time.sleep(1)
    return x*y

if __name__ == "__main__":
    a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    b = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

    work = zip(a,b)

    # Now create a Pool of workers
    with contextlib.closing( Pool() ) as pool:
        sum_future = pool.map_async( sum, work )
        product_future = pool.map_async( product, work )

        sum_future.wait()
        product_future.wait()

    total_sum = reduce( lambda x,y: x+y, sum_future.get() )
    total_product = reduce( lambda x,y: x+y, product_future.get() )

    print("Sum of sums of 'a' and 'b' is %d" % total_sum)
    print("Sum of products of 'a' and 'b' is %d" % total_product)

Running this script, e.g.
via python asyncmap.py should result in something like

Worker 843 is processing sum(1,11)
Worker 844 is processing sum(2,12)
Worker 845 is processing sum(3,13)
Worker 846 is processing sum(4,14)
Worker 844 is processing sum(5,15)
Worker 846 is processing sum(6,16)
Worker 843 is processing sum(7,17)
Worker 845 is processing sum(8,18)
Worker 846 is processing sum(9,19)
Worker 843 is processing sum(10,20)
Worker 845 is processing product(1,11)
Worker 844 is processing product(2,12)
Worker 843 is processing product(3,13)
Worker 844 is processing product(4,14)
Worker 845 is processing product(5,15)
Worker 846 is processing product(6,16)
Worker 844 is processing product(7,17)
Worker 845 is processing product(8,18)
Worker 846 is processing product(9,19)
Worker 843 is processing product(10,20)
Sum of sums of 'a' and 'b' is 210
Sum of products of 'a' and 'b' is 935

This script provides two functions, sum and product, which are mapped asynchronously using the Pool.map_async function. This is identical to the Pool.map function that you used before, except now the map is performed asynchronously. This means that the resulting list is returned in a future (in this case, the futures sum_future and product_future). The results are waited for using the .wait() functions, remembering to make sure that we don’t exit the with block until all results are available. Then, the results of mapping are retrieved using the .get() function of the futures.

Chunking

By default, the Pool.map function divides the work over the pool of workers by assigning pieces of work one by one. In the example above, the work to be performed was;

sum(1,11)
sum(2,12)
sum(3,13)
etc.
sum(10,20)
product(1,11)
product(2,12)
product(3,13)
etc.
product(10,20)

The work was assigned one by one to the four workers on my computer, i.e. the first worker process was given sum(1,11), the second sum(2,12), the third sum(3,13) and then the fourth sum(4,14).
The first worker to finish was then given sum(5,15), then the next given sum(6,16) etc. etc. Giving work one by one can be very inefficient for quick tasks, as the time needed by a worker process to stop and get new work can be longer than it takes to actually complete the task. To solve this problem, you can control how many work items are handed out to each worker process at a time. This is known as chunking, and the number of work items is known as the chunk of work to perform. You can control the number of work items to perform per worker (the chunk size) by setting the chunksize argument, e.g.

future_sum = pool.map_async( sum, work, chunksize=5 )

would suggest to the pool that each worker be given a chunk of five pieces of work. Note that this is just a suggestion, and the pool may decide to use a slightly smaller or larger chunk size depending on the amount of work and the number of workers available. Modify your asyncmap.py script and set the chunksize to 5 for both of the asynchronous maps for sum and product. Re-run your script. You should see something like;

Worker 1045 is processing sum(1,11)
Worker 1046 is processing sum(6,16)
Worker 1047 is processing product(1,11)
Worker 1048 is processing product(6,16)
Worker 1045 is processing sum(2,12)
Worker 1046 is processing sum(7,17)
Worker 1047 is processing product(2,12)
Worker 1048 is processing product(7,17)
Worker 1045 is processing sum(3,13)
Worker 1048 is processing product(8,18)
Worker 1047 is processing product(3,13)
Worker 1046 is processing sum(8,18)
Worker 1045 is processing sum(4,14)
Worker 1047 is processing product(4,14)
Worker 1046 is processing sum(9,19)
Worker 1048 is processing product(9,19)
Worker 1047 is processing product(5,15)
Worker 1046 is processing sum(10,20)
Worker 1045 is processing sum(5,15)
Worker 1048 is processing product(10,20)
Sum of sums of 'a' and 'b' is 210
Sum of products of 'a' and 'b' is 935

My laptop has four workers.
The first worker is assigned the first five items of work, i.e. sum(1,11) to sum(5,15), and it starts by running sum(1,11), which is why sum(1,11) is printed first. The next worker is given the next five items of work, i.e. sum(6,16) to sum(10,20), and starts by running sum(6,16), which is why sum(6,16) is printed second. The next worker is given the next five items of work, i.e. product(1,11) to product(5,15), and it starts by running product(1,11), which is why this is printed third. The last worker is given the next five items of work, i.e. product(6,16) to product(10,20), and it starts by running product(6,16), which is why this is printed fourth.

Once each worker has finished its first item of work, it moves on to its second. This is why sum(2,12), sum(7,17), product(2,12) and product(7,17) are printed next. Then each worker moves on to its third piece of work, and so on. If you don't specify the chunksize then it is equal to 1. When writing a new script you should experiment with different values of chunksize to find the value that gives the best performance.

Exercise

Edit your script written in answer to exercise 2 of Parallel Map/Reduce, in which you counted all of the words used in all Shakespeare plays (e.g. an example answer is here). Edit the script so that you use an asynchronous map to distribute the work over the pool. This will free up the master process to give feedback to the user of the script, e.g. to print a progress or status message while the work is running to reassure the user that the script has not frozen. For example

while not future.ready():
    print("Work is in progress...")
    time.sleep(0.1)

Add a status message to your script to reassure the user that your script hasn't frozen while it is processing. (Note that you can call your script using python -u countwords.py shakespeare/*; the -u argument stops Python from buffering text written to standard output.) If you get stuck or want inspiration, a possible answer is given here.
Slashback: Gnoogle, PlayStation, Assault

Location, location, location. A lot of people were interested in the Google contest whose winner was announced last week; Dan Egnor, creator of that entry, writes "FYI, I've released the code for the winning Google contest entry under the GPL."

You mean they weren't just saying Hi? Anonymous Goodfella writes: "In an update to the Dangers of Being a Microbiologist, the AP [news.com.au] is reporting an attack on a Tennessee state medical examiner who gave evidence to an inquiry into the death of infectious diseases researcher Don Wiley. Coroner O.C. Smith was left tied with barbed wire to an apparent explosive."

Jakob Nielsen says Flash No Longer Evil. Allen Varney writes "Given that Flash MX now supports the back button, Unicode, and accessibility, and has introduced user interface components, usability guru Jakob Nielsen today updated his famous 'Flash: 99% Bad' rant from October 2000. (Scroll down to see the update, stirringly titled 'Flash Now Improved.') His Nielsen Norman Group has formed a strategic alliance with Macromedia to start educating one million Flash designers in the fundamentals of good design. You did know that Flash .SWF is now an open format, right?"

Step 47: remove blindfold, scream. For those anxiously awaiting (or judiciously pondering) the Linux upgrade kit for the PS2, some words to consider from reader silvaran, who writes: "I just received my Playstation 2 Linux kit in the mail. I was disappointed to find that none of the monitors (3) that I had function properly with it. So I took to following the instructions on a blind install. It's not the most elegant of solutions, but it works. You need a blank memory card to install, but everything else is included in the kit. I'm on my way to a full Linux installation, complete with 100mbit networking, 40-gig HD and a USB keyboard and mouse; also included is full documentation on taking advantage of the PS2 hardware under Linux."
That blind install looks not for the faint of heart -- still, it would be nice if every distro included a simple walk-through like that for when a monitor just isn't handy :)

Reader microwerx adds a few more words of advice and caution: "[T]he PS2 Linux Kit will not read CD-Rs, so you'll have to use the supplied 10/100 Ethernet Adapter to get stuff in and out of the machine. One very good thing about the PS2 Linux Kit was the documentation regarding the Emotion Engine chip, etc. There's at least 2000+ pages of information regarding how it all works in glorious PDF format. There is also an OpenGL-like library (ps2gl) that supports the hardware. I understand that SDL also works. Another plus is the amount of equipment you receive. You get a USB mouse and keyboard, a 10/100MBPS Ethernet Adaptor, a VGA convertor, and a 40GB hard drive. And all of this stuff appears to have some future use (you may have to remove Linux to use them, though). So, once again, unless you just want the novelty of having a PS2 Workstation, developing console games, or setting up a small home server, I don't believe that you'll gain too much additional functionality. An overall rating of 3 1/2 stars out of 5 is certainly in order (because after all, it is for game development)."

Good price for all this stuff? (Score:2, Funny)
Is there some kind of catch? The whole thing seems like a pretty good deal. Maybe Sony isn't a bunch of bastards after all?

Re:Good price for all this stuff? (Score:2, Informative)

Re:Good price for all this stuff? (Score:5, Informative)
2 PS2 Linux DVDs
1 40GB Hard Drive
1 10/100 Ethernet Adaptor
1 Sony Black USB Keyboard with 1 USB Port
1 Sony Black USB 3 Button Mouse
1 VGA Cable (only for use with SYNC-ON-GREEN capable monitors)
1400 pgs of manuals in PDF form. These are assembly language manuals for the EE (emotion engine) core of the playstation.
(You get no printed versions of these, only install documentation.) Remember, you have to add $25 to the cost for an extra memory card, as it will be formatted to contain your linux kernel. And I used the 'Blind-Install' with absolutely no problems. You simply must be a little careful. Hope this helps.

Re:Good price for all this stuff? (Score:2)

Haven't you heard of Gnoogle? (Score:4, Funny)

Have you no SHAME? (Score:4, Funny)

Typo (Score:4, Funny)
Now you've done it...you've made the Debian team cry!

Flash (Score:3, Funny)
But someone who spends a measurable amount of time evangelizing (sp) Flash's ability to use the Back button and loses sleep over people creating custom scrollbars needs to either a) go outside, b) get laid, or c) both.

Re:Flash (Score:2)
But be careful if you're doing both at once.

critical pricks (Score:1)

Gnougle? (Score:2, Informative)

Jakob Nielsen Humor (Score:5, Funny)
Driving Over Jakob Nielsen [urbanev.com]

Re:Jakob Nielsen Humor (Score:2)

Hmmm... (Score:5, Funny)

Completed the Blind install (Score:4, Informative)
I just took it very slowly (one keypress at a time) and ticked off the boxes. It worked first time (only because I managed to keep the cats off the keyboard). I was disappointed to find that the kit did not work with any of my monitors either (I'm waiting on a second-hand 17" Sony to arrive as I can't hog the TV all night). It's a good sales ploy by Sony; apparently a lot of people are buying new and used Sony monitors for their kits as they are the most likely monitors to work. It didn't take long before I had X up and running, and a little while later had KDE installed. It's not very usable through the TV (even at 80cms): some of the fonts are quite hard to read. Also getting a little frustrated with having to ALT and move windows all the time in X.

Re:Hmmm... (Score:2, Insightful)

Re:Hmmm... (Score:1)

.swf is a small part (Score:1, Interesting)
Anyway, I don't care.
I don't have a flash plugin on any of my boxen, and I couldn't be happier. Have yet to see a site I want to read that requires flash. Until Pokey is published as

excuse me? (Score:2, Informative)

Re:.swf is a small part (Score:1, Informative)

PS2 Linux Project? (Score:2, Interesting)
Geez... 15 posts and they all have to do with 'gnoogle.' *sigh*

Re:PS2 Linux Project? (Score:3, Informative)
No, it doesn't. So how "Interesting" is that post now?

Re:PS2 Linux Project? (Score:1)

Re:PS2 Linux Project? (Score:2)
I searched and found this [slashdot.org] article, so I guess that answers about 1/3 of my own question, although it seemed like a fair question after looking at the modchip webpage: Works Perfect with all PS2 CD-R and DVD-R Backups!

Re:PS2 Linux Project? (Score:2)

Re:PS2 Linux Project? (Score:2, Insightful)
dreamcast - $50 .... wait is that it?... YES... you can buy a keyboard and mouse if you want but to get linux on you need no kit... just download a dreamcast distro and burn it onto a regular cd-r not only is it easy but its better than the ps2... the ps2 has 4 megabytes of video ram which have an insane amount of bandwidth... although it can equal the dreamcast's 8 meg opengl (w/full screen antialiasing of course so if you want a hackers toy get a dreamcast... you can get a 10/100 mb network card for it just like the ps2 and its FAR easier to program for

Re:PS2 Linux Project? (Score:2)

Does Mozilla Run on PS2? (Score:3)
I think it's a good deal just to get a web appliance in the living room, but, I want my Mozilla. I thought about using it as an X terminal to run my regular browser but that leaves it depending on my PC. Anyone here tried either approach?

Re:Does Mozilla Run on PS2? (Score:1)

Re:Does Mozilla Run on PS2? (Score:1)

Re:Does Mozilla Run on PS2? (Score:2, Informative)

Re:Does Mozilla Run on PS2? (Score:1)

Re:Does Mozilla Run on PS2? (Score:2)
I've been using the kfm (aka Konqueror 1.1) that comes with the Sony distro and having better luck.
Vik

Use Netfront embedded (Score:2)
I installed netfront from the Japanese PS2 on the US kit. It runs just fine with SSL, JS, etc. =) Look on google and you can find a copy easy, but you might have to register your email/etc in Japanese to download. Also don't listen to those guys in #ps2linux on OPN, since most of them are Qt trolls.

The attack on the medical examiner... (Score:4, Informative)

Re:The attack on the medical examiner... (Score:2, Insightful)

Dead researchers. (Score:2, Interesting)

Re:Dead researchers. (Score:4, Insightful)
It's all pretty X-files, and while quite a few "microbiologists" (defined loosely, as some of the people have not really been true microbiologists) have died under mysterious circumstances lately, I can't shake the feeling that the story is being "shaped" into this whole conspiracy dogma format. Anyway, here's [fromthewilderness.com] a link to one of the nutball sites (this is Mike Vreeland's "The Government Made 9-11 Happen" site) which has some writeups on it. Proceed with caution. You're reading heavy spin here...

Re:Dead researchers. (Score:5, Informative)
Sorry everyone, I don't normally reply to my own posts, but after thinking about it for a bit I realized it would be irresponsible to have included a link to a crazy site like Vreeland's without also including a link to a sane analysis of why he is in fact a nutter. Here is a careful, balanced, and thoughtful examination of The 9-11 X-Files [thenation.com]

Re:Dead researchers. (Score:1, Funny)

Re:Dead researchers. (Score:2, Informative)
1) It's Mike Ruppert, not Vreeland. Delmart "Mike" Vreeland is either a former Navy officer with a thing for identity theft and credit card fraud, or a Navy intelligence officer with some scary info, depending on who you ask on which day. Ruppert loved the guy at first, but some of his more erratic behaviour and dodging is making him a bit wary.
Ruppert is a former LA cop who was supposedly fired in 1978 while trying to expose CIA involvement in LA drug dealing activities. Journalist Gary Webb saw his career torn to shreds for reporting similar happenings a few years ago in the San Jose Mercury.

2) Corn and Ruppert have an ongoing, somewhat nasty rivalry. The article you link elicited this response [guerrillanews.com] from Ruppert.

Re:Dead researchers. (Score:2)
Ruppert, right. I suppose many would argue that checking the names before I blithely use them would be a good thing. My bad.

Re:Dead researchers. (Score:2, Troll)

Blind installs... for real? (Score:4, Interesting)
In the same way that modern distros "do enough" to get X windows installing and running, and then switch to a graphical installer, I can imagine a "blind" installer doing what's required to install a sound driver and speech synthesizer, and then talking the user through the rest of the installation (questions about partitions, etc.). As someone else alluded to, this could also be useful for a sighted person doing an install on a headless machine. Does anything like this exist currently?

Re:Blind installs... for real? (Score:1)
Yeap, there are tools/setups for linux (& the BSDs) that will work with braille terminals, and text-to-speech terminals (actual boxes, not software).

Re:Blind installs... for real? (Score:2)

Re:Blind installs... for real? (Score:4, Informative)

Nielsen Norman Group web site UNUSABLE! (Score:5, Funny)
The Jakob Nielsen story was on ActionScript.com [actionscript.com] (a Flash news blog) yesterday. Here is a list of the HORRIBLE USABILITY BUGS on the Nielsen Norman Group [nngroup.com]'s own web site. Fortunately (unfortunately for my karma?):
1) broken graphic at bottom of page
2) click on People, you go to Services
3) click on Services, you go to Publications
4) click on Publications, you go to Events
5) click on Events, you go to About
6) click on Jakob Nielsen, you go to Don Norman's web site
7) click on Donald A. Norman, you go to Ask Tog
8) click on Nielsen Norman Group Members, you go to Events
9) click on User Experience 2001/2002, you go to Services
10) click on Usability Testing and Reviews, you go to Process and Strategy
11) click on Process and Strategy, you go to Seminars
12) click on Contacting, you go to the MM/JN press release on Yahoo

proof? (Score:2)
btw, for those of you who might not believe me (because the site has since been fixed), here is the Google-cached Nielsen Norman Group [216.239.39.100], broken links and all! (thank you again, google)

step 48: (Score:1, Offtopic)

But First! (Score:1)
We must get Underpants! Then ?? THEN profit!

Open-source cross-platform tools for swf (Score:4, Informative)

Re:Open-source cross-platform tools for swf (Score:2)
You can also use the ming library with PHP to generate flash content on-the-fly:

On PS2 Blind Install (Score:5, Informative)
The blind install given above would work, but this is not necessary; if you call us up we'd help you through the setup. The current PS2 installation should work on the majority of the monitors out there. I know the sync is fixed at 60Hz, and that was probably the oversight one of us made. But this would work on 95% of all the monitors out there, and if your monitor was purchased after '95, this would work perfectly. For the rest, instead of following the blind install, please e-mail our support or call us; we'd fedex ours to those who need it. Please understand that following the directions given on the link on this story might cause damage to your monitor, since all monitors are not alike. (But I've rarely come across such things in recent times.) Also, we have a simple 3d wrapper for Quake that you can download from Bryan's page. Please see his weblog for more details. This wrapper would allow you to patch the existing _SDL_ version of the quake source to make it run on PS2. Enjoy hacking Q1 and PS2.
On the issue of mouse droppings, you need to edit the video configuration and set XV_BUG_PS2FIX on in the Xconfig file. This was an oversight too and is fixed in the latest pack we have. If you're installing a custom distribution you need to do this as well. On debian, we tried to get their installer to work, but the maintainers have been very rude to our questions and that's the reason why we don't have an intro to debian installation. If there are any debian power users who install base fine, please send us an e-mail with the steps taken. Appreciate your patience and goodwill.

Wil
Linux for ever

Re:On PS2 Blind Install (Score:1)

Re:On PS2 Blind Install (Score:2, Insightful)
Regards, Spooticus

Re:On PS2 Blind Install (Score:1)

Re:On PS2 Blind Install (Score:2, Informative)
Mike Hirohito

Re:On PS2 Blind Install (Score:2, Informative)

Re:On PS2 Blind Install (Score:4, Informative)

Linux kit (Score:3, Interesting)
It drives the monitor at 1280x1024 @ 75Hz, which is better than I expected. The boot DVD lets you boot any kernel you like; there's already a BSD port. You need the disk to boot, so unless you can press silver DVDs you can't distribute the games very far. As stated before, they don't document the BIOS calls for accessing the DVD drive without an 'is this a Sony disk' check. But if you walked through their drivers in a debugger you could probably figure it out, though all that would give you is a DVD/CD player; you still couldn't boot without their DVD or a hardware modification. The biggest problem with it as a general purpose machine is probably the measly 32 Megs of RAM. I might look into this, but it probably requires more than just installing new chips. But it isn't a general purpose machine; one of the memory transfer rates is 38 GB/s... just try that on a PC....

Re:Linux kit (Score:1)
But it isn't a general purpose machine, one of the memory transfer rates is 38 GB/s... just try that on a PC.... huh?
Almost all new computers (even at CompUSA) have 20-30 GB hard disks. These are full-featured PCs, not game machines.

Re:Linux kit (Score:2)
I haven't tried the firewire, but the USB works, though there aren't a lot of drivers. The kernel is 2.2.1 with patches. Someone has ported 2.2.40, but it can't use the couple of binary drivers from Sony. You could probably get any device working, but some porting would be involved.

anyone want to help with a new ps2 linux site (Score:2, Interesting)
If you would like to help, leave a message on the board or drop me an email: frank@ps2linuxkit.com. Thank you, frank

Linux Cheat Codes? (Score:4, Funny)

Use a real OS! (Score:5, Informative)
Actually, OpenBSD has one on the CD liner with a printout of what you would be able to see if you had a monitor attached!

PS2 blind install was great! (Score:2, Informative)
If you happen to have to run through the blind install, make sure that you select the appropriate display setting near the end. Without thinking, I put in display=pal, which naturally didn't work for me in the states. (Fortunately, they've amended the doc to tell you to choose pal or ntsc; when I ran through it, it only listed pal.) The 320x240 resolution you get with a standard TV isn't flattering, making me long for an HDTV. *sigh* One can always dream.

Was I the only one who, upon checking the forums at the Playstation 2 Linux site [playstation2-linux.com], found that a lot of the wrong types of people are getting this kit? I'm talking about the ones wondering why this is better than installing Linux on a PC, or who have never used Linux before. If you're a complete Linux newbie, the PS2 kit will be...frustrating.

Re:PS2 blind install was great! (Score:2, Insightful)
Still, it's been a kick compiling packages for the mipsel. So far it makes for a great MP3 client for my server, XMAME will be good for yuks once a bug is sorted out, and I still have all that graphics demo code to crack open.
It's not for everyone, but I'm having fun.

Re:PS2 blind install was great! (Score:2, Insightful)

This Doesn't Change Things (Score:3, Insightful)
So Nielsen's partnering with Macromedia to educate people on proper Flash design. It's a PR gesture on Macromedia's part to silence one of Flash's most vocal critics, but it's not going to accomplish much in the real world. The real Flash offenders are not going to attend a Macromedia seminar on usability or study Nielsen's guidelines. That would restrain their "creativity" -- most of them use Flash specifically because they want to be different, which is the antithesis of Nielsen's usability mantra. My browser filters out all swf files, so if you use Flash and you don't provide an HTML alternative (most sites don't), I'll never see your content. That's a good thing. I don't want to play "chase the links as they fly across the screen" or listen to your music blended with the mp3 I'm playing. Fireworks are exciting, pretty toys too, but each July 4 police scour the streets for people who set them off because they're dangerous in the hands of most people.

Re:This Doesn't Change Things (Score:4, Informative)
[Flash] takes control away from the user and places it in the hands of a "designer" who may not have any experience in building user interfaces. So does HTML. More abstractly, so does any user interface kit. The user isn't in charge of the way an application--or a web page--presents information to him; the designer is. It may indeed be better to put "designer" in quotes, but that doesn't change who has the power. Flash has lent itself to a lot of abuse, and Flash MX no doubt does, too. The difference is that Flash MX adds components for consistent user interface widgets if designers choose to use them, and it offers a lot more ability to pull data back from the server--in other words, to behave more like a real client application, as opposed to the broken model for HTML "applications" we currently have.
Sure, if you give people multimedia design tools, a lot of people will design horrendous multimedia--for a while. Desktop publishing software enabled more people to quickly make absolutely horrendous typeset documents than ever before. Would you argue that it'd have been better if we'd stayed with lead type?

Re:This Doesn't Change Things (Score:2)
Completely wrong. You're pretending not to understand the separation between presentation and content that HTML/SGML/CSS/etc tries to encourage. When I inserted a <P> between these two paragraphs, I didn't say "insert a blank line" or "indent the next line one-half inch." I said "this is a logically new paragraph; please display it accordingly." The point is that some people might like paragraphs displayed as in books, with an indentation on the new paragraph, and some people like paragraphs displayed as is usual in web browsers and email - with a blank line between paragraphs. The point is, it's their choice. If I want to, I can write a style sheet which over-rides the crap set in web pages. I know that red on black is difficult to read, but angst-ridden teens don't know this. I see this trend among "web designers" (and those quote marks are intentional, BTW) that they like to make links look like regular text, e.g., not underlined (text-decoration: none). I like to scan documents looking for links (e.g., some guy's useless rant about some problem which finally links to the technical specification which prompted the rant) and I want my links underlined. CSS lets me do this; flash doesn't. The whole point behind CSS is to separate presentation and content. An important part of that is allowing alternate modes of presentation. If you use green and blue to separate two completely different parts of a page, colorblind people won't be able to tell the difference between the two.
I want all my text to appear large, as I want to preserve my 20/14 vision, and I can damned well scroll when I want to (but with 20/14 vision, I can discern extremely small features when I try). I seriously doubt most "web designers" have considered issues such as this. Your analogy with typesetters and web designers doesn't work. Typesetters have hundreds of years of experience behind them. Professional typesetters know that humans can maximize their reading speed when lines contain an average of sixty-five characters per line (open any decent book, and do the math if you don't believe me - and then compare that to Microsoft Word's default layout policies and you'll be enlightened as to the problem). Professional typesetters use serif fonts for body text because it aids reading speed and decreases eye fatigue, yet many "web designers" prefer sans-serif fonts for body text because it looks "cleaner" to them. The quality of desktop publishing improved when professional typesetters started using the same electronic typesetting tools as the desktop publishers instead of using manual and photographic processes. The current generation of "web designers" don't know jack, and I don't see any improvement on the horizon, as there is very little crossover between the true artistry of typesetting and the wiggling, squirming, float-over, abstract guess-what-this-does-to-hold-your-interest crap which is Flash.

Re:This Doesn't Change Things (Score:2, Interesting)
Professional typesetters do use sans serif fonts on-screen, because here the resolution is not good enough for the serifs, and so they act as noise rather than reading cues. Unfortunately I cannot find the article ATM, but somewhere on the web is an interview with the person who designed the free Microsoft fonts (who is a professional type-face designer), and he explains what he did to ensure optimal readability on-screen -- an interesting read...
Disappointments with the PS2 Linux Kit (Score:5, Informative)
Right now I am sitting without a kit, but I'll get to that in a second. I pre-ordered my kit on March 7th. I received an email which I assumed to be the confirmation. In my email header it said: "Your PlayStation.com Order #711699 has been d." I even took a cursory look at the message and it looked just like a receipt from any other online store. What I failed to do was read the actual message. It was in fact telling me my credit card (for no apparent reason) was declined. I admit I should have read the message more closely, but it would have been nice if an actual confirmation didn't look exactly the same. I realized this error on May 25th.

After finally receiving my kit I eagerly ripped everything open and got my PS2 hooked up. Having done my homework, I was very happy to see it talk nicely to my SOG-compatible monitor. I even commented "wow, this is a really nice quality keyboard." So I threw in my Linux Disc 2 DVD since, again, I failed to read. This time it was pure excitement to blame. Disc 2 had been placed in the disc holder on top, with Disc 1 below it. This was highly intuitive. The install was going normally. After the RTE loads it looks just like a RedHat install. I got all the way up to the point of partitioning my hard drive. Being that I've been using Linux for longer than I can remember, I defaulted by selecting fdisk. After I was done I hit 'w' to write my table, and nothing happened. In fact the PS2 locked up. I couldn't believe it. So I rebooted. I very quickly found that the keyboard had failed, as it would no longer respond. Neither my desktop (Mandrake) nor my laptop (Win2k) would recognize it as a USB device. Of course this happened at 8:55pm, 5 minutes before all of the electronic stores in town closed. So the next morning I went to Fry's and bought a $20 USB keyboard. I came home and got Linux installed. Again this concept of reading got to me.
The final dialog says something that reads like: "Press Enter, put Disc 1 in, and reboot." So I did. I was greeted by a hard drive FSCKing itself, a corrupted modules.conf, and an ethernet adapter that wouldn't init. So I re-installed. This time I read the screen more carefully. Apparently it is intuitively obvious that you are to wait 2-3 minutes while the system shuts down. It would have been nice if they let you see the shutdown progress (or told you to wait). (I know I ragged on people for not reading when they bought the kit, but I am willing to admit I should have read all of the above more carefully.)

Finally my machine is up and running. I even have XMMS compiled and installed. So I hook it up to my stereo, connect to it remotely, and mount an NFS share. I'm ready to listen to MP3s on my surround sound system for the first time ever. I launch XMMS and my PS2, again, freezes. After rebooting I am told I can't login because the system has lost power and is rebooting. Uh huh. So I login at the console and do a proper reboot. This time XMMS loaded without a hitch. It played exactly 1 mp3 and locked up again. This time I realized that it was not the PS2 locking up, but the network adapter. This is becoming a known problem at the aforementioned website.

Finally, my last woe in this whole story. In order to replace my USB keyboard (BTW, all of the components come in their own retail boxes) I must return the entire kit. Yes, playstation.com is incapable of replacing only 1 component. They instead insist I ship back the entire kit (at my cost) to get a keyboard replaced. How nice. If the network adapter issue isn't solved in the next 30 days, then I am going to sadly return the kit, as so far, it hasn't been worth $200.

RTFM (Score:4, Informative)
I find it odd that someone who even admits they have reading problems still insists on dumping all of the blame on Sony.
I had no trouble at all setting up Linux on my PS2 (though admittedly I have the Japanese version; maybe somebody screwed something up for the US release). As far as the network adapter goes, I've had zero problems, even while doing a raw disk dump over the network. I do, on the other hand, recall splay locking up on me once or twice. Try setting the playback rate to 48000 Hz, since the PS2 Linux driver can't handle anything else natively, and see if that helps. This is also mentioned in the manual, by the way (at least the Japanese one). Also, when I had a keyboard problem—which just turned out to be me typing too fast for the keyboard's specs—I was able to send just the keyboard back to Sony and use the PS2 via Ethernet in the meantime. Maybe you didn't communicate clearly that it was just the keyboard that was defective?

Re:Read The Fucking Message (Score:2)
As for the network adapter: now you are just trying to make it look like I'm whining for no reason. You are offering vague suggestions to show me I'm being zealous. The issue has nothing to do with mp3s. It is not related to the sound system, it is not related to splay, and it has nothing to do with the playback rate. There is nothing in the paper manual about changing the playback rate. I say that only because I haven't had a chance to read the electronic versions. The issue with the network adapter is solely based on network traffic. It just so happened (as I found out) that streaming mp3s caused the issue to initially occur. Perhaps if you RTFM (message) then you would see that transferring large files caused the problem as well. Do you really think I did not express my concern about not sending the entire kit back? I asked the woman several times if she was sure about it. She came back and said "Yes, my manager says we can not accept the individual components. We must receive the entire kit so that we can send you an entirely new one. Please note on the return sheet that only the keyboard is defective."
So, as you can see, she fully understood that only 1 component needed to be returned. Furthermore, if you had, again, RTFM, you would see that using it remotely was causing problems. Therefore your suggestion, again, would not have done any good. The next time you want to disagree with someone and try to discredit them, please take the time to actually read their message. It would also be helpful to provide actual suggestions. (Not suggestions that show you thumbing your nose at the original poster.)

RE: "GNoogle" (Score:1)

RE: .SWF Open file Format - Contest (Score:1)
Design the best flash presentation within 25 lines of code. Looking forward to the results!

Playstation disappoints (Score:1)
However, a good start; I await the updates. On the note of the header, I was slightly disappointed that the linux kit lacked a lot of modern software that I feel should have been included. For instance, it includes KDE, however it is version 1.x; now this seems odd, as at least version 2 could have snuck in? Same thing with gnome: it does include gnome, however the version of gnome is very old; I can't recall the actual version off the top of my head. Another small detail I found odd that was missing was an mp3 player; I had to install the SDL library to have plaympeg available to me. I would think they could have at least included mp3blaster or mpg123. And to top it off it lacked any kind of real web browser, and for some reason w3m wanted to be displayed in kanji all the time, and I still haven't quite figured that out yet. Lastly, I did happen to be fortunate enough to have a compliant monitor, so the install was fairly easy; it is obvious that Sony is in deals with Red Hat on this. However, once installed, configuring it to display on my television was a major pain, and it wasn't even mentioned in the manual how to do it; it only hinted that it could be done but not exactly how.
Flash, the answer to the Windows GUI (Score:3, Interesting)

A fair number of game GUIs (the 2D parts used for setup and such) are written in Flash and executed with a non-Macromedia Flash engine. This is done so that the Flash authoring tools can be used. This approach could be applied to other applications. It's probably more suitable for things like a music player than a system administration program, but it's an option. Most importantly, it lets you separate the GUI part from the programming part, which means the GUI designer can get some real work done.

GPL code former MS employee? (Score:2, Interesting)

"My code is available to the public under the terms of the GNU Public License [ofb.net]." No wonder he got fired!

Flash may no longer be 99% bad. (Score:2)

However, it still should not replace HTML. Flash will be better than HTML for writing online applications, because you can get immediate feedback, and also don't have to deal with the statelessness of HTML. One place where this has stuck out is the spell checker that is included with IMP. It can't really be interactive (as with a word processor), and so it's a lot less usable. Perhaps an optional Flash spell checker would be helpful. A big problem with Flash that still (AFAIK) hasn't been fixed is that when people use it to create entire sites (replacing HTML), their site is essentially invisible to search engines. Maybe Google will solve this problem, but for now, I think that the best course of action is to use HTML whenever possible, and to use Flash when it would provide better functionality (and not just because it will look cool). Or, you could provide a non-Flash alternative, and that will be indexed. There's also the problem that Flash isn't usable by everyone; people who browse non-visually, use a text-mode browser, or who simply haven't installed the plugin will not be able to use whatever portion of your site is in Flash. Just something to keep in mind.
Re:Gnoogle (Score:3, Informative)

Re:Gnoogle (Score:2)

GNU/Google (Score:4, Funny)

Re:GNU/Google (Score:1)

Re:GNU/Google (Score:1)

-Chris

Re:GNU/Google (Score:2)

Founder of the GNU Project [gnu.org] and the Free Software Foundation. [fsf.org] Everything2 [everything2.com] is helpful for these types of questions.

Re:GNU/Google (Score:3, Funny)

If you need to know more than that, see this [everything2.com] for a fairly good idea who RMS is. RMS's homepage is at [stallman.org] Please Do Not Feed the Trolls. Odds are you're going to get a response that purports RMS to be the goatse.cx guy or something. I can neither prove nor disprove these claims, so you'll have to draw your own conclusions.

Re:GNU/Google (Score:2)

Don't be too hard on yourself. I thought the part about "insists that everything that even sat next to GNU software in the refrigerator must now be called GNU/whatever it used to be called" was pretty funny.

Re:GNU/Google (Score:2)

For a good introduction to what RMS is saying, try the philosophy section [gnu.org] of GNU's website, particularly the "GNU manifesto."

Re:Gnoogle (Score:2, Interesting)

Re:Gnoogle (Score:4, Funny)

You expect to get knarma for that pnost?

Re:Gnoogle (Score:1, Redundant)

The correct spelling for Open Source Google modules is: Gnugle.

Re:Gnoogle (Score:2, Funny)

Re:NAS appliance (Score:1)

Re:NAS appliance (Score:1)

Probably not, you can use readxa to copy the mpeg-1 files off the vcd and then ftp them wherever you want....

Re:One Simple solution: SVG. (Score:2, Interesting)

How long has Flash been around? Like 6 years? And how many authoring tools are there? Something like 3? How many viewers? Uh... one, right? By contrast, SVG has been a W3C recommendation for all of 8 months, and I know of at least 4 authoring tools (not to mention the one I'm making right now, or numerous text editors) and 2 major viewers (along with a host of upcoming handheld viewers). Looks like open standards promote competition and innovation...
who would have thought!? Not to knock Flash... it has its uses. But before you commit to a technology on which to build a serious data-driven website with interactive graphics, do yourself a favor and check out SVG. The SVG-Wiki [protocol7.com] is a good place to start.

Re:PS2 Linux Kit (Score:2, Insightful)

Most PC monitors are designed to be as inexpensive to manufacture as possible. This means that "extra" features (that any decent monitor _SHOULD_ have) such as sync-on-green get left out. Don't expect to be able to be a cheapskate and buy the least expensive monitor you can find and still have it work as well as a much nicer (but more expensive) monitor.

Re:Macromedia just trying to delay adoption of SVG (Score:2)

Mozilla is the only browser I know of (perhaps now Konqueror too?) that actually supports SVG natively - embedded SVG in Mozilla is part of the standard document tree, and can be styled, transformed, and manipulated using the standard tools. I've seen some pretty impressive demos, transforming ChemML into SVG to form chemical diagrams. Mozilla has nowhere near complete support for it though; SVG is a huge spec. Thing is, IE doesn't support the technology necessary to make it work. You can't write COM objects, for instance, that plug directly into the Trident rendering engine; it's based on Mosaic, which was already out of date even when MS screwed over its creators to get hold of it. Unless Microsoft does a Gecko-style rewrite so people can plug in support for new XML namespace renderers, the only support for SVG on Windows will be from the Adobe plugin, which doesn't really give you all the benefits. Mark my words, SVG has an uphill struggle against Flash. Flash is here, it works, millions are familiar with it, and it has a truckload of features. SVG embedded using a standard plugin doesn't offer any real advantages over Flash, which is a real shame.
https://slashdot.org/story/02/06/03/1521230/slashback-gnoogle-playstation-assault
IRC log of tagmem on 2002-07-22
Timestamps are in UTC.
19:13:50 [RRSAgent] RRSAgent has joined #tagmem
19:14:03 [Ian] ---------
19:14:10 [Ian] FTF meeting in Vancouver.
19:14:26 [Ian] Action TB: Send information about hotels and other options for meeting.
19:14:33 [Ian] -------------
19:15:19 [Ian] New issues:
19:15:23 [Ian] 1. Bad practice: Overriding HTTP content-type with a URI reference. See email from TBL.
19:16:41 [Ian] Resolved: Accepted as issue contentTypeOverride-24
19:17:04 [Ian] 2. "Deep linking illegal" bad practice? See email from Tim Bray.
19:17:11 [Ian]
19:17:19 [DanC] fyi, Links and Law: Myths
19:18:07 [Stuart] q?
19:18:41 [Ian] DC: I am nervous about scope...but no objection to accepting this as an issue.
19:19:24 [Ian] RF: I support that in general that there is nothing to prevent deep linking on the net. But the way copyright works, if you are using someone's content in a way that denies them business value, you're screwed anyway.
19:20:29 [Ian] TB: Way to do this is to use an authentication scheme, or use referrer field. They didn't do any of this. If they had done that and the bad guys had worked around this, then they would have had grounds to take them to court.
19:20:53 [Ian] RF: I agree with you in principle. But the ruling in this case was about one news org taking content from another.
19:21:47 [Ian] Action PC: Ask Henrik Frystyk Nielsen to provide us with a precis of the ruling.
19:22:32 [Ian] PC: We should tell people that if they publish a URI and don't want someone to get at it, what they can do.
19:22:59 [Ian] TB: There has been litigation about this several times. I think this will keep coming up.
19:23:04 [Roy]
19:23:08 [Ian] RF: I agree with the principle in general. I am worried about the actual facts of the case.
19:23:35 [DanC] following the "if it jams, force it; if it breaks, it needed replacing anyway" principle, let's do make this an issue. i.e. better to ask forgiveness than permission.
19:23:51 [Ian] Resolved: Accept this as issue deepLinking-25.
19:23:51 [DanC] 1/2 ;-)
19:24:00 [Ian] ===============
19:24:01 [Ian] Arch doc
19:24:48 [DanC] Ian: I was relieved of some AB admin duties. (e.g. meeting summaries)
19:25:00 [Ian] IJ: Steve Ziles picking up some of my duties.
19:25:04 [DanC] ... also, as of 1Aug, my writing assignment there should go on hold.
19:25:14 [DaveO] woohoo!
19:25:47 [Ian] Refined comment: 1 Aug I will give new draft to AB (if not sooner). Then I will focus on arch doc.
19:25:59 [Ian] CLOSED TBL 2002/05/05: Negotiate more of IJ time for arch doc
19:26:10 [Ian] ACTION RF 2002/06/24: Write a paragraph on technical and political aspects of URIs and URI References.
19:26:49 [DanC] pointer to what you've already written?
19:26:51 [Ian] RF: I sent content across several emails. I could summarize that content.
19:27:21 [Ian] RF: I will send to TAG today.
19:27:27 [Ian] # Action DC 2002/07/08: Propose text for section 1.1 (URI Schemes)
19:27:29 [DanC] re my action, I'm about 2/3rds of the way thru:
19:28:12 [Ian] DC: I started working on table. Idea was to take column headings. I couldn't find any column headings that generalized to whole URI schemes.
19:28:29 [Ian] I did write a FAQ:
19:28:43 [Ian] DC: Some progress.
19:29:57 [Ian] TB: I have revised my comments. See edits from SW:
19:30:07 [DanC] "save early. save often"
19:30:56 [Ian] TB: I think SW's and CF's points are good ones.
19:31:02 [Stuart] q?
19:31:22 [Ian] # Action DC: 2002/07/15: Generate tables of URI scheme props from RDF. (Progress)
19:31:54 [Ian] # Action DC 2002/07/08: Ask Michael Mealing when IETF decided not to use HTTP URIs to name protocols. Awaiting reply.
19:31:59 [Ian] DC: I need to get more from MM.
19:32:37 [Ian] DC: Harder to find out "what's going on there". I am not willing to do intelligence on the whole IETF.
19:33:08 [Ian] TB: There is some perception that the IETF will not be using http URIs for namespaces as a matter of policy. We need to find out if that's the case (and if so, we are likely to disagree).
19:33:33 [Ian] DC: No way to get a clear answer unless policy part of an RFC.
19:33:45 [Ian] RF: We can ask the IESG a question about what drafts they are rejecting.
19:33:52 [Ian] DC: Do they *owe* me an answer?
19:33:56 [Ian] RF: Don't know.
19:34:07 [Ian] DC: I can try. But last time I put something on their agenda, it took them 9 months to answer.
19:34:52 [Ian] Resolved: change action from ask Michael Mealing to ask the IESG. Change to "if and when" IETF decided...
19:35:04 [Ian] DC: As for action about the table of URI schemes, I find not useful.
19:35:21 [Ian] DC: I hope the table is not in the arch doc. I find it to be extremely misleading.
19:35:56 [Ian] - Relative URI column is misleading. It's supported syntactically for all schemes.
19:36:07 [Ian] RF: URN scheme doesn't allow "/" for hierarchy.
19:36:17 [Ian] DC: But if there's a slash in there, then there's hierarchy.
19:36:27 [Ian] SW: Is there hierarchy to mailto?
19:36:52 [Ian] RF: Find the useful bit and put into the table.
19:36:56 [Ian] DC: I tried and was unable.
19:37:23 [Ian] DC: Maybe a way to rephrase the relative URI column to be useful.
19:37:38 [Ian] DC: For spelling equivalence, if you want to be sure - spell the same way.
19:37:46 [Ian] DC: I couldn't make sense of functional equivalence at all.
19:37:52 [Ian] DC: For admin, I couldn't find pattern.
19:38:06 [Ian] RF: I liked that because it pointed out that there was no diff between DNS authority and URN authority.
19:38:17 [Ian] DC: That's a point worth making, but the table doesn't make it.
19:38:43 [Ian] (The differences between URNs and DNS are not interesting, but the table doesn't convey that.)
19:39:14 [Ian] RF: Can we play with the RDF?
19:39:15 [Ian] DC: Yes.
19:40:30 [Ian] TB: The purpose of the table was not to be exhaustive, but to cover a few characteristics of deployed schemes as active discouragement to reinvent the wheel.
19:40:44 [Ian] TB: Something modest and hand-prepared would probably do the job.
19:41:02 [DanC] ...
19:41:22 [DanC] "each valid use of a URI reference unambiguously identifies one resource
19:41:23 [DanC] "
19:41:29 [Ian] q+
19:41:41 [Ian] q
19:41:44 [Ian] q-
19:43:09 [Ian] From Norm:
19:43:11 [Ian] Definition: a URI identifies a resource that is amenable to
19:43:11 [Ian] unambiguous interpretation by recursive application of a finite set of
19:43:11 [Ian] specifications, beginning with the specification that governs the
19:43:11 [Ian] scheme of the URI.
19:43:15 [Ian]
19:44:12 [DanC] not clear that we're saying the same thing.
19:44:23 [Ian] I think we should try to align there.
19:44:29 [Ian] # Action SW 2002/07/15: Draft text for arch principle on absolute addressing preferred over context-sensitive addressing.
19:44:44 [Ian] Done:
19:45:46 [Ian] TB: I think SW's draft is solid. Some feedback on www-tag.
19:46:29 [Ian] SW: I will include the bang-style addressing (email) in the rationale.
19:46:48 [Ian] DC: "Does the Web use local or global naming?"
19:47:04 [Ian] TB: I'm not sure that the anecdote adds much to the discussion.
19:47:15 [Ian] DC: Global naming scales better in certain ways (generic principle).
19:47:31 [Ian] DC: I think the principle applies to absolute URI refs (but not "../foo").
19:48:07 [Ian] RF: NEWS URIs are context dependent.
19:48:36 [Ian] RF: You have to phrase this in a way that says that it's ok to refer to a context-dependent resource if the resource is meant to be context-dependent.
19:49:02 [Ian] RF: E.g., reference to a newsgroup that is relative to your local access point.
19:49:09 [Ian] RF: Whereas nntp: is a global context.
19:49:12 [Ian] RF: The details get nasty.
19:49:42 [Ian] RF: You need to identify what you mean by resource first, then say whether the resource needs to be globally consistent ("sameness" must be global).
19:50:17 [Ian] RF: When we insert this text in the arch doc, we need to be concerned about the nasty details.
19:50:28 [Ian] RF: I suggest not to say "resource or concept", just "resource".
19:51:50 [DanC] does 30Aug still look likely?
19:52:12 [DanC] ah; I was thinking that said 30Jul
19:52:49 [Ian] =============
19:52:52 [Ian] # Internet Media Type registration, consistency of use.
19:52:52 [Ian] * Action PC 2002/07/08: Propose alternative wording for finding regarding IANA registration. Refer to "How to Register a Media Type with IANA (for the IETF tree) "
19:53:02 [Ian] PC: Not done.
19:54:13 [Ian] PC: I hope to have done this week.
19:54:23 [Ian] ===========
19:54:24 [Ian] # Qnames as identifiers:
19:54:24 [Ian] * Action NW 2002/07/15: Republish finding. What is status of this? Can we close qnameAsId-18?
19:54:33 [Ian] 4. Formatting properties:
19:54:33 [Ian] * What is status of issue formattingProperties-19?
19:54:35 [Ian] Both done.
19:54:50 [Ian] Action IJ: Update status sections to indicate that these are approved findings.
19:54:59 [Zakim] +??P5
19:55:10 [Ian] zakim, ??P5 is David
19:55:11 [Zakim] +David; got it
19:55:43 [Ian] ----------------------------------------
19:55:51 [Ian] 1. httpRange-14: Need to make progress here to advance in Arch Document. See thread started by Tim Bray
19:56:26 [Ian] DC: Stalemate. Two rational arguments.
19:56:38 [Ian] TBL: If TBL confirms NW's writings,
19:56:42 [Ian] then I have a simple answer.
19:56:50 [Ian] s/TBL:/RF:/
19:57:38 [Ian] RF: My answer is that this conflates the URI with the method. When you perform a GET, you get back a document. But you don't define the resource type using the method.
19:58:27 [Ian] RF: If we define URIs in terms of what they are when you do a GET, then yes, HTTP URIs refer to documents. But we don't define URIs that way.
19:58:37 [Stuart] q?
19:58:48 [Ian] TB: I thought there was material talk on the list about the workings of RDF.
19:59:07 [Ian] TB: RDF should be able to talk about resources and representations of resources. If it can't, it's broken.
19:59:49 [Ian] DC: I can try to take TBL's place for a minute. He wants to conclude that "http: => document" and wants to ensure that those are disjoint from people and cars.
20:00:21 [Ian] RF: If the definition is not consistent across all methods, then the definition is wrong.
20:00:58 [Ian] SW: When TBL talks of "docs" he's talking about document resources (not representations).
20:01:07 [Ian] SW: And RF is referring to representations.
20:01:53 [Norm] q+
20:02:48 [Ian] RF: Yes, I know that TBL is using that as a basis point. But it still doesn't work - there are still HTTP resources on the Web that are not remotely documents.
20:02:48 [Ian] RF: The distinction TBL wants to make falls over in practice.
20:02:50 [Ian] DC: I'd like a term that means "bits + mime type" (other than representation)....
20:02:52 [Ian] RF: We chose "entity body" since we couldn't come up with a better term...
20:03:10 [Ian] DC: I refer to bits + mime type as a document. And resources that act like documents, I call "works"
20:03:43 [Ian] TB: There does seem to be a meaningful distinction between a thing that's basically its representation, and more abstract things like a weather report.
20:03:46 [Norm] Stuart: are you using the queue?
20:04:03 [Stuart] q?
20:04:06 [TBray] q?
20:04:33 [Norm] ack norm
20:04:38 [TBray] q+
20:04:42 [Ian] SW: How to make progress on this?
20:04:58 [Ian] DC: I'm willing to try to convince TBL.
20:05:05 [tim-mtg] tim-mtg has joined #tagmem
20:05:26 [Ian] TB: we need language for this (about what an HTTP resource is...)
20:05:56 [Ian] Tim, people are asking what breaks if the TAG doesn't follow your path.
20:06:03 [tim-mtg] According to my philosophy?
20:06:17 [Norm] wrt range of HTTP
20:06:28 [Ian] TB: I suggest procedurally that we ask TimBL to react to proposed text and www-tag comments and help us make progress on this issue.
20:06:38 [tim-mtg] I'm not sure that I can do this in IRC.
20:06:44 [Ian] [Proposed text from Dan's FAQ or Norm's text.]
20:07:01 [TBray] indeed, but don't worry, we're giving you action items to sort it out :)
20:07:11 [tim-mtg] It is possible to have a system in which the publisher of a URI determines what it actually identifies (car or web page) but all the systems like
20:07:29 [tim-mtg] Dublin Core which assume blithely that a web page is identified by its URI have to be put on hold
20:07:34 [Ian] Action SW: Persuade TimBL to write an exposition of his position.
20:08:02 [tim-mtg] until they can check with the publishers of the URI to find out whether in fact the URI identifies something which is not a web page.
20:08:36 [Ian] ----------------
20:08:37 [Ian] # uriMediaType-9: Status of negotiation with IETF? See message from DanC.
20:08:37 [Ian] * Action TBL: Get a reply from the IETF on the TAG finding.
20:08:47 [TBray] q-
20:09:03 [Ian] DC: TimBL proposed this for June IETF/W3C meeting. Didn't get far then.
20:09:09 [Ian] DC: We are next meeting 12 November.
20:09:48 [Roy] [out for a minute]
20:11:56 [Ian] SW: I'm not sure what our best course of action is here.
20:12:07 [Roy] [back]
20:12:21 [Ian] SW: Either a mid-November backstop or a possible forum (W3C CG) before then.
20:12:31 [Ian] SW: Do we need to be more active than that?
20:12:37 [Ian] DC: I'm at a loss.
20:12:52 [Ian] SW: Then I propose that we try to get on agenda of CG first, or next IETF/W3C meeting otherwise.
20:12:57 [Ian] =================
20:13:14 [Ian] #
20:13:14 [Ian] *
20:13:14 [Ian] # RFC3023Charset-21:
20:13:14 [Ian] * Action CL: Send copy of information sent to tag regarding RFC3023Charset-21 to www-tag.
20:13:58 [Ian] Open.
20:14:42 [Stuart] q?
20:14:45 [Ian] -------------------------
20:14:55 [Ian] # Status of URIEquivalence-15. Relation to Character Model of the Web (chapter 4)? See text from TimBL on URI canonicalization and email from Martin in particular.
20:14:58 [Ian] TB: This is serious.
20:15:12 [Ian] TB: Martin seems to be saying "deal with it"
20:15:30 [Ian] DC: Two reasons:
20:16:11 [Ian] a) The only way you can be sure that a consumer will notice that you mean the same thing is that you've spelled it the same way. I think that they're not wrong. Nothing wrong with string compare.
20:16:26 [Ian] In general, it's an art to gather that something spelled differently means the same thing.
20:16:45 [DaveO] Q+
20:16:57 [Ian] TB: If we believe that, should there be a recommendation that "when you do this, only %-escape when you have to, and use lowercase letters." Where should that be written?
20:17:04 [Ian] DC: Shortest path to target is the I18N WG.
20:17:05 [Stuart] ack DanC
20:17:07 [Zakim] DanC, you wanted to disagree that it's wrong
20:17:32 [Ian] DC: RFC 2396 applies equally to all URI schemes.
20:18:01 [DanC] generating absolute from relative URI is not scheme-specific.
20:18:01 [Ian] DO: There are absolutization scheme(s) and things like scheme-specific rules (e.g., generating an absolute) and we should take this into account when we talk about doing a string compare.
20:18:09 [Ian] RF: Different issues here.
20:18:58 [Ian] RF:.
20:19:04 [DanC] there's stuff like and , which are specified, in a scheme-specific manner, to mean the same thing.
20:19:31 [Ian] DO: So, canonicalize according to scheme and generic rules, then compare.
20:19:46 [Ian] RF: The only entity that does the canonicalization is the URI generator; not at comparison time.
20:20:09 [Ian] RF: Inefficient to canonicalize at compare time.
20:20:15 [TBray] q+
20:20:33 [Stuart] ack DaveO
20:21:29 [TBray] no, paul's in the room with me, and we hear it but aren't doing it
20:21:35 [Ian] RF: Making a URI absolute is scheme-independent. That's required so we can add schemes later on.
20:22:06 [Ian] DC: There was a backlash in the XML community about saying absolutize.
20:22:15 [Ian] TB: That was a different issue.
20:22:40 [TBray] q?
20:23:01 [Ian] DC: I don't understand the difference.
20:23:08 [Stuart] q+
20:23:14 [Ian] DO: Namespaces used as identifiers rather than for dereferencing.
20:23:43 [Ian] DO: Requiring absolute URIs was meant to facilitate authoring.
20:24:14 [Stuart] ack TBray
20:25:47 [Ian] TB: I hear people arguing that string comparison necessary. I think there needs to be a statement of principle to get good results:
20:25:55 [Ian] a) don't %-escape unless you have to.
20:26:04 [Ian] b) use lowercase when doing so.
20:26:09 [Ian] TB: Where do we take these suggestions?
20:26:40 [Ian] TB: (a) We have a section on the arch doc on comparing URIs or (b) ask I18N WG to deal with this.
20:26:48 [Ian] RF: Or add a stronger suggestion to the URI spec itself.
20:27:01 [Ian] TB: That's a wonderful answer.
20:27:08 [Stuart] ack Stuart
20:27:20 [Ian] RF: I can add this to the issues list (section on URI canonicalization). I can't promise that it will be answered there.
20:27:41 [Ian] DC: I don't think we should punt this entirely.
20:28:09 [Ian] DC: For URIs, it's fine to do string compare. For URI references, it's fine to absolutize and then do string compare. That works for me.
20:28:28 [Ian] SW: I agree with TB that we should have something in arch doc. That should be in sync with the emerging URI spec.
20:28:50 [Ian] DO: How about as little as "there are good rules for doing this; go see the URI spec and the IRI specs for more info..."
20:28:58 [DanC] "Can the same resource have different URIs? Does identify the same resource as? "
20:29:04 [DanC] --
20:30:03 [Ian] DC: Is it useful to do a finding in the mean time?
20:30:16 [Ian] IJ: I hope to harvest from Dan's FAQ.
20:30:28 [Ian] TB: I think that if in arch doc, probably don't need a finding.
20:31:10 [Ian] Action IJ: Harvest from Dan's FAQ.
20:31:18 [Ian] Resolved: the Arch Doc should mention this issue.
20:31:48 [Ian] -----------
20:31:54 [Ian] DO: Possible regrets for next week.
20:31:57 [Zakim] -Vancouver
20:31:59 [Ian] NW, SW: Regrets.
20:32:00 [Zakim] -David
20:32:02 [DanC] order? next meeting was earlier in the agenda.
20:32:05 [Ian] PC: Tentative regrets.
20:32:06 [DanC] we're adjourned, yes?
20:32:15 [Ian] ADJOURNED
20:32:37 [Zakim] -Roy
20:32:41 [Roy] Roy has left #tagmem
20:32:41 [Zakim] -Stuart
20:32:44 [Zakim] -Ian
20:32:48 [Ian] RRSAgent, stop
http://www.w3.org/2002/07/22-tagmem-irc
Devel::TraceDeps - track loaded modules and objects

$ perl -MDevel::TraceDeps your_program.pl

And the real fun is to pull a tree of dependencies off of your test suite.

$ perl -MDevel::TraceDeps=tree -S prove -l -r t
$ ls tracedeps/

And of course no Devel:: module would be complete without an obligatory cute little shortcut which needlessly involves the DB backend:

$ perl -d:TraceDeps whatever.pl

TODO: a cute little shortcut which needlessly claims an otherwise very funny-looking toplevel namespace.

$ perl -MapDeps whatever.pl

Devel::TraceDeps delivers a comprehensive report of everything which was loaded into your perl process via the use, require, or do($file) mechanisms. Unlike Devel::TraceLoad, this does not load any modules itself and is intended to be very unintrusive. Unlike Module::ScanDeps, it is designed to run alongside your test suite. For access to the resultant data, see the API in Devel::TraceDeps::Scan.

In tree mode, forking processes and various other runtime effects *should* be supported, but surprises abound in this realm -- tests and patches welcome.

TODO: report on shared objects loaded by DynaLoader/XSLoader.

TODO: somehow catch the 'use foo 1.2' VERSION assertions. This is handled by use() and is therefore outside of our reach (without some tricks involving $SIG{__DIE__} or such.)

I think these are going to be very pathological cases, since I've already run a fair body of code through this without any visible hitches. If you try to require("5.whatever.pm"), it might fail. If a required module expects to do something with caller() at BEGIN time (e.g. outside of import()), we have problems. If I could think of a good reason to rewrite the results of caller(), I would.

The tree setting goes all the way down into any perl subprocesses by setting ourselves in PERL5OPT. This is probably what you want if you're trying to package or bundle some code, but it needs a knob if you're trying to do something else with it.
The PERL5OPT variable gets dropped if you use taint. Patches welcome!
http://search.cpan.org/~ewilhelm/Devel-TraceDeps-v0.0.3/lib/Devel/TraceDeps.pm
#include <std/disclaimer.h>
/*
 * Your warranty is now void.
 *
 * I am not responsible for bricked devices, dead SD cards,
 * thermonuclear war, or you getting fired because the alarm app failed.
 * YOU are choosing to make these modifications, and if
 * you point the finger at me for messing up your device, I will laugh at you. :P
 * blah blah blah you get the point.
 */

So for those of you living under a rock (jk), I already maintain Void Kernel for the One S, so why would I bother creating another? Well, @Rapier started having issues, so I PM'd him a patch. He said he would test it later but commented on how Void LP and IceCode LP were feeling a bit sluggish (they share the same base), whereas the stock CM 12 kernel is super smooth. I then tried the CM 12 stock kernel and realized that he was right. So I forked the CM stock kernel, added some basic features, and sent it to him. He commented on how the OC values weren't sticking properly and the benchmarks were really low, but how it was smooth. So I fixed the OC, and added @show-p1984's MSM_MPDEC and MSM_THERMAL to speed it up, and when I benchmarked it, I got 6788 in Quadrant. That was a couple of hours ago. Now, I'm releasing it into the wild. The main focus of the Bricked Edition (like Bricked Kernel before it) is speed and stability. Undervolting is doable, but you have to do it yourself, and if you get random reboots I will not help you there. Enjoy Frosted Kernel, everybody!

I'm really sorry for the noobish question, but how do I tweak this kernel's settings? Which app should I use?

For the Bricked edition you can use TricksterMOD; for the Intelli edition you can use FauxClock (but that's a paid app). Sent from nowhere over the air... (Thanks derkleinebroicher!)
Post #1: Disclaimer, Why I created this Kernel
Post #2: Download and Features for Bricked Edition
Post #3: Changelog for Bricked Edition
Post #4: Download and Features for Intelli Edition
Post #5: Changelog for Intelli Edition
Post #6: Download and Features for Void Edition
Post #7: Changelog for Void Edition
Post #8: Link to Discussion Thread

XDA:DevDB Information
Frosted Kernel, Kernel for the HTC One S

Contributors
javelinanddart, Rapier, jrior001, rmbq, AKToronto, show-p1984, flar2, faux123, Rapier, Winstarshl, derkleinebroicher, unimatrix.ø, Loreane Van Daal

Source Code: github.com/FrostedKernel/android_kernel_htc_msm8960

Kernel Special Features:

Version Information
Status: Stable
Created 2015-02-15
Last Updated 2015-03-21
https://forum.xda-developers.com/htc-one-s/development/kernel-frosted-kernel-javelinanddart-t3031529
In this React Native beginner's tutorial, we are going to learn how to set up the tools and environment that will enable us to start building cross-platform Android and iOS applications using React Native. The good thing about React Native is that if you have a good background in JavaScript, learning React Native will be a breeze. But there is nothing wrong if you do not know JavaScript; with a little devotion you can learn and master it. Before we start, we are going to install some software that Android and React will use to run in our Windows operating system.

Install JDK

If you do not have Java already installed in your system, you can start by installing the Java Development Kit. If not, you can follow this link to download and install the JDK in your system.

Install Android Studio

Android Studio is the official development IDE for Android, although you can use any other IDE of your choice. If you have not yet installed Android Studio, kindly move over to the official Android Studio website to download and install the latest version. One good thing about Android Studio is that when you install it, it will also set up the environment variables for you.

Node.js

We are going to install Node.js. If you have not installed Node.js in your system, this is the best time to do it. Node.js is a JavaScript runtime that runs on a server. Move over to the Node.js official website to download the Node.js Windows installer for 32 or 64 bits, depending on your system specification. The good thing is that the installer is bundled with NPM, the Node.js package manager. This will help us download and install the React Native packages we might need for development, like the React Native command-line tool.
Once you finish with the Node.js installation, you can open the Windows command line as an administrator and check whether Node.js has been installed properly with the following command:

node -v

Install React Native CLI

Now that we have installed most of the tools we need, we are going to install the React Native CLI, which we will use to set up our first React Native project. There are many uses of the React Native CLI, which we will see later. With the command line still open, type this command and hit enter:

npm install -g react-native-cli

This will install the React Native CLI in the global environment. The images below show how it will appear if everything works for you.

Let's Start Building

We are now good to go with creating our React Native application. With your command line still open, type the following command and hit enter:

react-native init projectname

Please note that you can choose any name of your choice for your project. Also, you can use the cd command to change to the directory where you want to store your project. Note that this command, once issued, will take a little while to complete. Once it is completed, you will see the below output in your command line. Once you have created your project, go to your project folder and double click to open it. Once it is open, you will see a folder structure like the image shown below. Inside the folder, you will see an android folder which will contain the Android app source code, an ios folder for the iOS source code, node_modules for Node packages, and a tests folder for test scripts. Please note that starting from React Native version 0.49, there is no index.ios.js and index.android.js. You will see only index.js.

JavaScript IDE and Android IDE

In this tutorial, I am going to use Visual Studio Code for writing React Native code, and we will open the android folder inside Android Studio and run the application using an Android emulator I have set up. You are free to use any IDE for your React Native code, like Atom and so on.
In Visual Studio Code, go to the File menu, select Open Folder, navigate to where you stored your project, and select the project folder to open it in your editor. Once it is open, look in the root folder and open the file named index.js. The content of the generated file is shown below.

/**
 * Sample React Native App
 *
 * @flow
 */

import React, { Component } from 'react';
import {
  Platform,
  StyleSheet,
  Text,
  View
} from 'react-native';

const instructions = Platform.select({
  ios: 'Press Cmd+R to reload,\n' + 'Cmd+D or shake for dev menu',
  android:
    'Double tap R on your keyboard to reload,\n' +
    'Shake or press menu button for dev menu',
});

export default class App extends Component<{}> {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.welcome}>
          My first react native app. Amazing feature
        </Text>
        <Text style={styles.instructions}>
          To get started, edit App.js
        </Text>
        <Text style={styles.instructions}>
          {instructions}
        </Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: '#F5FCFF',
  },
  welcome: {
    fontSize: 20,
    textAlign: 'center',
    margin: 10,
  },
  instructions: {
    textAlign: 'center',
    color: '#333333',
    marginBottom: 5,
  },
});

This is the entry page for our application. From the code, you can see that React uses components as the building blocks for pages. The default App class is the root component of the application. In the first part of the code, the import statements bring in React and the React Native components the page needs. Finally, the StyleSheet is used to style the content of the page. We will go deeper into these topics in subsequent tutorials.

Android Studio

Open the android folder in Android Studio. Once it is open and there are no errors, you can click the Run button and select an emulator. If you have not set up an emulator, you can do so within Android Studio. Once this is done, it is time to run our React Native Android app.
Let's Test Our React Native Android Installation

Now that everything is set up, with our command-line interface still open, run the following command to launch our React Native Android app in the emulator:

react-native run-android

Please note that if the package server is not running, you should issue the command react-native start to start the server. If everything goes well, you will see an image similar to the one below.

While the emulator is running, you can press Ctrl+M to open the dev settings. From the list, you can select Live Reload, which will monitor changes in the project files and folders.

With this we have come to the end of this tutorial. I hope you have learned a thing or two. Please bear in mind that many errors might occur during this process. You can use Stack Overflow or other online resources to research the errors, or use the comment box below to ask questions.
2013-09-18 Meeting Notes

John Neumann (JN), Dave Herman (DH), István Sebestyén (IS), Alex Russell (AR), Allen Wirfs-Brock (AWB), Erik Arvidsson (EA), Eric Ferraiuolo (EF), Doug Crockford (DC), Luke Hoban (LH), Anne van Kesteren (AVK), Brendan Eich (BE), Brian Terlson (BT), Rick Waldron (RW), Waldemar Horwat (WH), Rafael Weinstein (RWN), Boris Zbarsky (BZ), Domenic Denicola (DD), Tim Disney (TD), Niko Matsakis (NM), Jeff Morrison (JM), Sebastian Markbåge (SM), Oliver Hunt (OH), Sam Tobin-Hochstadt (STH), Dmitry Lomov (DL), Andreas Rossberg (ARB), Matt Sweeney (MS), Reid Burke (RB), Philippe Le Hégaret (PLH), Simon Kaegi (SK), Paul Leathers (PL), Corey Frang (CF)

4.2 Reconsider decision to make Typed Arrays non-extensible (Cont)

RW: (recapping yesterday's discussion)
EA: Can we pick 3 people to champion a recommendation?
RW: Ideal (support in the room)

Consensus/Resolution
- 2 people to come back with a recommendation:
  - Dmitry Lomov
  - Allen Wirfs-Brock

Post-meeting update: Dmitry and I discussed the issue and our recommendation is that Typed Array instances should come into existence extensible. This is only about the objects created by the current set of built-in typed array constructors (or any subclasses derived from them). It does not imply that fixed-size array types introduced by the future typed objects proposal will necessarily also be extensible.
-- Allen

As part of that consensus, variable-length (but not fixed-length!) typed array instances that are part of a future Typed Object spec should also be extensible in the same way as current Typed Array objects. In that way, full compatibility and equivalence between, say, "Uint8Array" and "new ArrayType(uint8)" will be maintained. As part of the typed objects proposal, we will also consider having different "type constructor" names for variable- and fixed-length typed arrays (e.g. "new ArrayType(uint8)" vs. "new FixedArrayType(uint8, 10)").
-- Dmitry

4.4 Symbols

Dave Herman presenting (follow-up slides)

DH: Symbols: object or primitive?

Open issues
- privacy
- object or primitive

(1) Statelessness
- Symbols should not share state
- Encapsulates key and nothing else

(2) Cross-Frame Compatibility

obj[iterator] = function*() {};

Another frame must also know what this iterator symbol is.

EA: Workers?
AWB: Only an issue when you want to move a value...
EA: Case where you use a name as a brand (branding using public symbols does not work; branding needs true private state.)
YK: can these be structure-cloned?

(3) Methods

DH: The one place that most objects can't have methods is prototypeless objects, but they can have instance methods. For most of the interesting data (strings) there are things you can do on them:

alert.call();
Math.sin(0);
document.getElementById("body");

DH: if you only allow things to work via functions with arguments, you are turning off a powerful tool

(4) Mutable Prototypes

Monkey-patching standard methods is a best practice. The evolution of the web depends on it.

DH: This is important to the language
DH: mutable prototypes are the way that developers provide a consistent platform across user agents they don't control
STH: This is assuming that Symbol will grow methods
YK: It's actually an assumption that it won't
DH: If we freeze a prototype, we're closing the door to ever evolving the API and closing the door to user code experimenting
- No experience to show that freezing a prototype "works"
WH: By "mutable prototype", you mean add and change methods, not change the prototype
DH: Yes, changing the shape of the prototype object
AR: (general defense of mutability, in prototypes and otherwise)
AWB: Freezing the prototype can be undone safely, if there is reason in the future.
AR: this is not a conservative position, despite the claim.
DH: It doesn't matter; if we design depending on the invariant that the prototype is immutable, then we can't change

(5) Non-Answers

(6) Shallow-Frozen Objects

O.getPrototypeOf(iterator).foo = 12;

- Fails Desideratum #1: is stateful
- Fails Desideratum #3: distinct cross-frame iterators

WH: What doesn't work?
DH: The standard iterator symbol would be different in different frames because they exist in different heaps

(6) Deep-Frozen Objects

O.getPrototypeOf(iterator).foo = 12; // strict error

- Fails Desideratum #4: no evolution

YK: How does the current spec deal with the function.prototype linkage
AWB: The prototype is null
DH/YK/AWB: this is incoherent, doesn't work at all.

(7) Missed?

Something about prototype-less wrappers, fill in from slides...

(8) JS already has an answer for this! Autowrapping of primitives

- typeof iterator === "symbol"
- Get/Call operations auto-wrap
- Prototype state is global per-frame

"People think auto-wrapping is gross"
- provides uniform OO surface
- does so without ruining immutability
- doesn't ruin API patchability
- need a solution for value types

(9) Remaining Issues

- [[ToPropertyKey]] of Symbol objects: auto-unwrap? Does it matter in practice?
- Worry about toString for symbols and Symbol objects? Does it matter in practice?

WH: We should be consistent with the way we already do things. We don't unwrap boolean wrappers when used in a condition; we shouldn't unwrap symbol wrappers when used to look up a property. If you use Boolean(false) in an if statement, it will evaluate true.
AWB: In ES5, there was special treatment of wrappers, e.g. in JSON.
- no reason you should have a wrapper value in most contexts
- I would say, don't use

YK: Is it important that === works cross-frame, or just the indexing?
- Have a mechanism that allows them to
DH: A kind of object that overloads ===
YK: No, don't
- If you go with new Symbol() returning an object with a mutable prototype
DH: No "object" solution works, because of methods
- Need to move to next slide
- w/r to toString, I can't construct a plausible scenario where this would be encountered
YK: Accidentally construct a wrapper
WH: People explicitly convert to string before doing an operation. It would be impractical to make symbols survive the various kinds of string conversions.
ARB: Lots of existing code has code paths that convert a value to a string to get a property key; such code would subtly misbehave with implicit symbol-to-string conversion

(10) typeof Extensibility

We don't know that it won't break the web. MSIE's "unknown" type may simply be rare enough to be undiscovered.

Fallback: "object" with [[Get]] et al. that behave like an auto-wrapper? (Object.isValue()?)

WH: It won't break the web because existing scripts won't see the new type. Seeing one of these things requires changing the script.
WH: We said that array subclasses are new and safe because unmodified code won't see them break things like Array.prototype.slice.call; why isn't the same argument applicable to symbols?
AWB: when I did symbols originally, as primitives and not wrappers, it was a bug farm, because everywhere that assumed a primitive type now had to explicitly account for the possibility of a symbol. The standard library in particular had to do this. In practice, anyone who is writing a general-purpose library would have to do the same thing. If you have wrappers, then the value "turns into an object" when you use them as such, which works fine.
ARB: I implemented symbols in V8 with wrappers and there were no places where they needed special handling.
AWB: yes, I agree! Wrappers address most of the issues.
YK: (Concerns about existing code that might not be resilient to typeof)
DH: the most straightforward thing to do is extend typeof, but you can have code that's not resilient to new typeof values.

(More discussion of existing library code having to deal with symbols)

DH: Many new things in ES6 are objects with new APIs, so passing them in would violate the basic interfaces expected. Whereas for typeof, many APIs will take any value whatsoever, and then figure out what to do by discriminating via typeof. So introducing a new typeof will break their assumption that they can handle any case.
JM: I don't think it's so different; they will fail in similar ways.
AWB: This is different; it's at the core of the language, and there is a different expectation.
DC: Yes, this is different. There is an idiom that will fail, and it's a common one: switch (typeof something).
DH: if the only conservative extension guarantee we can make is that if you don't use any new features, everything is fine...
DH: the difference is that there are APIs that say "I'll take any JavaScript value"; they don't mean "I'll take any JavaScript value that was present in the ES5 era." This is different from saying "I'll take something with an array interface," and you passing in something like Map which happens to violate the array interface.
WH: you are picking the wrong strawman. We are introducing things that do keep to the array interface (array subclasses) but break existing idioms such as using Array.prototype.slice.call to coerce to an Array object.
DH: right, I'm picking Jeff's strawman, not yours.
AWB: the issue with slice() is about realms, not kinds of objects, or subclasses.
WH: The slice problem is not about realms. It arises merely if you introduce an array subclass and never use more than one realm.
WH/AWB: (arguing about what they're arguing about)
JM: You say it's clear when you pass the wrong interface to an array-expecting vs. an all-expecting API. Can you expand on why that's clear?
DH: Difference between an API that says "I will take an arraylike thing and operate correctly" and an API that accepts "any"
JM: but it's not about types for the "any" APIs; they usually just pass them through, e.g. a datastore API.
LH: Rare that any API says "I'll accept any"
DH: we're talking about parametricity, passing through the value vs. inspecting it. My experience is that I see type inspection of the type-any inputs a lot.
LH: How often does an API take type "any"?
DH: A lot of JS programs don't protect against wrong values
WH: No such thing as an API reliably taking type "any" and portably doing anything useful with it. Suppose we later introduce a new Decimal primitive that obeys the IEEE standard in that a DecimalNaN is not === to itself. Would a map work with it? Code that exhaustively type-dispatches primitives simply must change once in a while.
YK: (recalling a point AWB made) people using typeof to defeat autowrap, to tell the difference between a thing that auto-wraps and a thing that does not
DH: The reason we shouldn't be afraid is that typeof serves the purpose ...
EA: If typeof primitive symbol returns "object" there is no way to distinguish a primitive symbol from a wrapped symbol.
DH: we'd have to add a gross check for distinguishing - slippery slope
WH: "the future is bigger than the past" and it seems we're trying to perpetually mortgage the future to fix a relatively transitory fear we're not even sure is real. The cost is forever having yet another mechanism to distinguish the types of primitives.
DH: Then there is the "browser game theory" argument: who will implement in the face of danger first?!
WH: (interrupts)
DH: I also like finishing my sentences. I'm willing to give it a try, but...
EA: FWIW, V8 is shipping typeof symbol under a flag and no bug reports
DH: under a flag is not the web
- Willing to take this back to SM implementors
AWB: Explicitly checking for "object"
DH: Existing code breaking, this needs to be considered.
- New code can find old code that can be fixed
RW: Will put Symbol through the jQuery test suite to see what "breaks"

Agreement that auto-wrapping is the way to go.

WH: Yes, use auto-wrapping the way we know it

re: toString
DH: One way or another we'll have to have it, whether it throws or produces a string
DH: There are values that throw, like proto-less objects
- conversion to string is not infallible in JavaScript
ARB: implemented two toString methods
- Symbol.prototype.toString => throws, to avoid the implicit coercion hazard
- Object.prototype.toString => applicable to Symbol, but have to be explicit
AWB: Plausible that Symbols could have a printable "name"

DH: Proposal summary:
- Symbols are primitive
- typeof is "symbol"
- standard Symbol prototype
- construct to create wrapped symbol
- Symbol wrapper object does not auto-unwrap. ToPropertyKey will eventually call Symbol.prototype.toString, which will throw.

ARB: for the record: that is exactly what V8 implements
YK: to the implementers in the room, please make the error message when you use a wrapper very nice.
ARB: V8 gives an explicit error message
DD: new Symbol() is a probable footgun; should it do anything at all? Maybe just throw? Because we don't have symbol literals, so unlike new Number(5) vs. Number(5) vs. 5, the choice is not obvious; people will try to do new Symbol() not realizing that this creates a wrapper.
RW: Agree.
ARB: but then it's weird that there is an object you can get access to, but whose constructor doesn't actually work.
DD: but the only way you can get access to these objects is via the this value inside Symbol.prototype methods, in sloppy mode.
DH: no, Object(primitiveSymbol) would give you back the wrapper.
DD: ah, damn.
AWB/ARB: Discussing valueOf returns, used in contexts where a numeric value is expected
DC: But string.valueOf produces strings
AWB: Do you anticipate future value types having auto-wrapping?
DH: BE has thoughts about typeof modifiability
RW: Defer until BE is present.
Agreement

AWB: How does user code define a "well-known Symbol"? One that works across Realms?
DH: not sure
- Standard library: we can create something that exists everywhere
- Library code: not sure
ARB: Think this is a serious problem that needs to be addressed
DH: Proposal
- Agree that it's bad to have a Symbol with BOTH a private and public form
- Private by default? Public by access?
- Need to solve the remaining proxy leak problem
AWB: We've discussed and concluded that we're nowhere near a solution to the private state problem.
YK: Mark's solution inverts where the transaction occurs
DH: If private symbols really behave like WeakMaps and you invert where they live, then they are truly private
YK: You want the proxy to trap them. Why?
AWB: It's just a property name.
DH: If you have access to the symbol
EA: What about iterator?
STH: If you want the proxy to have iterator behaviour, you need access to @@iterator
...
AWB: Mistake to conflate symbols, which are guaranteed unique property keys, with private state. There is a temptation to do it, but we run into problems. I want private state, but there are better ways to do it.
DH: I am pretty exhausted from years of this debate, but of all the things we have to decide, this has to be decided now. And I am willing to fall on either side of the fence (private vs. public vs. both), for the sake of resolving this, but we need to resolve it.
LH: agree; we can't leave this meeting without a decision on this. But Arv's proposal (GUIDs) does give us a path.
AWB: but Arv's proposal doesn't solve privacy at all; it's just about the representation of symbols (strings vs. real symbols).
RW/DH: strings as symbols have bad usability and are problematic to use.
YK: No solution to the enumerability question (are symbols enumerable)
DH: yes, if we were going to go with this, we would just have iterator and create be a shitty string.
YK: worse, it would have to be a UUID published in the spec.
AWB/DD/AR: underscores are better than GUIDs.
DD/RW: Symbols are not for private state; use a WeakMap. Symbols are for uniqueness. Move on.
DH: Arv, what is your issue with just having public symbols?
EA: they don't carry their own weight. They behave like strings, except for reflection.
(silence)
AWB: there is one difference. They are in a different meta-level of the system. There is no way that user data can coincidentally be confused for a symbol.
DD: also, symbols do not show up in JSON.stringify.
DH: This allows us to add a meta level. Just like __proto__ in the past. Underscores are, until now, our magic feather that we wave around to say "This is the meta-level! This will not be confused with data!" And that's BS.
LH: unless you enforce the distinction, people will build abstractions that break through the layers.
EA: before, we had for-in; then we gave people getOwnPropertyNames and people started using that; now we're going to give them getOwnPropertyKeys and they'll use that.

(discussion about the string display)
YK: a debugger could recognize that it's a UUID and replace it with a readable value

(discussion about "__" names)
LH: if you use any English word, someone could create a conflict. If you use "__", there is no accidental conflict.
YK: Using "iterator" or "__iterator__", no one can reliably ducktype for an ES6 iterator
STH: leaving for lunch, in favor of Symbols
EA: I'm still in favor of keeping symbols, sorry for derailing the discussion a bit; what helps is the possibility of removing getOwnPropertyKeys. That makes them secure in the absence of proxies.
LH: but then existing libraries cannot implement a real mixin that moves symbol-keyed properties over to a target object (if symbols aren't given reflective capabilities)
(Brief discussion of making Object.mixin the only way to do this.)
DH: Object.mixin that can transfer symbols, plus proxies, allows you to reimplement Object.getOwnPropertyKeys.
AWB: Still think we should have Symbols, getOwnPropertyKeys
WH: I object to getOwnPropertyKeys (because of too many historical layers: in every spec we seem to be adding yet another layer of enumerating the properties of an object, with a we-really-mean-it-use-this-one-instead-of-the-previous-edition's vibe).
DD/AR/DH: (discussion of the Object.mixin + proxies trick)
YK: It sounds like we're slipping down the path of doing privacy with symbols again, and we're going to appease people for the wrong
LH: the concern for getOwnPropertyKeys is that people would just use it in place of getOwnPropertyNames. Maybe if we separate it into getOwnPropertySymbols + getOwnPropertyNames, that will make it sufficiently painful that people won't just use them together like getOwnPropertyKeys.

Consensus/Resolution
- Symbols are a new primitive type with regular wrapper objects
- typeof symbol === "symbol"
- implicit conversion to string throws
- new Symbol throws
- Symbols are public, not private; OK that they leak to Proxy
- Symbols are unique
- Only exposed via Object.getOwnPropertySymbols instead of Object.getOwnPropertyKeys
- Object.mixin copies both symbol and string properties

Additionally:
- AWB commits to bringing a proposal for user-defined well-known symbol registration

6. Post ES6 Spec Process (Rafael Weinstein)

- slid.es/rafaelweinstein/tc39-process

RWN: put together some thoughts after the last meeting with DL, EA, AWB, etc.
RWN: most of it is good, except that it's date-driven, and the consensus requirements lead to a high-stakes game for getting features in.
RWN: The second problem is that with large-quanta spec releases, there's a varying maturity level across proposals. Stable stuff is "held hostage" to newer features.
RWN: as we near a release, we end up with large pressure around things which may or may not make it. Argue that this is destructive to feature quality.
RWN: we also have an informal process.
It occurs to us that acceptance of features comes before details are sorted out. Implementers, therefore, lack a clear signal about when it's time to start implementing features. Might be unavoidable, but other groups show a different way (W3C, e.g.).
RWN: we also have a single spec writer who bears the full burden of authoring spec text.
RWN: a few ideas:
- decouple additions from dates
- put structure around stages of maturity
- what does each stage mean? Get clarity
RWN: non-goal: componentize the spec or break apart responsibility from the whole group. Also a non-goal to change the rate of evolution (necessarily).
RWN: looked at how the W3C works and tried to extract the bits that seem to work well. A 4-stage process:
1.) proposal
2.) working draft
3.) candidate draft
4.) last call
RWN: At a (much more) regular interval, we'd post (smaller delta) drafts to ECMA.
AWB: do these stages line up with W3C terminology? (sort of, not really)
RWN: proposals outline the problem, need, API, key algorithms, and identification of cross-cutting concerns. Also, an identified champion. Acceptance signifies the idea that the solution is something the committee wants to keep working on.
RWN: note that we don't explicitly slate a specific revision as the target for a proposal. That comes later.
AWB: concerned that we might accumulate accepted proposals that there's no activity on. How can we structure a cull?
BE: as needed. FileSystem API as an example.
RWN: the analog might be the "deliverables list" used by W3C -- removing something from the list on the wiki could be that thing
DH: Not to componentize? Seems like there is something of a componentization, and that's the value?
RWN: don't want to abandon the goal of language coherence. CSS did this wrong and has lots of weirdness as a result. Non-communicating editors lead to pain. This model is different: everything merges into a single spec.
DH: how is this different from what we're doing now? Maybe this is a smaller tweak?
BE: What this does is add more staging before "proposal"
RWN: this is saying the first stage doesn't have spec text, but the second stage does.
DH: Makes a lot of sense; might make sense to spell out the earlier "incubator" stage.
RWN: so there might be a stage 0, which is sort of the strawman we've had before
RWN: what we want to see at stage 2 is draft spec text. It can have early-quality notes, etc., but thought should be put into the text for the feature before we collectively accept the feature.
RWN: there are a couple of key things to look at: can we decouple spec editions from specific features? What are the substantive stages of maturity?
BE: quick question: the i18n spec was on a different track; is this only for core stuff? (quotes FakeAlexRussell??) (sort of, might be a way to draw stuff into the main spec)
RWN: stage 3 is the "Candidate Draft". It signifies that the committee thinks the scope of the feature is set. We can incorporate changes, but the key thing is that implementations are potentially costly. This stage is a green light for implementing and final feedback.
RWN: stage 4 is the "Last Call Draft": 2 implementations and an acceptance test that they pass. Once accepted at this stage, the draft can be scheduled for the next spec to be published.
RWN: what about dependencies? The committee isn't absolved of this. It's up to us to manage them, and there isn't any silver bullet. We need to make decisions.
RWN: thought a lot about linkage as a part of this. A champion's interests might work against the language (ducking dependencies, etc.). The committee still needs to advise and continue to look over the landscape.
(discussion)
AWB: implicit in this is redefining the role of the editor to be more of an EDITOR, and less of an author. Should probably have a role in advancing proposals.
RWN: so still a world where there's a single editor?
AWB: yes.
(general agreement)
PLH: Noting that some of the process order might be confusing/out of order, with regard to naming?
RWN: yes, "last call" means something different in W3C that doesn't map well
YK: the year might be a red herring. The point isn't the date, and the goal isn't to rush things under the wire.
RWN: (refers to Chrome release process) (not quite Chrome, but close and relevant: developers.google.com/v8/launchprocess)
AWB: some of the non-technical overhead can be offloaded
DL: part of the goal is to help offload the work, getting more people writing more spec text. (notes that this happened for Proxies and O.o)
DL: inside the V8 team, we don't have a ton of visibility into the maturity of features.
BE: SpiderMonkey has shipped many things over the years, but at a cost
(discussion about implementations and style)
RWN: so we can imagine that you'd have different styles of implementations at each stage? Makes sense.
(agreement)
AVK: the W3C is removing the last couple of these steps
PLH: there's a new draft on GitHub somewhere
(some discussion that you need implementations, hence the new W3C process)
AR: the Chrome process shows that some features might slip multiple releases, and that's very good for overall quality.
AWB: are the criteria here entry or exit criteria?
(discussion)
WH: What about mutually beneficial features?
AR: that's the dependency question; we talked about that
RWN: it's sort of arbitrary, but that exists no matter what. There's no silver bullet. It's the job of the committee to keep an eye on what's in flight. Not sure a process can ensure whether we do that well or poorly.
WH: not componentizing is good, but want to make sure that the process doesn't get in the way.
BE: true.
AWB: if we see things that are tightly linked, we might treat them that way
RWN: as I said earlier, the committee can choose to merge them
WH: is the intent that the spec will be written by many people? or a single author?
RWN: the hope is that we'll have more authors for sections of the text, and it'll continue to be the responsibility of the (single) editor to maintain quality.
YK: I've found it useful to go through the exercise of writing spec text
LH: I like that aspect of this proposal quite a lot
DH: I've found it useful to write things in pseudo-code when exploring many alternatives... there's a cost to writing it out that way
AR: things are meant to get more "specy" and improve in quality over time
(Reviewing previous approaches to specifying new features)
BE: I think ES7 should follow this
AWB: Yes
STH: As long as we're realistic about how much process change can really be helpful
DH: Smaller features can ship, and large pieces can take the time that they need.
DH: need a way to post features in progress
WH: Difficult to do refactorings of the spec if various folks write parts of it independently.
BE: Integration step left out? (e.g., when does feature integration into the spec occur?)
- huge costs
- potentially huge conflicts
- need to identify necessary changes as early as possible
WH: Concerned that the one-edition-per-year timeline is unrealistic both for us and for our users.
WH: Once per year would be too much of a moving target for users. For example, writing (and re-reading) books about ECMAScript would be difficult.
WH: Imagine trying to fast-track one edition per year through ISO, with yet another one done in ECMA by the time the previous one gets done in ISO. Also note that ISO has been known to generate interesting comments.
??: We don't need to send every edition to ISO.
??: Yes we do. They don't like it when you update an existing ISO standard and don't send them the update.
??: ISO likes their specs updated once every three years.
WH: How many simultaneous internal versions of the spec (the master Word document) would we maintain? Three?
AWB: One.
WH: Really? Let's say we'd plan to ship a new edition every December.
When would we fork our internal spec to work on new features for the next edition while preparing to send the current edition to the General Assembly?
AWB: Every January
WH: Then we'd be editing two editions simultaneously almost all the time.
AWB: I can handle it.
WH: Yes, but can the reviewers of the spec handle it? We have enough trouble getting folks to re-read stuff as it is.
WH: Once every two years would be more reasonable.

Consensus/Resolution

5.10 Function parameter scoping and instantiation

Andreas Rossberg

Default Parameters/Arguments

Goals:
- Convenience Feature
- Readable!
Non-goal: subtle expressiveness
Should be able to understand the defaults without looking at the function body

ARB: Two Issues
- Scoping related to the function body
(examples of really weird cases)

Solution:
- Defaults should behave as if provided by a wrapper function

Solution:
- Evaluate defaults in a separate scope
- Can see "this", "arguments" and the function name (where applicable)
- Can see other parameters
- Cannot see bindings created in the function body
- Cannot see bindings created in the function body LATER (via eval)

Evaluation Order

function f(x = y, y = 2) {}
function f(x = eval("y"), y = 2) {}
function f(x = (y = 3, 1), y = 2) {}

ARB: Preferably these should be const bindings in that scope (not the function body)
AWB: (describes the TDZ)

Solution:
- parameters have a TDZ
- Initialized in sequence

WH: No distinction between a missing parameter and explicit undefined?
AWB: We agreed on that a long time ago.
BE: I thought there was agreement/discussion?
(referring to: rwaldron/tc39-notes/blob/master/es6/2012-11/nov-29.md#proposal-part-2)
(need slide examples)

DH: Most concerned with implicit rebinding
STH: The rebinding is only observable
(discussion re: mutation in parameter-bound closures)
STH: Can fix this while preserving
ARB: Can change the "nutshell" to meet the needs of the concern items: const => let
BE: In the example that binds
AWB:
- parameters are in a separate scope contour
- visible to the body scope
- the body is disallowed from creating
- "namespace" for parameters
NM/RW: (agreeable points about curly brace boundaries reinforcing scope)
BE: Summary:
- Outer Scope
- Parameter Scope
- Function Body Scope
YK: (recalling names declared in parameter scopes being rebound in the function body)
AWB: I can express this with one Environment Record
ARB: Cannot, because of eval. A delayed eval in the parameter list must not see bindings from the body

function g() { return 3 * 3; }
function f(h = () => eval("g()")) {
  function g() { return 2 * 3; }
  h();
}

AWB: Agreed
DH: (post-clarification)
- Two Scopes
- The Function Head/parameter list
- The Function Body
In the function head/parameter list, cannot see the scope to the right (the function body).
AWB: Any new syntax in the parameters changes the handling?
(vast disagreement)
AWB: The spec currently says var-like bindings. If you have new syntax, they're still var-like
- Duplicates are illegal
- Rules about redeclaration

// If...
function f(x) { var x; }
// changes to...
function f(x = {}) { var x; }
// No difference.
// But changes to...
function f(x = {}) { let x; }
// Error for redeclaration.

(clarification re: nothing changes var bindings to let bindings)
WH: (whiteboard) What is the value of y in this example? 5 or 2?
function f( x=(y=3, 2), y ) { console.log( x, y ); }
f(undefined, 5);

(discussion without a clear resolution)

CF: What about:

var y = 2;
function f( x=y, y=3 ) { console.log( x, y ); }
f();

BE & Others: y is shadowed; result is (undefined, 3)

WH: What is the value of y in this example? 2, undefined, or 5?

function f(x = (y = undefined, 7), y = 5) { ... }
f(undefined, 2);

AWB: The original value of the parameter is used to decide whether to default it or not.

BE: Surprised. Unhappy with having to store the original values of the parameters, thereby making for two copies of each one.

AWB: Already need to do this for the arguments object.

BE: The arguments object is easy to statically detect. These are more insidious.

(no clear resolution)

ARB: Fundamentally these are mutually recursive bindings.

BE: We agreed on two scopes. Head and body.
- If another parameter has a default?

Consensus/Resolution
- Two Scopes
- Head/Parameter List
- Body
- Temporal dead zone?
- Details unresolved?

4.5 Modules Update

Dave Herman

- [slides](need to commit for a link)

Generic Bundling Slide

(Debate about hash as the delimiter. Agreement that this discussion can take place elsewhere.)

DH: the browser loader is not something that belongs in Ecma-262. It's a separate spec. We can do it concurrently. We definitely want to start now and get feedback early, but it doesn't need to block ES6.

(Discussion of confusion on parsing vs. evaluation timing. Custom loaders can implement the desired esoteric use case; see caching slides.)

DH/LH/JM: Use case under discussion is lazy module execution, like AMD bundles or previous named module declarations. If you have a console.log inside a module, is there a way for that not to get executed?

DH: we may need to check to ensure that is possible, but it probably is. And the simplification of removing named module declarations still seems worth it.
4.6 Unbound variable checking

Dave Herman

DH: Proposes that if m is an imported module, then m.bar should be a compile-time error if the module doesn't have a property named bar.

WH on whiteboard: Should this be a static error in that case?

module m from "foo";
with (a) { m.bar; }

?: Modules are in strict mode and don't allow 'with'.

WH: But this isn't a module; it's just referencing one.

Module loading

DH: it used to be that <script async> would be able to do I/O, including import declarations; I've relaxed that. Now <script> can do that.

DD/DH: (clarification that you can use module syntax in scripts, not just modules)

BE/DH: (discussion of allowing <script> without async to load modules)

AR: note that inline scripts with async or defer attributes currently do not impact execution or parsing. This may change in the future.

JM: if people want to use import in a synchronous script definition, that should be OK; just throw

DH: that was the direction I was moving, but LH was objecting. And DD has an interesting point that if we don't let import happen at the top level, that would work well too.

STH: What do we like?

YK: Adding a form to HTML that says "this is a module." This reduces the need to allow imports in scripts.

BE: that would mean we're betting on getting something into HTML

DH: yes, but you could just use the loader API.

BE (whiteboard): four cases

<script>(1)</script>
<script src="...">(2)</script>
<script async>(3)</script>
<script src="..." async>(4)</script>

WH: how do you load a module without async scripts?

YK/LH: System.load("module")

WH: and you wouldn't need to import System or similar

DH: no, that's just a global

WH: but we have features that require modules, e.g. iterator

DH/YK: yes, but you can just do System.get("std:iterator").iterator.

WH/DH: if it's hard to use a module inline in the page, then it's hard to write the good code we want them to write.
DH: this is something that needs to happen for multiple reasons, so it should happen in HTML.

YK: import in top-level scripts doesn't give us modules in top-level scripts, only import in top-level scripts.

JM: so how do you enter the module system from HTML?

DH: two ways. The loader API, or the hypothetical <module>.

BE (whiteboard): top level script looks like

let { keys, values } = System.get("@iter");

DH: BTW JS practitioners, I'd like to reiterate if you have concerns about the standard module system.

LH: Implementers will ship iterators before modules, so we need a way to get at these things more easily.

DD (jokingly): We can just use a proxy to trap @@iterator in the meantime.

DH: I really think this how-to-enter-the-system conversation can occur outside TC39.

BE: so we can provide two top-level environments.

BE: OK, this is all about separation of standards-body concerns.

DH: and this helps not block TC39.

(Discussion somehow turns back to <script> vs. <script async> getting module-loading abilities.)

BE (to LH): so you're worried about an attractive nuisance, people doing more synchronous loading than they should

LH: Well today, import always succeeds, but with this proposal, it's order dependent, like today's System.get.

WH: <module> as a new HTML element won't work due to HTML parsing issues. Note that scripts contain un-HTML-escaped <'s (and even larger chunks of HTML tags) while other HTML elements don't. An HTML parser wouldn't know how to properly skip past an element (such as the proposed <module>) that it doesn't know about.

DH: I think <script type="module"> or similar is going to be necessary, for many reasons.

DH: so to recap, there's the two possibilities: allow gradual integration via import etc. in scripts, or the green path where you enter the module system once and then are there.

JM: Facebook wants both, so we can do initial page load and async load.

DH: that's fine, you can do that with System.set in the initial page load.
DH/BE: (Agreement that this should go in other standards bodies.)

Back to Static Checking

LH: back to static checking?

BE: you have to do label checking. It's not that bad.

ARB/BE/DH: we have to implement to find out.

BE: how much parsing do you have to do?

ARB: so that's in the pre-parser for V8

BE/DH/ARB: (discussion of V8's pre-parser)

DH: somewhere in between a reader (along the lines of SweetJS) and a parser, and that's what I don't understand.

ARB: it's a parser, but it just glosses over a lot of the grammar.

ARB: to be completely honest, we would like to get rid of this thing.

BE: so we won't know if adding this static checking for modules has implementation consequences until implementers actually go implement it. So if they have appetite for it, we should try to do that.

DH: JSHint or TypeScript could do all these things... We need to at the very least provide the basic foundation. But that would shut the door on further static things.

BE: V8, do you guys have an appetite for trying it?

ARB: I'd like to try, but not sure if it's possible within the ES6 time frame.

BE: and what about Chakra?

LH: we can try it, but we don't know...

DH: it would be OK with me to close the door on static things like guards.

BE (to ARB): wait, I'm confused. If you're doing import/export checking, aren't you doing about the same work you'd be doing for full static variable checking?

ARB/LH: no

DH: import/export is top-level only; you don't have to walk the full AST

LH: you would have to freeze the global environment at the point at which the static checking happens, and test against that

DH: yes, that's right

BE: OK, so maybe it's enough to have import/export checking. That spot-in-time check could be a problem. Yes, this is a problem for monkey-patching.

DH: every time we go through these cases it takes hours to remember the global object semantics.

AWB: I thought we concluded a long time ago that we had to preserve global semantics.
DH: clarifies: only talking about within the body of a module.
- Check the script against the current state of the Global object at compile time
- This is an unsound and incomplete analysis, but it's one that you can program to.

BE: so if we say that module bodies do not have this type of static name checking, we're closing the door to guards, hygienic macros, type checking, ...

WH: how does it close the door to guards?

DH: we always talk about guards as if we knew what their semantics were...

BE: OK, well, how about truly static stuff like types or macros.

DH: my experience in ES4 was that it was fighting with the dynamic aspect of the language

WH: in Lisp we have a multi-level time-of-execution (i.e. eval-when) system... it was very messy...

BE: I think static types and static metaprogramming as an option are shown to be not possible, really, via the fact that TypeScript and Dart are both basically WarnScript.

DH: I think that it's been shown that tooling is generally how the web solves this problem.

LH: and we could do this outside the language itself; the opt-in could be e.g. opening the debug tools instead of being in a module body.

STH: But nobody's said that this is a horrible feature, there's just some implementer reluctance.

DH: JSHint works fine; modules alone will allow JSHint's undefined variable checking to work without having to provide a large list of globals.

LH: we've started creeping a little bit toward doing more static analysis, but this would be a big step.

DH: what do you mean, static analysis?

LH: I mean more early errors. We added more in ES5, e.g. duplicate variables. ES6 has added more with let and const. This is the next big jump. It's not clear where that's trying to go... We could go much further, we could build the whole linter into that point.

DH: I have years of experience writing Racket code, which works exactly like this. Once you're in module code, you have static variable checking.

LH: but no global object in the scope chain.
DH: actually kind of, but yes, people don't use it nearly as much as on the web.

DH: The static variable checking is both unsound and incomplete; the former is because of snapshot-in-time globals, and the latter is because of the halting problem.

WH: I want a way to get static variable checking but also monkey patching. Perhaps declare which global bindings you might want to monkey-patch.

checks on import/export
https://esdiscuss.org/notes/2013-09-18
Plugin::Installer

Call the plugin's compiler, install the result via qualify_to_ref, then dispatch it using goto.

    package Myplugin;
    use base qw( Plugin::Installer Plugin::Language::Foobar );

    ...

    my $plugin = Myplugin->construct;

    # frobnicate is passed first to Plugin::Installer
    # via AUTOLOAD, then to P::L::Foobar's compile
    # method. If what comes back from the compiler is
    # a referent it is installed in the P::L::F namespace,
    # and if it is a code referent it is dispatched.

    $plugin->frobnicate;

The goal of this module is to provide a simple, flexible interface for developing plugin languages. Any language that can store its object data as a hash and implement a "compile" method that takes the method name as an argument can use this class.

The Plugin framework gives runtime compile, install, and dispatch of user-defined code. The code doesn't have to be Perl, just something that the object handling it can compile. The installer is language-agnostic: in fact it has no idea what the object does with the name passed to its compiler. All it does is (by default) install a returned reference and dispatch coderefs.

This is intended as a convenience class that standardizes the top half of any plugin language.

Note that any referent returned by the compiler is installed. Handing back a hashref can deposit a hash into the caller's namespace. This allows plugins to handle command-line switches (via GetoptFoo and a hashref) or manipulate queues (by handing back an [updated] arrayref).

By default coderefs are dispatched via goto, which allows the obvious use of compiling the plugin to an anonymous sub for later use. This makes the plugins something of a trampoline object, with the exception that the "trampolines" are the class' methods rather than the object itself.

AUTOLOAD

Extracts the package and method name from a call, dispatches $package->compile( $name ), and handles the result. Results can be installed (if they are referents of any type) and dispatched (if they are coderefs).

The point of this is that the plugin language is free to compile the plugin source to whatever suits it best; Plugin::Installer will install the result. In most cases the result will be a coderef, which will be installed as $AUTOLOAD, which allows plugins to resolve themselves from source to method at runtime.

DESTROY

Stub, saves passing back through the AUTOLOAD unnecessarily. Plugin classes that need housekeeping should implement a DESTROY of their own.

During compilation, Plugin::Installer::AUTOLOAD places an {install_meta} entry into the object. This is done via a local hash value, and will not be visible to the caller after the autoloader has processed the call. This defines switches used for post-compile handling:

    my $default_meta =
    {
        install     => 1,
        dispatch    => 1,
        storemeta   => 0,
        alt_package => '',
    };

install

Does a referent returned from $obj->compile get installed into the namespace or simply dispatched? This is used to avoid installing plugins whose contents will be re-defined during execution and called multiple times.

dispatch

Is a code referent dispatched (whether or not it is installed into a package)? Some methods may be easier to pre-install but not dispatch immediately (e.g., if they involve expensive startup but have per-execution side effects). Setting this to false will skip dispatch of coderefs even if they are installed.

alt_package

Package to install the referent into (default if false is the object's package). This is the namespace passed with the method name to qualify_to_ref. This can be used by the compiler to install data or coderefs into a caller's namespace (e.g., via caller(2)). If this is used with storemeta then ALL of the methods for the plugin class will be installed into the alternate package space unless they set their own alt_package when called.

storemeta

Store the current metadata as the default for this class? The metadata is stored by class name, allowing an initial "startup" call (say in the constructor or import) to configure appropriate defaults for the entire class.

Note that if install is true for a coderef then none of these matter much after the first call, since the installed method will bypass the AUTOLOAD. Corollary: if a "setup" method is used to set metadata values then it probably should not be installed, so that it can fondle the class' metadata and modify it if necessary on later calls. This also means that plugin languages should implement some sort of instructions to modify the metadata.

Example plugin class with simple, working compiler. Little language for bulk data filtering, including pre- and post-processing DBI calls; uses Plugin::Install to handle method installation. Installing symbols without resorting to no strict 'refs'. Extracting the basetype of a blessed referent. Trampoline object: construction and initialization are put off until a method is called on the compiled object.

Steven Lembark <lembark@wrkhors.com>
Florian Mayr <florian.mayr@gmail.com>

Copyright (C) 2005 by the authors; this code can be reused and re-released under the same terms as Perl 5.8.0 or any later version of Perl.
http://search.cpan.org/~lembark/Plugin-Installer-0.04/lib/Plugin/Installer.pm
Area

The class Area is a member of com.here.android.mpa.venues3d.

Class Summary

public class Area extends com.here.android.mpa.venues3d.SpatialObject, java.lang.Object

This class is a base class that represents a physical area within a Venue. [For complete information, see the section Class Details]

Constructor Summary

Method Summary

Class Details

This class is a base class that represents a physical area within a Venue. It is extended by the classes OuterArea and Space, both of which have a bounding box and center coordinates, and possibly a GeoPolygon. This class cannot be instantiated directly. Subclasses OuterArea and Space can be obtained by methods on Level.

Constructor Details

Area(AreaImpl impl)

Package-private constructor.

Parameters:
impl - The implementation object from which the area is constructed.

Method Details

public GeoBoundingBox getBoundingBox()

This method retrieves the bounding box for this Area.

Returns: An object representing the bounding box for the given area.

public GeoCoordinate getCenter()

This method retrieves the center of the bounding box of the Area.

Returns: An object containing the geographic coordinates of the center of the given area.

public String getName()

This method retrieves the human-readable name related to the holder of the spatial area. This can be, for example, the name of a shop.

Returns: The string containing the name.

public GeoPolygon getPolygon()

This method retrieves the GeoPolygon for this Area, if it exists.

Returns: A GeoPolygon, or null.
https://developer.here.com/documentation/android-premium/topics_api_nlp_hybrid_plus/com-here-android-mpa-venues3d-area.html
d:/_dsetup/arm/winarm/bin/../lib/gcc/arm-elf/4.1.0/../../../../arm-elf/lib/interwork\libc.a(makebuf.o): In function `__smakebuf':
makebuf.c:(.text+0x3c): undefined reference to `_fstat_r'
makebuf.c:(.text+0x110): undefined reference to `isatty'
d:/_dsetup/arm/winarm/bin/../lib/gcc/arm-elf/4.1.0/../../../../arm-elf/lib/interwork\libc.a(mallocr.o): In function `_malloc_r':
mallocr.c:(.text+0x40c): undefined reference to `_sbrk_r'
mallocr.c:(.text+0x4b4): undefined reference to `_sbrk_r'
d:/_dsetup/arm/winarm/bin/../lib/gcc/arm-elf/4.1.0/../../../../arm-elf/lib/interwork\libc.a(stdio.o): In function `__sclose':
stdio.c:(.text+0x10): undefined reference to `_close_r'
d:/_dsetup/arm/winarm/bin/../lib/gcc/arm-elf/4.1.0/../../../../arm-elf/lib/interwork\libc.a(stdio.o): In function `__sseek':
stdio.c:(.text+0x3c): undefined reference to `_lseek_r'
d:/_dsetup/arm/winarm/bin/../lib/gcc/arm-elf/4.1.0/../../../../arm-elf/lib/interwork\libc.a(stdio.o): In function `__swrite':
stdio.c:(.text+0x94): undefined reference to `_lseek_r'
stdio.c:(.text+0xb8): undefined reference to `_write_r'
d:/_dsetup/arm/winarm/bin/../lib/gcc/arm-elf/4.1.0/../../../../arm-elf/lib/interwork\libc.a(stdio.o): In function `__sread':
stdio.c:(.text+0xe4): undefined reference to `_read_r'
collect2: ld returned 1 exit status
gmake: *** [rtosdemo.elf] Error 1

It seems the low-level functions for libc.a are not defined; how can it work? Thanks!

hotislandn wrote:
> It seems the low-level functions for libc.a are not defined;
> how can it work?

The newlib (libc) needs interface functions to the hardware: system calls, or syscalls. Please use the search function of the forum; it's a frequently asked question.

Martin Thomas

Got it. Thanks! I suggest adding that syscalls file to this example. Since there is a 7X256-EK at hand, this example will be tested.

Following on from this thread, I copied over syscalls.c from the AT91SAM7S-BasicUSB example, updated the makefile, and am struggling with more errors..
The first problem I ran into was:

syscalls.c: In function 'my_putc':
syscalls.c:21: warning: implicit declaration of function 'AT91F_US_TxReady'
syscalls.c:22: warning: implicit declaration of function 'AT91F_US_PutChar'
syscalls.c: In function 'my_kbhit':
syscalls.c:27: warning: implicit declaration of function 'AT91F_US_RxReady'
syscalls.c: In function 'my_getc':
syscalls.c:33: warning: implicit declaration of function 'AT91F_US_GetChar'

... and lots of unused-parameter warnings, followed by a link error:

syscalls.o: In function `my_putc':
syscalls.c:(.text+0x60): undefined reference to `AT91F_US_TxReady'
syscalls.c:(.text+0x74): undefined reference to `AT91F_US_PutChar'
syscalls.o: In function `_read_r':
syscalls.c:(.text+0xe0): undefined reference to `AT91F_US_RxReady'
syscalls.c:(.text+0x118): undefined reference to `AT91F_US_GetChar'

This is because there are two headers, "lib_AT91SAM7X256.h" and "ioAT91SAM7X256.h". They are quite similar. Not sure which to use?

lib_AT91SAM7X256.h contains the declaration of AT91F_US_TxReady and others - but it generates more compile errors when I try to use it, starting off with:

../FreeRTOS/Source/portable/GCC/ARM7_AT91SAM7S/lib_AT91SAM7X256.h:60:28: error: macro "AT91F_AIC_ConfigureIt" passed 5 arguments, but takes just 4

..... and opens a new can of worms!

When I read this posting: I can't help thinking I missed something. Does anyone have step-by-step instructions on how to get this to work?

Here is a list of resources I used..

Extract from WinARM help (relevant to AT91):

Also a background of IAR EW means I don
> lib_AT91SAM7X256.h contains declaration of > AT91F_US_TxReady and others - but generates more compile errors when I > try and use it, starting off with.. The lib*.h-files are different. They define some simple inline-functions to make hardware-access a little easier. Make sure to define the __inline before including this header-file (I use #define __inline static inline). Include the register-definitions before including the lib*.h. From one of the Atmel-examples from the WinARM-collection: #include "AT91SAM7S64.h" #define __inline static inline #include "lib_AT91SAM7S64.h" Try to use the same order with the files for the 7X. I hope that I will receive a board with a SAM7X soon. If I have some time I will provide (ported) examples in the WinARM example-collection for this target. > ../FreeRTOS/Source/portable/GCC/ARM7_AT91SAM7S/lib_AT91SAM7X256.h:60:28: > error: macro "AT91F_AIC_ConfigureIt" passed 5 > arguments, but takes just 4 > > ..... and opens a new can of worms! Try to recompile with the "include-order" mentioned above. > When I read this posting: > > I can't help think I missed something. Does anyone have step by step > instruction on how to get this to work? > > Here is a list of resources I used.. > > Extract from WinARM help (Relevant to AT91): Oh, it's not a "help", it's just a small collection of information from which I think that they might be useful or answers to FAQ I've got by > Try to use the include files which come with the source first since the developer of the code used them for testing. If it does not work you can try to replace the files with those from the newest Atmel "kit-examples" but you might have to port some code. > Also a background off IAR EW means I don Thanks, that's got me going again.. I'm in the process of getting the 'common' examples provided with FreeRTOS working using WinARM on the AT91SAM7S64. I am doing this by 'stripping down' the lwIP example and using a WinAVR example style makefile and syscalls.c. 
Hope to pass it your way depending on how far I get. If anyone (else!) is struggling with this, I can send you what I have so far. My email is: arm@jnewcomb_NoSPHAM_.com

I'm using one of these; check out the price of the module, eval kits and JTAG debugger; you can't go far wrong!

Jon.

I've gotten all the errors that have been presented on this page, and using the advice presented I've been able to get rid of all of them (thanks!) except for one (doh!). My final error is:

Linking: .out/ARMUSB.elf
arm-elf-gcc -mcpu=arm7tdmi -I. -gdwarf-2 -DROM_RUN -DVECTORS_ROM -O1 -Wall -Wcast-align -Wimplicit -Wpointer-arith -Wswitch -Wredundant-decls -Wreturn-type -Wshadow -Wunused -MD -MP -MF .dep/ARMUSB.elf.d .out/SAM7A3Assembly.o .out/SAM7Ainit.o .out/main.o .out/hid_enumerate.o .out/dbgu.o .out/WinARMsyscalls.o --output .out/ARMUSB.elf -nostartfiles -Wl,-Map=.out/ARMUSB.map,--cref -lc -lm -lc -lgcc -lstdc++ -T../../../../Root/Builds/BuildComps/ArmResources/AT91SAM7A3-ROM.ld
c:/winarm/bin/../lib/gcc/arm-elf/4.1.0/../../../../arm-elf/lib\libc.a(exit.o): In function `exit':
exit.c:(.text+0x28): undefined reference to `_exit'
collect2: ld returned 1 exit status
make: *** [.out/ARMUSB.elf] Error 1

I noticed in the syscalls.c example provided with the 7SUSB example that the _exit function is inside an #if 0. I removed this, but it still made my linker sad, so I put it back.

I'm porting example IAR code for the new SAM7A3 eval board over to WinARM. My apologies if this has been answered somewhere else. I did some searching on the forum and wasn't able to find it, but it doesn't mean it's not already there. Thanks

I have seen exit in the startup code. Maybe it is expected to be in the startup code and therefore the #if 0... but your startup code doesn't have it. Look for something like that in startup.S (or whatever the name of your startup code is) and add the last lines, or write a function with _exit.
// ---------------------------- snip -----------------------------
// ##################################################
// Branch on C code main function (with interworking)
// ##################################################
        .global main            // int main(void)
        ldr     r0, =main       // __main
        bx      r0
        .size   _start, . - _start
        .endfunc

// The function main() should be a closed loop and should not return.

        .global _reset, reset, exit, abort
        .func   _reset
_reset:
reset:
exit:
abort:
        b       .               // loop until reset
        .size   _reset, . - _reset
        .endfunc
        .end
// ---------------------------- snip -----------------------------

Got it! Thanks Stefan! Now it's off to the Gods of Google to find out the compiler flags I need to get gcc to compile a .bin file so I can actually program my eval board :)

Holy mother of god, the map file is huge with all the newlib stuff in it! The one from the IAR compiler was 27k, this one is 97k. I know at least some of that will be because IAR is optimized for Atmel and GCC is not optimized because it has to be used by everyone, but damn... I'm still adjusting the makefile to spit out a .bin file instead of .hex, but IAR's bin file was 3k, and the GCC hex file was 127k. Difference between .bin and .hex? I dunno, but 124k is a lot either way.

Couldn't find a way to edit previous posts, so this makes post number 3 in a row, sorry! Figured out how to make the .bin file. It's 45.4K in size... still seems like a lot considering the IAR one was 3K...

Sounds like we need some code... Completely untested on real hardware, but hey, it compiles without any errors under WinARM. Wahay! You will have to check the paths in the makefile. Check out the gif for locations of the FreeRTOS directory in relation to the working directory.

One thing that niggles me is the fact that the UART routines in the FreeRTOS example dir (ARM7_AT91FR40008_GCC) have lots of 'critical' sections where interrupts are disabled. My syscalls.c seems to just fire stuff at the hardware..
Anyone got ideas as to where to go next, I'm all ears.

In response to your last question, here is the last part of the build. The map file is also in the provided zip file.

main.elf:
section            size     addr
startup              68  1048576
prog              33848  1048644
.data              2092  2097152
.bss              24428  2099244
.debug_abbrev      3134        0
.debug_info       18327        0
.debug_line        4542        0
.debug_frame       2968        0
.debug_loc         4133        0
.debug_pubnames    1515        0
.debug_aranges      680        0
.debug_str          533        0
.comment           1161        0
Total             97429

PS, if anyone wants to look at the map file and say 'that will NEVER work because ...', you would be doing me a favour.

Jon Newcomb.

Hello, for a few days now I have owned the AT91SAM7X-EK. I want to report briefly how to build the FreeRTOS demo "lwIP_Demo_Rowley_ARM7" for this evaluation board, because I had the same problems as the thread opener, hotislandn.

1. Unzip WinARM-20060606.exe to C:\
2. Unzip FreeRTOSV4.0.3.zip to C:\
3. Copy the directory C:\FreeRTOS\Demo\lwIP_Demo_Rowley_ARM7 to C:\WinARM\examples\FreeRTOS_4_0_2\Demo\lwIP_Demo_Rowley_ARM7
4. Copy the file C:\WinARM\examples\at91sam7s64_basicusb\AT91SAM7S-BasicUSB\src\syscalls.c to C:\WinARM\examples\FreeRTOS_4_0_2\Demo\lwIP_Demo_Rowley_ARM7\syscalls.c
5. Edit the file C:\WinARM\examples\FreeRTOS_4_0_2\Demo\lwIP_Demo_Rowley_ARM7\syscalls.c: insert after the line "#include "Board.h"" the following two lines:

#define __inline static inline
#include "lib_AT91SAM7X256.h"

6. The macro "AT91F_AIC_ConfigureIt" is defined twice: in AT91SAM7X256.h and in lib_AT91SAM7X256.h. One has to comment out the complete macro definition at the end of the file C:\WinARM\examples\FreeRTOS_4_0_2\Source\portable\GCC\ARM7_AT91SAM7S\AT91SAM7X256.h: "#define AT91F_AIC_ConfigureIt( irq_id, priority, src_type, newHandler )" (14 lines in all).

7. Edit the makefile C:\WinARM\examples\FreeRTOS_4_0_2\Demo\lwIP_Demo_Rowley_ARM7\makefile: the line "syscalls.c" (with '\' after port.c) has to be inserted:

#
# Source files that can be built to THUMB mode.
#
FREERTOS_THUMB_SRC= \
  ../../Source/tasks.c \
  ../../Source/queue.c \
  ../../Source/list.c \
  ../../Source/portable/GCC/ARM7_AT91SAM7S/port.c \
  syscalls.c

8. Call make; rtosdemo.bin will be created.

9. I load the bin file with SAM-BA into the board. Don't forget to execute the script "Boot from flash (GPNVM2)". On next power-on the webserver is running.

I tried the steps on a virgin installation. Hope it helps. Many thanks to the others, they helped me a lot.

Sven Koop

Sven Koop wrote:
> ...

Thanks for this detailed description. It might be a good idea to e-mail this to Richard Barry from FreeRTOS.org. Maybe he will include a "non-IDE" arm-elf version of his 7X example in the next version of FreeRTOS.

Martin Thomas

Sorry, there was an error in my posting above. The problem is: there is a macro "AT91F_AIC_ConfigureIt" in AT91SAM7X256.h and a function with the same name in lib_AT91SAM7X256.h. I solved the problem in the following way (replace point 6 in my posting above with the following):

6. In C:\WinARM\examples\FreeRTOS_4_0_2\Source\portable\GCC\ARM7_AT91SAM7S\lib_AT91SAM7X256.h the function prototype "AT91F_AIC_ConfigureIt" has to be renamed to "AT91F_AIC_ConfigureItH" (line 55), and the one and only call of this function in line 197 (in function "AT91F_AIC_Open") has to be corrected to "AT91F_AIC_ConfigureItH".

This workaround works fine for me. I always compile all files with "make clean" and "make", because the makefile does not watch all header files.

Sven Koop
https://embdev.net/topic/129113
Mapping

Often it is useful to map the elements of one stream to another. For example, a stream that contains a database of name, telephone, and e-mail address information might map only the name and e-mail address portions to another stream. As another example, you might want to apply some transformation to the elements in a stream. To do this, you could map the transformed elements to a new stream. Because mapping operations are quite common, the stream API provides built-in support for them.

The most general mapping method is map( ). It is shown here:

<R> Stream<R> map(Function<? super T, ? extends R> mapFunc)

Here, R specifies the type of elements of the new stream; T is the type of elements of the invoking stream; and mapFunc is an instance of Function, which does the mapping. The map function must be stateless and non-interfering. Since a new stream is returned, map( ) is an intermediate method.

Function is a functional interface declared in java.util.function. It is declared as shown here:

Function<T, R>

As it relates to map( ), T is the element type and R is the result of the mapping. Function has the abstract method shown here:

R apply(T val)

Here, val is a reference to the object being mapped. The mapped result is returned.

The following is a simple example of map( ). It provides a variation on the previous example program. As before, the program computes the product of the square roots of the values in an ArrayList. In this version, however, the square roots of the elements are first mapped to a new stream. Then, reduce( ) is employed to compute the product.

// Map one stream to another.
import java.util.*;
import java.util.stream.*;

class StreamDemo4 {
  public static void main(String[] args) {

    // A list of double values.
        ArrayList<Double> myList = new ArrayList<>();
        myList.add(7.0);
        myList.add(18.0);
        myList.add(10.0);
        myList.add(24.0);
        myList.add(17.0);
        myList.add(5.0);

        // Map the square root of the elements in myList to a new stream.
        Stream<Double> sqrtRootStrm = myList.stream().map((a) -> Math.sqrt(a));

        // Find the product of the square roots.
        double productOfSqrRoots = sqrtRootStrm.reduce(1.0, (a,b) -> a*b);

        System.out.println("Product of square roots is " + productOfSqrRoots);
    }
}

The output is the same as before. The difference between this version and the previous one is simply that the transformation (i.e., the computation of the square roots) occurs during mapping, rather than during the reduction. Because of this, it is possible to use the two-parameter form of reduce( ) to compute the product, because it is no longer necessary to provide a separate combiner function.

Here is an example that uses map( ) to create a new stream that contains only selected fields from the original stream. In this case, the original stream contains objects of type NamePhoneEmail, which contains names, phone numbers, and e-mail addresses. The program then maps only the names and phone numbers to a new stream of NamePhone objects. The e-mail addresses are discarded.

// Use map() to create a new stream that contains only
// selected aspects of the original stream.
import java.util.*;
import java.util.stream.*;

class NamePhoneEmail {
    String name;
    String phonenum;
    String email;

    NamePhoneEmail(String n, String p, String e) {
        name = n;
        phonenum = p;
        email = e;
    }
}

class NamePhone {
    String name;
    String phonenum;

    NamePhone(String n, String p) {
        name = n;
        phonenum = p;
    }
}

class StreamDemo5 {
    public static void main(String[] args) {
        // A list of names, phone numbers, and e-mail addresses.
        ArrayList<NamePhoneEmail> myList = new ArrayList<>();
        myList.add(new NamePhoneEmail("Larry", "555-5555", "Larry@HerbSchildt.com"));
        myList.add(new NamePhoneEmail("James", "555-4444", "James@HerbSchildt.com"));
        myList.add(new NamePhoneEmail("Mary", "555-3333", "Mary@HerbSchildt.com"));

        System.out.println("Original values in myList: ");
        myList.stream().forEach( (a) -> {
            System.out.println(a.name + " " + a.phonenum + " " + a.email);
        });
        System.out.println();

        // Map just the names and phone numbers to a new stream.
        Stream<NamePhone> nameAndPhone = myList.stream().map(
            (a) -> new NamePhone(a.name, a.phonenum)
        );

        System.out.println("List of names and phone numbers: ");
        nameAndPhone.forEach( (a) -> {
            System.out.println(a.name + " " + a.phonenum);
        });
    }
}

The output, shown here, verifies the mapping:

Original values in myList:
Larry 555-5555 Larry@HerbSchildt.com
James 555-4444 James@HerbSchildt.com
Mary 555-3333 Mary@HerbSchildt.com

List of names and phone numbers:
Larry 555-5555
James 555-4444
Mary 555-3333

Because you can pipeline more than one intermediate operation together, you can easily create very powerful actions. For example, the following statement uses filter( ) and then map( ) to produce a new stream that contains only the name and phone number of the elements with the name "James":

Stream<NamePhone> nameAndPhone = myList.stream().
    filter((a) -> a.name.equals("James")).
    map((a) -> new NamePhone(a.name, a.phonenum));

This type of filter operation is very common when creating database-style queries. As you gain experience with the stream API, you will find that such chains of operations can be used to create very sophisticated queries, merges, and selections on a data stream. In addition to the version just described, three other versions of map( ) are provided. They return a primitive stream, as shown here:

IntStream mapToInt(ToIntFunction<? super T> mapFunc)
LongStream mapToLong(ToLongFunction<? super T> mapFunc)
DoubleStream mapToDouble(ToDoubleFunction<? super T> mapFunc)

Each mapFunc must implement the abstract method defined by the specified interface, returning a value of the indicated type. For example, ToDoubleFunction specifies the applyAsDouble(T val) method, which must return the value of its parameter as a double. Here is an example that uses a primitive stream. It first creates an ArrayList of Double values. It then uses stream( ) followed by mapToInt( ) to create an IntStream that contains the ceiling of each value.

// Map a Stream to an IntStream.
import java.util.*;
import java.util.stream.*;

class StreamDemo6 {
    public static void main(String[] args) {
        // A list of double values.
        ArrayList<Double> myList = new ArrayList<>();
        myList.add(1.1);
        myList.add(3.6);
        myList.add(9.2);
        myList.add(4.7);
        myList.add(12.1);
        myList.add(5.0);

        System.out.print("Original values in myList: ");
        myList.stream().forEach( (a) -> {
            System.out.print(a + " ");
        });
        System.out.println();

        // Map the ceiling of the elements in myList to an IntStream.
        IntStream cStrm = myList.stream().mapToInt((a) -> (int) Math.ceil(a));

        System.out.print("The ceilings of the values in myList: ");
        cStrm.forEach( (a) -> {
            System.out.print(a + " ");
        });
    }
}

The output is shown here:

Original values in myList: 1.1 3.6 9.2 4.7 12.1 5.0
The ceilings of the values in myList: 2 4 10 5 13 5

The stream produced by mapToInt( ) contains the ceiling values of the original elements in myList. Before leaving the topic of mapping, it is necessary to point out that the stream API also provides methods that support flat maps. These are flatMap( ), flatMapToInt( ), flatMapToLong( ), and flatMapToDouble( ). The flat map methods are designed to handle situations in which each element in the original stream is mapped to more than one element in the resulting stream.
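The text stops short of a flat map example, so here is a minimal sketch of my own (the class name and data are not from the original tutorial): each string is mapped to a stream of its words, and flatMap( ) merges those per-element streams into a single stream.

```java
// Each string is mapped to a stream of its words; flatMap() then
// flattens those per-element streams into one stream of words.
import java.util.*;
import java.util.stream.*;

class FlatMapDemo {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList("alpha beta", "gamma delta");

        List<String> words = lines.stream()
            .flatMap((s) -> Arrays.stream(s.split(" ")))
            .collect(Collectors.toList());

        System.out.println(words); // [alpha, beta, gamma, delta]
    }
}
```

Had map( ) been used instead of flatMap( ) here, the result would have been a Stream of Stream objects rather than a single flattened Stream of words.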
https://www.brainkart.com/article/Mapping---Java-Stream-API_10694/
15 July 2010 12:12 [Source: ICIS news]

SINGAPORE (ICIS news)--Sinopec subsidiary Yangzi Petrochemical plans to shut its three PTA units in Nanjing, with a total capacity of 1.3m tonnes/year, in mid-August for a month to carry out maintenance, a company source said on Thursday.

Yangzi operates two 350,000 tonne/year PTA units at its Liuhe site on the outskirts of Nanjing.

The shutdown was in line with maintenance at its upstream 650,000 tonne/year naphtha cracker, the source said. Its upstream 800,000 tonne/year paraxylene (PX) plant would also be shut, he added.

“We had proposed to the headquarters to delay the shutdowns at PTA units to avoid the same period shutdown with another PTA major, but it was rejected as it is an integrated maintenance,” the source said.

ICIS had earlier reported Xiamen Xianglu Petrochemical planned a half-month shutdown at its 1.5m tonne/year PTA units in mid-August.

“We decided to cut contract volumes to our clients in August,” said the source, adding that Sinopec was still negotiating with clients about the reduction of volumes.

“Domestic PTA supply will definitely be tightened at that time,” said a Zhejiang-based end-user, adding that the company was buying some September PTA futures on the Zhengzhou Commodity Exchange in a bid to secure sufficient feedstock.

“But the demand side is also likely to fall in August and September because of intensive shutdowns at the downstream polyester sector,” a trader noted.

Five polyester plants in Zhejiang province, totalling 1.44m tonnes/year of capacity, had announced 15-day shutdown plans in August due to restrictions on industrial electricity use in Zhejiang province, the trader said. Other polyester plants in the area were expected to reduce operating rates, which would in turn pull down the demand for PTA,
http://www.icis.com/Articles/2010/07/15/9376715/yangzi-petrochemical-plans-month-long-nanjing-pta-turnarounds.html
Opened 5 years ago Closed 5 years ago #587 closed enhancement (fixed) Support __metaclass__ in Python classes Description (last modified by scoder) Add support for metaclasses in Python classes. Problem: now class object is created with empty attributes dict (only __module__ is set) and then attributes are assigned via PyObject_SetItem() To support metaclasses attributes dict should be initialized first then __metaclass__ attribute should be handled in __Pyx_CreateClass() Attachments (7) Change History (16) comment:1 Changed 5 years ago by scoder Changed 5 years ago by vitja first try on implementing metaclasses comment:2 Changed 5 years ago by vitja - Cc vitja.makarov@… added metaclass-try000.diff classmethods are broken, all other tests are passed. This patch implements new behavior of new method and staticmethod now they are PyCFunction objects. Changed 5 years ago by vitja forget test for metaclass Changed 5 years ago by vitja fix classmethod issue comment:3 Changed 5 years ago by vitja With latest patch now works: - __metaclass__ (global __metaclass__ is not now handled, Python3 support not tested) - staticmethod as decorator - classmethod as decorator - class methods can now be decorated comment:4 Changed 5 years ago by vitja Metaclasses in Py3 have a little bit different syntax: class Meta(type): pass class Foo(object, metaclass=Meta): pass Changed 5 years ago by vitja Fix reference leak comment:5 Changed 5 years ago by scoder - Milestone changed from wishlist to 0.13.1 - Owner changed from somebody to vitja Latest patch (metaclass-py3.diff) pushed here: With some cleanup here: The next steps are to create two new classes that represent the looked-up metaclass and the class namespace (potentially created by the 'prepare' method of the metaclass). 
Changed 5 years ago by vitja

Python3-style metaclass implementation

comment:8 Changed 5 years ago by vitja

py3-prepare.2.patch
- Actually implement py3-style metaclasses with real support for prepare and namespaces

Between py4-prepare.patch
- fix KeywordArgsNode
- add some tests for kwargs + keywords, and kwargs only

Changed 5 years ago by vitja

KeywordArgsNode optimization and one more testcase

comment:9 Changed 5 years ago by scoder

- Resolution set to fixed
- Status changed from new to closed

Pushed, followed by a couple of cleanups. Thanks a lot!
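For readers unfamiliar with the feature being implemented, here is a plain-Python sketch (my own, not from the ticket) of the two metaclass spellings the ticket deals with:

```python
# Python 2 named a metaclass with a class attribute:
#
#     class Foo(object):
#         __metaclass__ = Meta
#
# Python 3 (the syntax this ticket adds support for) moved it into
# the class header as a keyword argument:

class Meta(type):
    """Metaclass that records the name of every class it creates."""
    created = []

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        Meta.created.append(name)
        return cls

class Foo(metaclass=Meta):
    pass

print(type(Foo).__name__, Meta.created)  # Meta ['Foo']
```

Cython's job here is to generate C code that performs the same metaclass lookup and class-object construction that CPython does for this syntax.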
Relevant mailing list threads:
http://trac.cython.org/ticket/587
Type: Posts; User: xetea Check out the inspect function in Firebug, I can't live without it. It's addictive. :) I'm trying to help you here. Sorry if my last post was a bit harsh, but NogDogs code should work and we can't really do anything for you unless you post some code. So what happens? We need to... I made some code to illustrate the situation here: <?php /* Edit this part. */ $imHelpful = false; /* Below is a fact. */ function deny($statement) { return ($statement !=... Do you have Firebug? It's really awesome when you want to figure out how a site's layout works. Anyway, this html should work: <div style="width: 100%; min-width: 980px; overflow: hidden;">... Use the min-width css property. I would suggest the same thing. Check out mod_rewrite. It seems like javascript treats the method you added as an eventlistener as a static method. That means it can't use the attributes of an object, since it can be called even though there are no... The second link doesn't work at all, I guessed it should be and that site worked fine for me. Your html is pretty messed up. It seems to work fine in Firefox 3.6.11, can you post a screenshot of it? Or did you post a link to the test site? Was that a question? I'm just curious, why did you store member information in html files? It should be possible to write some script that extracts the information from each file and saves it in an excel document as... It's possible with some text-editing and some Php. Are all your customers in the same form tag? In that case you're gonna have to replace all 'name="member_id"' with 'name="member_id[]"' (and also... Yep, that would be the downside. It might be smarter to do something like this: RewriteEngine on RewriteRule ^user/([^/\.]+)/?$ user.php?username=$1 [NC] So to reach a user's page you would... Ah, sorry about that. I didn't look carefully enough. 
I don't know if this is the most effective way to do it, but this might work:

var counts = [];
for (i = 0; i < myArray.length; i++) {...

I don't think there's a living soul that knows the whole PHP manual by heart. Being able to look up information is very important. You don't have to know how to do everything (that would be...

I have to admit I'm no expert in mod_rewrite. But if you post your HTML document I might find some answers.

Float: right should be for content_sub.

Use array_count_values().

Change it to:

RewriteEngine on
RewriteRule ^([^/\.]+)/?$ user.php?username=$1

The important part is the question mark after the slash. That means that there can, but doesn't have to, be a...

Yes, you have to use PHP as well. My point was that if you want to do anything more advanced than an ordinary file upload with the <input type="file" ... /> element you have to use third party...

This can't be done with only HTML, CSS, JavaScript and PHP. You have to use Flash or something like that. Try SWFUpload:

Why in the world would you want to switch from UTF-8 to any other character encoding? UTF-8 can encode ANY character (well, any character in UNICODE), ISO-8859-1 can only encode western european...

I bet there are a lot of people here that are able to do what you want, but not many can help you without having a better explanation of what you're trying to do and what you've done so far.

Yeah, that's the problem. Including something is pretty much like copy-pasting the code of the included file to the including file.
http://www.webdeveloper.com/forum/search.php?s=3a627f949cf4830e0d03af7c4d99b660&searchid=3309485
I'm trying to figure out how to do this program. A program is required to read and print a series of exam scores for students enrolled in a math class. The class average is to be computed and printed at the end of the report. Scores can range from 0 - 100. The last record contains a blank name and a score of 999 and is not to be included in the calculations.

- a do-while structure to control the reading of exam scores until it reaches a score of 999
- an accumulator for total score
- an accumulator for total students

This is what I have so far:

#include <iostream>
#include <iomanip>
#include <string>
using namespace std;

int main()
{
    string name;
    int score;
    int average;
    int total_score;
    int total_students;
    int temp_one;
    int temp_two;

    cout << "Please enter students name and test score 0 - 100 with a space in between." << endl;

    temp_one = total_score;
    temp_two = total_students;
    total_score = 0;
    total_students = 0;

    do {
        cin >> name >> score;
    } while (score < 999);

    cout << name << score << endl;
    cout << "The class average is: " << average << endl;
}
https://www.daniweb.com/programming/software-development/threads/400746/c-program-dowhile-help
XML::Stream - Creates an XML Stream connection and parses return data

XML::Stream is an attempt at solidifying the use of XML via streaming. (We all know threading in Perl is not quite up to par yet; this issue will be revisited in the future.)

using debugfh. debuglevel determines the amount of debug to generate. 0 is the least, 1 is a little more, N is the limit you want. debugtime determines whether a timestamp should be prepended to the entry. style defines the way the data structure is returned. The two available styles are: tree -) to. from is needed if you want the stream from attribute to be something other than the hostname you are connecting from. myhostname should not be needed, but if the module cannot determine your hostname properly (check the debug log), set this to the correct value, or if you want the other side of the stream to think that you are someone else.

The type determines the kind of connection that is made:

"tcpip"    - TCP/IP (default)
"stdinout" - STDIN/STDOUT
"http"     - HTTP

HTTP recognizes proxies if the ENV variables http_proxy or https_proxy are set. ssl specifies if an SSL socket should be used for encrypted communications. This function returns the same hash from GetRoot() below. Make sure you get the SID (Session ID) since you have to use it to call most other functions in here.

If srv is specified AND Net::DNS is installed and can be loaded, then an SRV query is sent to srv.hostname and the results processed to replace the hostname and port. If the lookup fails, or Net::DNS cannot be loaded, then hostname and port are left alone as the defaults.

OpenFile(string) - opens a filehandle to the argument specified, and pretends that it is a stream. It will ignore the outer tag, and not check if it was a <stream:stream/>. This is useful for writing a program that has to parse any XML file that is basically made up of small packets (like RDF).

Disconnect(sid) - sends the proper closing XML tag and closes the specified socket down.
Process(integer) - waits for data to be available on the socket. If a timeout is specified then the Process function waits that period of time before returning nothing. If a timeout period is not specified then the function blocks until data is received. The function returns a hash with session ids as the key, and status values or data as the hash values.

my $status = $stream->Connect(hostname => "jabber.org",
                              port => 5222,
                              namespace => "jabber:client");

if (!defined($status)) {
    print "ERROR: Could not connect to server\n";
    print "       (",$stream->GetErrorCode(),")\n";
    exit(0);
}

while($node = $stream->Process()) {
    # do something with $node
}

$stream->Disconnect();

###########################
# example using a handler
use XML::Stream qw( Tree );

$stream = new XML::Stream;
$stream->SetCallBacks(node=>\&noder);
$stream->Connect(hostname => "jabber.org",
                 port => 5222,
                 namespace => "jabber:client",
                 timeout => undef) || die $!;

# Blocks here forever, noder is called for incoming
# packets when they arrive.
while(defined($stream->Process())) { }

print "ERROR: Stream died (",$stream->GetErrorCode(),")\n";

sub noder {
    my $sid = shift;
    my $node = shift;
    # do something with $node
}

This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~dapatrick/XML-Stream/lib/XML/Stream.pm
Cadence PSF file utilities Project description What? psf_utils is a library allows you to read data from a Spectre PSF ASCII file. Spectre is a commercial circuit simulator produced by Cadence Design Systems. PSF files contain signals generated by Spectre. This package also contains two programs that are useful in their own right, but also act as demonstrators as to how to use the library. They are list-psf and plot-psf. The first lists the available signals in a file, and the other displays them. Accessing the Results You can use the PSF class to read ASCII Parameter Storage Format files. When instantiating the class you pass in the path to the file and then the resulting PSF object contains a dictionary that containing the signals. For example, the following lists is a: from psf_utils import PSF from inform import Error, display kinds = { 'float double': 'real', 'float complex': 'complex', } try: psf = PSF('adc.raw/tran.tran') for signal in psf.all_signals(): kind = signal.type.kind kind = kinds.get(kind, kind) display(f'{signal.name:<15} {signal.units:<12} {kind}') except Error as e: e.terminate() This example plots the output signal: from psf_utils import PSF from inform import Error, display import matplotlib.pyplot as plt try: psf = PSF('adc.raw/tran.tran') sweep = psf.get_sweep() out = psf.get_signal('out') figure = plt.figure() axes = figure.add_subplot(1,1,1) axes.plot(sweep.abscissa, out.ordinate, linewidth=2, label=out.name) axes.set_title('ADC Output') axes.set_xlabel(f'{sweep.name} ({PSF.units_to_unicode(sweep.units)})') axes.set_ylabel(f'{out.name} ({PSF.units_to_unicode(out.units)})') plt.show() except Error as e: e.terminate() abscissa and ordinate are NumPy arrays. Utility Programs Two utility programs are installed along with the psf_utils library: list-psf and plot-psf. The first lists the signals available from a PSF file, and the second displays them. They both employ caching to speed up access to the data. 
They also cache the name of the PSF file so that it need not be given every time. plot-psf also caches its arguments, so if you run it again with no arguments it will simply repeat what it did last time. For example, here is a typical session:

# display signals in PSF file
> list-psf -f resistor.raw/pnoise.pnoise
Using pnoise.raw/pnoise.pnoise.
R1:flicker  R1:total  R2:fn  out
R1:thermal  R2:rn  R2:total

# display them again, this time in long form
> list-psf -l
Using pnoise.raw/pnoise.pnoise.
R1:flicker  A²/Hz  real  (12042 points)
R1:thermal  A²/Hz  real  (12042 points)
R1:total    A²/Hz  real  (12042 points)
R2:fn       A²/Hz  real  (12042 points)
R2:rn       A²/Hz  real  (12042 points)
R2:total    A²/Hz  real  (12042 points)
out         A/√Hz  real  (12042 points)

# display only those that match R1:*
> list-psf -l R1:*
Using pnoise.raw/pnoise.pnoise.
R1:flicker  A²/Hz  real  (12042 points)
R1:thermal  A²/Hz  real  (12042 points)
R1:total    A²/Hz  real  (12042 points)

# display a graph containing signals that start with R1:
> plot-psf R1:*

# display the thermal noise of R1, and then the total noise minus the flicker noise
> plot-psf R1:thermal R1:total-R1:flicker

# display a graph containing only out
> plot-psf out

# display out again
> plot-psf

Converting to PSF ASCII

psf_utils only supports PSF ASCII files. As an alternative, libpsf is a Python package that can read both ASCII and binary PSF files. Or, you can use the Cadence psf program to convert various types of simulation results files into PSF ASCII format. To use it, simply specify the input and output files:

> psf -i adc.raw/tran.tran -o adc.raw/tran.psfascii
> list-psf adc.raw/tran.psfascii

In this example there is nothing special about the 'psfascii' suffix, it is simply mnemonic. Rather, the output is in ASCII format because the -b (binary) option is not specified.

Releases

- Latest development release: Version 0.4.0, released 2019-09-26
- 0.4 (2019-09-26): Allow glob patterns to be passed to both list-psf and plot-psf.
- 0.3 (2019-09-25): Fix import errors in plot-psf command.
- 0.2 (2019-09-25): Fix dependencies.
- 0.1 (2019-09-25): Initial version
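Since abscissa and ordinate are NumPy arrays (as noted above), standard NumPy reductions apply to them directly. A hedged sketch, using a synthetic array standing in for a real signal's ordinate so it runs without a PSF file:

```python
import numpy as np

# Synthetic stand-in for psf.get_signal(...).ordinate; with a real
# PSF file the array would come from the library as shown earlier.
ordinate = np.array([3.0, -4.0, 3.0, -4.0])

# Root-mean-square value of the signal samples.
rms = np.sqrt(np.mean(ordinate ** 2))
print(rms)  # 3.5355339059327378
```

The same pattern works for any per-signal statistic (peak, mean, integral via np.trapz against abscissa, and so on).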
https://pypi.org/project/psf-utils/0.4.0/
First, click File > Publish Settings > Flash, tick the "Permit debugging" checkbox, then click OK and retest. By ticking "Permit debugging" you will see more detailed/helpful error messages. Error messages' exact contents depend on the error type (compile-time or run-time), the exact error, and the location (timeline or class file) of your problematic code. But, in all scenarios (except for errors that occur during an asynchronous event like file loading), the error message will indicate the exact line number of the problematic code, if you tick "Permit debugging".

You may find some error messages need no explanation (like compiler error "1021: Duplicate function definition"). But if you need further explanation, check the Flash help file Appendixes (Help > Flash Help > ActionScript 3.0 and Components > ActionScript 3.0 Reference for the Adobe Flash Platform > Appendixes), where you'll find Compiler Errors and Run-Time Errors, which comprise a complete listing of all error messages, often with additional and sometimes helpful information. As of this writing that link is endixes.html but that may change, especially as updated help files are published by Adobe. This is the first place you should check when you encounter an error message that you do not completely understand. The additional information may be enough to save hours of hair-pulling frustration. But if that's not enough help, don't be afraid to use a search engine. Searching for "Flash as3 error xxxx" should bring up all sorts of information. Some of it may be helpful.

Below are the error messages (in numeric order) most commonly cited in the ActionScript 3 forum, with help for resolving them.

1009: Cannot access a property or method of a null object reference.

This is the most common error posted on the Adobe Flash-related forums. Fortunately, most of these errors are quick and easy to fix (assuming you read the Permit Debugging section).
If there is only one object reference in the problematic line of code, you can conclude that object does not exist when that line of code executes. For example, if there is no object mc on-stage,

var mc:MovieClip;
mc.x = 0;

is the simplest code that would trigger a 1009 error. The variable mc has been declared but is null because no movieclip exists and it has not been created. Trying to access the x property (or any other property of mc) is going to result in a 1009 error because you cannot access a property of a null (or non-existent) object. To remedy this problem, you must create the referenced object either on-stage or with actionscript. To create the object with code, use the "new" constructor:

var mc:MovieClip = new MovieClip();
mc.x = 0;

Precisely the same error will occur in many different guises. Here's the same error trying to reference a null object method:

var s:Sound;
s.play();

If there's more than one object in the problematic line of code, for example:

mc1.x = mc2.width+mc2.x;

use the trace() function to find which object(s) is(are) null:

trace(mc1);
trace(mc2);
mc1.x = mc2.width+mc2.x;

Of course, typos are every coder's burden and are a common cause of 1009 errors. Instead of comparing two names and trying to confirm they are the same (eg, an instance name in the properties panel and an actionscript name), it's more reliable to copy the name from one location and paste it to the other. Then you can be certain the names are the same.

There are, however, more obtuse ways to trigger a 1009 error when you create and delete objects on-stage, and especially if you use timeline tweening. I have seen some 1009 errors that are impossible to diagnose without checking the history panel. And if that panel is cleared, or the problem was created far enough in the past that the steps causing the error are no longer present in the history panel, there's no way (known to me) to deduce what caused the error. However, those errors can still be fixed.
If you are certain you have an on-stage object whose instance name (in the properties panel) matches your actionscript reference and it is in a frame that plays no later than your code referencing that object, you may need to take a few steps backwards to solve the problem. But first make sure that frame plays before your object reference. You can always confirm that by using the trace() function: Place a trace("frame") in the keyframe that contains your object. (If you have a document class it will need to extend the MovieClip class to do this.) And place a trace("code") just above the problematic line of code and test. If you see "code" and then your error message, your code is executing before your frame plays. Fix your code so it does not execute until the object exists. I cannot tell you how to do that because the solution will vary depending on your project. If you see "frame" then "code", you have confirmed your object exists before your code tries to reference it so something else is causing the problem. This always boils down to Flash being confused because you have or had more than one object with that reference name. To solve this problem, move the on-stage object to its own layer, clear all keyframes in that layer except one, copy its reference from your actionscript and paste into the properties panel. Use movie explorer to confirm there is no actionscript creating a duplicate reference and there is no other on-stage object with the same reference. Then test. There will be no 1009 referencing that object if those steps are followed and the actionscript referencing the object executes no sooner than the keyframe containing the object. Now, add other needed keyframes and change the object properties, as needed. 
Another way this error can occur is when trying to reference a loader's content property before loading is complete:

var loader:Loader = new Loader();
loader.load(new URLRequest("test.swf")); // make sure you have a test.swf in the correct directory
trace(MovieClip(loader.content).totalFrames); // will trigger a 1009 error

Because a loader's content property is null until loading is complete, use an Event.COMPLETE listener and listener function before referencing the loader's content.

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE,completeF);
loader.load(new URLRequest("test.swf")); // make sure you have a test.swf in the correct directory

function completeF(e:Event):void{
    trace(MovieClip(loader.content).totalFrames); // no error
}

Also, a DisplayObject's root and stage properties are null until the object is added to the display list. This error frequently occurs in a DisplayObject's class file where the object is created using the "new" constructor, the object's class constructor executes, and a stage reference is made before the object is added to the display list. For example:

var custom_mc:Custom_MC = new Custom_MC();
addChild(custom_mc);

will trigger this error if Custom_MC looks something like:

package {
    import flash.display.MovieClip;
    public class Custom_MC extends MovieClip {
        public function Custom_MC() {
            // stage is null at this point triggering a 1009 error
            this.x=stage.stageWidth/2;
        }
    }
}

To test if the problem is whether an object has been added to the display list, you can use:

trace(this.stage);

added above the problematic line of code. The trace output will be null.
For example:

package {
    import flash.display.MovieClip;
    public class Custom_MC extends MovieClip {
        public function Custom_MC() {
            trace(this.stage);
            this.x=stage.stageWidth/2;
        }
    }
}

To remedy this problem, use the Event.ADDED_TO_STAGE listener before trying to reference the stage:

package {
    import flash.display.MovieClip;
    import flash.events.Event; // needed for the Event class
    public class Custom_MC extends MovieClip {
        public function Custom_MC() {
            this.addEventListener(Event.ADDED_TO_STAGE,init);
        }
        private function init(e:Event):void{
            this.x=stage.stageWidth/2; // stage is defined when this executes
        }
    }
}

1013: The private attribute may be used only on class property definitions.

This is commonly caused by mis-matched curly brackets {} in a class file in a line of code above the line that is mentioned in the error message.

1046: Type was not found or was not a compile-time constant: xxxx

You forgot to save the xxxx class file, you have a typo, or you need to import the needed class, xxxx. To remedy the latter, open the Flash help files > Classes > xxxx. At the top you will see a package listed (eg, fl.controls) that you will use in the needed import statement. Add the following to your scope:

import fl.controls.xxxx;

(And, in this example, the needed class type is a component, so you'll need that component in your fla's library, too.)

1061: Call to a possibly undefined method xxxx through a reference with static type flash.display:DisplayObject.

You are trying to reference a method defined on a MovieClip timeline using dot notation, but the Flash compiler does not recognize that MovieClip as a MovieClip. To remedy, explicitly cast the MovieClip. For example, you have a function/method xxxx() defined on your root timeline which you are trying to reference using:

root.xxxx();

To remedy, cast root as a MovieClip:

MovieClip(root).xxxx();

1067: Implicit coercion of a value of type xxxx to an unrelated type yyyy.

You are trying to assign an object from class xxxx a value from class yyyy.
The most common error of this type seen on the Adobe forums occurs when trying to assign text to a TextField and the code forgets to use the TextField's text property:

var tf:TextField = new TextField();
tf = "This is test text."; // should be: tf.text = "This is test text.";

1078: Label must be a simple identifier.

I get this error frequently because I'm speed typing and, instead of a semicolon, I have the shift key pressed and add a colon at the end of a line of code. Check the line of code mentioned in the error message and look for a misplaced colon (typically the last character in the line).

1118: Implicit coercion of a value with static type xxxx to a possibly unrelated type yyyy.

Often loaded content needs to be cast. For example, if you load a swf and try to check a MovieClip property (e.g., totalFrames), you will need to cast it as a MovieClip:

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE,completeF);
loader.load(new URLRequest("someswf.swf"));

function completeF(e:Event):void{
    trace(e.target.loader.content.totalFrames); // error expected
    trace(MovieClip(e.target.loader.content).totalFrames); // no error expected
}

Or, if you are loading xml:

var xml:XML;
var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE,completeF);
loader.load(new URLRequest("somexml.xml"));

function completeF(e:Event):void{
    xml = e.target.data; // error expected
    xml = XML(e.target.data); // no error expected
}

When you use getChildByName, the compiler only knows the object is a DisplayObject.
If you want to use a property that is not inherited from DisplayObject (e.g., mouseEnabled), you need to cast the object:

for (var i:int=0; i<4; i++) {
    var opBtn:Btn_operator = new Btn_operator(); // where the Btn_operator class
                                                 // extends an InteractiveObject class
    opBtn.name = i.toString();
    addChild(opBtn);
}
// then you can use:
for (i=0; i<4; i++) {
    opBtn = Btn_operator(getChildByName(i.toString()));
    opBtn.mouseEnabled = false;
}

1119: Access of possibly undefined property xxx through a reference with static type yyyy.

You're trying to reference a property that doesn't exist for that class type. Either you have a typo, or, if the Flash type is something like DisplayObject and you are trying to reference a property of a class that does have the property, explicitly cast your object. For example, if you have a DisplayObject (e.g., dobj) added to the root timeline:

root.dobj; // may trigger a 1119 error
MovieClip(root).dobj; // will not trigger a 1119 error

In fact, almost always when you use root in ActionScript 3 you will need to cast it as a MovieClip, unless you have a document class that extends the Sprite class. In that situation, you will need to cast root as a Sprite:

MovieClip(root); // or Sprite(root);

Also, if this.parent is a MovieClip:

this.parent.ivar; // may trigger a 1119 error
MovieClip(this.parent).ivar; // will not trigger a 1119 error

If you're using one of your own class objects:

var opBtnParent:Sprite = new Sprite();
addChild(opBtnParent);
for (var i:int=0; i<4; i++) {
    var opBtn:Btn_operator = new Btn_operator();
    opBtnParent.addChild(opBtn);
}
// then you can use:
for (i=0; i<opBtnParent.numChildren; i++) {
    Btn_operator(opBtnParent.getChildAt(i)).mouseEnabled = false;
}

Or, you may be trying to reference an object created outside the class. In that situation, make sure your class is dynamic, for example:

dynamic public class YourClass extends MovieClip {

You can now add properties to YourClass instances from outside YourClass.
1120: Access of undefined property xxxx.

Either you have a typo, you have forgotten that ActionScript is case sensitive, or you have defined the variable somewhere but your reference is out of the scope of the defined variable. Typically, the latter occurs when you make a variable local to a function and then try to reference the variable outside that function. For example:

function f():void{
    var thisVar:String = "hi";
}
trace(thisVar);

will trigger a 1120 error because thisVar is local to f() (as a consequence of prefixing its definition with the keyword var inside a function body). To remedy, use the following (but you will need to call f() before thisVar has the value "hi"):

var thisVar:String;
function f():void{
    thisVar = "hi";
}
trace(thisVar);

Check chapter 2, function scope, for more information.

1120: Access of undefined property Mouse.

Import the needed class, flash.ui.Mouse:

import flash.ui.Mouse;

1151: A conflict exists with definition xxxx in namespace internal.

You have more than one of the following statements in the same scope:

var xxxx:SomeClass; // and/or
var xxxx:SomeClass = new SomeClass();

To remedy, change all but the first to:

xxxx = new SomeClass();

1178: Attempted access of inaccessible property xxxx through a reference with static type YYYY.

Variable xxxx is typed private in class YYYY when it should be protected, internal or public.

1180: Call to a possibly undefined method addFrameScript.

You have code (or even an empty space) on a timeline and your document class extends the Sprite class. To remedy, extend MovieClip or remove code from all timelines. To find timeline code, use the Movie Explorer and toggle only the "show ActionScript" button.

1180: Call to a possibly undefined method xxxx.

If you think you have defined that method, check for a typo; i.e., check your method's spelling and use copy and paste to eliminate typo issues. Or, you have a path problem.
To remedy, check your path and check the relevant (movieclip or class) scope section in chapter 2 to resolve path problems.

Or, if xxxx is a class name, you failed to define/save that class in the correct directory. To remedy, save the needed class file and make sure the method xxxx is defined in your class.

Or, you are trying to use a method defined in a class that has not been imported. To remedy, import the needed class.

Or, you are trying to use a method xxxx that is not inherited by your class. For example, trying to use addEventListener in a custom class that does not extend a class with the addEventListener method will trigger a 1180 error. To remedy, extend a class that has this method.

Or, you nested a named function. To remedy, un-nest it. In ActionScript, you can use named and anonymous functions. A named function has the form:

function f1():void{
}

An anonymous function has the form:

var f1:Function = function():void{
}

In the code below, both f1() and f2() are named functions, and f2() is nested in f1():

function f1():void{
    trace("f1");
    function f2():void{
        trace("f2");
    }
}

If you use this code, at no time will f2() be defined outside f1(). If you try to call f2() from outside f1(), no matter how many times you call f1(), you will generate an 1180 error: call to a possibly undefined method. To remedy, un-nest:

function f1():void{
}
function f2():void{
}

Alternatively, nest an anonymous function in a named function:

var f2:Function;
function f1():void {
    f2 = function():void{
    };
}

Or, nest two anonymous functions:

var f2:Function;
var f1:Function = function():void {
    f2 = function():void{
    };
}

Note f2 is declared outside f1. That is required if you want to call f2 from outside the f1 function body.
The following will work:

function f1():void {
    var f2:Function = function():void{
        trace("f2");
    };
    f2(); // this will work
}

But, if you try:

function f1():void {
    var f2:Function = function():void{
        trace("f2");
    };
}
f1();
f2();

you will trigger an "1180: Call to a possibly undefined method f2" error. You can expect the same error if you nest a named function in an anonymous function.

You can nest named and anonymous functions if you only call the nested function from within the nesting function body. For example, the following two snippets will not trigger an error.

Snippet 1:

function f1():void{
    trace(1);
    f2();
    function f2():void{
        trace(2);
    }
    f2();
}
f1();

Snippet 2:

var f1:Function=function():void{
    trace(1);
    f2();
    function f2():void{
        trace(2);
    }
    f2();
}
f1();

Nevertheless, while you can nest a named function in this situation without triggering an error, you should not. There is no benefit to nesting a named function and, in addition to triggering 1180 errors, it makes your code less readable.

1203: No default constructor found in base class xxx.

Failure to call the superclass (or no constructor). To remedy, either create a constructor or use super() in your already existing constructor.

SecurityError: Error #2000: No active security context.

The file you are trying to load does not exist. You probably have a typo or a path problem. If your file loads when tested locally but triggers this error when tested online, look for case mismatches. For example, trying to load file.SWF will load file.swf locally but not online.

TypeError: Error #2007: Parameter text must be non-null.

You are trying to assign undefined text to a TextField's text property. For example, the following will trigger this error:

var tf:TextField = new TextField();
var a:Array = [1,2];
var s:String;
tf.text = s;    // and
tf.text = a[2]; // will both trigger this error

Error #2044: Unhandled IOErrorEvent. text=Error #2035: URL Not Found.

2136: The SWF file contains invalid data.
You are trying to use the "new" constructor with a document class. To remedy, either remove that class from your document class or remove the "new" constructor applied to that class.

5000: The class 'xxx' must subclass 'yyy' since it is linked to a library symbol of that type.

Can occur because a previous error stopped the compiler, triggering errors that do not exist. But if it's the first (or only) error listed, your 'xxx' class must usually extend MovieClip or SimpleButton. Right-click your library symbol, click Properties, and check the base class listed there.

5008: The name of definition 'Xxxx' does not reflect the location of this file. Please change the definition's name inside this file, or rename the file.

The file name and the class name must match. For example:

package {
    public class Xxxx {
        public function Xxxx(){
        }
    }
}

must be saved as Xxxx.as (and case matters).
http://forums.adobe.com/docs/DOC-2542
Understanding the Resolvers File Structure

As you may have seen, all core plugins use a similar file and folder structure for resolvers, and we recommend that you do the same for custom plugins. The basic premise is to have the folders and files reflect the hierarchy of the resolvers object, making it easier to find the resolver code you're looking for.

Before explaining the file structure, there are two concepts you need to fully understand.

What is a resolver map? Each Reaction plugin can register its own resolver map, and all registered resolvers objects are then deep merged together into a single object, which is what is provided to the GraphQL library as the full resolver map. If two plugins have conflicting terminal nodes in the resolver tree, the last one registered wins. Currently plugins are loaded in alphabetical order by plugin folder name, grouped by "core" first, then "included", and then "custom".

Now, let's take the core payments plugin as an example. Here is what a GraphQL resolver map for the payments plugin would look like in a single file:

import { encodePaymentOpaqueId } from "@reactioncommerce/reaction-graphql-xforms/payment";

export default {
  Payment: {
    _id: (node) => encodePaymentOpaqueId(node._id)
  },
  Query: {
    async availablePaymentMethods(_, { shopId }, context) {
      // ...
    },
    async paymentMethods(_, { shopId }, context) {
      // ...
    }
  }
};

You could save that file as resolvers.js in /server/no-meteor in the payments plugin, and then import it and register it:

import Reaction from "/imports/plugins/core/core/server/Reaction";
import resolvers from "./server/no-meteor/resolvers";
import schemas from "./server/no-meteor/schemas";

Reaction.registerPackage({
  graphQL: {
    resolvers,
    schemas
  },
  // ...other props
});

While that would work and may even be fine for a simple custom plugin, there are some downsides. We want to be able to test each complex resolver function, which is easier if each function is in its own file.
Also, plugins typically keep growing, so our single file might become too large to easily understand over time. So instead we break it into separate files and folders as necessary. Whether you use files or folders at each level is up to you, and should be based on how complex the functions are and whether they need unit tests.

- All files export either a function or an object as their default export.
- All index.js files import all of the folders or files at the same level of the tree, exporting them in an object.
- We name the files, folders, and default import variables to match the object keys, which allows the exports to use ES6 object literal shorthand, and makes it easy to visualize how the folder structure maps to the final resolvers object.

Here's how the payments plugin folder structure looks after splitting that one file into multiple files. For a full understanding, look through these files in the codebase.
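As an aside, the "deep merge with last-registered-wins" behavior described earlier is easy to sketch in a few lines. This sketch uses Python purely to illustrate the semantics (Reaction itself does this in JavaScript), and deep_merge is a hypothetical helper, not Reaction's actual merge function:

```python
def deep_merge(base, override):
    """Recursively merge override into base; on conflicting leaves, override wins."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Two plugins registering resolver maps with a conflicting terminal node:
core_plugin = {"Query": {"paymentMethods": "core resolver"}}
custom_plugin = {"Query": {"paymentMethods": "custom resolver", "ping": "pong"}}

# custom_plugin was registered last, so its terminal node wins in the merged map
full_map = deep_merge(core_plugin, custom_plugin)
print(full_map)  # {'Query': {'paymentMethods': 'custom resolver', 'ping': 'pong'}}
```

Note that non-conflicting keys from both plugins survive in the merged map; only the conflicting terminal node is overwritten.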
https://docs.reactioncommerce.com/docs/next/graphql-resolvers-file-structure
Manage Transactions in MySQL – Lesson 1

Getting in on the Transactions

If you're relatively new to databases, or if MySQL has been the database on which you've cut your teeth, you may not even know what a transaction is. Put simply, a transaction is a series of SQL statements that are executed as a single unit; either all the statements are executed completely or none are executed at all.

Why are transactions so important? Consider a fairly typical Web application: a shopping cart. In a simple shopping cart application, you're likely to have tables that look a little something like this (only less simplified, and with real info instead of the dummy data):

products
+------------+-------------------+---------------------------+---------------+
| product_id | product_name      | product_description       | product_price |
+------------+-------------------+---------------------------+---------------+
|          1 | Old Pair of Socks | Plenty of holes           |          4.25 |
|          2 | Old T-shirt       | Ketchup stains everywhere |          6.75 |
+------------+-------------------+---------------------------+---------------+

inventory
+------------+----------+
| product_id | in_stock |
+------------+----------+
|          1 |       25 |
|          2 |       12 |
+------------+----------+

buyers
+----------+-------+---------+------------+------------+
| buyer_id | fname | lname   | phone      | fax        |
+----------+-------+---------+------------+------------+
|        1 | John  | Doe     | 7185551414 | NULL       |
|        2 | Jane  | Johnson | 7185551414 | 2126667777 |
+----------+-------+---------+------------+------------+

orders
+----------+----------+-------------+
| order_id | buyer_id | order_price |
+----------+----------+-------------+
|        1 |        1 |       11.00 |
+----------+----------+-------------+

order_items
+----------+------------+---------------+
| order_id | product_id | product_price |
+----------+------------+---------------+
|        1 |          1 |          4.25 |
|        1 |          2 |          6.75 |
+----------+------------+---------------+

You can see how these tables fit together in this diagram: When it
comes time to process an order, you have to run several SQL statements within a script (written in PHP, Perl, Java, or whatever language you prefer). In the script, you want to take a look at what items the buyer wants, see if there's adequate inventory to complete the order, and if there is, complete the order. In pseudo-code, the script used to complete the order would look something like this:

get buyer data and shopping cart data from web forms
insert buyer data into buyer table
start order by creating row in orders table
get current order_id
for each item desired
    check available inventory
    if inventory is available
        write item to order_items table
        decrement inventory
    endif
end for loop
get total for items for the order
update orders table

I simplified this listing so that it's easier to spot a potential problem. Consider what would happen if the power failed on the machine hosting the database just as it was in the middle of checking the inventory of the first item in the order. You'd restart your machine to find a row in the orders table without any child rows in the order_items table. It's quite possible you'd be left with data that was, to a large extent, incomprehensible. You wouldn't know what orders had been placed unless your customers sent you email, wondering just when you planned to send them the old pairs of socks they had requested, which is not the best way to run a business.

If you had transaction support in place, however, you could treat this group of statements as a single unit. If for any reason the database failed to complete all of the statements in their entirety, the data would revert (or roll back) to the condition it was in prior to the first statement's execution.

Of course, there's a bit more to transactions. A transaction-capable database must implement four specific properties. Wanna know what they are? Turn the page.

Are You on ACID?
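To make that all-or-nothing behavior concrete, here is a small demonstration of rollback semantics using Python's built-in sqlite3 module (SQLite rather than MySQL, simply because it ships with Python; the two tables are cut-down versions of the ones above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, order_price REAL)")
conn.execute("CREATE TABLE order_items (order_id INTEGER, product_id INTEGER)")

try:
    with conn:  # opens a transaction; commits on success, rolls back on exception
        conn.execute("INSERT INTO orders (order_id, order_price) VALUES (1, 11.00)")
        conn.execute("INSERT INTO order_items VALUES (1, 1)")
        raise RuntimeError("power failure mid-order")  # simulate a crash
except RuntimeError:
    pass

# The half-finished order was rolled back: no orphan row in orders or order_items.
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 0
```

Without the transaction, the orders row would have survived the simulated crash, leaving exactly the inconsistent state described above.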
Transaction-capable databases must implement four properties, collectively known by the mnemonic acronym ACID.

- Atomicity: Transactions must be singular operations. Either all statements are executed as a single (atomic) unit or none are executed at all.
- Consistency: In a transactional database, data will move from one consistent state to another. There will never be a point during a transaction when data is partly processed.
- Isolation: The dealings of one transaction will not be visible to other clients until the transaction is completed successfully or rolled back. You can be sure that the data available to one transaction is accurate because it is isolated from changes other clients might make.
- Durability: When a transaction completes successfully, the changes are permanent. Nothing, not even a disk crash or power failure, will erase the changes made by a successfully completed transaction.

SQL servers that allow transactions make use of several keywords: BEGIN WORK, COMMIT, and ROLLBACK. The phrase BEGIN WORK lets the SQL server know that the SQL statements that follow are part of a transaction. The transaction is not completed until either a COMMIT or ROLLBACK statement is executed. COMMIT writes the changes to the database. Once a transaction has been successfully COMMITted, only another successfully committed SQL statement can alter the data. No crashes or concurrently run SQL statements will affect the data. The ROLLBACK command tells the database that all of the statements within the transaction should be ignored and the database should revert to the state it was in prior to the start of the transaction. In the case of a crash, all transactions that were not expressly committed are automatically rolled back.

You can now revisit the pseudo-code first presented on the previous page. I've improved the listing by incorporating a transaction.
get buyer data and shopping cart data from web forms
insert buyer data into buyer table
BEGIN WORK
start order by creating row in orders table
get current order_id
for each item desired
    check available inventory
    if inventory is available
        write item to order_items table
        decrement inventory
    else
        set error variable to true
    endif
end for loop
if error variable is true
    ROLLBACK
else
    get total for items for the order
    update orders table
    COMMIT
endif

Notice the ROLLBACK that I've added. This ensures that if inventory isn't available to complete even a portion of a user's request, the entire order is ignored. No row will be written to the orders or order_items table. Are you beginning to see how good transactions can be?

In a Web environment, you can expect multiple users (database clients or threads) to be accessing the script simultaneously. Therefore, the inventory of a given item will be changing continually. But when using transactions, you don't have to worry that a user will complete an order for an item that is actually out of stock. That's because of the I (for Isolation) in ACID. In a transactional environment, each transaction is isolated from the others, so one transaction cannot see the workings of another transaction until the first one is complete.

The last sentence was a bit of a simplification, but it's pretty close, and it's good enough for now. In order to isolate one transaction from another, database systems must implement some sort of locking scheme. That is, the database needs a way for one client (or thread) to lock out (or isolate itself from) all other clients. Locking is a key element of transactions. I'll be talking about this subject extensively throughout this tutorial, starting on the next page.

Lockdown!

Locking makes some portion of data the property of a single client.
That client says, in effect, "this data here is mine, and the rest of you can only do what I expressly permit." Locks can have one of two effects: a lock may prevent other clients from altering data (with UPDATE or DELETE statements), or a lock may prevent all access to some data, blocking UPDATEs, DELETEs, and even SELECTs.

To understand locking mechanisms in MySQL, you first need to recognize that MySQL is an unusual product. It isn't really a single, unified piece of software. Rather, it uses technology from several different sources, and the way you implement your transactions in MySQL largely depends on the table type you use. Each table type uses a different method of locking, and the differences in those locking mechanisms will affect how you write your code.

MyISAM tables are very fast for SELECTs, but they have some serious drawbacks when it comes to locking. These shortcomings are what prevented MySQL from implementing some key database features, including transactions. Looking at the way MyISAM struggles with locking, you really begin to appreciate the power and value of actual transactions. So before we get into BDB, Gemini, and InnoDB, let's first take a look at the limitations of MyISAM's table-level locking.

Using MyISAM Tables

I already mentioned that MyISAM tables (usually MySQL's default table type) don't support transactions. This is largely because MyISAM tables offer only table-level locking, which means that locks can only be placed on entire tables. So if you want to prevent a single row in a table from being changed, you need to prevent all rows in the table from being changed.

Take our inventory table as an example. If one client is buying an item, you'll want to check the inventory, and if the item is there in sufficient quantity, you'll decrement the number available after the sale.
To make sure the quantity doesn't change between the time you check the availability and the time you change the inventory, you'll want to put a lock on that row. But because MyISAM offers only table-level locking, you'll have to cut off access to all other rows in the table.

MyISAM offers two types of table-level locks: a read lock and a write lock. When a read lock is initiated by a client, all other clients are prevented from making changes to the table via INSERTs, DELETEs, or UPDATEs, though they can still read from it. To see how a read lock works on a MyISAM table, open up two copies of the MySQL command-line client. Then create a table and insert some data with the statements below:

create table inventory (
    product_id int not null primary key,
    in_stock int not null,
    index index_on_in_stock(in_stock)
) type=myisam;

INSERT INTO inventory (product_id, in_stock) VALUES (1,25), (2,12);

Now in one client, place a read lock on the inventory table with the following command:

LOCK TABLES inventory READ;

Now in the second copy of the client, run a SELECT * FROM inventory. You'll see that the command executes just fine. However, if you try to run an UPDATE, it's a different story. Try the following command with the lock in place:

UPDATE inventory SET in_stock=24 WHERE product_id=1;

You'll see that this copy of the client does not respond. It's locked out and can't execute the command. Only the client that placed the lock can change the table. Now go back to the first copy of the command-line client and release the lock with:

UNLOCK TABLES;

Once the lock is released, the second client will be free to run the UPDATE command and change the row.

A write lock prevents other clients from even running SELECTs on the locked table. You can place a write lock with the following command:

LOCK TABLES inventory WRITE;

Go back and run the previous experiment, but this time issue a WRITE lock.
You’ll see that the other client is prevented from doing anything, even reading from the locked table with a SELECT. So now you know all the locks that are possible with MyISAM tables. As I mentioned earlier, the folks at MySQL AB (who run MySQL development) often argue that transactions really aren’t necessary. They say that by properly applying locks and writing clever SQL, you should be able to avoid the need for transactions. Below I’ve written some pseudo-code that adds transaction-like abilities to the shopping cart pseudo-code using MyISAM tables. INSERT query into buyers table. run last_insert_id() to get buyer_id run INSERT into orders table run last_insert_id() to get order_id get write lock on inventory table for each of the items in the order get quantity from the inventory table if quantity is > 0 insert into order_items table update inventory table subtracting the ordered item elseif quantity = 0 delete all items from order_items with current order_id delete item from orders table with current order_id update inventory table to replenish items previously subtracted set error variable to true break from for loop if error variable is not true update orders table with the current order_id, adding the order_total else output error I might be able to clean this up a bit and remove a few of the SQL statements, but I think you see the way you’d need to go writing transaction-like code with MyISAM tables. While this code might add a degree of isolation (remember the I in ACID) by way of locks, there are other ACID properties missing here. Most notable is the lack of consistency: The data moves through several inconsistent states before the script is done. If the power goes out during one these inconsistent phases, you’ve got problems. While ISAM locks are better than nothing, they are really no substitute for true ACID transactions. In Lesson 2 of this tutorial, I’ll discuss locking and transactions in BDB, InnoDB, and Gemini tables. Come on back, ya’here.
http://www.webmonkey.com/2010/02/manage_transactions_in_mysql_-_lesson_1/
Basically, I'm looking for something that offers a parallel map using Python 3 coroutines as the backend instead of threads or processes. I believe there should be less overhead when performing highly parallel IO work. Surely something similar already exists, be it in the standard library or some widely used package?

DISCLAIMER: PEP 0492 defines only syntax and usage for coroutines. They require an event loop to run, which is most likely asyncio's event loop.

I don't know of any implementation of map based on coroutines. However, it's trivial to implement basic map functionality using asyncio.gather():

import asyncio

def async_map(coroutine_func, iterable):
    loop = asyncio.get_event_loop()
    future = asyncio.gather(*(coroutine_func(param) for param in iterable))
    return loop.run_until_complete(future)

This implementation is really simple. It creates a coroutine for each item in the iterable, joins them into a single coroutine and executes the joined coroutine on the event loop.

The provided implementation covers part of the cases. However, it has a problem. With a long iterable you would probably want to limit the number of coroutines running in parallel. I can't come up with a simple implementation which is efficient and preserves order at the same time, so I will leave it as an exercise for the reader.

You claimed:

I believe there should be less overhead when performing highly parallel IO work.

It requires proof, so here is a comparison of a multiprocessing implementation, a gevent implementation by a p, and my implementation based on coroutines. All tests were performed on Python 3.5.
Implementation using multiprocessing:

from multiprocessing import Pool
import time

def async_map(f, iterable):
    with Pool(len(iterable)) as p:  # run one process per item to measure overhead only
        return p.map(f, iterable)

def func(val):
    time.sleep(1)
    return val * val

Implementation using gevent:

import gevent
from gevent.pool import Group

def async_map(f, iterable):
    group = Group()
    return group.map(f, iterable)

def func(val):
    gevent.sleep(1)
    return val * val

Implementation using asyncio:

import asyncio

def async_map(f, iterable):
    loop = asyncio.get_event_loop()
    future = asyncio.gather(*(f(param) for param in iterable))
    return loop.run_until_complete(future)

async def func(val):
    await asyncio.sleep(1)
    return val * val

The testing program is the usual timeit:

$ python3 -m timeit -s 'from perf.map_mp import async_map, func' -n 1 'async_map(func, list(range(10)))'

Results:

Iterable of 10 items:
    multiprocessing: 1.05 sec
    gevent: 1 sec
    asyncio: 1 sec

Iterable of 100 items:
    multiprocessing: 1.16 sec
    gevent: 1.01 sec
    asyncio: 1.01 sec

Iterable of 500 items:
    multiprocessing: 2.31 sec
    gevent: 1.02 sec
    asyncio: 1.03 sec

Iterable of 5000 items:
    multiprocessing: failed (spawning 5k processes is not so good an idea!)
    gevent: 1.12 sec
    asyncio: 1.22 sec

Iterable of 50000 items:
    gevent: 2.2 sec
    asyncio: 3.25 sec

Concurrency based on an event loop works faster when the program does mostly I/O, not computations. Keep in mind that the difference will be smaller when there is less I/O and more computation involved. Overhead introduced by spawning processes is significantly bigger than overhead introduced by event-loop-based concurrency. It means that your assumption is correct.

Comparing asyncio and gevent, we can say that asyncio has 33-45% bigger overhead. It means that creation of greenlets is cheaper than creation of coroutines.

As a final conclusion: gevent has better performance, but asyncio is part of the standard library. The difference in performance (absolute numbers) isn't very significant.
gevent is quite a mature library, while asyncio is relatively new, but it advances quickly.

You could use greenlets (lightweight threads, basically coroutines) for this, or the somewhat higher-level gevent lib built on top of them (from the docs):

import gevent
from gevent import getcurrent
from gevent.pool import Group

group = Group()

def hello_from(n):
    print('Size of group %s' % len(group))
    print('Hello from Greenlet %s' % id(getcurrent()))

group.map(hello_from, range(3))

def intensive(n):
    gevent.sleep(3 - n)
    return 'task', n

print('Ordered')
ogroup = Group()
for i in ogroup.imap(intensive, range(3)):
    print(i)

print('Unordered')
igroup = Group()
for i in igroup.imap_unordered(intensive, range(3)):
    print(i)

Yields output:

Size of group 3
Hello from Greenlet 31904464
Size of group 3
Hello from Greenlet 31904944
Size of group 3
Hello from Greenlet 31905904
Ordered
('task', 0)
('task', 1)
('task', 2)
Unordered
('task', 2)
('task', 1)
('task', 0)

The standard constraints of lightweight-vs-proper-multicore-usage apply to greenlets vs threads. That is, they're concurrent but not necessarily parallel.

Quick edit for people who see this in the future, since Yaroslav has done a great job of outlining some differences between Python's asyncio and gevent:

Why gevent over async/await? (these are all super subjective but have applied to me in the past)

- Not portable/easily accessible (not just 2.X, but 3.5 brought new keywords)
- async and await have a tendency to spread and infect codebases - when someone else has encapsulated this for you, it's super duper nice in terms of development and readability/maintainability
- In addition to the above, I (personally) feel like the high-level interface of gevent is very "pythonic".
- Less rope to hang yourself with. In simple examples the two seem similar, but the more you want to do with async calls, the more chance you have to fuck up something basic and create race conditions, locks, unexpected behaviors. No need to reinvent the noose imho.
- Gevent's performance scales past trivial examples and is used and tested in lots of production environments. If you don't know much about asynchronous programming, it's a good place to start.

Why asyncio and not Gevent?
- If you can guarantee a version of Python and don't have access to 3rd-party packages/pip, it gives you out-of-the-box support.
- Similar to the above, if you don't want to be tied to a project that's been slow to adopt Py3k, rolling your own small toolset is a good option.
- If you want to fine-tune things, you're in charge!
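For comparison with the gevent example above, the ordered/unordered behavior maps directly onto asyncio primitives: gather preserves submission order (like imap), while as_completed yields results as they finish (like imap_unordered). The sleeps are scaled down here so the sketch runs quickly:

```python
import asyncio

async def intensive(n):
    # lower n sleeps longer, so completion order is the reverse of submission order
    await asyncio.sleep((3 - n) * 0.1)
    return ('task', n)

async def main():
    print('Ordered')
    # gather returns results in submission order, like Group.imap
    for result in await asyncio.gather(*(intensive(i) for i in range(3))):
        print(result)

    print('Unordered')
    # as_completed yields futures in completion order, like Group.imap_unordered
    for future in asyncio.as_completed([intensive(i) for i in range(3)]):
        print(await future)

asyncio.run(main())
```

The "Ordered" section prints ('task', 0) through ('task', 2) in submission order; the "Unordered" section prints them in completion order, shortest sleep first.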
Create a Desktop Application Using Angular, Bootstrap and C#

It's possible to create a desktop application using JavaScript, and this tutorial will show you how to do it, using Angular and Bootstrap for the presentation layer.

Why Would You Do This?

Well, I don't know why YOU would do this, but the reason I'm doing this is that the more I do on the web, the less able I am to work with Windows Forms, and I haven't even bothered learning WPF. I decided several years ago that I would niche down over web technologies. And yet, I want to write this desktop application. I tried to use Windows Forms, which I am most familiar with, and just got frustrated. I want to use a grid control. But what I want to do with the control is something more like what I would do with Angular's ui-grid than what I can do with the grid control built into Windows Forms. I'm sure someone who really knew the desktop side of the fence would be able to do what I want to do. But I want to leverage what I know. And eventually, I may move the whole thing to Node.js, even though to get the thing up and running, I am going to use C# for the main processing.

Rendering HTML

The first step toward getting all of this working is to just get HTML to render inside of a Windows Forms (or WPF) executable. I decided to use Windows Forms because I don't need any of the goodness that WPF would give me. But you could tweak most of the setup I'm going to give you so that it would work with WPF if that's your preferred platform. So, let's start out by creating a Windows Forms based application. Once you have the project loaded, you'll want to grab the CefSharp Windows Forms DLLs and related files. You can use NuGet to get these installed. Just search for "CefSharp.WinForms".
Because Chromium uses Win32 or Win64 based C++ DLLs, you'll need to configure your project to run as one or the other. This part was a little tricky. What I found was that just changing the project settings for the default configuration named "Any CPU" was not enough. What you need to do is to create a new configuration named "x64" or "x86" and change the settings there. Try compiling now, before you add any code. If you've configured the project correctly with the CefSharp DLLs, it should compile.

The next thing you want to do is to insert the Chromium browser control into the form. Yes, it is a control like any other control. No, you won't find it on your toolbar. No, it isn't worth adding to the toolbar. It is the only control that is going to be on the form, so all you need to do is add it to the form using a few lines of code. First, add a private variable to hold the browser control. It doesn't need to be a member variable to get the HTML to render, but you'll want it to be private later on. So, just make it private to start with. Then, in your Load() method, add the following code:

```csharp
private void Form1_Load(object sender, EventArgs e)
{
    // Initialize CefSharp
    Cef.Initialize();

    // Create a new browser window
    _browser = new ChromiumWebBrowser("http://www.google.com");

    // Add the new browser window to the form
    Controls.Add(_browser);
}
```

You will also need code in your FormClosing() method. You can create this in Visual Studio by selecting it from the dropdowns in the upper right corner of the code window.

```csharp
private void Form1_FormClosing(object sender, EventArgs e)
{
    Cef.Shutdown();
}
```

OK. Compile and run. You should be able to load the Google web site and see it in your Windows Forms application.
Using Our Own Files

OK, so we've proven that we can render HTML inside of a Windows Forms application. But that won't do us much good if we want to run code on our own. Most of the places on the web that talk about loading HTML inside of a desktop application using Chromium suggest that you copy the HTML files over as content and use the file:// protocol to load them. But there are two problems with doing that. First, I don't want the files generally accessible to whoever has this installed. What if someone decides to change those files? The second problem I have is even worse. Assuming I could live with the files being available on the file system, Angular doesn't work from the file system. It wants to run from an http:// address. So at the very least, we need our files to LOOK like they've been served from a web server. Fortunately, we can solve both of these problems.

Make Our Files Resources

To start with, we'll just add one file. Since it will be the beginning of our main application, name the file index.html and place it in a directory called "web" off the root of your project. Put enough HTML in there that you'll know the file actually got loaded. Then, in the file properties, mark the file as an "Embedded Resource" instead of "Content". To load this file as a resource, you'll use code that looks something like this:

```csharp
var assembly = Assembly.GetExecutingAssembly();
var textStream = assembly.GetManifestResourceStream
    ("TopLevelNamespace.web.index.html");
```

Make it LOOK Like it Came From a Server

This is where some of the magic starts to happen. The Chromium APIs have code that will let you register a pre-canned response object with a URL using a dictionary. So, all we need to do is change the text string that we returned in the code above into a response object and register it with Chromium. The code to do that looks like this:

```csharp
var factory = (DefaultResourceHandlerFactory)(browser.ResourceHandlerFactory);
if (factory == null)
    return;
var response = ResourceHandler.FromStream(textStream);
factory.RegisterHandler("", response);
```

And now, when we tell Chromium to load the URL we registered, it will render the index.html file from our EXE. Cool!
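Conceptually, the resource-handler factory described above is just a lookup table from URL to canned response. A quick Python sketch makes that plain (this is an analogy, not the CefSharp API, and the URL used here is a made-up placeholder):

```python
class ResourceHandlerFactory:
    """Toy model of a browser's resource-handler factory: a dict of
    URL -> canned response body, consulted before hitting the network."""

    def __init__(self):
        self.handlers = {}

    def register_handler(self, url, body):
        self.handlers[url] = body

    def fetch(self, url):
        # returns the canned body if registered, otherwise None
        return self.handlers.get(url)

factory = ResourceHandlerFactory()
factory.register_handler("http://app/index.html", "<h1>hello</h1>")
print(factory.fetch("http://app/index.html"))  # <h1>hello</h1>
```

The real factory stores stream-backed response objects rather than strings, but the registration-then-lookup flow is the same.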
Now, loading each file like this is going to get rather tedious pretty fast. So what we need is a mechanism for loading all of the files in our web directory automatically. For this, we need to be able to iterate over all of the resources in the web namespace and register them with an associated "http://" tag. Since the best that we can do is get a list of all of the resources in our assembly, we will have to do some filtering to only register stuff in the "web" namespace. But there is another issue. All of the resources are going to be listed as "TopLevelNamespace.web.subnamespace.filename.extension" and we want to register them as "subnamespace/filename.extension". So there is a bit of string manipulation that we need to go through to register everything correctly.

```csharp
// Get the list of resources
var resourceNames = Assembly.GetExecutingAssembly()
    .GetManifestResourceNames();

// For each resource
foreach (var resource in resourceNames)
{
    // If it isn't in the "web" namespace, skip it.
    if (!resource.StartsWith("TopLevelNamespace.web"))
        continue;

    // Strip out the namespace that we don't need.
    var url = resource.Replace("TopLevelNamespace.web.", "");

    // Function I made that turns the resource into a textStream
    var r = LoadResource(url);

    // Make the namespace look like a path
    url = url.Replace(".", "/");
    var lastSlash = url.LastIndexOf("/", StringComparison.Ordinal);
    url = url.Substring(0, lastSlash) + "." + url.Substring(lastSlash + 1);

    // Register the response with the URL
    factory.RegisterHandler("" + url, ResourceHandler.FromStream(r));
}
```

Now that I've explained all of the code piece by piece, let's put it together.
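The dots-to-slashes mangling above is easy to get wrong, so here is the same transformation sketched in Python as a sanity check (resource_to_url is a hypothetical helper name; the logic mirrors the C# string manipulation):

```python
def resource_to_url(resource, prefix="TopLevelNamespace.web."):
    """Mirror the C# mangling: strip the namespace prefix, turn the remaining
    dots into slashes, then restore the final dot before the file extension."""
    if not resource.startswith(prefix):
        return None  # not in the "web" namespace, skip it
    path = resource[len(prefix):].replace(".", "/")
    head, _, ext = path.rpartition("/")
    return head + "." + ext

print(resource_to_url("TopLevelNamespace.web.scripts.app.js"))  # scripts/app.js
```

Note that a nested resource like "TopLevelNamespace.web.scripts.app.js" becomes "scripts/app.js", while a top-level one like "TopLevelNamespace.web.index.html" simply becomes "index.html".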
The full class for loading the resources looks like this:

```csharp
class RegisterWebsite
{
    public static void Load(ChromiumWebBrowser browser)
    {
        var factory = (DefaultResourceHandlerFactory)
            (browser.ResourceHandlerFactory);
        if (factory == null)
            return;

        var response = ResourceHandler
            .FromStream(LoadResource("index.html"));
        factory.RegisterHandler("", response);

        var resourceNames = Assembly.GetExecutingAssembly()
            .GetManifestResourceNames();
        foreach (var resource in resourceNames)
        {
            if (!resource.StartsWith("TopLevelNamespace.web"))
                continue;
            var url = resource.Replace("TopLevelNamespace.web.", "");
            var r = LoadResource(url);
            url = url.Replace(".", "/");
            var lastSlash = url.LastIndexOf("/", StringComparison.Ordinal);
            url = url.Substring(0, lastSlash) + "." + url.Substring(lastSlash + 1);
            factory.RegisterHandler("" + url, ResourceHandler.FromStream(r));
        }
    }

    static Stream LoadResource(string filename)
    {
        var assembly = Assembly.GetExecutingAssembly();
        var textStream = assembly
            .GetManifestResourceStream("TopLevelNamespace.web." + filename);
        return textStream;
    }
}
```

There is some obvious room for improvement here. But the basics are there; you can tweak as needed. The main entry point is the Load method, where we pass in a pointer to the browser control we created when we started this project.

Getting JavaScript to talk to C#

Now that we have the basics out of the way, we need to get the two halves of our project talking to each other. The first half is that we need a way for our JavaScript client-side code to retrieve data and send notifications to our server-side code. Fortunately, the mechanisms for doing this are already built into Chromium. Any C# object can be registered with Chromium as a JavaScript object so that any property will become a JavaScript field and any method will become a JavaScript method.
The API to make this happen looks like this:

```csharp
_browser.RegisterJsObject("NameYouWantJavaScriptToSeeThisObjectAs",
    cSharpObjectHere);
```

In our JavaScript code, we would find that the window object now has a field named "NameYouWantJavaScriptToSeeThisObjectAs".

Getting C# to talk to JavaScript

The reverse is just as easy. _browser.ExecuteScriptAsync(string) takes a string that is the JavaScript that you want to execute.

Getting the Communication To Play Nice with Angular

But getting this all to play well with Angular requires just a little bit more. You may find that code on your screen that depends on a field or method that was registered with RegisterJsObject does not update when it should. In fact, I would guess that this would happen most of the time, because our C# object knows nothing of Angular and Angular knows nothing of our C# object. So to fix this, we will need to make sure we $watch our C# object in our Angular code.

```javascript
$scope.$watch(function() { return window.RegisteredObject.property; },
    function() {
        $scope.someField = window.RegisteredObject.property;
    });
```

What this code does is tell Angular to check this field when it goes through its $digest cycle. If it has changed since the last time it looked, it should run the second function that was passed into $watch(). But this isn't the only code you will need to add. Whenever you make a change to something on the C# side that the Angular code needs to reflect, you'll need to tell Angular to run the $digest() cycle manually. To do that, you'll use that ExecuteScriptAsync() method to run some JavaScript. The easiest way to do this is to just run it off the top-level $scope object. The way you find the top-level $scope object is to use JavaScript to find the element that you marked as "ng-app" in your HTML. Once you've done that, you will see that it has a scope() method hanging off of it. So this code will force a $digest cycle on everything from the top-level $scope all the way down.
```csharp
_browser.ExecuteScriptAsync(
    "angular.element('#IdOfViewThatHasAControllerAttached')." +
    "scope().status = 'this is a new status';angular." +
    "element('[ng-app]').scope().$digest();");
```

Where #IdOfViewThatHasAControllerAttached is the ID of an element in a view that you've associated with a controller. You'll still want your controller to pull from the C# JavaScript object for the initial load, because the DIV may or may not be there when you do the push. Personally, I prefer the $watch method. There is less to think about on the C# side. And that's how you create a desktop application using Angular, Bootstrap and C#.

Published at DZone with permission of Dave Bush, DZone MVB. Opinions expressed by DZone contributors are their own.
Today I'd like to give you a brief introduction to the Phoenix framework and demonstrate it in action. Phoenix is a relatively new server-side framework (compared to Rails or Django) written in Elixir. It boasts great performance and fault tolerance, so it is getting quite popular these days. In this article, we will talk about the basic components of Phoenix and create a very simple web application so that you can get a sense of what Phoenix applications look like. The source code for this article can be found on GitHub.

So, Phoenix?

Yeah, Phoenix. It is a modern server-side web framework written in Elixir (which, in turn, runs on the Erlang virtual machine) built with the Model-View-Controller pattern in mind. Many of its concepts are very similar to the ones presented in Rails or Django, so if you are familiar with either of those, you'll get the basics of Phoenix very fast. "Why do we need yet another MVC framework?", you might ask. Well, because Phoenix offers productivity, stability and performance at the same time. Some developers complain that Rails is slow: that's not the case with Phoenix, which can serve a huge number of clients with ease while providing nice fault tolerance. This is possible thanks to Elixir's magic, which includes features like concurrency, monitors and supervisor trees out of the box. Therefore, if you've never tried using Phoenix, I really recommend doing so.

Unfortunately, Phoenix has some downsides as well. One problem is that Phoenix's community is rather small (though very friendly!) and, consequently, the package ecosystem is somewhat poor at the moment. This is because Phoenix is not as mature and popular as Rails/Django yet. There are quite a few Phoenix resources on the net, but (probably even worse) there are not many third-party libraries (called packages) available. For example, if you need to implement authentication in Rails, there are a lot of solutions out there.
Moreover, each of them has its own philosophy, so you may either pick a full-fledged solution (like Devise) or something barebones (like AuthLogic). With Phoenix you have about three solid options, but unfortunately at least one of them is not that actively maintained and has quite a lot of issues. Still, I have to say that the community tends to grow, so new cool solutions will inevitably appear.

Another thing that may potentially become a problem for beginner developers is the need to learn the Elixir language itself, which does have a bunch of rather complex points. For example, it is a functional language with immutable data and, of course, without any classes, inheritance and that stuff. Since many developers start their journey by learning some OOP language, getting the grasp of functional programming may take some time. Also, learning OTP basics can be quite painful, though it does not require you to be a computer genius. On the other hand, studying a new programming paradigm is really useful and can change your attitude towards the development process quite significantly.

All right, enough with the introductions; let's quickly discuss the major components of a Phoenix application!

Major Components

Each Phoenix application typically has the following parts:

- Endpoint, which is the alpha and omega of any request lifecycle. It provides special plugs to apply to the request and handles it until the router kicks in.
- Router, which forwards the request to the proper controller and action based on the defined rules. It also provides special helpers to generate route paths and performs some additional work.
- Controllers, which are composed of actions that handle requests, prepare data to pass into the views, invoke the proper views or perform redirects. All in all, you can say that controllers are thin layers between models and views.
- Views, which act as presentation layers and render the corresponding templates.
Note that in Phoenix, views are modules that define helper functions and decorate data; they are not files with some HTML or JSON (like in Rails, for example).
- Templates, which are the files with the actual content that is included in the response. For example, a template may contain HTML with dynamically substituted data (which is prepared by the controller).
- Channels, which are needed to manage sockets, allowing real-time communication between clients and a server using persistent connections.
- PubSub, which is needed for the channels to function properly. For instance, it allows clients to subscribe to various topics.

That's pretty much it about the major Phoenix components. As you see, most of them are quite common and you have probably met them before (if you have prior experience with web frameworks, of course). Now let's proceed to the next section and try to create a very simple application in order to see Phoenix in action!

Installing Phoenix

Installing Phoenix is not a complex task. You'll need to perform the following steps:

- Install Erlang 18 or later (as Elixir runs on the Erlang VM). All major operating systems are supported, so you should not have any difficulties.
- Install Elixir 1.4 or higher. All major operating systems are supported as well.
- Install the Hex package manager by running the mix local.hex command in your terminal (make sure that the Erlang and Elixir executables are present in the PATH).
- Install Phoenix itself by running mix archive.install

This is it! Phoenix, however, does have some optional dependencies that you will probably want to install as well:

- NodeJS. Well, actually, the Node Package Manager, but since NPM requires Node, you'll have to install them both. NPM, in turn, is required by Brunch.io (which compiles assets like JS or CSS) to download its dependencies.
- A database management system: PostgreSQL, MySQL, Microsoft SQL, or MongoDB.
Of course, you may create an application without a database at all, but in this demo we'll try to integrate our app with Postgres, so install it to follow along. Phoenix also supports SQLite3, but this solution is suitable only for the development environment. Once you have everything installed on your PC, proceed to the next section, where we are going to bootstrap our first Phoenix application.

Our First Phoenix App

So, after having installed everything, run the following command:

```
mix phx.new phoenix_sample
```

It is going to create a new application named phoenix_sample and prepare its skeleton for us. Note that the app's name must be written in lowercase. The code generator is going to ask you whether you'd like to install all the dependencies, so type Y and press Enter. Our new project is going to be created with Postgres support by default, but if you'd like to stick with, say, MySQL, simply provide the --database mysql option when running the phx.new script.

In order to be able to boot the project, we have to provide some configuration for our DBMS. Open the config/dev.exs file, scroll to the very bottom and find the following piece of code:

```elixir
config :phoenix_sample, PhoenixSample.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "user",
  password: "secret",
  database: "phoenix_sample_dev",
  hostname: "localhost",
  pool_size: 10
```

Tweak all these parameters as needed and then run:

```
mix ecto.create
```

This is going to create a new database for you. After that, start the server by running:

```
mix phx.server
```

Next, navigate to http://localhost:4000 and make sure that the Phoenix welcome page is displayed.
Controllers should be placed to the lib/phoenix_sample_web/controllers folder, so create a new albums_controller.ex file inside: defmodule PhoenixSampleWeb.AlbumsController do use PhoenixSampleWeb, :controller def index(conn, _params) do render conn, "index.html" end end That’s an Elixir module with an index/2 function. This function, in turns renders an index.html template that will be created in a moment. conn here is a special parameter that contains information about the request. _params contains request parameters. Next, we’ll require a view. It should be placed under the lib/phoenix_sample_web/views folder, so create a new albums_view.ex file there: defmodule PhoenixSampleWeb.AlbumsView do use PhoenixSampleWeb, :view end Note that the first parts of the view’s name and controller’s name must match. Lastly, create an index.html.eex template inside the lib/phoenix_sample_web/templates/albumsfolder: <h1>Albums</h1> The contents of this template will be interpolated into a generic layout which can be found in the lib/phoenix_sample_web/templates/layout folder. This layout defines the basic structure of the HTML page and has all the necessary tags like html, body, and others. Routes The last thing we have to do in order to see our first page in action is create a new route. All routes can be found in the lib/phoenix_sample_web/router.ex file. defmodule PhoenixSampleWeb.Router do use PhoenixSampleWeb, :router pipeline :browser do plug :accepts, ["html"] plug :fetch_session plug :fetch_flash plug :protect_from_forgery plug :put_secure_browser_headers end pipeline :api do plug :accepts, ["json"] end scope "/", PhoenixSampleWeb do pipe_through :browser # Use the default browser stack get "/", PageController, :index end end There is a lot going on here and we won’t discuss everything in-depth (as it’ll take a lot of time), but let me note some main things: - pipeline :browser defines a set of behaviour and transformations that should be applied to a request. 
Inside, there are set of plugs that are executed in a given order and perform some operation to the request. This pipeline is then referenced with a pipe_through :browser line of code. - scope “/”, PhoenixSampleWeb do acts like a namespace. You may create a new scope, for example /api. In this case, all routers will be nested under the /api namespace and you’ll have something like /api/users or /api/comments/new. - get “/”, PageController, :index is the route created by default. It means that whenever a GET request arrives to the root URL, it should be forwarded to the PageController, index action. Now let’s create a new route right above the get “/”, PageController, :index line: # ... scope "/", PhoenixSampleWeb do pipe_through :browser # Use the default browser stack get "/albums", AlbumsController, :index # <--- get "/", PageController, :index end That’s it! You may now visit the path and make sure the page is displayed properly. In the terminal, you should see an output similar to this one: [info] GET /albums [debug] Processing with PhoenixSampleWeb.AlbumsController.index/2 Parameters: %{} Pipelines: [:browser] [info] Sent 200 in 0┬╡s Also note that Phoenix has auto-reloading feature enabled in development. So, if you make some changes to a template, for example, the web page will be reloaded for you automatically and reflect these changes. Persisting Data Next, I would like to introduce an ability to persist data into the previously created database. And in order to do this, we need a new table. Of course, you can go ahead and create it manually using PGAdmin or the PSQL command line tool, but that wouldn’t be very convenient. On top of that, if you are going to deploy your project to a hosting provider, you’ll have to create the same table again. To overcome this problem, a concept of migrations was introduced in modern frameworks. Migrations are files with instructions explaining what operations should be applied to the database. 
With migrations you can, for example, easily create, modify and delete tables using a single command. On top of that, migrations can be rolled back to cancel an applied operation. Let’s generate a new migration and the corresponding schema using the following command: mix phx.gen.schema Album albums name:string singer:string track_count:integer I have specified the following options: I have specified the following options: - Album is the name of the schema (we’ll talk about schemas in a moment) - albums is the name of the table that we will create - name:string and singer:string means that I’d like to add name and singer fields with a type of string - track_count:integer – create a track_count field with a type of integer After this command finishes its job, a new migration priv\repo\migrations\20180313160658_create_albums.exs will be created. The numbers in the filename will be different for you — that’s a timestamp specifying when exactly this migration was generated. Take a look at the newly created file: defmodule PhoenixSample.Repo.Migrations.CreateAlbums do use Ecto.Migration def change do create table(:albums) do add :name, :string add :singer, :string add :track_count, :integer timestamps() end end end You can easily understand what’s going on here. We are creating an albums table with three fields of the specified type. timestamps() here means that the insert_at and updated_at fields should also be added to the table. These fields are updated automatically when a record is created and updated. Now apply the migration in the following way: mix ecto.migrate You’ll see an output similar to this one: [info] == Running PhoenixSample.Repo.Migrations.CreateAlbums.change/0 forward [info] create table albums [info] == Migrated in 0.0s It means that the table was created! Now let’s say a couple of words about the schema that can be found in the lib\phoenix_sample\album.ex file. 
Basically, schemas are used to map the Elixir values to some external data source, and vice versa. Also, they can be used to establish relationships with other schemas (for example, an album may have many tracks). Also, schema defines data validations. Our schema has the following contents: defmodule PhoenixSample.Album do use Ecto.Schema import Ecto.Changeset schema "albums" do field :name, :string field :singer, :string field :track_count, :integer timestamps() end @doc false def changeset(album, attrs) do album |> cast(attrs, [:name, :singer, :track_count]) |> validate_required([:name, :singer, :track_count]) end end schema “albums” defines structure of our data. changeset is a special function that defines which transformations should be applied to the data before it is persisted. validate_required, as you have probably guessed, defines validation rules. Specifically, it says that all three fields must have some value, otherwise the record won’t be persisted. Of course, you may introduce other validation rules as needed. That’s pretty much it! Let’s try to insert some value into the albums table. Adding Some Data To make things simple, we are not going to code a separate HTML form allowing to create albums. Instead, let’s run a special Elixir console where we can manipulate our data and perform other operations: iex -S mix Type the following code: PhoenixSample.Repo.insert(%PhoenixSample.Album{name: "Reload", singer: "Metallica", track_count: 13}) This is going to fire an INSERT operation and create a new album with the provided attributes. If one of the attributes is not set, the transaction will be rolled back according to the validation rules specified in the previous section. 
If everything is okay, you’ll see the following output: [debug] QUERY OK db=32.0ms INSERT INTO "albums" ("name","singer","track_count","inserted_at","updated_at") VALUES ($1,$2,$3,$4,$5) RETURNING "id" ["Reload", "Metallica", 13, {{2018, 3, 13}, {16, 13, 44, 267000}}, {{2018, 3, 13}, {16, 13, 44, 269000}}] {:ok, %PhoenixSample.Album{ __meta__: #Ecto.Schema.Metadata<:loaded, "albums">, id: 1, inserted_at: ~N[2018-03-13 16:13:44.267000], name: "Reload", singer: "Metallica", track_count: 13, updated_at: ~N[2018-03-13 16:13:44.269000] }} Note that the inserted_at and updated_at fields were populated automatically. On top of that, there is an id field which is assigned with a value of 1. This field is also added automatically, and it is marked as a primary key with autoincrement. Of course, it is possible to use some other field as a primary key. Great! The data is persisted, so why don’t we tweak our controller action to display them? Rendering Albums Render to the AlbumsController and modify the albums_controller.ex file in the following way: defmodule PhoenixSampleWeb.AlbumsController do alias PhoenixSample.{Repo, Album} # <--- 1 use PhoenixSampleWeb, :controller import Ecto.Query # <--- 2 def index(conn, _params) do render conn, "index.html", albums: Repo.all(from a in Album, select: %{:name => a.name, :singer => a.singer, :tracks => a.track_count}) # <--- 3 end end There are three key points here: 1. We are creating a new alias for our own convenience, otherwise we would have to write PhoenixSample.Repo and PhoenixSample.Album inside the index function. 2. We are importing Ecto.Query to be able to create complex queries 3. We are setting an albums variable that should be available inside the template. This variable contains all the albums fetched with the help of Repo.all. :select option explains which fields I’d like to fetch. Also, it specifies in what format should the result be returned. Having this code in place, we may tweak the template. 
Let’s utilize the Elixir comprehension to traverse the @albums list and display the necessary data on each step: <h1>Albums</h1> <%= for album <- @albums do %> <p>Name: <%= album.name %></p> <p>Singer: <%= album.singer %></p> <p>Tracks: <%= album.tracks %></p> <hr> <% end %> Boot the server again, navigate to the page and make sure that the data are displayed properly! Conclusion: – That’s all for today, folks. Of course, that was a very high-level overview of the Phoenix framework and we have covered only some introductory topics. Still, I really hope that by now you have at least basic feel of the Phoenix framework. Phoenix guides can be found at hexdocs.pm/phoenix website. It has many examples and very detailed explanations of all basic components and some advanced stuff like testing and deployment process, so be sure to browse it. Of course, crafting Phoenix applications requires solid knowledge of Elixir language. Eduonix is going to present an “Introduction to Elixir” course in the beginning of May covering basics and more advanced concepts of this language so stay tuned! Nice tutorial There is a typo: albumsfolder should be albums folder Thank you for the great tutorial. the **albumsfolder** part confused me but i corrected it by just writing *albums*.
Generate TSP instances and test hypotheses on them

Project description

Open TSP

This library is designed to make it as easy as possible to generate and test TSP instances against hypotheses, as well as solving them with brute force for comparison. This is my first major project, having learnt Python over the last six months, so if you have style suggestions I'd love to hear them. The goal is to make this an easy library to work with; please feel free to email me with any functionality you feel would help.

If you want to contribute implementations of algorithms for solving, then I'd love to hear from you! Currently I have implemented brute force and convex hull. However, there is a long list of algorithms I'd love to include in the library, such as Christofides and branch and bound. I don't have the knowledge to implement these myself though. This is why I'm making opentsp an open-source library, in the hope that others who find it useful will add to it. If you have suggestions or code, please create a pull request on the GitHub page, which I'll put the link up for soon.
Getting started

To install opentsp in your local environment with pip:

pip install opentsp

Then, at the top of your module:

from opentsp.objects import Generator

Creating a TSP Instance

To create a random TSP instance with eight nodes, using your console:

>>> from opentsp.objects import Generator
>>> gen = Generator()
>>> instance = gen.new_instance(8)

If you want to get the same values as in these examples, then use the below input:

>>> instance = gen.new_instance(8, source='seed', seed=1234)

To see what this creates:

>>> instance
Seed: 1234
Node count: 8
Edge count: 56
Nodes: {1: (47, 83), 2: (38, 53), 3: (76, 24), 4: (15, 49), 5: (23, 26), 6: (30, 43), 7: (30, 26), 8: (58, 92)}
Edges: {1: ((47, 83), (38, 53), None, False, 0, good, 0), 2: ((47, 83), (76, 24), None, False, 0, good, 0), 3:...}
Distance matrix: None
Solve time: None
Results: None

It has a seed, which is used to generate the random nodes. The number of nodes and edges are not object attributes; they come from two class methods, and I've found it handy to have that information as part of the printed representation. Then it shows the actual nodes and edges, the distance matrix (if generated), the solve time (if brute-forced), and the results. Results is a dictionary of algorithms and their results as a path of nodes. Something to note here is that edge lengths are only calculated when first needed, and only calculated once. So you can see a 'None' value in the edges, because the lengths haven't been calculated yet. I'll address the Edge class attributes in more detail later. The instance has a bunch of methods for accessing different properties of a TSP problem, such as:

- the number of nodes:

>>> instance.num_nodes
8

- the number of edges:

>>> instance.num_edges
56

- the x and y values, for instance:

>>> instance.x_values
[47, 38, 76, 15, 23, 30, 30, 58]

- the edge lengths:

>>> instance.edge_lengths_as_list
[7.0, 7.0, 12.806248474865697, 12.806248474865697, 14.212670...]
- the n shortest edges:

>>> instance.n_shortest_edges_of_instance(1)
((23, 26), (30, 26), 7.0, False, 0, good, 0)

- etc.

The instance can also be solved; currently the default solve method is brute force. You can then print the instance's results dictionary to look at the results.

>>> instance.solve()
'Brute force completed.'
>>> instance.results
{'convex_hull': [(15, 49), (23, 26), (76, 24), (58, 92), (47, 83), (15, 49)], 'brute_force': ((58, 92), (76, 24), (30, 26), (23, 26), (15, 49), (30, 43), (38, 53), (47, 83), (58, 92))}

Instances can be generated from a variety of sources; currently these are:

- Random

A random eight-digit seed is generated for the instance, and nodes with random values are generated from this seed.

- Seed

Pass in an integer to use as a seed. This is primarily helpful when you want to reproduce a series of random instances. When passing in your own seed, make sure to set source='seed', otherwise the seed you pass will be ignored. E.g.

>>> instance = gen.new_instance(5, source='seed', seed=123)

- CSV

Currently this takes a CSV with two columns of values for the x and y coordinates. The top row must have the titles 'x' and 'y'.

Any instance can also be viewed via matplotlib; try the two commands below:

>>> instance.view(nodes=True, edges=True)
>>> instance.view(result='brute_force')  # must have solved the problem already

(The string passed in the result argument above should match a key in the instance.results dictionary of the instance. If the problem was solved without need for brute force, then the 'brute_force' key won't be present. However, the 'optimal_solution' key is always present if a solution has been found.)
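Under the hood, the edge lengths above are just Euclidean distances between node pairs. The following is a hypothetical sketch of that idea, not opentsp's actual internals; `edge_lengths` and `n_shortest` are invented names, and the library computes lengths lazily rather than all at once:

```python
import math
from itertools import combinations

def edge_lengths(nodes):
    """Euclidean length of every edge between node pairs.

    `nodes` is a dict shaped like instance.nodes, e.g. {1: (47, 83), ...}.
    opentsp computes lengths lazily on first use; here we compute them
    all up front for clarity.
    """
    return {(i, j): math.hypot(p[0] - q[0], p[1] - q[1])
            for (i, p), (j, q) in combinations(nodes.items(), 2)}

def n_shortest(lengths, n):
    """The n shortest edges as (node-pair, length) tuples."""
    return sorted(lengths.items(), key=lambda kv: kv[1])[:n]

# Four of the nodes from the instance shown above
nodes = {1: (47, 83), 2: (38, 53), 5: (23, 26), 7: (30, 26)}
lengths = edge_lengths(nodes)
print(n_shortest(lengths, 1))  # [((5, 7), 7.0)] -- the (23,26)-(30,26) edge
```

Note that the shortest edge found here, between (23, 26) and (30, 26) with length 7.0, matches the `n_shortest_edges_of_instance(1)` output above.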
Example Hypothesis-test Algorithm

Here is an example algorithm, making use of opentsp's features to generate and test many instances against a hypothesis:

import time

from opentsp.objects import Generator

# hypothesis: the shortest edge is always used in the optimal solution
def hyp_test():
    gen = Generator()
    loop = True
    loops = 0
    start = time.time()
    while loop is True and loops < 1000000:
        loopstr = f"Failed on loop: {loops}"
        p = gen.new_instance(6, solve=True)
        shortest_edge = p.n_shortest_edges_of_instance(1)
        edge_path = p.solution_as_edges
        if shortest_edge in edge_path:
            loops += 1
        else:
            loop = False
            p.view(plot_se=True)
    end = time.time()
    total_time = end - start
    print(loopstr)
    print("Took {0}s to run.".format(total_time))
    print("Finished")

This function will generate up to a million instances, testing each one to see whether the shortest edge is in the solution via the membership test if shortest_edge in edge_path:. When the test fails, it shows the instance that failed with matplotlib and highlights the shortest edge (p.view(plot_se=True)). As you can see, in about twenty lines of code it's possible to test a hypothesis, with only a few lines actually concerned with generating the TSP instance and getting its properties. Also, as in the example above, you can solve the instance as part of generating it with solve=True. Keep in mind that this is most likely going to be a brute-force solution, so trying to do this with a 20-node problem will take a fairly long while unless you are running your code on a very powerful supercomputer.

Using Your Own Algorithms

To use your own algorithm, define a function with an appropriate name, and make instance an argument:

def brute_force(instance):

Write your algorithm within the function. Then you can pass in the instance being worked with and use any of its attributes or properties.
For instance, in my brute-force algorithm:

# store the last node to a variable; this will be used to complete the loops
last_elem = instance.nodes[len(instance.nodes)]

To check out the full brute-force algorithm, look at the Solvers class in the GitHub repo.

Overview of Classes

Node

A basic point-type class for defining nodes. I refer to the points in a TSP as 'nodes' rather than 'cities'. 'Cities' is too specific for me; it could just as easily be towns, or cats, really. I prefer the more abstract 'nodes'. Necessary arguments: x, y

Edge

Defines an edge as having two nodes and a length. The two nodes inherently bound the length. Necessary arguments: node_one, node_two

Path

A path is a series of nodes, with methods to find the length of the path, etc. Necessary arguments: a list of node objects

Instance

Each instance is essentially a container for the nodes, edges and algorithm results. Necessary arguments: none

This class is not really meant to be created directly, as that requires a lot of code, especially for generating the edges, and it would defeat the whole purpose of this library if you had to write all of that. So the Generator class is used to create instances through the new_instance method, which contains all of that functionality.

Generator

Contains the main method for creating an instance, new_instance. The only required argument for new_instance is the number of nodes. In this case, the default is to generate random nodes.

Solver

Contains methods for solving instances.
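The full brute-force implementation lives in the library's Solvers class; as a rough, self-contained sketch of what a brute-force TSP solve does (hypothetical code operating on plain coordinate tuples, not the library's actual implementation):

```python
import math
from itertools import permutations

def tour_length(tour):
    """Total length of a tour given as a sequence of (x, y) points."""
    return sum(math.dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

def brute_force(nodes):
    """Try every ordering of the nodes. The first node is fixed to avoid
    evaluating rotations of the same cycle, and the tour is closed by
    returning to the start."""
    first, *rest = nodes
    best = None
    for perm in permutations(rest):
        tour = (first, *perm, first)
        if best is None or tour_length(tour) < tour_length(best):
            best = tour
    return best

# A 3-by-4 rectangle: the optimal tour is its perimeter, length 14
print(brute_force([(0, 0), (0, 3), (4, 0), (4, 3)]))
```

Since this enumerates (n-1)! orderings, it illustrates why solving even a 20-node instance by brute force takes so long.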
https://pypi.org/project/opentsp/
Patent application title: CONTROL AND FEATURES FOR SATELLITE POSITIONING SYSTEM RECEIVERS Inventors: Mangesh Chansarkar (Fremont, CA, US) Sundar Raman (Fremont, CA, US) Charles P. Norman (Huntington Beach, CA, US) Paul A. Underbrink (Lake Forest, CA, US) Henry D. Falk (Long Beach, CA, US) James Brown (Laguna Beach, CA, US) Robert Harvey (San Francisco, CA, US) Peter Michali (Irvine, CA, US) Williams Higgins (Marlon, IA, US) Gensheng Zhang (Cupertino, CA, US) Qingwen Zhang (Irvine, CA, US) Assignees: SIRF TECHNOLOGY, INC. IPC8 Class: USPC Class: 34235774 Class name: Satellite radio beacon positioning system transmitting time-stamped messages; e.g. gps [global positioning system], glonass [global orbiting navigation satellite system] or galileo (ipc) receivers (ipc) power consumption Publication date: 2011-12-29 Patent application number: 20110316741 Abstract:. Claims: 1. A satellite positioning receiver comprising: a first subsystem within the satellite positioning receiver that processes satellite positioning signals into processed satellite signals; a second subsystem that receives processed satellite positioning signals; and a structure that enables the processed satellite positioning signals to be passed from the first subsystem to the second subsystem, where the structure has a first circular data structure that receives the processed satellite signals and a second circular data structure that makes the processed satellite signals available to the second subsystem. 2. The satellite positioning receiver of claim 1, where the first data structure and the second data structure have a plurality of corresponding data locations with each pair of data locations forming a channel. 3.
The satellite positioning receiver of claim 2, where the first subsystem waits for the second subsystem to finish with a channel before the first subsystem places processed satellite positioning signal data into the channel just accessed by the second subsystem. 4. The satellite positioning receiver of claim 1, where the first subsystem and the second subsystem access one of the channels at the same time until the first subsystem accesses another one of the channels. 5. A satellite positioning receiver, comprising: a signal processing subsystem having a signal input; a Fast Fourier Transform (FFT) subsystem; and a signal tracking loop that receives parameters from the FFT subsystem and adjusts the signal input to the signal processing subsystem and is controlled by an oscillator and augmented by a software tracking loop. 6. The satellite positioning receiver of claim 5, where the software tracking loop further comprises: an algorithm that uses the parameters in order to make an adjustment to the signal input of the signal processing system. 7. The satellite positioning receiver of claim 6, where the adjustment to the input of the signal processing system is made by the software tracking loop adjusting the operation of the NCO controlling the hardware tracking loop. 8. A satellite positioning receiver, comprising: an input subsystem; a Fast Fourier Transform (FFT) subsystem; a signal processing subsystem; and a power control means that enables at least one of the input subsystem, Fast Fourier Transform subsystem, and the signal processing subsystem to be placed in a reduced power state while the other subsystems are operating at normal power. 9. The satellite positioning receiver of claim 8, where the power control means further includes: a first signaling bit that signals a reduced power state associated with the FFT subsystem; and a second signaling bit that signals a reduced power state associated with the signal processing subsystem. 10.
A method of controlling a satellite positioning receiver in receipt of satellite positioning signals, comprising: determining a search strategy for acquiring the satellite positioning signals; configuring the satellite positioning receiver in accordance with the search strategy; processing the satellite positioning signals; storing a plurality of data associated with the processing of the satellite positioning signal in a memory periodically; and enabling power control of the satellite receiver based on the processing of the satellite positioning signals and the plurality of data in the memory. 11. The method of claim 10, where storing further includes storing operational data associated with the satellite positioning receiver every 100 ms. 12. A method of synchronization in a satellite positioning receiver, comprising: receiving a positioning signal at a receiver; summing samples of positioning signals received at the satellite positioning receiver within a pre-selected range over a moving window of pre-selected length, resulting in a plurality of different hypotheses; filling a plurality of bins with the sum of the linear envelope over successive data bits, where each bin is associated with one of the plurality of different hypotheses; determining the bin with the largest magnitude; and outputting a bit location for synchronization that enables the receiver to synchronize to the satellite positioning signal. 13. The method of claim 12, where the samples are inputs from a multi-point Fast Fourier Transform. Description: RELATED APPLICATIONS [0001] This application is a divisional of application Ser. No. 10/570,578, filed Mar. 1, 2006, entitled CONTROL AND FEATURES FOR SATELLITE POSITIONING RECEIVERS, which claims priority under 35 U.S.C. §119(e) of U.S. Provisional Application No. 60/499,961, filed on Sep. 2, 2003 and titled "A GPS SYSTEM", which application is incorporated by reference herein. [0002] U.S. Provisional Patent Application No. 60/546,816, filed on Feb.
23, 2004, entitled "CONTROL AND FEATURES FOR SATELLITE POSITIONING SYSTEM RECEIVERS", by Mangesh Chansarkar, Sundar Raman, James Brown, Robert Harvey, Peter Michali, Bill Higgins, Paul Underbrink, Henry Falk, Charles Norman, which a claim to priority is made and is incorporated by reference herein. [0003] PCT Patent Application No. PCT/US04/28542,. [0004] U.S. Provisional Patent Application No. 60/547,385, filed on Feb. 23, 2004 entitled "OVERALL SYSTEM ARCHITECTURE AND RELATED FEATURES, by Paul Underbrink, Henry Falk, Charles Norman, Steven A. Gronemeyer, which a claim to priority is made and is incorporated by reference herein. [0005] This application is also related to the following: [0006] U.S. patent application Ser. No. 10/696,522, filed on Oct. 28, 2003 and titled "MEMORY REALLOCATION AND SHARING IN ELECTRONIC SYSTEMS", by Nicolas P. Vantalon, Steven A. Gronemeyer and Vojislav Protic, which a claim to priority is made and is incorporated by reference herein. [0007] U.S. Pat. No. 5,901,171, filed on Apr. 25, 1996 and titled "TRIPLE MULTIPLEXING SPREAD SPECTRUM RECEIVER", by Sanjai Kohli and Steven Chen, which is incorporated by reference herein. [0008] Additional patents that are incorporated by reference and/or claimed to for priority: U.S. patent applications, all of which are currently pending: Ser. No. 09/498,893, Filed on Feb. 7, 2000 as a CIP of U.S. Pat. No. 6,044,105 that issued on Mar. 28, 2000 and was originally filed on Sep. 1, 1998; Ser. No. 09/604,595, Filed on Jun. 27, 2000 as a CIP of Ser. No. 09/498,893, Filed on Feb. 7, 2000 as a CIP of U.S. Pat. No. 6,044,105 that issued on Mar. 28, 2000 and was originally filed on Sep. 1, 1998; Ser. No. 10/369,853, Filed on Feb. 20, 2003, Ser. No. 10/632,051 filed on Jul. 30, 2003 as a CIP of Ser. No. 10/369,853 that was filed on Feb. 20, 2003; Ser. No. 10/712,789, filed Nov. 12, 2003, titled "COMMUNICATION SYSTEM THAT REDUCES AUTO-CORRELATION or CROSS-CORRELATION IN WEAK SIGNALS," by Gregory B. 
Turestslcy, Charles Norman and Henry Falk, which claims priority to U.S. Pat. No. 6,680,695, filed on Jul. 20, 2001, and issued on Jan. 20, 2004,; U.S. patent application Ser. No. 10/775,870, filed on Feb. 10, 2004, titled "LOCATION SERVICES SYSTEM THAT REDUCES AUTO-CORRELATION OR CROSS-CORRELATION IN WEAK SIGNALS," by Gregory B. Turestsky, Charles Norman and Henry Falk, which claims priority to U.S. patent application Ser. No. 10/244,293, titled "LOCATION SERVICES THAT REDUCES AUTO-CORRELATION OR CROSS-CORRELATION IN WEAK SIGNALS," by Gregory B. Turestsky, Charles Norman and Henry Falk, which claims priority to U.S. Pat. No. 6,466,161, filed on Jul. 20, 2001, titled "LOCATION SERVICES THAT REDUCES AUTO-CORRELATION OR CROSS-CORRELATION IN WEAK SIGNALS,"; Ser. No. 10/194,627, Filed Jul. 12, 2002, and titled MULTI-MODE GPS FOR USE WITH WIRELESS NETWORKS, by Ashutosh Pande, Lionel J. Garin, Kanwar Chadha & Gregory B. G Turestsky, that continuation of Ser. No. 10/068,751, filed Feb. 5, 2002 that was a continuation of U.S. Pat. No. 6,389,291, filed on Feb. 8, 2001 that claimed priority to U.S. Provisional Patent Application 60/255,076, Filed on Aug. 14, 2000; Ser. No. 10/385,198, Filed on Mar. 10, 2002 as a Continuation of U.S. Pat. No. 6,542,823, Filed on Apr. 19, 2002 that was continuation of U.S. Pat. No. 6,427,120, Filed Feb. 28, 2001 and claimed priority to U.S. Provisional Patent Application 60/255,076, Filed on Aug. 14, 2000. [0009] U.S. patent applications, all of which are currently pending: Ser. No. 10/155,614, filed May 22, 2002; Ser. No. 09/910,092, filed Jul. 20, 2001; Ser. No. 09/910,404, filed Jul. 20, 2001; Ser. No. 09/909,716, filed Jul. 20, 2001; Ser. No. 10/244,293, filed Sep. 16, 2002; Ser. No. 10/712,789, filed Nov. 12, 2003; Ser. No. 10/666,551, filed Sep. 18, 2003; Ser. No. 09/551,047, filed Apr. 18, 2000; Ser. No. 09/551,276, filed Apr. 18, 2000; Ser. No. 09/551,802, filed Apr. 18, 2000; Ser. No. 09/552,469, filed Apr. 18, 2000; Ser. No. 
09/552,759, filed Apr. 18, 2000; Ser. No. 09/732,956, filed Dec. 7, 2000; Ser. No. 09/735,249, filed Dec. 11, 2000; Ser. No. 09/886,427, filed Jun. 20, 2001; Ser. No. 10/099,497, filed Mar. 13, 2002; Ser. No. 10/101,138, filed Mar. 18, 2002; Ser. No. 10/246,584, filed Sep. 18, 2002; Ser. No. 10/263,333, filed Oct. 2, 2002; Ser. No. 10/309,647, filed Dec. 4, 2002; Ser. No. 10/320,932, filed Dec. 16, 2002; Ser. No. 10/412,146, filed Apr. 11, 2003; Ser. No. 10/423,137, filed Apr. 25, 2003; Ser. No. 10/600,174, filed Jun. 20, 2003; Ser. No. 10/600,190, filed Jun. 20, 2003; Ser. No. 10/644,311, filed Aug. 19, 2003; Ser. No. 10/658,185, filed Sep. 9, 2003; Ser. No. 10/696,522, filed Oct. 28, 2003; Ser. No. 10/706,167, filed Nov. 12, 2003; Ser. No. 10/715,656, filed Nov. 18, 2003; Ser. No. 10/722,694, filed Nov. 24, 2003; Ser. No. 10/762,852, filed Jan. 22, 2004; and the application entitled SIGNAL PROCESSING SYSTEM FOR SATELLITE POSITIONING SIGNALS, filed Feb. 23, 2004 (Attorney Docket Number SIRF.P281.US.U2; Application Number not yet assigned).

BACKGROUND OF THE INVENTION

[0010] 1. Field of the Invention

[0011] This invention relates generally to positioning systems. More specifically, this invention relates to methods and systems for implementing control and features in a satellite positioning system.

[0012] 2. Related Art

[0013] The worldwide utilization of wireless devices such as two-way radios, pagers, portable televisions, personal communication systems ("PCS"), personal digital assistants ("PDAs"), cellular telephones (also known as "mobile phones"), Bluetooth enabled devices, satellite radio receivers and Satellite Positioning Systems ("SPS") such as the United States' Global Positioning System ("GPS"), also known as NAVSTAR, is growing at a rapid pace.
Current trends are calling for the incorporation of SPS services into a broad range of electronic devices and systems, including Personal Digital Assistants (PDAs), cellular telephones, portable computers, automobiles, and the like. Manufacturers constantly strive to reduce costs and produce the most cost-attractive product possible for consumers. [0014] At the same time, the manufacturers attempt to provide a product as rich in features, and as robust and reliable, as possible. To a certain extent, technology and available development time place constraints on what features may be implemented in any given device. Thus, in the past, prior SPS devices have experienced drawbacks and limitations in areas that include, as examples, receiver managers, signal measurements, bit synchronization techniques, integrity monitoring, operational mode switching, measurement interpolation, hardware and software satellite signal tracking loops, and power control. Such drawbacks limit the performance, ease of use and robustness of the GPS enabled electronic devices, in addition to having an impact on sales and consumer desirability. [0015] Therefore, there is a need for overcoming the problems noted above, and others previously experienced.

SUMMARY

[0016] SPS receiver functionality may reside in a device that has additional functionality, such as, for example, wireless communication devices, tracking devices, and emergency location beacons. The SPS functionality in the device may include multiple subsystems that initialize, control and monitor the operation of the SPS functionality. Subsystems in turn may be made up of a number of software modules and hardware components/circuits that accomplish a desired SPS purpose. The subsystems may include an input sample subsystem, signal processing subsystem, FFT subsystem, memory subsystem, sequencer subsystem, and other miscellaneous subsystems.
The subsystems may work together to implement location determination, power control, and configuration of the SPS receiver functionality, communication between subsystems, and communication with the additional functionality. An example of implemented SPS receiver functionality is a GPS receiver and the terms SPS and GPS may be used interchangeably. [0017] The software aspects of the SPS receiver functionality may be implemented in software as groupings of machine instructions that are stored in machine-readable devices, such as, for example, types of ROM (i.e. PROMS, EPROMS, ASICs and within controllers), magnetic storage (hard/floppy disks), and optical storage (CDs, DVDs, LaserDisc). When the machine instructions are executed, the control and features of the GPS receiver are achieved. [0019] The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. In the figures, like reference numerals designate corresponding parts throughout the different views. [0020] FIG. 1 illustrates a block diagram of an embodiment of a GPS receiver. [0021] FIG. 2 is a block diagram showing subsystems of a baseband chip from the GPS receiver of FIG. 1. [0022] FIG. 3 is a block diagram illustrating general data flow between subsystems of the GPS receiver of FIG. 1. [0023] FIG. 4 is a diagram of the division between software and hardware within the GPS receiver of FIG. 1. [0024] FIG. 5 is a module interaction diagram of the GPS receiver of FIG. 1. [0025] FIG. 6 illustrates the task within the ATX control module of FIG. 5. [0026] FIG. 7 illustrates the implementation layers within the GPS receiver of FIG. 1. [0027] FIG. 8 is a flow diagram for the GPS receiver control module of FIG. 5. [0028] FIG. 9 is a sequence diagram of the communication between the different modules of FIG. 5 in order to acquire location measurements. [0029] FIG. 10 is a sequence diagram of a recovery condition between the modules of FIG. 
5. [0030] FIG. 11 is a sequence diagram of acquisition and tracking pre-positioning configuration of the ATX control module of FIG. 5. [0031] FIG. 12 is a sequence drawing of communication with the navigation module and quality of service module of FIG. 5. [0032] FIG. 13 is a sequence drawing of power management via the power manager module of FIG. 5. [0033] FIG. 14 is a sequence drawing of the background task module of FIG. 5. [0034] FIG. 15 is a flow diagram of the signal processing subsystem of FIG. 2. [0035] FIG. 16 is an illustration of the master control state machine of FIG. 15. [0036] FIG. 17 is an illustration of the master control state machine for the FFT subsystem of FIG. 2. [0037] FIG. 18 is a channel sequencing control diagram illustrating the communication between the signal processing subsystem and the FFT subsystem using the memory subsystem. [0038] FIG. 19 is a list of lapping rules to prevent the signal processing subsystem from overwriting memory used by the FFT subsystems of FIG. 18. [0039] FIG. 20 is an illustration of the semaphore and interrupt structure for communication between the subsystems of FIG. 2, software and hardware. [0040] FIG. 21 is a bit level illustration of the semaphore and interrupt mask of the interrupt structure of FIG. 20. [0041] FIG. 22 is a flow diagram of time adjustment of the signal processing subsystem of FIG. 2 within a T1 phase. [0042] FIG. 23 is a flow diagram of the time adjustment of the FFT subsystem of FIG. 2 within a T1 phase. [0043] FIG. 24 is a diagram of the matched filter of FIG. 3 that is configurable by software. [0044] FIG. 25 is a flow diagram of an expert GPS control system that resides in the GPS receiver controller of FIG. 5.

DETAILED DESCRIPTION

[0045] The discussion below is directed to a hardware and software architecture that provides control and features in satellite positioning systems (SPS), such as the United States Global Positioning Satellite System commonly referred to as a GPS system.
Specific features of the architecture include, as examples: SPS initialization of memory, control of data processing, subsystem communication, power control management, and an expert system receiver manager. The architecture and the control and feature systems described below are not limited to the precise implementations described, but may vary from system to system according to the particular needs or design constraints of those systems. [0046] Turning to FIG. 1, a block diagram of an embodiment of a GPS receiver 100 is shown, including a radio frequency ("RF") component 102 and a baseband component 104. In one embodiment, the RF component 102 and the baseband component 104 may interface with additional functionality provided by an original equipment manufacturer ("OEM") subsystem, or "host" processor 106 and OEM memory 108 over a bus 110. As will be described below, the baseband component 104 may communicate with a memory component 112. The memory component 112 may be separate from the baseband component 104. In other implementations the memory component 112 may be implemented within the baseband component 104. The RF component 102 may be directly coupled to an antenna 114 that is dedicated to the RF component 102. In other implementations, the antenna 114 may be shared by the RF component 102 and an OEM receiver (not shown). Optionally, the OEM memory 108 may be separate from the memory component 112 and independent from the baseband component 104. Other possible arrangements may include one or more RF components and one or more baseband components being on one or more chips with all of the required memory and processing power to perform the GPS functions. In yet other implementations, multiple chips may be used to implement the GPS receiver 100 and may be combined with technology such as flip-chip packaging. [0047]. [0048]. [0049] In FIG.
2, a block diagram shows subsystems of an embodiment of the baseband component 104, including an input sample subsystem 202, a signal processor subsystem 204, a FFT subsystem 206, a memory subsystem 208, a sequencer subsystem 210, and another "miscellaneous" subsystem 212. For convenience herein, the subsystems may be referred to as groups of processes or tasks implemented along with associated hardware. The division of tasks or functionality between the subsystems typically is determined by design choice. [0050]. [0051] The input sample subsystem 202 receives signal data from the RF component 102, FIG. 1, and stores the signal data in RAM that is part of the memory subsystem 208, FIG. 2. Raw digitized signal data or minimally processed decimated signal data may be stored in RAM. The storage of the digitized RF signals may occur in one of two ways. The first is that data may be gathered by the input sample subsystem 202 in increments of 20 milliseconds and stored in RAM, with the process being repeated over and over. The other approach is for the input sample subsystem 202 to use a cyclic buffer in RAM. For example, the input sample subsystem 202 would fill a region of the RAM and then overwrite the data upon cycling through the buffers. Such an operational approach would have the software set up the signal processing subsystem 204 and the FFT subsystem 206 in such a way as to process the signal data fast enough before the signal data is overwritten in the cyclic buffer. The operational approach may be selectable, with the software configuring the approach that best meets the needs of the user and RF environment upon the GPS system 100 being initialized. In other embodiments, the operational approach used by the input sample subsystem 202 may be changed during operation of the GPS receiver 100. [0052]. [0053]. [0054] Turning to FIG. 3, a diagram of signal flow between the subsystems of the GPS receiver 100 of FIG. 1 is shown.
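The cyclic-buffer approach described for the input sample subsystem above can be modeled in software as a ring buffer whose writer overwrites the oldest samples on wrap-around. The following is an illustrative sketch only (plain Python with invented names), not the hardware implementation:

```python
class CyclicSampleBuffer:
    """Fixed-size ring buffer: the writer (standing in for the input
    sample subsystem) overwrites the oldest samples once the buffer
    wraps, so a reader (the signal processing stage) must consume
    data before it is overwritten."""

    def __init__(self, size):
        self.buf = [0] * size
        self.write_idx = 0
        self.total_written = 0

    def write(self, samples):
        for s in samples:
            self.buf[self.write_idx] = s
            self.write_idx = (self.write_idx + 1) % len(self.buf)
            self.total_written += 1

    def snapshot(self):
        """Return the current contents, oldest sample first."""
        if self.total_written < len(self.buf):
            return self.buf[:self.write_idx]
        return self.buf[self.write_idx:] + self.buf[:self.write_idx]

ring = CyclicSampleBuffer(4)
ring.write([1, 2, 3, 4, 5, 6])  # samples 1 and 2 are overwritten
print(ring.snapshot())          # [3, 4, 5, 6]
```

The overwrite of the oldest samples is exactly why the signal processing and FFT subsystems must be configured to keep up with the incoming data rate.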
An RF signal, such as a CDMA GPS satellite signal, is received by the RF component 102, FIG. 1, and passed to the input sample processing subsystem 202, FIG. 3. The input sample processing subsystem 202 may include an input sample processing block 302 and a Timer/Automatic Gain Control (AGC) block 303. The Timer/AGC block 303 is made up of a number of counters, timers, and alarm generators that are used to sample input pulses of the input signal. The Timer/AGC block 303 may also create interrupts and start software and hardware functions at known times, as well as conduct synchronization and frequency and phase measurement. The Timer/AGC block 303 may provide the ability to synchronize two systems or subsystems by generating precision time alignment pulses or by accepting input pulses from other systems, in addition to making relative frequency and phase measurements. For example, in systems having a low-power real-time clock (RTC) with a low-cost watch-type crystal, the watch-type crystal may be calibrated to GPS time by the Timer/AGC block 303 in order to use the low-cost, low-power RTC during power control conditions. [0055]. [0056]. [0057]. [0058]. [0059]. [0060] The matched filter 308 may be configurable for various precision levels and code phase fractions. [0061]. [0062]. [0063] The cross-correlator 314 holds the output of the matched filter 308 in complex form (I,Q) for use by a cross-correlator removal process. In the cross-correlator removal process, some weak signal data from the past is required, and strong signal processing is typically completed before the weak signal processing commences. This cross-correlator 314 provides flexibility by allowing more lag in the strong signal processing than other known approaches. [0064]. [0065]. [0066]. [0067]. [0068]. [0069]. [0070] A list of the largest eight peaks may be stored in memory to aid in selection of the largest peak. In other implementations, different numbers of peaks may be stored.
The list may be implemented as a linked list or other searchable data structure. [0071] In one implementation, the architecture obtains data bit synchronization for signals with a carrier-to-noise (C/N0) ratio at or lower than 21 dB Hz. Two different approaches are described below for resolving approximately 1 ms of ambiguity within a 20 ms data bit in the signal transmitted by a given satellite. In other words, the approaches accurately determine, within a 20 ms window, the time at which a data bit transition has occurred to accurately locate a bit transition in the transmitted signal. The approaches include a time domain histogram approach and a frequency domain histogram approach. [0072] In the time domain histogram approach, the architecture creates a time domain histogram from time domain samples of the signal transmitted by a given satellite. In summary, the architecture sums samples taken at a pre-selected rate (e.g., 1 ms samples) over a moving window with a pre-selected length (e.g., 20 ms). Subsequently, twenty different hypotheses are postulated, one for each 1 ms shift of the moving window. A histogram with twenty bins (each corresponding to a different hypothesis) is then built by accumulating the sum of the linear envelope, √(I² + Q²), over successive data bits. The accumulation results in bins in the histogram of differing magnitudes. The bin with the largest magnitude corresponds to the hypothesis that is closest to the true data bit transition. [0073] In one implementation, the architecture may then obtain a refinement of the estimate by performing a multipoint interpolation on the bins. For example, the architecture may perform a three-point interpolation using the largest bin and two adjacent bins, one on each side of the largest bin. [0074] In the frequency domain histogram approach, the architecture takes a moving window of pre-selected length (e.g., 20 ms). The window may include twenty (20) 1 ms samples.
The architecture applies a sample to each of twenty (20) inputs of a multi-point Fast Fourier Transform (FFT) circuit. As one example, the FFT subsystem may determine a 32 point FFT. Subsequently, a pre-selected number (e.g., twenty) of different hypotheses are postulated, for example one hypothesis for each 1 ms shift of the moving window, with twenty corresponding FFT operations, each corresponding to a unique hypothesis. [0075] The architecture may then build a two dimensional histogram. One axis or dimension of the histogram may correspond to the 32 FFT output bins, and the other axis may then correspond to the twenty hypotheses. The histogram may be built by accumulating the linear envelope, √(I² + Q²), over successive data bits. The accumulation results in histogram bins of differing magnitudes. A bin may be a counter or a more complex structure, implemented in either hardware or software. The bin with the largest magnitude corresponds to the hypothesis that is closest to the true data bit transition and to the frequency that is closest to the input carrier frequency. Hence, a search across the frequency dimension gives the architecture the closest frequency. At that frequency, the architecture then searches the hypothesis axis for the best bit synchronization (bit transition) hypothesis. [0076] Simulation results are presented below to highlight the performance of the two approaches noted above. The simulations assume equally likely random data bits of +1/-1. The simulation runs over approximately 25,000 trials, with a statistical analysis set forth below. For each trial, a stopping condition was in place, chosen such that the accumulations occur for longer periods when the signal is weaker and when there are fewer transitions.
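The two dimensional histogram of [0074]-[0075] can likewise be sketched in Python. A direct DFT stands in for the hardware 32-point FFT circuit, and the complex sample stream (alternating bits, no carrier offset, 7 ms true offset) is invented for the example; the frequency-axis-then-hypothesis-axis search follows [0075].

```python
import cmath

def freq_histogram(samples, period=20, nfft=32):
    """Two dimensional histogram sketch: rows are the bit-boundary
    hypotheses (1 ms shifts), columns are the transform output bins.
    Each 20 ms window feeds the transform inputs (zero padded to nfft
    points) and the bin magnitudes are accumulated over data bits."""
    hist = [[0.0] * nfft for _ in range(period)]
    n_bits = len(samples) // period - 1
    for h in range(period):
        for b in range(n_bits):
            window = samples[h + b * period : h + (b + 1) * period]
            for k in range(nfft):  # direct DFT stands in for the FFT circuit
                x = sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / nfft)
                        for t in range(len(window)))
                hist[h][k] += abs(x)
    return hist

# Same invented noise-free alternating-bit stream as the time domain
# example, expressed as complex (I + jQ) samples with no carrier offset.
true_offset = 7
samples = [complex(-1.0)] * true_offset
for k in range(11):
    samples += [complex(1.0 if k % 2 == 0 else -1.0)] * 20

hist = freq_histogram(samples)
best_h, best_k = max(
    ((h, k) for h in range(20) for k in range(32)),
    key=lambda hk: hist[hk[0]][hk[1]])
```

With no carrier offset the winning frequency bin is bin 0 (DC) and the winning hypothesis is the true 7 ms offset, illustrating the joint frequency/bit-boundary search.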
[0077] A time based stopping condition may be determined by accumulating the envelope of the difference between the present data bit and the previous data bit, √((I1 - I2)² + (Q1 - Q2)²), summing over all the hypotheses. Note that the difference is noise only if there is no actual bit transition, and is proportional to twice the signal amplitude if there is a transition. The accumulations terminate when the accumulated difference reaches a preset threshold. At weak signal strengths, the signal amplitude is smaller and takes longer to reach the threshold, and hence the simulation runs longer. [0078] A frequency based stopping condition may be determined by accumulating the envelope as noted above, but with the accumulation performed on the output of the frequency domain histogram. That is, the architecture accumulates the envelope of the difference between the present data bit and the previous data bit (over all frequency bins) and sums over all the hypotheses. [0079] For the results demonstrated below, the time based stopping condition may be employed for both the time and frequency histogram approaches. In the simulations, the true bit transition is randomly generated anywhere in the range of 0-20 ms. If the error between the estimate and the true transition is greater than or equal to 0.5 ms, an error is declared. The error statistics are obtained from a pre-selected number (e.g., 25,000) of trials. The number of transitions (and the time to obtain bit synchronization) is also determined. In addition, a time out condition with a pre-selected duration (e.g., 8 seconds), checked with a time out counter, is employed to prevent the loops from iterating indefinitely. [0080] Table 1, below, provides a comparison of the time domain and frequency domain histogram approaches assuming a known carrier frequency.
The probability of wrong detection of the bit transition may be used to compare and choose between the two algorithms for any particular implementation.

TABLE 1
                                     Probability of wrong detection
C/N0 (dBHz)   Avg. number of       Time           Frequency
              transitions          Histogram      Histogram
45            2.75                 0.00308        0.00316
              3.26                 0.00052        0.00028
              3.73                 0.00004        0.00008
30            10.95                0.00188        0.00188
              21.04                0.00004        0.00004
22            70.7                 0.00136        0.00136
21            75.3                 0.00376        0.00464
20            79                   0.01040        0.01020

[0081] Table 2 shows the detection errors for frequency errors within a bin for the two algorithms.

TABLE 2
                                     Probability of wrong detection
C/N0 (dBHz)   Frequency            Time           Frequency
              error (Hz)           Histogram      Histogram
22            0.0                  0.00           0.00
              8.0                  0.00012        0.00004
              15.0                 0.00824        0.00796
              24.0                 0.829          0.00
              32.0                 1.00           0.00

[0082] As can be seen from Table 1, when the carrier frequency is known, the performance of the two algorithms is similar. Also from Table 2, the performance of the two algorithms is similar for frequency errors within a bin. Note that bin 0 may be centered at 0 Hz and bin 1 may be centered at 31.25 Hz. The differences at 24 Hz and 32 Hz are due to the fact that, in the frequency domain histogram, these frequencies fall in the vicinity of the adjacent bin. [0083] One advantage of the frequency domain approach is that the architecture may employ it as a joint frequency synchronization and bit synchronization. That is, the frequency domain algorithm, while providing the benefits of the time domain approach, also operates over multiple frequency trials in parallel. A performance curve of the frequency domain histogram approach may be plotted for a small frequency offset (2 Hz), where the stopping criterion is the time domain based threshold count. The same threshold value was used for all C/N0 and for all frequency offsets plotted in FIG. 2. [0084] The performance curve is the time to acquire bit synchronization across C/N0 for the case where there is a small frequency error.
At 22 dB, only 1 error was observed out of 25,000 trials. Thus, the performance of the frequency domain histogram approach is similar to the time domain approach across C/N0s for small frequency offsets when using the same stopping criterion. [0085] The time to acquire bit synchronization across C/N0, for the case where the stopping criterion is based on the output of the frequency domain histogram, may likewise be shown as a curve for a small frequency error of 2 Hz. This form of bit synchronization has the advantage of simultaneously performing frequency estimation and bit synchronization. Note that the time domain approach employs a certain amount of information regarding the frequency error to provide reliable bit synchronization (in a serial fashion). In the joint approach, however, the architecture may obtain an estimate of the carrier frequency along with the bit boundary in a parallel fashion. [0086] The architecture further includes interpolation and smoothing circuitry and methods that improve the resolution of carrier frequency and code phase estimates for ranging signals transmitted by the SPS satellites that arrive in weak condition. In one implementation, the architecture employs discrete values of carrier Doppler and code phase, and the interpolation and smoothing techniques improve on the discrete values. For example, the interpolation and smoothing techniques may process the quantized frequency and time bins prepared as noted above with regard to bit synchronization and acquisition in order to improve a carrier frequency determination and a time determination. [0087] The architecture may perform carrier frequency interpolation in different ways.
For example, assume seven 1 ms coherent samples are input to an eight point FFT (with one zero for the remaining input), and 3426 (6*571) times non-coherent integration results in a total time of 24 seconds; the FFT computes eight bin magnitudes, each of resolution 125 Hz. Without interpolation, the bin with the maximum magnitude would ordinarily be chosen, yielding a possible error in the range of -62.5 to 62.5 Hz in the absence of binning errors. Binning errors, which happen at low C/N0s, may result in larger errors. [0088] In the analysis that leads to choosing a frequency interpolation technique, the frequency error is swept across one bin and the estimate for each frequency error is obtained as the bin with the maximum magnitude. The architecture then adjusts the frequency estimate by using an interpolation to improve the estimate, for example, a multi-point (e.g., 3-point) parabolic interpolation. This interpolation may employ the maximum magnitude bin and the magnitude of the adjacent bin on each side of the maximum. [0089] The peak position of a sampled quadratic can be located using the sampled peak and the two adjacent samples. For a sampled quadratic function y, with sampled peak ym and the true peak δ samples from m, the three samples about the peak are related by

ym-1 = a(m - 1 - δ)² + b
ym = a(m - δ)² + b
ym+1 = a(m + 1 - δ)² + b

[0090] Setting m = 0 and solving for δ yields

δ = (ym+1 - ym-1) / (2(2ym - ym+1 - ym-1))

[0091] and m + δ provides an accurate estimate of the peak of the sampled quadratic. [0092] Evaluating code phase interpolation may be performed, in one instance, assuming zero frequency error and a total range of +/-1 chip. Thus, for 0.5 chip correlator spacing, there are five (5) code phase bins, each spaced 0.5 chips apart, i.e., (-1, -0.5, 0, 0.5, 1). For the other correlator spacings, a similar analysis may be performed.
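The three-point parabolic interpolation of [0089]-[0091] reduces to a one-line formula, checked here against a quadratic with a known peak. The coefficients and the 0.3-sample offset are invented for the check; the sign convention used here places the peak at m + δ for this definition of δ.

```python
def parabolic_delta(y_prev, y_mid, y_next):
    """Three-point parabolic interpolation: given the largest sample and
    its two neighbours, return the sub-bin offset of the true peak from
    the middle sample (peak at m + delta in this convention)."""
    return (y_next - y_prev) / (2.0 * (2.0 * y_mid - y_next - y_prev))

# Check against a known quadratic y = a(x - delta)^2 + b with its peak
# 0.3 samples to the right of the middle sample m = 0.
a, b, delta = -1.0, 5.0, 0.3
y = [a * (m - delta) ** 2 + b for m in (-1, 0, 1)]
est = parabolic_delta(*y)
```

For an exact quadratic the recovered offset matches δ to floating point precision; on real (noisy, band-limited) bin magnitudes it is only an estimate.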
[0093] I and Q samples for each of the five assumed time hypotheses may be generated by the following equations:

I = √(2CT/N0)·ρ(τ - τ0) + x
Q = y

[0094] In the evaluation simulations, the code phase error, τ0, may be swept across one bin, for example, -0.25 chips to 0.25 chips, and the estimate for each error may be obtained by identifying the bin with the maximum magnitude. [0095] The architecture may then improve the code phase error using the three (3) point parabolic interpolation explained above, using the maximum magnitude code phase bin (out of the five bins as explained above) and the magnitude of the adjacent bin on each side of the maximum. Consideration may also be taken to account for the correlation between noise samples for bins which are spaced less than a chip apart. [0096] An alternate method of interpolation yields the results shown in FIGS. 10 and 11. In the alternate method, the architecture selects four bins from the five bins of the code phase search space, then performs a four point FFT employing the four bins. The FFT outputs are then zero padded to twice the size and an inverse FFT using the eight point FFT is then carried out. The peak is then estimated from the eight point inverse FFT output. In one implementation, the architecture chooses the four out of the five bins so that the maximum bin and a higher adjacent bin occupy the center of the four bin selection, which may be implemented as an array in hardware or software. [0097] The above techniques can be generalized to any correlator spacing desired. For instance, for a correlator spacing of 1/N chips, there would be a total of 2N+1 bins to cover the range [-1:1] chips. From these 2N+1 bins, the architecture may select 2N bins as described above. The architecture may then perform a 2N FFT on these 2N bins (Step 1208), followed by a padding of 2N zeros to the FFT output, and then a 4N size inverse FFT.
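The alternate interpolation of [0096] can be sketched with a direct DFT standing in for the FFT hardware. This follows a literal reading of the steps (append the zeros to the end of the transform output); the four correlation magnitudes are invented for the example, and practical FFT interpolators often instead insert the zeros between the positive- and negative-frequency halves of the spectrum.

```python
import cmath

def dft(x, n):
    """Direct n-point DFT of sequence x (x may be shorter than n)."""
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(len(x))) for k in range(n)]

def idft(x):
    """Direct inverse DFT with 1/n scaling."""
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def interpolate_code_bins(four_bins):
    """Literal reading of [0096]: 4-point FFT of the selected bins, the
    output zero padded to eight points, then an 8-point inverse FFT; the
    peak is estimated from the magnitudes of the eight outputs."""
    spectrum = dft(four_bins, 4) + [0j] * 4
    fine = [abs(v) for v in idft(spectrum)]
    return fine, max(range(8), key=lambda n: fine[n])

# Hypothetical correlation magnitudes for four selected code-phase bins,
# with the maximum in the third bin.
fine, peak = interpolate_code_bins([0.2, 0.8, 1.0, 0.5])
```

With this end-padding, the even-index outputs reproduce the original four bins scaled by one half, and the odd-index outputs fall between them; the 2N/4N generalization of [0097] follows the same shape.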
[0098] Table 3, below, shows the effects of binning errors for the cases considered above (again assuming 1000 trials). Table 3 provides slightly pessimistic bounds for frequency/code phase values that lie at the edge of a bin, because in reality these scenarios will result in useful energy in the adjacent bins.

TABLE 3
                                Pfa for NCS = 571                      Pfa for NCS = 6*571
                                delta_f = 0     delta_f = fbin_size/2  delta_f = 0     delta_f = fbin_size/2
Carrier doppler interpolation   17 dB: 0.000    17 dB: 0.002           17 dB: 0.000    17 dB: 0.000
(8 pt. FFT)                     15 dB: 0.003    15 dB: 0.051           15 dB: 0.000    15 dB: 0.000
                                12 dB: 0.162    12 dB: 0.296           12 dB: 0.001    12 dB: 0.016

                                delta_tau = 0   delta_tau = -0.20 chip delta_tau = 0   delta_tau = -0.20 chip
Code phase interpolation        17 dB: 0.000    17 dB: 0.184           17 dB: 0.000    17 dB: 0.012
(0.5 chip spacing)              15 dB: 0.006    15 dB: 0.296           15 dB: 0.000    15 dB: 0.071
                                12 dB: 0.165    12 dB: 0.504           12 dB: 0.000    12 dB: 0.226

[0099] The effect of the limited bandwidth on the correlation function may be estimated for the code phase parabolic interpolation. For example, assume a chip spacing of 1/8 chip, no binning errors, and parabolic interpolation, with the correlation triangle filtered by a 6 MHz bandwidth filter and an unsynchronized decimator for the 1/8 chip spacing. Near the peak of the correlation triangle, the variance from the filtered correlation triangle is higher due to the flattening of the triangle. [0100] In one implementation, employing Doppler frequency interpolation, the parabolic interpolation with padding of nine zeros may provide an improvement at weak signal levels. For the code phase interpolation, the zero padded FFT algorithm provides lower errors in the center of the bin compared to the parabolic interpolation, but larger variation in mean values. [0101] The architecture also performs peak assignment, for example, to choose the correct set of peaks (one for each satellite) from a given set of multiple peaks for the satellites.
The technique for peak assignment may operate on input data including aiding information with respect to an assumed reference position (e.g., the reference position at the center of the uncertainty region). The aiding information, as examples, may include a list of visible satellites (pseudo random noise (PRN) IDs), code phase indices (modulo 1023) for each satellite (e.g., at 1 chip resolution), Doppler values, line of sight (los) vectors to the satellites, a maximum horizontal position error (in meters), and a maximum velocity error (in m/s). [0102] Equation 1 shows the measured data:

PRN_1 = {p11, p12, . . . , p1N}
PRN_2 = {p21, p22, . . . , p2N}
. . .
PRN_M = {pM1, pM2, . . . , pMN}     (Equation 1)

[0103] where there are M satellites and a set of N peaks for each satellite. Each peak is characterized by a corresponding code offset modulo 1023 (i.e., 0 ≤ pij ≤ 1022), carrier frequency, and amplitude. In other words, each element in the above {M,N} matrix is characterized by a code offset, frequency, and amplitude parameter. Thus, element pij will be characterized by 3 parameters {cpij, dpij, apij}, where cpij is the code phase index of pij, dpij is the Doppler of pij, and apij is the amplitude of pij. [0104] In performing peak assignment, the architecture may assume that the peaks are arranged in the order of decreasing amplitudes for a given satellite, that the satellites are arranged in descending order of their strengths (e.g., based on the first element for each satellite (i.e., each row)), and that aiding information is available for the PRN ids in the measured data (Equation 1). [0105] The first two assumptions together imply that the first row will correspond to the strongest satellite and that, within the first row, the peaks are arranged in descending amplitude order.
Arranging the data in this manner may improve search speed, in the case where the architecture does not perform an exhaustive search of all possible combinations, while increasing the probability of finding the correct set of peaks. [0106] The peak indices and peak Doppler values may be obtained through the acquisition process (possibly aided). Hence, it is likely that the measured peak indices and Doppler values in Equation 1 lie within a window, bounded by position uncertainty, velocity uncertainty, time uncertainty, and frequency uncertainty. [0107] The architecture, in one implementation, will determine a set of correct peaks according to criteria discussed below. The determined set of peaks (given by [p11 p21 . . . pM1]) may be an array with M elements with each element corresponding to a unique satellite. The array of M elements may be implemented in hardware or software as a data structure such as an array, link list, or other structure that maintains the relationship of the array elements. [0108] In determining the set of correct peaks, the architecture may proceed according to the determination technique. The technique generally includes the steps of: Pruning, Upper Bounds, and Applying a Decision Technique. Pruning preprocess the measured data to reduce the size of the data set (the number of peaks). In the Upper Bounds step, the architecture employs the uncertainty information (position and time) and the LOS vectors to obtain bounds on the uncertainty between the measured index (Doppler) values and the reference index (Doppler) values. During application of the decision technique, the architecture applies a decision technique that employs the uncertainty bounds and the measured data to arrive at a determined set of peaks. 
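The amplitude-threshold pruning step can be sketched as follows. The peak tuples, the choice of k1 = 0.5, and the two-satellite data set are invented for the example; each row is sorted by decreasing amplitude with the strongest satellite first, as the peak-assignment assumptions require.

```python
def prune_peaks(peak_rows, k1=0.5):
    """Amplitude pruning sketch: peaks are (code_phase, doppler,
    amplitude) tuples, rows sorted by decreasing amplitude with the
    strongest satellite first.  Discard any peak whose amplitude is
    below k1 times the amplitude of the overall strongest peak ap11."""
    ap11 = peak_rows[0][0][2]
    return [[p for p in row if p[2] >= k1 * ap11] for row in peak_rows]

# Two hypothetical satellites; with k1 = 0.5 and ap11 = 10.0, any peak
# with amplitude below 5.0 is discarded.
rows = [
    [(100, 250.0, 10.0), (612, 260.0, 6.0), (40, 255.0, 3.0)],
    [(875, -120.0, 7.0), (23, -118.0, 4.0)],
]
pruned = prune_peaks(rows)
```

For strong satellites with one dominant peak, this tends to leave a single element per row, which sharply reduces the number of peak combinations the decision step must examine.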
[0109] In the discussion below, references to single differences are references to differences between satellite i and satellite j, while double differences are the differences of single differences between a user's receiver and reference data (e.g., the aiding information). [0110] In the pruning step, the architecture reduces the size of the measured data, while employing little or minimal processing. In one implementation, the architecture performs pruning by employing the amplitude information (recall that peaks are arranged in order of decreasing amplitudes). [0111] For example, the architecture may discard all peaks that satisfy:

apij < k1 * ap11

[0112] where i in the above equation is the satellite number and j = 2, 3, . . . , 8 denotes the position in the set, and k1 (0 < k1 < 1) [0113] is a threshold constant. Thus, if k1 = 0.5, [0114] the architecture discards peaks which are less than half the size of the strongest peak. For the satellites with the strong signals, where a dominant peak stands out, a set with one element per strong satellite may result. [0115] In the step of applying upper bounds, the architecture employs a priori uncertainty information on the position and velocity to obtain upper bounds on the expected code phase index (Doppler) difference between the values provided at the reference and those measured at the true position. [0116] The range measured by the user from the true position at time t to satellite i is given by:

ri(t) = r̂i(t) - l̂i(t)·Δx + c·bu(t) + vi     (Equation 2)

[0117] where c is the speed of light (m/s), bu is the bias in the receiver's clock (s), and the term vi(t) represents the measurement noise (m). The terms with a caret (^) denote the estimated values (at the reference). The line of sight vectors are given by

l̂i(t) = (si(t) - x) / |si(t) - x|

[0118] Note that in Equation 2 above, ri(t) denotes the range measurement at the true user position u.
The first term on the right side, r̂i(t), represents the range measurement at the center of the uncertainty (reference position). The second term denotes the error due to the uncertainty in true receiver position, and the third term denotes the bias in the receiver time. [0119] Calculating the single differences from two different satellites, i and j:

ri(t) - rj(t) = (r̂i(t) - r̂j(t)) - (l̂i(t) - l̂j(t))·Δx + (vi - vj)     (Equation 3)

[0120] In Equation 3, the left hand side denotes the single difference in ranges between satellites i and j as referenced to the true user position u. The first difference term on the right hand side denotes the range difference between satellite i and satellite j at the center of the uncertainty. The second term represents the error due to the user position uncertainty. Note that this is also a function of the geometry of the satellites. [0121] Rewriting Equation 3 to express the double differences and omitting the measurement noise term gives:

[ri(t) - rj(t)] - [r̂i(t) - r̂j(t)] = (l̂i(t) - l̂j(t))·Δx     (Equation 4)

[0122] Similarly for Doppler:

[di(t) - dj(t)] - [d̂i(t) - d̂j(t)] = (l̂i(t) - l̂j(t))·Δu     (Equation 5)

[0123] where di and d̂i are the measured Doppler at the true user location and the reference location, respectively, due to satellite i, and Δu denotes the uncertainty in user velocity. [0124] Equations 4 and 5 provide the architecture with upper bounds on the double differences (between satellite i and satellite j) in code phase indices (Doppler) between those at the reference position and those measured at the true position. [0125] Next, the architecture applies a decision technique to determine a selected peak for each satellite from the set of peaks obtained as noted above.
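The upper bounds implied by Equations 4 and 5 reduce to the uncertainty magnitude scaled by the magnitude of the difference of the two unit line-of-sight vectors. A minimal sketch, with the LOS vectors and the 3-chip position uncertainty invented for the example:

```python
import math

def double_difference_bound(los_i, los_j, uncertainty):
    """Upper bound in the spirit of Equations 4 and 5: the double
    difference between satellites i and j cannot exceed the uncertainty
    magnitude times |l_i - l_j|, the magnitude of the difference of the
    unit line-of-sight vectors."""
    diff = [a - b for a, b in zip(los_i, los_j)]
    return uncertainty * math.sqrt(sum(d * d for d in diff))

# Hypothetical orthogonal unit LOS vectors and a 3 chip position
# uncertainty give a bound of 3 * sqrt(2) chips.
bound = double_difference_bound((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 3.0)
```

Note that |l_i - l_j| is at most 2 (antipodal satellites) and shrinks toward 0 for satellites in nearly the same direction, so the bound tightens as the geometry degrades, exactly as the geometry remark after Equation 3 suggests.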
In one implementation, the architecture employs a cost vector in arriving at a determined set of peaks. Thus, for example, the architecture may select a set of peaks from the matrix in Equation 1 by forming a column vector (one column), where each element in the column vector corresponds to a unique satellite. [0126] For instance, choosing the first elements for each satellite yields the vector: [0127] [p11 p21 . . . pM1] [0128] For the chosen column vector, the next step is to form the single differences in their code phase indices and Doppler. For the amplitudes, the architecture may form the corresponding pair wise products of the amplitudes:

[(cp11 - cp21)(cp11 - cp31) . . . (cp11 - cpM1)(cp21 - cp31) . . . (cp21 - cpM1) . . . (cp(M-1)1 - cpM1)]
[(dp11 - dp21)(dp11 - dp31) . . . (dp11 - dpM1)(dp21 - dp31) . . . (dp21 - dpM1) . . . (dp(M-1)1 - dpM1)]
[(ap11·ap21)(ap11·ap31) . . . (ap11·apM1)(ap21·ap31) . . . (ap21·apM1) . . . (ap(M-1)1·apM1)]

[0129] The architecture may employ the absolute values of these terms. For the code phase indices, the architecture may employ: [0130] |cp11 - cp21| if |cp11 - cp21| < 512, [0131] 1022 - |cp11 - cp21| otherwise. [0132] Note that the size of the single difference vectors above is (M choose 2). Thus, for M = 5, there are a total of 10 elements in each of the vectors above. The architecture repeats the above step for the estimates at the reference position. Thus, for the given code phase indices (Doppler) at the reference position (a vector of size M), the architecture forms the single differences. In addition, the architecture also forms the magnitude of the single differences for the line of sight vectors. All the resulting vectors are of size (M choose 2) in the current implementation. [0133] The architecture then, by employing the results of the bounding steps, obtains the upper bound on the error differences (double differences) between the values at the true position and the reference position (right hand sides of Equations 4 and 5).
[0134] For the code phase indices, the bound will be: position uncertainty (in chips) * LOS vectors (magnitude of single differences). [0135] For Doppler values, the bound will be: (velocity + position) uncertainty (in Hz) * LOS vectors (magnitude of single differences). [0136] The architecture also obtains the (double difference) error term for code phase indices and Doppler. The error term is the difference in the single difference values at the true position (explained above) and those at the reference position (explained above). Note that the error term vector is also of size (M choose 2). [0137] Next, the architecture compares the error terms against the bounds on an element-by-element basis. If an element of the error term is greater than the corresponding bound element, the architecture increases the cost vector proportional to the inverse of the peak amplitude pair wise products formed as noted above and proportional to the difference between the error terms (double differences) and the upper bound. Note that this weight will be assigned to both elements (i.e., peaks) that were used in forming the single difference. If, instead, the error term is less than the corresponding bound, the cost vector is not changed. The architecture may follow this procedure for all (M choose 2) elements. At the end of this step, the architecture obtains a cost vector of size M. [0138] The architecture may then repeat the same procedure for the Doppler terms without resetting the cost vector. When the cost vector is equal to zero (all M elements identically zero), the architecture may determine that this corresponds to the optimum peak vector, and stop the search. Otherwise, the architecture saves the cost vector, resets it, and returns to choose a new set of peaks as noted above with regard to forming the column vector. [0139] When all combinations of peaks have been searched without finding a zero cost vector, the architecture may select the set of peaks with the lowest cost vector magnitude.
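The decision technique can be sketched end to end in miniature. This is a simplified illustration, not the patented implementation: it handles code phase only (Doppler is omitted), collapses the cost vector to a scalar cost per combination, uses a single bound for all satellite pairs, and the three-satellite data set with one decoy peak per row is invented for the example. The mod-1023 wraparound differencing follows [0130]-[0131].

```python
from itertools import combinations, product

def code_diff(a, b):
    """Single difference of code phase indices with the wraparound rule:
    |a - b| if below 512, else 1022 - |a - b|."""
    d = abs(a - b)
    return d if d < 512 else 1022 - d

def select_peaks(peaks, ref_cp, bound):
    """Simplified decision sketch (code phase only): for every combination
    of one peak per satellite, compare each pairwise double difference
    against the bound; any excess, weighted by the inverse amplitude
    product, accumulates into the cost.  The first zero-cost combination
    (or else the lowest-cost one) is selected."""
    best, best_cost = None, None
    for combo in product(*[range(len(row)) for row in peaks]):
        cost = 0.0
        for i, j in combinations(range(len(peaks)), 2):
            cp_i, ap_i = peaks[i][combo[i]]
            cp_j, ap_j = peaks[j][combo[j]]
            err = abs(code_diff(cp_i, cp_j) - code_diff(ref_cp[i], ref_cp[j]))
            if err > bound:
                cost += (err - bound) / (ap_i * ap_j)
        if cost == 0.0:
            return combo, 0.0  # optimum peak vector found, stop searching
        if best_cost is None or cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

# Three hypothetical satellites as (code_phase, amplitude) peaks: the
# first peak in each row is consistent with the reference geometry, the
# second is a decoy roughly 300 chips away.
peaks = [[(100, 9.0), (400, 5.0)],
         [(520, 8.0), (820, 4.0)],
         [(940, 6.0), (217, 3.0)]]
ref_cp = [102, 523, 941]
combo, cost = select_peaks(peaks, ref_cp, bound=10.0)
```

With this data the consistent combination (the first peak of each satellite) has every double difference within the bound and is returned with zero cost, while any combination containing a decoy accumulates a positive cost.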
In the case of a tie, the architecture may select the set of peaks that occurs, for example, first in the search process. [0140] The discussion below details the tracking system for the architecture for strong and medium signal operation. The following abbreviations may be used below: Alpha, Beta: generic filter coefficients that may take different values at different instances; FFT: Fast Fourier Transform; SPS: Satellite Positioning System; HWTL: Hardware Tracking Loop; NCO: Numerically Controlled Oscillator; PDI: Pre Detection Integration; RAM: Random Access Memory; S_Gain: filtered signal amplitude estimate employed to normalize the tracking loops; SWTL: Software Tracking Loop; T1: basic time epoch for Subsystem 2; Threshold, Threshold1, Threshold2: generic threshold values that may take different values at different times. [0141] The hardware and software tracking loops and the acquisition plan 335 reside in the memory subsystem 208, in addition to the track history, bit sync, I/Q phase, and 100 ms report data in RAM 304, 314, 320, 332, 334, respectively. The hardware tracking loops implement simple tracking loop equations in hardware and are controlled by software setting various parameters in the channel records. In some cases of extreme signal conditions (very weak signals or widely varying dynamic conditions), it may be preferable to run more complex signal tracking algorithms as opposed to simple tracking loops. In such cases, the hardware tracking loop will be aided by the software tracking loop to obtain enhanced performance. The capability to have both hardware and software tracking loops provides this flexibility. [0142] The coherent data may be used by software for determining parameter changes in the hardware and software tracking loops. An advantage over the prior art is the ability to access both the coherent data and the phase history data with respect to time.
The use of this data enables the GPS receiver 100 to adjust the processing of the data signals, and the data may also act as an indication of the quality of operation of the GPS receiver 100. [0143] The tracking loops may be split into two components: a hardware tracking loop and a software tracking loop. The hardware tracking loop operates at a high rate of speed and is partially controlled by the NCO and counters. The software tracking loop operates at a lower speed and may use more complicated algorithms than the hardware tracking loop. Both tracking loops make use of parameters contained in the memory subsystem 208. The use of two types of tracking loops enables a level of redundancy and monitoring of the operation of the hardware, while increasing the efficiency of the hardware tracking loop based upon the algorithms used by the software tracking loop. [0144] As previously discussed, an area of memory may be divided into channels that are groupings of input signal data. The channels may then be processed by the signal processing subsystem 204, followed by the FFT subsystem 206, sequentially. The signal data is passed between subsystems via the memory subsystem 208. The state of the different channels is contained in the channel state RAM 338. [0145] The memory subsystem 208 may further have memory that is rewritable, such as RAM, or permanent, such as ROM, for storing machine-readable encoded instructions. The terms RAM and ROM are used to describe the operation of a type of memory that may be implemented using specific types of memory such as SDRAM, DDR, PROM, EPROM, or EEPROM memory, to give but a few examples. The machine-readable instructions are typically encoded as modules that, when executed, control numerous functions of the GPS receiver 100 of FIG. 1. Examples of such modules are control loops, expert systems, power control, tracking loops, and types of acquisition.
Similarly, other modules may control the different internal and external interfaces and messaging between subsystems and between the GPS receiver and OEM equipment. [0146] The sequencer subsystem 210 has a sequencer controller 336 that controls a sequencer that oversees the operation of the signal processing subsystem 204 and another sequencer that oversees the operation of the FFT subsystem 206. Rules are implemented that keep the two sequencers synchronized. The rules are commonly called lapping rules and prevent one sequencer from advancing to another channel before the other sequencer has processed that channel's data. In other implementations, a single sequencer may be implemented to control the separate subsystems. [0147] Turning to FIG. 4, a diagram 400 of the division between hardware and software processing of the data signal within the GPS receiver 100 of FIG. 1 is shown. The diagram 400 is divided between a hardware side 402 and a software side 404. On the hardware side 402, there may be the signal processing subsystem 204, the FFT subsystem 206, the non-coherent summation and track history buffer (RAM) 334, and a hardware tracking loop 406. On the software side there may be a software tracking loop 408. In other implementations, there may be more or fewer blocks shown in a diagram such as FIG. 4. The purpose of FIG. 4 is to provide a conceptual overview of how, once the hardware is set up, there is limited direct interaction from software. There may be numerous other software processes and tasks that are not shown in FIG. 4, such as, for example, the expert system and power control, to name but a few. [0148] The GPS data signal may be processed by the signal processing subsystem 204 and passed to the FFT subsystem 206 at a T1 interval. The output of the FFT subsystem 206 may be I and Q data, and time marks, at the rate at which PDIs (the amount of data from the coherent buffers that is needed by the FFT in order to operate) become available.
The data may be stored in the NCS/TH buffer 334 and sent to the hardware tracking layer 406 that implements the hardware tracking loop. The hardware tracking layer 406 may then feed back hardware NCO corrections that can be used by the carrier and code NCO 312, FIG. 3. [0149] The hardware side 402, FIG. 4, communicates with the software side 404 via memory, such as, for example, when the NCS/TH buffer 334 is accessed by the software tracking loop 408. The software tracking loop 408 may operate at a lower speed than the hardware tracking loop and spend more time processing the data contained in memory in order to derive NCO corrections and 100 ms aiding information. Such information is placed into a memory that is accessed by the hardware tracking layer 406 and in turn picked up by the signal processing subsystem 204 at an appropriate time, such as during a context change (switching channels within the memory). [0150] Turning to FIG. 5, a module interaction diagram 500 of the GPS receiver 100 of FIG. 1 is shown. The control module of the GPS receiver 100 is referred to as the GPS receiver control module 502. The GPS receiver control module 502 communicates with numerous other modules, including a reset module 504, QoS module 506, visible satellite vehicle (SV) list module 508, SV data module 510, aiding module 512, navigation (NAV) module 514, DGPS module 516, acquisition tracking cross-coordinator (ATX) control manager module 518, power management module 520, data control module 522, Best Estimate Possible (BEP) module 524, UI GPS module 526, module interface (MI) module 528, and background/periodic task modules 530. [0151] The GPS receiver control module 502 may be implemented as a processing loop that continually cycles to process communication with the other modules. In another implementation, an interrupt approach may be used to communicate with the other modules and/or hardware components.
Furthermore, a combination of a processing loop and interrupts may be employed to communicate with the other modules and hardware that make up the different subsystems. [0152] The reset module 504 is responsible for making sure the GPS receiver 100 is reset and initialized properly. The reset module 504 initializes all the subsystems, including the memory subsystem 208, upon a reset event occurring or upon initial power-up. The reset module 504 may obey a command from the GPS receiver control module 502. A command being issued by the GPS receiver control module 502 to the reset module 504 causes the reset module 504 to clear selected memory locations and buffers and initialize them with known values. A reset command may be received at the GPS receiver control module 502 from the user interface via the UI GPS module 526 or upon power being initially applied to the GPS receiver 100. The reset module 504 may initiate a partial reset or a full reset of the GPS receiver 100. Upon a partial reset, the SV data module 510, DGPS module 516, and the BEP module 524 may continue to operate and receive data updates from external sources. [0153] The QoS module 506 may be responsible for determining the quality of service available at the GPS receiver 100. The QoS module 506 may be provided with information from other modules, such as information from the visible SV list module 508. If a location determination would be unavailable under the current environment, the QoS module 506 may direct that additional information be employed, such as information provided by the aiding module 512. The visible SV list module 508 may maintain a list of the SVs that may be tracked by the GPS receiver 100. The ATX control manager module 518 may track the signals from these SVs and work with the other subsystems. [0154] The GPS receiver control module 502 may update the SV data module 510, which stores the almanac data received via the GPS receiver control module 502 from a satellite vehicle.
The almanac data may contain information about the satellite vehicles that make up a constellation of satellite vehicles. The satellite vehicle data module 510 may also contain additional data that is associated with satellite vehicles or the acquisition of satellite vehicles. In other embodiments, the visible SV list module 508 and satellite vehicle data module 510 may be combined into a single data structure within a single module. [0155] The aiding module 512 may have location data that is received from another device, such as a location server or other wireless/GPS device that may communicate over another network. Examples of aiding data may include, but are not limited to, predetermined position, clock frequency, SV location information, and almanac data. The GPS receiver control module 502 may access the aiding module 512 to retrieve or store aiding information. The aiding module 512 may be continually updated by the GPS receiver 100 and processed into navigation data. In other implementations, the OEM portion of the GPS receiver 100 may provide the communication connection and data for the aiding module 512. [0156] The NAV module 514 formats the navigation data for use by other modules and the system. The NAV module 514 uses measurement data from other modules and determines a NAV State that includes, but is not limited to, User Position, User Clock Bias, User Velocity, User Clock Drift, User Position Uncertainty, User Clock Uncertainty, User Velocity Uncertainty, and User Clock Drift Uncertainty.
An example of such a measurement data format in pseudocode is:

TABLE-US-00004
NAV Measurement Structure
{
  UINT32 Timetag;              // Acquisition clock, lsw
  UINT32 Timetag2;             // Acquisition clock, msw
  double measTOW;              // User time
  UBYTE  SVID;                 // Sat ID for each channel
  double Pseudorange;          // Pseudorange in meters
  float  CarrierFreq;          // Pseudorange rate in meters/second
  double CarrierPhase;         // Integrated carrier phase in meters
  short  TimeInTrack;          // Count, in milliseconds, of how long the SV is in track
  UBYTE  SyncFlags;            // Two bit-fields reporting the integration interval and
                               // sync achieved for the channel; Bit 0: coherent integ.
                               // interval (0=2 ms, 1=10 ms); Bits 1,2: sync
  UBYTE  CtoN[10];             // Average signal power in dB-Hz for each 100 ms
  UINT16 DeltaRangeInterval;   // Interval for the preceding second; a value of zero
                               // indicates an AFC measurement or no measurement in the
                               // CarrierFreq field for this channel
  INT16  MeanDeltaRangeTime;   // Mean time of the delta-pseudorange interval in
                               // milliseconds, measured from the end of the interval
                               // backwards
  INT16  ExtrapolationTime;    // Pseudorange extrapolation time in milliseconds, to
                               // reach a common time tag value
  UBYTE  PhaseErrorCount;      // Count of phase errors greater than 60 degrees measured
                               // in the preceding second (as defined for each channel)
  UBYTE  LowPowerCount;        // Count of power measurements less than 28 dB-Hz in the
                               // preceding second (as defined for each channel)
#ifdef FALSE_LOC
  double TruRange;             /* true range */
  long   GPSSecond;            /* integer GPS seconds */
  long   ClockOffset;          /* clock offset in Hz */
#endif
  char   MeasurementConsistency; // Flag to indicate measurements are consistent
  double ValidityTime;         // Receiver time to validity
  short  PRQuality;            // Pseudorange quality
  float  PRnoise;              // 1-sigma expected PR noise in meters
  float  PRRnoise;             // 1-sigma expected PRR noise in meters/second
  short  PRRQuality;           // Quality measurement of the PRR
  float  CarrierPhaseNoise;    // 1-sigma expected carrier phase noise in meters
  short  PowerLockCount;       // Count of power lock loss in 1 second, 0-50
  short  CarrierLockCount;     // Count of phase lock loss in 1 second, 0-50
  short  msAmbiguity;          // Millisecond ambiguity on measurement
} tNavMeas;

Additional formatting may be included for DGPS and WAAS position location data. The NAV module 514 receives data from the FFT subsystem 206 and determines the position of the GPS receiver 100. [0157] The GPS receiver control module 502 also may communicate with the DGPS module 516. The DGPS module 516 functions with a hardware receiver to receive a DGPS signal. The DGPS signal contains GPS correction data that enables the GPS receiver 100 to more precisely determine its location. The DGPS module 516 may also assist in better location determination when selective availability is active. The DGPS corrections used in the DGPS module 516 may also have a specific format, such as the RTCM or RTCA formats. [0158] The ATX control manager module 518 interfaces with the hardware that processes received signals from selected satellite vehicles. Most modules do not interface with hardware directly; rather, the modules may access common memory that the hardware accesses. The ATX control manager module 518 is an exception and may use a sub-module to directly interface with the hardware of the signal processing subsystem 204 and FFT subsystem 206. In other implementations, numerous modules may interface directly with the hardware. [0159] The GPS receiver control module 502 communicates with a power manager module 520. The power manager module 520 may receive information from the power supply hardware, such as battery power levels, via memory. The power manager module 520 may also have the ability to turn on and off different subsystems in order to conserve energy based upon the quality of received GPS signals, the UI GPS module, the state of processing, or power levels.
The power manager module 520 may also have the ability to put the GPS receiver 100 to sleep, including the RF block 102. A real time clock provides timing signals that enable the GPS receiver to be awoken rapidly and configured to continue processing location data. [0160] The power manager module 520 may also track power information, such as the power level of batteries, and enables the information to be accessed by the GPS receiver control module 502. The information may be accessed by the power manager module 520 sending messages to the GPS receiver control module 502. In other implementations, the GPS receiver control module 502 may query the power manager module 520. The GPS receiver control module 502 may then change the operating mode of the GPS receiver 100 or take other actions based on the amount of power available. [0161] An interface may allow power control of subsystems, such as the RF subsystem and clocks. The various subsystems involved in the baseband processing may be idled under software control. For example, if there is no data to process for more than five channels, then the software that implements the power manager module 520 will set only five channels in the channel records. The hardware will then execute only those five channels in sequence, thus reducing memory accesses. The sequencer can be shut down under software control and will stop baseband processing completely. The RTC has an independent power source that enables the RTC counters to be active and the RTC clocks to be active for power-up operation. The power control approach allows for flexibility in implementing power control in the power manager module 520. Thus the power utilization can be optimized, based on information available in the GPS receiver 100, to minimize the power needed per position fix. [0162] The data control module 522 controls the access to the non-volatile data that is stored in the non-volatile memory (NVM) across resets or power downs.
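The channel-record power optimization of paragraph [0161], where only channels with data to process are written into the channel records, can be sketched as follows. The data structures and names here are illustrative assumptions, not the patented record format.

```python
def build_channel_records(channel_data):
    """channel_data maps channel id -> True if there is data to process.
    Only channels with data are placed in the channel records, so the
    hardware sequencer executes (and touches memory for) only those
    channels; an empty record list lets the sequencer be shut down."""
    records = [ch for ch, has_data in sorted(channel_data.items()) if has_data]
    sequencer_on = len(records) > 0
    return records, sequencer_on
```

With, say, twelve physical channels but data on only five, the sequencer walks a five-entry record list instead of twelve, reducing memory accesses per cycle.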
The data control module 522 may also provide verification of the memory and integrity of the stored data (i.e., checksums) upon powering/waking up the GPS receiver 100. [0163] The BEP module 524 is a database formed by data structures of information being currently used by the GPS receiver 100. Information received at the GPS receiver 100 that affects the timing control of the different subsystems is maintained in the BEP module 524. [0164] The GPS receiver 100 may also have a user interface that receives input from external sources, such as buttons, touch screens, mice, or keyboards. The user interface communicates via the UI GPS module 526, which initially receives the external inputs from a user. The inputs are processed and the GPS receiver control module 502 takes appropriate action. The GPS receiver control module 502 may receive user input data via event flags that are set by the UI GPS module 526 and may send information by setting another event flag. In other implementations, the GPS receiver control module 502 may receive messages from the UI GPS module 526 with user interface data. [0165] The MI module 528 encapsulates the GPS functionality of the other modules and isolates the GPS functionality from the outside world and resources. The outside world may be OEM equipment that is collocated within the GPS receiver 100 or may be devices that are interfaced to the GPS receiver 100. [0166] In addition to the other modules, numerous background modules 530 or tasks may be executing at any time. Examples of such background modules include trash collection (reclaiming used resources), watchdog timers, interrupt handlers, and hardware monitoring, to name but a few. [0167] Next, in FIG. 6, an illustration 600 of the ATX control sub-module 602 within the ATX control manager module 518 of FIG. 5 is shown. The ATX control manager module 518 communicates with the ATX control sub-module 602.
The ATX control sub-module 602 additionally communicates with the ATX task 604; a controller 606, such as a digital signal processor, microprocessor, or digital logic circuit executing a state machine, to give but a few examples; the cross-correlator task 608; tracking task 610; acquisition task 612; reset task 614; and startup task 616. The tasks may be sub-modules within the ATX control manager module 518 that may interface with hardware such as the controller 606 or cross-correlator 608. Or, tasks may be implemented in software that executes upon certain conditions occurring, such as the reset task 614 and startup task 616. [0168] Upon startup of the GPS receiver 100, the ATX control manager module 518 activates or runs the startup task 616 via the ATX control sub-module 602. The startup task 616 initializes the other tasks to known states. The ATX task 604 then proceeds to interface with the controller 606, cross-correlator task 608, tracking task 610, and acquisition task 612 and the hardware in the signal processing subsystem 204, and prepares to process received positioning signals. In other implementations, there may be fewer or more tasks within the ATX control manager module 518, and tasks may be combined or subdivided into more or fewer tasks. [0169] In FIG. 7, an illustration 700 of the implementation layers of the GPS receiver 100 of FIG. 1 is shown. The application layer 702 is typically software 404 that is grouped into modules or tasks that are associated with the operation of the GPS receiver 100. An example of a software module is the ATX control manager module 518. The ATX control manager module communicates with the ATX control sub-module 602 that resides in a platform layer 704. The platform layer 704 is a layer between the application layer 702 and the hardware layer 706. [0170] The platform layer 704 is where the majority of the ATX functions reside.
The ATX control sub-module 602 is able to receive messages from the ATX control manager module 518 residing in the application layer 702, and from the reset task 614 and startup task 616 that reside in the platform layer 704. The ATX control sub-module 602 also communicates with the cross-correlator task 608, tracking task 610, and the acquisition task 612. The ATX control sub-module 602 may also communicate with the hardware, such as the controller 606 and HW timers 303, that resides in the hardware layer 706. [0171] Turning to FIG. 8, a flow diagram 800 of the GPS receiver control module 502 of FIG. 5 is shown. The flow diagram 800 starts when a receiver controller task is started 802 in the GPS receiver control module 502. The receiver controller task initializes local variables and processes the receiver control event queue, in addition to servicing error conditions from the UI GPS module 526 and the background task 530, in block 804. [0172] The ATX control manager module 518 transfers measurements, WAAS augmentation data, and status information from the tracker hardware to the GPS receiver control module 502 using the interface provided by the ATX control manager module 518 in step 806. Based upon the measurement conditions, the ATX control manager module 518 determines if recovery information may be generated and if a recovery situation is identified. [0173] The GPS receiver control module 502 then accesses the power management module 520 in step 808 for trickle-power type power management and for power management when the GPS portion of the GPS receiver 100 is off. The power management module 520 is accessed after the current measurement information has been retrieved from the ATX control manager module 518. [0174] If a recovery situation is indicated by the ATX control manager module 518, then the GPS receiver control module 502 performs internal aiding based on data received from the ATX control manager module 518 in step 810.
The BEP module 524, SV data module 510, and the visible satellites in the visible SV list module 508 are accessed and updated as needed for the recovery. [0175] New prepositioning data is generated by the GPS receiver control module 502 for use by the ATX control manager module 518, based on predefined events, in step 812. The ATX control manager module 518 may access the forced updating module 524 in order to get prepositioning data. [0176] The GPS receiver control module 502 may cause the NAV module 514 to execute and output navigation data, if available, in step 814. This may trigger the UI GPS module 526 and aiding module 512 and result in formatting of the navigation data. The aiding module 512 may cause the QoS module 506 to execute and make the appropriate data available to other modules and subsystems. The results generated by the NAV module 514 may be used the next time the receiver controller task executes. Generally, the NAV module 514 will execute on the following events: after a trickle power on-period, before the first NAV module 514 execution (as soon as the receiver has sufficient measurements), and on 1000 ms hardware timer boundaries after the first NAV module 514 execution. The GPS receiver control module 502 may then cause the power manager module 520 to be placed in an advanced power management type of power control, and a background module or task 530 may handle the "stay awake" maintenance for trickle power control in step 816. [0177] The GPS receiver control module 502 services any events that may be in the event queue and schedules the running of the navigation process that resides in the NAV module 514 to execute at predetermined periods (such as every 1000 ms) 820. Processing is shown as completing 822, but in practice processing may continuously execute or execute upon initialization and/or during a reset condition. [0178] In FIG. 9, a sequence diagram 900 of the communication between the different modules of FIG.
5 in order to acquire location measurements is shown. The GPS receiver control module 502 requests an update of raw location measurements 902 from the ATX control manager module 518. The ATX control manager module 518 then accesses the ATX control sub-module 602, which accesses the hardware to get the raw location measurements. The ATX control sub-module 602 returns the updates 906 and the ATX control manager module 518 sends an update status 908 to the GPS receiver control module 502. The GPS receiver may be configured to seek updates every 100 ms. The GPS receiver control module 502 then may request the raw location measurements by sending a "Req Raw Meas" message to the ATX control manager module 518. The ATX control manager module 518 then returns the raw location measurements in a "Return Raw Meas" message 910. The GPS receiver control module 502 then processes the recovery conditions 912 based upon the ATX tracking status indicated in the "Return Raw Meas". The GPS receiver control module 502 then indicates with a "Push OK To Send" 914 message to the UI GPS module 526 that the raw location measurements are available. [0179] If new satellite vehicle data is available, as indicated by the ATX control manager module 518, then the GPS receiver control module 502 sends the "Update SVData" message 916 to the satellite vehicle data module 510. The satellite vehicle data module 510 then requests the SV data from the ATX control manager module 518 by sending the "Req SVData" message 918. The ATX control manager module 518 returns the SV data by sending the "Return SVData" message 920 to the SV data module 510. The SV data module 510 then sends an updated status message 922 to the GPS receiver control module 502. The status message 922 from the SV data module 510 results in the GPS receiver control module 502 generating an event that is processed by the UI GPS module 526 identifying that new ephemeris data 924 and new almanac data 926 may be available.
[0180] If new satellite based augmentation system (SBAS) data is available, then the GPS receiver control module 502 sends an "update SBAS" message 926 to the DGPS module 516. The DGPS module 516 then sends a "Req SBASdata" message 928 to the ATX control manager module 518. The ATX control manager module 518 processes the "Req SBASdata" message 930 and responds with a "Return SBASdata" message 932 to the DGPS module 516. [0181] Turning to FIG. 10, a sequence diagram 1000 of a recovery condition between the modules of FIG. 5 is illustrated. A series of tests for aiding sources is done prior to recovery and conditions are set based on the aiding source. In the current implementation, after all tests are completed, the respective modules are called. If a recovery condition exists, the GPS receiver control module 502 sends a "BEP Recovery" message 1002 to the BEP module 524. The BEP module 524 then responds with a "recovery status" message 1004 that is sent to the GPS receiver control module 502. The GPS receiver control module 502 updates the BEP module 524 when a "BEP Update" message 1006 is sent to the BEP module 524. The BEP module 524 then responds to the GPS receiver control module 502 by sending a "BEP update Response" message 1008. Similarly, the GPS receiver control module 502 sends a "VL Update" message 1014 to update the visible SV list module 508. The visible SV list module 508 is then updated and an acknowledgment 1016 is returned to the GPS receiver control module 502. Examples of some data integrity conditions may include a recovery condition being initiated by the GPS receiver controller, external aiding being available, internal aiding being available, and the frequency clock needing updating. [0182] In FIG. 11, a sequence diagram 1100 of the acquisition and tracking pre-positioning configuration of the ATX control manager module 518 of FIG. 5 is shown.
The GPS receiver control module 502 sends "get sequence number" messages 1102, 1104, and 1106 to the BEP module 524, SV data module 510, and the visible SV list module 508. The BEP module 524, SV data module 510, and the visible SV list module 508 each respond to the GPS receiver control module 502, respectively, with a "sequence response" message 1108, 1110, and 1112. [0183] The GPS receiver control module 502 then determines if any of the sequence numbers received from the other modules have changed. If any of the sequence numbers have changed, or five seconds have passed, then an "update prepositioning" message 1114 is sent to the BEP module 524. The BEP module 524 responds with an acknowledge message 1116. [0184] The GPS receiver control module 502 sends a "do ATX prepositioning" message 1118 to the ATX control manager module 518. The ATX control manager module 518 sends a "get visible list" message 1120 to the visible SV list module 508 in response to the "do ATX prepositioning" message 1118. The visible SV list module 508 sends a response message 1122 to the ATX control manager module 518 containing a list of the visible satellites. The ATX control manager module 518 then gets or clears the bit map of new ephemeris data in the SV data module 510 by sending a "get/clear" message 1124 to the SV data module 510. The SV data module 510 then sends an "acknowledge" message 1126 back to the ATX control manager module 518. [0185] The ATX control manager module 518 also accesses the BEP module 524 with a "get preposition time" message 1128 and receives the preposition time in a "preposition time response" message 1130. The memory mode is determined by the ATX control manager module 518 sending the "get memory mode" message 1132 to the data control module 522 and receiving a "memory mode response" message 1134.
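The change-detection trigger of paragraph [0183], where prepositioning is updated when any module's sequence number changes or a five-second interval elapses, can be sketched as follows; the function and parameter names are assumptions for illustration.

```python
def prepositioning_update_due(prev_seqs, new_seqs, elapsed_s, timeout_s=5.0):
    """Return True if any module's sequence number changed since the last
    check, or if the timeout has elapsed since the last update.

    prev_seqs/new_seqs map module name -> sequence number, e.g. for the
    BEP, SV data, and visible SV list modules."""
    changed = any(prev_seqs.get(name) != seq for name, seq in new_seqs.items())
    return changed or elapsed_s >= timeout_s
```

Sequence numbers let the control module detect "something changed" cheaply without comparing the module contents themselves; the timeout guarantees a periodic refresh even when nothing changes.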
The ATX control manager module 518 may also access the DGPS module 516 via a "get SBAS PRN number" message 1136 (that gets the satellite based augmentation system pseudo random number), and the DGPS data is received at the ATX control manager module 518 in a "SBAS PRN Response" message 1138. The accessing of prepositioning data from other modules by the ATX control manager module 518 may occur simultaneously or in any order. Once the prepositioning data has been acquired by the ATX control manager module 518, the ATX control manager module 518 accesses the ATX control sub-module 602 with the ATX command 1140. [0186] Next, in FIG. 12, a sequence drawing 1200 of the navigation module 514 and quality of service module 506 of FIG. 5 is shown. The GPS receiver control module 502 sends a "condition NAV meas" message 1202 to the ATX control manager module 518 to get navigation measurements upon the initial location fix of the GPS receiver 100 and at one-second intervals thereafter. In other implementations, the navigation measurements may occur upon other events or at other periods of time. The ATX control manager module 518 responds with the navigation measurements 1204. The GPS receiver control module 502 then triggers a MEASUPDATE event 1206 that is acted upon by the UI GPS module 526. The GPS receiver control module 502 also sends a "DGPS/SBAS One Second Processing" message 1208 to the DGPS module 516. The DGPS module 516 may respond with SBAS data 1210 to the GPS receiver control module 502. The GPS receiver control module then sends the NAV condition and SBAS data to the navigation module 514 in a "NL_Main" message 1212. [0187] The NAV module 514 then acts on the received information and gets navigation data from the ATX control manager 518 by sending a "Get Navigation Measurement" message 1214. The ATX control manager 518 accesses the common memory and retrieves the navigation data. The ATX control manager 518 then responds 1216 to the NAV module 514.
The NAV module 514 then sends a notification 1218 to the GPS receiver control module 502. [0188] The GPS receiver control module 502 then sends a "NL get Status" message 1220 to the NAV module 514. The NAV module 514 responds with a message 1222 that contains the status of information for the NAV module 514. The GPS receiver control module 502 also sends a "Get Status" message 1224 to the NAV module 514, which responds with the "NavState" message 1226. [0189] The GPS receiver control module 502 may receive aiding information every thirty seconds (in other embodiments this may be asynchronous or at variable rates), if such data is available, by sending "Aiding" messages 1228 to the aiding module 512 to update the position data with a "BEP update position" message 1230 to the BEP module 524. The GPS receiver control module 502 also sends a message 1232 to the aiding module 512 to update the frequency data in the BEP module 524. Similarly, an aiding message 1234 is sent to the aiding module 512 to update the time in the BEP module 524 with the "BEP update time" message 1236. [0190] The GPS receiver control module 502 may also send a "RTC Set from System" message 1238 to the real time clock (RTC) 116. The preferred method is for the BEP module 524 to call the RTC 116 upon updates occurring. The RTC 116 then requests the time information by sending the "Get Time, Clock" message 1240 to the BEP module 524. The BEP module 524 responds with time adjustments 1242 to the RTC 116. [0191] Upon the navigation processing being complete, the GPS receiver control module 502 triggers a "NAV COMPLETE" event 1244 to signal to the UI GPS module 526 that navigation processing is complete. Furthermore, the GPS receiver control module 502 sends a "QoS Service" message to the QoS module 506 to update the service parameters for the GPS receiver 100. The QoS module 506 sends a "NL GetState" message 1246 to the navigation module 514.
The NAV module 514 responds 1248 with state information to the QoS module 506. Upon getting the navigation state information, the QoS module 506 sends a "Get Nav Meas" message 1250 to the ATX control manager module 518. The ATX control manager module 518 responds back to the QoS module 506 with navigation measurements 1252 and then sends information 1254 that may be useful in aiding to the aiding module 512. [0192] Turning to FIG. 13, a sequence drawing 1300 of the power management with the power manager module 520 of FIG. 5 is shown. The GPS receiver control module 502 sends a "PM_APM" message 1302 to the power manager module 520 to activate advanced power management (APM). The APM power control and trickle power control maintenance may occur every second in the current implementation, but in other implementations the time period for maintenance may be a different period. The power manager module 520 sends a "Get Acq Status" message 1304 to the ATX control manager module 518. The ATX control manager module 518 responds back with the acquisition status 1306. The returned status may be APM power down now, TP power down now, or TP stay awake. The status is then relayed from the power manager module 520 to the GPS receiver control module 502 with a "Return Status" message 1308. The GPS receiver control module 502 then signals with a "PushOKtoFix(FALSE)" event 1310 to the power manager module 520. [0193] The GPS receiver control module 502 then may send a "LP Cycle Finished" message 1312 and a "LP OKToSleep" message 1314 to the power manager module 520, notifying the power manager module 520 that the GPS receiver 100 is ready for power control. The power manager module 520 then responds 1316 to the GPS receiver control module 502. The GPS receiver control module 502 then sends a "LP Set Processor Sleep" message 1318 to the power manager module 520 acknowledging that the power control has occurred. [0194] In FIG.
14, a sequence drawing 1400 of the background task module 530 of FIG. 5 is shown. The GPS receiver control module 502 sends an "Os Schedule Task" message 1402 to the operating system (OS) services module 1402. The "Os Schedule Task" message 1402 may be sent every second in the current implementation. The OS services module 1402 then sends an "Activate Task" message 1404 to the BG task module 530. The BG task module 530 then sends a "SVSC Maintenance" message 1406 to the SV data module 510. The "SVSC Maintenance" message 1406 causes the SV data module 510 to be updated/cleaned up. A response 1408 is sent from the SV data module 510 to the BG task module 530. The BG task module 530 then sends a "VL Maintenance" message 1410 to the visible SV list module 508. The visible SV list module 508 then sends a response 1412 to the BG task module 530. Similarly, the BG task module 530 sends a "Battery Backup" message 1414 that contains position, time, and clock information to the non-volatile memory (NVM) 1404. The NVM 1404 sends a "Get Pos, Time, Clock" message 1416 to the BEP module 524 and the BEP module 524 responds back in a message 1418 to the NVM 1404. The NVM 1404 is a type of memory that is used to store data when the GPS receiver 100 is powered down or in a reduced power state with some or all of the subsystems turned off. The BG task module 530 also updates the UI GPS module 526 at periodic intervals, such as every second, with a "UI Once Sec Task" message 1420 and the UI GPS module acknowledges the update 1422. [0195] Next, in FIG. 15, a flow diagram 1500 of the signal processing subsystem 204 of FIG. 2 is shown. The signal processing subsystem 204 receives a seq_SS2ON 1502 signal that may be latched in a buffer ss2On 1504 that enables the master control state machine 1506 in the signal processing subsystem 204.
The master control state machine 1506 receives the current state information from the signal processing subsystem hardware and commands from the channel random access memory (RAM). The master control state machine 1506 may also receive a signal "ss2 done" when the hardware is finished processing the current channel. [0196] The master control state machine 1506 may turn on the hardware and request access to the channel RAM. A channel in RAM may be 128 words of 64 bits of memory, and pointers from one channel to the next link multiple channels. The channels may be configured as a linked list or may be circular. The data from the channel RAM is received by the signal processing subsystem 204 at an input register 1508. [0197] A semaphore word 1510 that controls the operation of the signal processing subsystem 204, FFT subsystem 206, and the software is updated, and if predetermined bits are set in the semaphore word 1510, then signal processing commences in the signal processing subsystem 204. A semaphore word may be a grouping of bits that is used to signal events and other occurrences between subsystems. If an event occurs that requires the processing of data to be paused, a bit may be set in the pause register 1512. The pause register 1512 may be used to debug the signal processing subsystem 204 and may also be used by the software to update the signal processing subsystem 204 to known states. [0198] Turning to FIG. 16, an illustration of the master control state machine 1506 of FIG. 15 is shown. The master control state machine 1506 starts when the master control state machine is on 1602. If on 1602, then channel lapping is checked 1604. The channel lapping check 1604 verifies that the signal processing subsystem 204 is not overwriting a context of channel RAM used by the FFT subsystem 206. If lapping has not occurred 1604, then the semaphore word is checked 1606. Otherwise, the signal processing subsystem 204 stalls until the FFT subsystem 206 is within a context.
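The linked channel records of paragraph [0196] might be traversed as sketched below. The record layout shown (a next-pointer plus a channel id) is an assumption for illustration; the actual record is 128 words of 64 bits and is not modeled here.

```python
class ChannelRecord:
    """Illustrative channel record: only the next-pointer needed for
    traversal is modeled, not the full 128-word hardware layout."""
    def __init__(self, chan_id):
        self.chan_id = chan_id
        self.next = None

def link_channels(records, circular=False):
    """Link records into a list via their next-pointers; optionally close
    the chain into a circular configuration. Returns the head record."""
    for a, b in zip(records, records[1:]):
        a.next = b
    if circular and records:
        records[-1].next = records[0]
    return records[0] if records else None

def traverse(head, max_visits):
    """Walk the channel chain as a sequencer would, stopping at the end of
    the list or after max_visits (the bound matters for circular chains)."""
    seen, node = [], head
    while node is not None and len(seen) < max_visits:
        seen.append(node.chan_id)
        node = node.next
    return seen
```

A circular configuration lets the sequencer cycle over the active channels indefinitely without software re-arming the list each pass.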
[0199] The semaphore values are checked 1608, and if the channel pause bit is set, then the pause flag is set 1610 and the semaphore word is read again 1606. The channel pause is used to freeze the operation of the signal processing subsystem 204 until the software starts it again. The pause may be used in debugging the signal processing subsystem 204 or may be used to update the signal processing subsystem 204 to a known state. If the semaphore value 1608 indicates that the channel is off, then a channel may be selected 1612 by updating the channel base. If the semaphore indicates that the signal processing subsystem is not stopped or paused, then the active channel may be set or cleared 1614.

[0200] A check is made to determine if the signal processing subsystem 204 should be turned on 1616. If it should not be turned on, then a check is made to see if an overflow of the channel RAM has occurred 1618. If an overflow has not occurred 1618, then the current channel is deactivated 1620. Otherwise, if an overflow condition exists 1618, then a bit in the semaphore word is set and interrupts are enabled 1622. If the signal processing subsystem 204 is to be turned on, then the input buffers (fifo 1 and fifo 2) and the signal processing subsystem 204 are initialized 1624 to execute on a selected channel. The signal processing subsystem 204 may be run for a predetermined amount of time 1626 (typically long enough to process the selected channel) that may be adjusted by the software and may depend on the operational mode of the GPS receiver 100.

[0201] A determination is made if the cross-correlator needs to run 1628. The cross-correlator runs once after the first even run. If the cross-correlator does need to run 1628, then the signal processing subsystem 204 and fifo 2 are initialized for cross-correlation 1630. The cross-correlator is then run 1632, and then a determination is made as to whether more satellite vehicles need to be processed 1634; if so, step 1630 is performed again.
Otherwise, the semaphore is updated to indicate that the signal being processed is valid 1636.

[0202] If cross-correlation is complete 1636 or has already been run 1628, then a check is made for more frequencies to process (lsb/msb, odd/even) 1638. If another frequency does need to be processed 1638, then the position pointer is adjusted 1640 to the frequency needing processing, and the signal processing subsystem 204 and fifo 1 and fifo 2 are initialized to the unprocessed channel 1624. Otherwise, if no more frequencies are required to be processed 1638, then shutdown state information is saved 1642 and the semaphore word is updated and interrupts are enabled 1622. If the signal processing subsystem 204 is not on 1602, then the signal processing subsystem 204 is reset 1644.

[0203] In FIG. 17, an illustration of the master control state machine 1700 for the FFT subsystem 206 of FIG. 2 is shown. If the GPS receiver is on 1702, then the semaphore word is read 1704. Otherwise, if the GPS receiver is not on, then a reset occurs 1706. If the semaphore values indicate a channel pause 1708, then the pause flag for the FFT subsystem 206 is set 1710 and the semaphore word is read again 1704. If the semaphore indicates that the channel is not on 1708, then a determination is made to see if the channel is stalled 1712 in the signal processing subsystem 204. The channel is rechecked until it is not stalled 1712, and the channel pointer is updated 1714.

[0204] If the semaphore values indicate that the channel is on and not paused 1708, then the channel that is on is activated 1716 and the FFT subsystem 206 and fifo 2 for the channel are initialized 1718. A determination is made if data is available in fifo 2 (from the signal processing subsystem 204) 1720. If data is not available 1720, then a check is made if the channel in the signal processing subsystem 204 is stalled 1722 and rechecked until data is available 1720.
If the channel in the signal processing subsystem 204 is not stalled 1722, then a check is made to verify that the report context is enabled and the NCO is updated with a correction value 1724. If it is enabled, then a report "context" is generated 1726 and the hardware tracking loop and software aiding are used to update the NCO value in the channel RAM 1728. If the report context is not enabled or the NCO is not updated with the correction value 1724, then a check occurs to see if a 100 ms report needs to be generated 1730. If the report needs to be generated 1730, then it is generated 1732 and the hardware tracking loop and software aiding are updated 1728. The shutdown state information is saved 1734, and the semaphore word is updated and interrupts are enabled 1736. The current channel is then deactivated 1738 and the channel pointer updated 1714.

[0205] If fifo 2 data is available 1720 and the cross-correlator is on 1740, then a check is made for cross-correlator data 1742. If the cross-correlator data is available 1742, then the cross-correlator and FFT 332 are enabled 1744 and the next cross-correlator data pointer is read 1746. Then a check is made to see if the FFT 332 is done 1748. Similarly, if the cross-correlator is not on 1740, then the FFT 332 is enabled for one PDI (unit of data required by the FFT to run) 1750. Also, if the data is not available for the cross-correlator 1742, then the FFT 332 is enabled for a PDI 1750.

[0206] If the FFT is not done 1748, then checks are repeated until it is finished. Once the FFT is finished 1748, then a check is made to verify if the fifo with PDI data has been processed 1752. If the fifo of PDI data has not been fully processed, then a check is made for a termination code 1754. If the termination code is not present, then the turnoff flags for the signal processing subsystem 204 and the FFT subsystem 206 are set 1756 and another check is made to determine if fifo 2 data is available 1720.
Otherwise, if the termination code is present 1754, then another check is made to determine if fifo 2 data is available 1720.

[0207] If the fifo of PDI data has been processed, then a check is made if the non-coherent summation (NCS) is finished 1758 and is repeated until the NCS of the PDI data is complete. Once the NCS is complete 1758, then the number of PDIs and the odd/even frequency counters are updated 1760. The hardware tracking loop is then updated 1762, and a check is made to see if the PDI data is paused 1764. If the PDI data is paused, then the pause flag is set 1766. Once the pause flag is cleared, or if the PDI is not paused 1764, then a check is made for a termination condition 1768. If no termination code is present 1768, then a check is made if fifo 2 data is available 1720. Otherwise, if the termination code is found 1768, then the turnoff flag is set 1770 for the signal processing subsystem 204 and the FFT subsystem 206, followed by a check to determine if the fifo 2 data is available.

[0208] Turning to FIG. 18, a channel sequencing control diagram 1800 illustrating the communication between the signal processing subsystem 204 of FIG. 2 and the FFT subsystem 206 of FIG. 2 using the memory subsystem 208 of FIG. 2 is shown. The signal processing subsystem 204 is shown with a circular linked list of channels 1802, 1804, 1806, 1808, 1810, and 1812. The "FIFO zone" is an area in the memory subsystem 208 that contains the buffer pointers 1814 in addition to the registers 1816 and pointers 1818 used to process data through the signal processing subsystem 204 and the FFT subsystem 206. An area in memory is also allocated for a channel record 1820 that contains semaphores associated with the different channels. Similarly, the FFT subsystem 206 executes on the same plurality of channels 1802, 1804, 1806, 1808, 1810, and 1812. The "FIFO zone" also has buffers 1822, pointers 1824, and registers 1826.
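The circular linked list of channels shown in FIG. 18 can be modeled with a few lines of code. This is an illustrative sketch only; in the hardware each channel record lives in channel RAM with a pointer to the next record, and the field names below are invented.

```python
# Rough sketch of the circular linked list of channels that both the
# signal processing subsystem and the FFT subsystem walk.
# Field and node names are illustrative only.

class Channel:
    def __init__(self, name):
        self.name = name
        self.next = None

def make_circular(names):
    channels = [Channel(n) for n in names]
    # Each channel points to the next; the last points back to the first.
    for cur, nxt in zip(channels, channels[1:] + channels[:1]):
        cur.next = nxt
    return channels[0]

head = make_circular(["ch1802", "ch1804", "ch1806", "ch1808", "ch1810", "ch1812"])

# Walking seven links from the head wraps past the end of the list
# and lands back on the second channel.
node = head
for _ in range(7):
    node = node.next
```

A circular arrangement lets both subsystems keep advancing their own channel pointer indefinitely, which is what makes the lapping rules of FIG. 19 necessary.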
[0209] The signal processing subsystem 204 processes its associated channels 1802, 1804, 1806, 1808, 1810, and 1812 independently from the FFT subsystem 206. The only requirement is that a channel should be processed by the signal processing subsystem 204 prior to being processed by the FFT subsystem 206. If the signal processing subsystem 204 gets ahead of the FFT subsystem 206, then data in the channels of the FFT subsystem 206 is overwritten prior to being processed. Therefore, lapping rules are established and implemented in software that prevent a lapping condition from occurring.

[0210] Turning to FIG. 19, a list 1900 of lapping rules to prevent the signal processing subsystem from overwriting memory used by the FFT subsystem of FIG. 15 is shown. The list of lapping rules is implemented in software. The first rule 1902 is that the signal processing subsystem 204 and FFT subsystem 206 may not lap each other.

[0211] The second rule 1904 is that the signal processing subsystem 204 may not enter a channel (i.e. make it active) if the FFT subsystem 206 is currently active with that channel. This rule prevents the signal processing subsystem 204 from lapping the FFT subsystem 206.

[0212] The third rule 1906 is that the FFT subsystem 206 may not exit a channel if the signal processing subsystem is currently active with that channel. This rule prevents the FFT subsystem 206 from lapping the signal processing subsystem 204 and allows the FFT subsystem 206 to process data as it becomes available if the signal processing subsystem 204 is active.

[0213] The fourth rule 1908 is that the signal processing subsystem 204 will process the number of milliseconds it has been programmed to process, inclusive of software correction time. This rule maintains the signal processing subsystem 204 in a channel until processing is complete.

[0214] The fifth rule 1910 is that the FFT subsystem 206 will process as much data as is available in its buffer.
This rule has the FFT subsystem 206 process data in the FFT buffer up to the stored buffer pointers if the signal processing subsystem 204 is not active, or up to the point where the signal processing subsystem 204 completes if the signal processing subsystem 204 is complete.

[0215] The sixth rule 1912 is that the signal processing subsystem 204 and FFT subsystem 206 may be prevented from continuing processing by a pause semaphore or pause flag. This enables the signal processing subsystem 204 to be stalled by the FFT subsystem 206 context (channel) being done or by the FFT subsystem 206 PDI data being done. The FFT subsystem 206 may also be stalled if the FFT subsystem 206 PDI data is done.

[0216] The channel pointer may be used to determine if the channels being accessed by the different subsystems are equal. Further, the coherent buffer pointer and active flag may be used to determine if the signal processing subsystem 204 and FFT subsystem 206 are in the same buffer. The use of shared buffers may mean two different channels may be active in the same buffer and are treated from a "FIFO perspective" as if the same channel was trying to access it.

[0217] Turning to FIG. 20, an illustration 2000 of the semaphore and interrupt structure for communication between the subsystems of FIG. 2 and software is shown. A location in memory is identified for enable pause bits 2002. The number of bits will be based on the number of subsystems, i.e. one bit for the signal processing subsystem 204 and another bit for the FFT subsystem 206.

[0218] Three 32-bit words 2004, 2006 and 2008 are identified for semaphore and interrupt communication. The words/bits are aligned in a predetermined order, with the higher-addressed 32-bit word 2004 divided into two sixteen-bit sub-words.
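The second and third lapping rules of FIG. 19 reduce to pointer comparisons between the two subsystems. The sketch below is a hedged illustration of that idea; the function names and pointer representation are stand-ins for the real channel-record semaphores and channel pointers.

```python
# Illustrative sketch of the lapping checks (rules two and three of
# FIG. 19). Pointers are modeled as plain channel identifiers here.

def ss2_may_enter(channel_ptr, fft_active_ptr):
    # Rule two: the signal processing subsystem may not enter (make
    # active) a channel the FFT subsystem is currently active with.
    return channel_ptr != fft_active_ptr

def fft_may_exit(channel_ptr, ss2_active_ptr):
    # Rule three: the FFT subsystem may not exit a channel the signal
    # processing subsystem is currently active with.
    return channel_ptr != ss2_active_ptr
```

Taken together, these two checks enforce rule one (the subsystems may not lap each other): the producer cannot overrun the consumer, and the consumer cannot run past a channel the producer is still filling.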
The first sub-word 2010 has semaphore and interrupt bits for the software controlling the FFT subsystem 206, and the second sub-word 2012 is associated with the software controlling the semaphore and interrupts for the signal processing subsystem 204. The next 32-bit word 2006 has the semaphore and interrupts for software. The other 32-bit word 2008 forms an interrupt mask. By setting selected bits across the memory 2004, 2006 and using mask 2008, communication can occur with binary "and" and "or" operations across the memory.

[0219] In FIG. 21, a bit-level illustration 2100 of the semaphore and interrupt mask of the interrupt structure of FIG. 20 is shown. The bits are associated with a subsystem or software and are only writable by that entity. In other words, only the FFT subsystem 206 may write to SS3 bits and only the signal processing subsystem 204 may write to SS2 bits. For example, if an error occurs in the hardware of the FFT subsystem 206, bit 59 2102 in the semaphore 2006 is set to "1". Bit 27 2104 associated with the software in the FFT subsystem 206 semaphore 2010 is still "0", and an "OR" operation on the bits results in a "1", i.e. an error condition being signaled. If the interrupt enable bit 27 2106 is set to "1", then an interrupt pulse is sent. An acknowledgement of the error condition by software occurs when the software sets bit 27 2104 in word 2010 to "1", resulting in the "XOR" of the bits being zero. Thus an approach to communication between subsystems is achieved with minimal communication overhead and little memory usage.

[0220] The software controlling the signal processing subsystem 204 and the FFT subsystem 206 may provide software aiding to the hardware tracker when the GPS receiver 100 is in a track mode. The software aiding advances the NCO 312, FIG. 3, in order to aid in looking for a satellite. The software aiding nudges the clock forward or backward by storing a differential of the values and arming the change via the semaphore communication.
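The hardware/software handshake of FIG. 21 can be sketched with plain bitwise operations: OR-ing the two owned copies of a bit signals a pending event, the XOR of the copies going to zero signals acknowledgement, and AND-ing with the interrupt mask decides whether an interrupt pulse is sent. The word widths and the reuse of bit 27 below follow the error example in the text, but the sketch is illustrative, not the actual register layout.

```python
# Sketch of the OR/XOR semaphore handshake described for FIG. 21.
# Bit 27 is taken from the error example; widths are illustrative.

ERROR_BIT = 1 << 27

hw_word = 0           # bits writable only by the hardware subsystem
sw_word = 0           # bits writable only by software
irq_mask = ERROR_BIT  # interrupt enable bit set for the error condition

# Hardware flags an error by setting its own copy of the bit:
hw_word |= ERROR_BIT
pending = (hw_word | sw_word) & ERROR_BIT   # non-zero -> error signaled
interrupt = (hw_word ^ sw_word) & irq_mask  # non-zero -> interrupt pulse

# Software acknowledges by setting its own copy of the same bit,
# which drives the XOR of the two copies to zero:
sw_word |= ERROR_BIT
acknowledged = (hw_word ^ sw_word) & ERROR_BIT == 0
```

Because each bit is writable by exactly one entity, no locking is needed: the state of the handshake is fully determined by the OR and XOR of the two copies.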
The differential is stored in the channel record and used when the channel or context is processed.

[0221] FIG. 22 is a flow diagram 2200 of the time adjustment of the signal processing subsystem 204 of FIG. 2 within a T1 phase. The time adjustment starts 2202 with a change to the current time (a differential time value) stored in the channel record. The signal processing subsystem 204 is functioning at such a rate that the time may not be directly adjusted. The software sends a command signaling that a T1 phase adjustment is ready by using the signal processing subsystem's semaphore bit 2204. The software bit for the signal processing subsystem 204 is set. The differential time value is retrieved by the signal processing subsystem 204 and used to adjust the NCO 312 of FIG. 3. The hardware of the signal processing subsystem 204 responds to the software by setting the hardware signal processing subsystem's semaphore bit at the end of the context where the adjustment was performed 2208. The flow diagram 2200 is shown as stopping 2210, but in practice the flow may have additional steps or be repeated.

[0222] In FIG. 23, a flow diagram 2300 of the time adjustment of the FFT subsystem 206 of FIG. 2 within a T1 phase is shown. The FFT subsystem 206 is also running at such a rate that simply changing the time is impractical. A differential time value for the time change may be generated by the software tracking loop 408, FIG. 4, and stored in memory. The flow diagram 2300 starts 2302 when the software initiates a T1/PDI phase adjustment using the FFT subsystem's software semaphore bit 2304. The hardware of the FFT subsystem performs the phase adjustment at the end of the first PDI processed by the FFT subsystem 2306. The hardware performs the phase adjustment by a one-time usage of T1, and the address pointer is incremented 2308.
The hardware of the FFT subsystem 206 then responds to the phase adjustment command by setting the subsystem's hardware semaphore bit at an end of the current context 2310. If context reporting is enabled 2312, then the hardware of the FFT subsystem 206 sets the "adjust performed" bit in the context report 2314 and processing stops 2316. Otherwise, if context reporting is not enabled 2312, then processing stops 2316. In practice, processing may continue with additional steps or the flow diagram 2300 may be executed again.

[0223] In FIG. 24, a diagram of the matched filter 308 of FIG. 3 that is configurable by software is shown. The matched filter may be configured by software as a single filter or may be subdivided into multiple smaller matched filters. The matched filter 308 is shown with 32 sets of 32-bit sample registers 2402, 2404, 2406, 2408, 2410, . . . , 2412, 2414 for a sample that could be 1024 bits long. Each set of 32-bit sample registers has a respective 32-bit code register 2416, 2418, 2420, 2422, . . . , 2424, 2426. The GPS signal data arrives at the matched filter 308 from the signal processor 306, which interpolates and rotates the data. The sets of registers may be divided into subgroups, and each subgroup may process a channel or context.

[0224] The configuration of the matched filter 308 is accomplished with maps that contain configurations for the hardware resources of the matched filter 308. The map may be selected by the type of mode that the GPS receiver 100 is in. A lock mode would use a map that uses all 32 sample registers to scan all code space. If location aiding is available, then a map may be used that allocates 1/8 of the sample registers to a channel, while using coherent accumulation to build up the signal. The maps ultimately control the hardware setup and the memory configuration for access by the input sample subsystem 202, signal processing subsystem 204, and the FFT subsystem 206.
[0225] The maps may configure the baseband hardware, not just the matched filter 308. The maps provide a basis for channel allocation that is predetermined such that the channel assignments are not arbitrarily located in time or memory. Each map provides for a specified number of channels for particular operations. For example, maps will define the allowable number of acquisition, tracking, and background channels. This provides the flexibility needed by the software to configure the hardware for the current acquisition and tracking needs in the system. For example, at initialization, when acquisition is important, the acquisition map will provide more acquisition channels and fewer tracking channels, and during steady-state normal operation, when tracking is the normal operational mode, the tracking map will provide more tracking channels and fewer acquisition channels. The maps provide a piecewise-optimized memory and throughput assignment for various operational scenarios for the receiver.

[0226] If the matched filter 308 of FIG. 24 is partitioned by a map for 1/4 msec summation, then the matched filter is divided into eight groups of eight sample registers. The input signal is shifted into a 32-bit shift register 2428 and loaded into the 32-bit sample register 2402. An "exclusive or" operation (2430, 2432, 2434, 2436) is done between each of the eight sample registers and code registers respectively. The results are summed by a coherent accumulator 2438 into a 1/4 msec accumulation. Pairs of 1/4 msec accumulations (i.e. from coherent accumulators 2438 and 2440) may be combined by coherent accumulator 2442 into a 1/2 msec accumulation. Similarly, two 1/2 msec accumulations (i.e. 2442 and 2444) may be combined by another coherent accumulator 2446 into a full msec accumulation.

[0227] In FIG. 25, a flow diagram 2500 of an expert GPS control system that resides in the GPS receiver controller 502 of FIG. 5 is shown.
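The XOR-and-accumulate path of paragraph [0226] can be modeled in a few lines. The sketch below is an illustrative software model, not the hardware: registers are shortened to 8 bits for readability instead of the 32-bit hardware registers, and the correlation score is computed as the number of matching bits (width minus the popcount of the XOR).

```python
# Illustrative model of the matched filter's XOR-and-accumulate path.
# Register width reduced to 8 bits for brevity; the hardware uses
# 32-bit sample and code registers.

def correlate(sample: int, code: int, width: int = 8) -> int:
    # XOR flags mismatching bits; matches = width - popcount(mismatches).
    return width - bin((sample ^ code) & ((1 << width) - 1)).count("1")

def coherent_sum(values):
    # A coherent accumulator simply sums its inputs.
    return sum(values)

quarter_ms = [
    correlate(0b10110010, 0b10110010),  # perfect match  -> 8
    correlate(0b10110010, 0b01001101),  # exact inverse  -> 0
]
half_ms = coherent_sum(quarter_ms)  # pairs of 1/4 ms sums combine
```

Chaining the accumulators the same way again would combine two 1/2 msec sums into a full-millisecond accumulation, mirroring the 2438/2440 → 2442 → 2446 hierarchy described above.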
The steps start 2502 with the processing of data for use in the search strategy from the acquisition and track modules 2504. The interface is then checked for the status of the acquisition 2506. An output message is created if the appropriate time has been reached to acquire data 2508. The receiver is checked to identify if a 100 ms boundary has been met or if the measurements are available and resources may be reallocated by the expert system 2510. The commands issued to the interface relating to the satellite vehicle (SV) are checked 2512. The expert system then receives information about the processing of new and updated SV records in order to determine a search strategy. The hardware status (memory, execution, buffers, timers, filters, clocks, correlators) is checked and reported to the expert system 2516. The process steps are then repeated. The process is shown as a simplified control loop, but in other implementations a more advanced adaptive control loop may be used.

[0228] Another aspect of the expert system is considering power control when determining a search strategy. By accessing the QoS module 506, the expert system can determine the amount of resources required to acquire the GPS signals. Further, the expert system can make determinations as to what subsystems to power down and whether the whole GPS receiver 100 should be powered down to reduce power consumption.

[0229] It is appreciated by those skilled in the art that the modules and flow diagrams previously shown may selectively be implemented in hardware, software, or a combination of hardware and software. An embodiment of the flow diagram steps may employ at least one machine-readable signal-bearing medium. Examples of machine-readable signal-bearing mediums include computer-readable mediums such as a magnetic storage medium (i.e.
or another suitable medium, upon which the computer instruction is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.

[0230] Additionally, a machine-readable signal-bearing medium includes computer-readable signal-bearing mediums. Computer-readable signal-bearing mediums have a modulated carrier signal transmitted over one or more wire-based, wireless, or fiber optic networks or within a system. For example, one or more wire-based, wireless, or fiber optic networks, such as the telephone network, a local area network, the Internet, or Bluetooth, may have a component of a computer-readable signal residing in or passing through the network. The computer-readable signal is a representation of one or more machine instructions written in or implemented with any number of programming languages.

[0231] Furthermore, the multiple.
NodeId QML Type

Specifies a node by an identifier.

Properties
- identifier : string
- ns : string

Signals
- nodeChanged()

Detailed Description

import QtOpcUa 5.13 as QtOpcUa

QtOpcUa.NodeId {
    identifier: "s=Example.Node"
    ns: "Example Namespace"
}

Property Documentation

identifier : string

Identifier of the node. The identifier has to be given in one of the following types. It is possible, but not recommended, to include the namespace index ns=X;s=.... In this case the given namespace index is internally stripped off the identifier and set to the namespace property.

ns : string

Namespace of the node identifier. The identifier can be the index as a number or the name as a string. A string which can be converted to an integer is considered a namespace index.

Signal Documentation

nodeChanged()

Emitted when the underlying node has changed. This happens when the namespace or identifier has changed.

Note: The corresponding handler is onNodeChanged.
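The stripping behavior described for the identifier property — a value given as "ns=X;s=..." has its namespace index removed and moved to the ns property — can be illustrated with a small parsing sketch. This is plain Python purely to demonstrate the documented rule; it is not Qt code, and the function name is invented.

```python
# Illustration of the documented NodeId behavior: an identifier of the
# form "ns=X;s=..." has its namespace index stripped off and assigned
# to the namespace property. Plain-Python sketch, not Qt code.

def split_node_id(identifier: str):
    ns = None
    if identifier.startswith("ns="):
        # Split "ns=X;<rest>" into the index and the bare identifier.
        ns_part, identifier = identifier.split(";", 1)
        ns = int(ns_part[len("ns="):])
    return ns, identifier

# A combined identifier loses its "ns=" prefix:
print(split_node_id("ns=2;s=Example.Node"))  # (2, 's=Example.Node')
```

An identifier without the prefix is left untouched, with no namespace index extracted.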
Async actions in Redux with RxJS and Redux Observable

Andrej Naumovski, May 28 '18

Introduction

What is Redux?

Redux is an amazing library. For those of you who don't know what Redux is, it is a predictable state container for JavaScript apps. In plain English, it acts as a single source of truth for your application's state. The state, or Redux store, as it is called, can only be altered by dispatching actions, which are handled by reducers that dictate how the state should be modified depending on the type of action dispatched. For those of you who aren't familiar with Redux, check out this link.

Now, Redux is most commonly used in combination with React, although it's not bound to it - it can be used together with any other view library.

Redux's issue

However, Redux has one very significant problem - it doesn't handle asynchronous operations very well by itself. On one side, that's bad, but on the other, Redux is just a library, there to provide state management for your application, just like React is only a view library. Neither of these constitutes a complete framework, and you have to choose the tools you use for different things yourself. Some view that as a bad thing, since there's no one way of doing things; some, including me, view it as good, since you're not bound to any specific technology. And that's good, because everyone can choose the tech that they think fits their needs best.

Handling asynchronous actions

Now, there are a couple of libraries which provide Redux middleware for handling asynchronous operations. When I first started working with React and Redux, the project that I was assigned to used Redux-Thunk. Redux-Thunk allows you to write action creators which return functions instead of plain objects (by default all actions in Redux must be plain objects), which in turn allows you to delay dispatching of certain actions. And as a beginner in React/Redux back then, thunks were pretty awesome.
They were easy to write and understand, and didn't require any additional functions - you were basically just writing action creators, just in a different way. However, once you start to get into the workflow with React and Redux, you realize that, although very easy to use, thunks aren't quite that good, because:

1. You can end up in callback hell, especially when making API requests,
2. You either stuff your callbacks or your reducer with business logic for handling the data (because, let's be honest, you're not going to get perfectly formatted data EVERY time, especially if you use third-party APIs), and
3. They're not really testable (you'd have to use spy methods to check whether dispatch has been called with the right object).

So, I started to research other possible solutions that would be a better fit. That's when I ran into Redux-Saga. Redux-Saga was very close to what I was looking for. From its website:

The mental model is that a saga is like a separate thread in your application that's solely responsible for side effects.

What that basically means is that sagas run separately from your main application and listen for dispatched actions - once the action that that particular saga is listening for is dispatched, it executes some code which produces side effects, like an API call. It also allows you to dispatch other actions from within the sagas, and is easily testable, since sagas return Effects, which are plain objects. Sounds great, right? Redux-Saga DOES come with a tradeoff, and a big one for most devs - it utilizes JavaScript's generator functions, which have a pretty steep learning curve. Now, props (see what I did there, hehe) to the Redux-Saga creators for using this powerful feature of JS; however, I do feel that generator functions are pretty unnatural to use, at least for me, and even though I know how they work and how to use them, I just couldn't get around to actually using them.
It's like that band or singer that you don't really have a problem listening to when they're played on the radio, but would never even think about playing on your own. Which is why my search for the async-handling Redux middleware continued. One more thing Redux-Saga doesn't handle very nicely is cancellation of already-dispatched async actions - such as an API call (something Redux Observable does very nicely due to its reactive nature).

The next step

A week or so ago, I was looking at an old Android project a friend and I had written for college and saw some RxJava code in there, and thought to myself: what if there's a Reactive middleware for Redux? So I did some research and, well, the gods heard my prayers:

Cue Redux Observable.

So what is Redux Observable? It is another middleware for Redux that lets you handle asynchronous data flow in a functional, reactive and declarative way. What does this mean? It means that you write code that works with asynchronous data streams. In other words, you basically listen for new values on those streams (subscribe to the streams) and react to those values accordingly. For the most in-depth guides on reactive programming in general, check out this link and this link. Both give a very good overview of what (Functional) Reactive Programming is and give you a very good mental model.

What problems does Redux Observable solve?

The most important question when looking at a new library/tool/framework is how it's going to help you in your work. In general, everything that Redux Observable does, Redux-Saga does as well. It moves your logic outside of your action creators, it does an excellent job at handling asynchronous operations, and is easily testable.
However, IN MY OPINION, Redux Observable's entire workflow just feels more natural to work with, considering that both of these have a steep learning curve (both generators and reactive programming are a bit hard to grasp at first, as they not only require learning but also adapting your mindset). From the Redux Observable official guide:

The pattern of handling side effects this way is similar to the "process manager" pattern, sometimes called a "saga", but the original definition of saga is not truly applicable. If you're familiar with redux-saga, redux-observable is very similar. But because it uses RxJS it is much more declarative and you utilize and expand your existing RxJS abilities.

Can we start coding now?

So, now that you know what functional reactive programming is, and if you're like me, you really like how natural it feels to work with data, it's time to apply this concept to your React/Redux applications. First of all, as with any Redux middleware, you have to add it to your Redux application when creating the store. To install it, run

npm install --save rxjs rxjs-compat redux-observable

or

yarn add rxjs rxjs-compat redux-observable

depending on the tool that you're using. Now, the basis of Redux Observable is epics. Epics are similar to sagas in Redux-Saga, the difference being that instead of waiting for an action to be dispatched and delegating the action to a worker, then pausing execution until another action of the same type comes using the yield keyword, epics run separately and listen to a stream of actions, reacting when a specific action is received on the stream. The main component is the ActionsObservable in Redux-Observable, which extends the Observable from RxJS. This observable represents a stream of actions, and every time you dispatch an action from your application it is added onto the stream.
Okay, let's start by creating our Redux store and adding the Redux Observable middleware to it (small reminder: to bootstrap a React project you can use the create-react-app CLI). After we're sure that we have all the dependencies installed (redux, react-redux, rxjs, rxjs-compat, redux-observable), we can start by modifying our index.js file to look like this:

```js
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import { createStore, applyMiddleware } from 'redux';
import { createEpicMiddleware } from 'redux-observable';
import { Provider } from 'react-redux';

const epicMiddleware = createEpicMiddleware(rootEpic);

const store = createStore(rootReducer, applyMiddleware(epicMiddleware));

const appWithProvider = (
    <Provider store={store}>
        <App />
    </Provider>
);

ReactDOM.render(appWithProvider, document.getElementById('root'));
```

As you might have noticed, we're missing the rootEpic and rootReducer. Don't worry about this, we'll add them later. For now, let's take a look at what's going on here: First of all, we're importing the necessary functions to create our store and apply our middleware. After that, we're using the createEpicMiddleware from Redux Observable to create our middleware and pass it the root epic (which we'll get to in a moment). Then we create our store using the createStore function, pass it our root reducer, and apply the epic middleware to the store. Okay, now that we have everything set up, let's first create our root reducer. Create a new folder called reducers, and in it, a new file called root.js.
Add the following code to it:

const initialState = {
    whiskies: [], // for this example we'll make an app that fetches and lists whiskies
    isLoading: false,
    error: false
};

export default function rootReducer(state = initialState, action) {
    switch (action.type) {
        default:
            return state;
    }
}

Anyone familiar with Redux already knows what's going on here: we're creating a reducer function which takes state and action as parameters and, depending on the action type, returns a new state. Since we don't have any actions defined yet, we just add the default block and return the unmodified state.

Now, go back to your index.js file and add the following import:

import rootReducer from './reducers/root';

As you can see, the error about rootReducer not existing is now gone. Next, let's create our root epic: create a new folder epics and, in it, a file called index.js. Add the following code to it for now:

import { combineEpics } from 'redux-observable';

export const rootEpic = combineEpics();

Here we're just using the combineEpics function provided by Redux Observable to combine our (as of now, nonexistent) epics and assign that value to a constant which we export. We should fix the other error in the entry index.js file now by simply adding the following import:

import { rootEpic } from './epics';

Great! Now that we've handled all the configuration, we can define the types of actions that we can dispatch, as well as action creators for those actions. To get started, create a new folder called actions with an index.js file inside. (Note: for large, production-grade projects you should group your actions, reducers and epics in a logical way instead of putting it all in one file; however, that makes no sense here since our app is very small.)

Before we start writing code, let's think about what types of actions we can dispatch.
Normally, we would need an action to notify Redux/Redux-Observable that it should start fetching the whiskies; let's call that action FETCH_WHISKIES. Since this is an async action, we don't know exactly when it will finish, so we will want to dispatch a FETCH_WHISKIES_SUCCESS action whenever the call completes successfully. In a similar manner, since this is an API call that can fail, we would like to notify our user with a message, so we would dispatch a FETCH_WHISKIES_FAILURE action and handle it by showing an error message. Let's define these actions (and their action creators) in code:

export const FETCH_WHISKIES = 'FETCH_WHISKYS';
export const FETCH_WHISKIES_SUCCESS = 'FETCH_WHISKYS_SUCCESS';
export const FETCH_WHISKIES_FAILURE = 'FETCH_WHISKYS_FAILURE';

export const fetchWhiskies = () => ({
    type: FETCH_WHISKIES,
});

export const fetchWhiskiesSuccess = (whiskies) => ({
    type: FETCH_WHISKIES_SUCCESS,
    payload: whiskies
});

export const fetchWhiskiesFailure = (message) => ({
    type: FETCH_WHISKIES_FAILURE,
    payload: message
});

For anyone who is unclear about what I'm doing here: I'm simply defining constants for the action types and then, using the ES6 arrow-function shorthand, creating functions which return a plain object containing a type and an (optional) payload property. The type is used to identify what kind of action has been dispatched, and the payload is how you send data to the reducers (and the store) when dispatching actions. (Note: the second property doesn't have to be called payload; you can name it anything you want. I'm doing it this way simply for consistency.)

Now that we've created our actions and action creators, let's handle these actions in our reducer. Update your reducers/root.js to the following.
import { FETCH_WHISKIES, FETCH_WHISKIES_FAILURE, FETCH_WHISKIES_SUCCESS } from '../actions';

const initialState = {
    whiskies: [],
    isLoading: false,
    error: null
};

export default function rootReducer(state = initialState, action) {
    switch (action.type) {
        case FETCH_WHISKIES:
            return {
                ...state,
                // whenever we want to fetch the whiskies, set isLoading to true to show a spinner
                isLoading: true,
                error: null
            };
        case FETCH_WHISKIES_SUCCESS:
            return {
                whiskies: [...action.payload],
                // whenever the fetching finishes, we stop showing the spinner and then show the data
                isLoading: false,
                error: null
            };
        case FETCH_WHISKIES_FAILURE:
            return {
                whiskies: [],
                isLoading: false,
                // same as FETCH_WHISKIES_SUCCESS, but instead of data we will show an error message
                error: action.payload
            };
        default:
            return state;
    }
}

Now that we've done all that, we can FINALLY write some Redux-Observable code (sorry for taking so long!). Go to your epics/index.js file and let's create our first epic. To start off, you're going to need to add some imports:

import { Observable } from 'rxjs';
import 'rxjs/add/operator/switchMap';
import 'rxjs/add/operator/map';
import 'rxjs/add/observable/of';
import 'rxjs/add/operator/catch';
import { ajax } from 'rxjs/observable/dom/ajax';
import { FETCH_WHISKIES, fetchWhiskiesFailure, fetchWhiskiesSuccess } from '../actions';

What we did here is import the action creators that we will need to dispatch, the action type that we will watch for in the action stream, some operators from RxJS, and the Observable itself. Note that neither RxJS nor Redux Observable imports the operators automatically, so you have to import them yourself. (Another option is to import the entire 'rxjs' module in your entry index.js; however, I would not recommend this, as it will give you large bundle sizes.)
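Before moving on to the epic itself, note that the reducer above is a pure function, so you can trace its whole loading-state machine simply by feeding it actions by hand. Here is a quick sketch (the reducer is restated inline, and the action-type strings are assumed to match the ones defined in the actions file):

```javascript
const initialState = { whiskies: [], isLoading: false, error: null };

function rootReducer(state = initialState, action) {
  switch (action.type) {
    case 'FETCH_WHISKYS':
      return { ...state, isLoading: true, error: null };
    case 'FETCH_WHISKYS_SUCCESS':
      return { whiskies: [...action.payload], isLoading: false, error: null };
    case 'FETCH_WHISKYS_FAILURE':
      return { whiskies: [], isLoading: false, error: action.payload };
    default:
      return state;
  }
}

// Dispatching FETCH_WHISKYS flips the loading flag...
const loading = rootReducer(undefined, { type: 'FETCH_WHISKYS' });
console.log(loading.isLoading); // true

// ...a success replaces the list and clears the flag...
const loaded = rootReducer(loading, {
  type: 'FETCH_WHISKYS_SUCCESS',
  payload: [{ id: 1, title: 'Lagavulin' }],
});
console.log(loaded.isLoading, loaded.whiskies.length); // false 1

// ...and a failure stores the error message instead.
const failed = rootReducer(loaded, {
  type: 'FETCH_WHISKYS_FAILURE',
  payload: 'ajax error',
});
console.log(failed.error); // ajax error
```

This is exactly the state our component will later read to decide between showing the spinner, the grid, or the error message.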
Okay, let's go through the operators we've imported and what they do:

map - similar to JavaScript's native Array.map(), map executes a function over each item in the stream and returns a new stream/Observable with the mapped items.

of - creates an Observable/stream out of a non-Observable value (it can be a primitive, an object, a function, anything).

ajax - the RxJS module for making AJAX requests; we will use this to call the API.

catch - used for catching any errors that may have occurred.

switchMap - the most complicated of these. It takes a function which returns an Observable, and every time this inner Observable emits a value, it merges that value into the outer Observable (the one upon which switchMap is called). Here's the catch, though: every time a new inner Observable is created, the outer Observable subscribes to it (i.e. listens for values and merges them into itself) and cancels its subscriptions to all previously emitted Observables. This is useful for situations where we don't care whether the previous results have succeeded or been cancelled. For example, when we dispatch multiple actions for fetching the whiskies, we only want the latest result; switchMap does exactly that: it subscribes to the latest inner Observable, merges its values into the outer Observable, and discards previous requests that still haven't completed. When making POST requests you usually do care whether the previous request has completed, and that's when mergeMap is used. mergeMap does the same thing, except it doesn't unsubscribe from the previous Observables.
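Since map and filter on an Observable behave just like their array counterparts for this use case, you can preview the data-shaping steps we're about to chain inside the epic using a plain array. Here is a sketch with a made-up API response (the results/img_url field names follow the API shape described below; the whisky data itself is invented):

```javascript
// A fake response shaped like the whisky API described in this article.
const response = {
  count: 3,
  results: [
    { id: 1, title: 'Lagavulin 16', img_url: 'http://example.com/1.jpg' },
    { id: 2, title: 'Ardbeg 10', img_url: null },
    { id: 3, title: 'Talisker 10', img_url: 'http://example.com/3.jpg' },
  ],
};

// Step 1: keep only the results array (mirrors .map(data => data.results)).
const results = response.results;

// Step 2: keep only the fields the UI needs, renaming img_url to imageUrl.
const shaped = results.map(whisky => ({
  id: whisky.id,
  title: whisky.title,
  imageUrl: whisky.img_url,
}));

// Step 3: drop entries without an image URL.
const withImages = shaped.filter(whisky => !!whisky.imageUrl);

console.log(withImages.map(w => w.id)); // [ 1, 3 ]
```

In the epic, the same three transformations run inside switchMap, just on an Observable carrying the array instead of on the array directly.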
With that in mind, let's see what the epic for fetching the whiskies looks like:

const url = ''; // The API for the whiskies

/*
The API returns the data in the following format:
{
    "count": number,
    "next": "url to next page",
    "previous": "url to previous page",
    "results": array of whiskies
}
since we are only interested in the results array we will have to use map on our observable
*/
function fetchWhiskiesEpic(action$) {
    // action$ is a stream of actions; action$.ofType is the outer Observable
    return action$
        .ofType(FETCH_WHISKIES) // ofType(FETCH_WHISKIES) is just a simpler version of .filter(x => x.type === FETCH_WHISKIES)
        .switchMap(() => {
            // ajax calls return Observables. This is how we generate the inner Observable
            return ajax
                .getJSON(url) // getJSON simply sends a GET request with Content-Type application/json
                .map(data => data.results) // get the data and extract only the results
                // we need to iterate over the whiskies and keep only the properties we need
                .map(whiskies => whiskies.map(whisky => ({
                    id: whisky.id,
                    title: whisky.title,
                    imageUrl: whisky.img_url
                })))
                // filter out whiskies without image URLs (for convenience only)
                .map(whiskies => whiskies.filter(whisky => !!whisky.imageUrl))
                // at the end our inner Observable has a stream of an array of whisky objects
                // which will be merged into the outer Observable
        })
        // every action in the stream returned from the epic is dispatched to Redux,
        // which is why we map the resulting array to an action of type FETCH_WHISKIES_SUCCESS
        .map(whiskies => fetchWhiskiesSuccess(whiskies))
        // if an error occurs, create an Observable of the action to be dispatched on error;
        // catch expects us to return a new Observable, which is why we wrap the failure action with Observable.of
        .catch(error => Observable.of(fetchWhiskiesFailure(error.message)));
}

After this, there's one more thing remaining, and that is to add our epic to the combineEpics function call, like this:

export const rootEpic = combineEpics(fetchWhiskiesEpic);

Okay, there's a lot going on here, I'll give you that. But let's break it apart piece by piece.

ajax.getJSON(url) returns an Observable with the data from the request as a value in the stream.

.map(data => data.results) takes each value from the Observable (in this case only one), gets the results property from the response, and returns a new Observable containing only the results array.

.map(whiskies => whiskies.map(whisky => ({ id: whisky.id, title: whisky.title, imageUrl: whisky.img_url }))) takes the value from the previous Observable (the results array), calls Array.map() on it, and maps each element of the array (each whisky) to a new object which only holds the id, title and imageUrl, since we don't need anything else.

.map(whiskies => whiskies.filter(whisky => !!whisky.imageUrl)) takes the array in the Observable and returns a new Observable with the filtered array. The switchMap that wraps this code merges the inner Observable's stream into the stream of the Observable that called switchMap. If another fetch request came through, this whole operation would be repeated and the previous result discarded, thanks to switchMap.

.map(whiskies => fetchWhiskiesSuccess(whiskies)) simply takes this new value in the stream and maps it to an action of type FETCH_WHISKIES_SUCCESS, which will be dispatched after the Observable is returned from the epic.

.catch(error => Observable.of(fetchWhiskiesFailure(error.message))) catches any errors that might have happened and returns an Observable of the failure action.
This Observable is then propagated through switchMap, which again merges it into the outer Observable, and we get an action of type FETCH_WHISKIES_FAILURE in the stream. Take your time with this; it is a complicated process which, if you haven't ever touched reactive programming and RxJS, can look and sound very scary (read those links I provided above!).

After this, all we need to do is render a UI, which will have a button that dispatches the action and a grid to show the data. Let's do that: start off by creating a new folder called components and a new component called Whisky.jsx.

import React from 'react';

const Whisky = ({ whisky }) => (
    <div>
        <img style={{ width: '300px', height: '300px' }} src={whisky.imageUrl} />
        <h3>{whisky.title}</h3>
    </div>
);

export default Whisky;

This component simply renders a single whisky item: its image and title. (Please, for the love of God, never use inline styles. I'm using them here only because it's a simple example.)

Now we want to render a grid of whisky elements. Let's create a new component called WhiskyGrid.jsx.

import React from 'react';
import Whisky from './Whisky';

const WhiskyGrid = ({ whiskies }) => (
    <div style={{ display: 'grid', gridTemplateColumns: '1fr 1fr 1fr' }}>
        {whiskies.map(whisky => (<Whisky key={whisky.id} whisky={whisky} />))}
    </div>
);

export default WhiskyGrid;

WhiskyGrid leverages CSS Grid to create a layout of three elements per row; it simply takes the whiskies array, which we will pass in as props, and maps each whisky to a Whisky component.
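Both components above are stateless functions of their props, which is what makes them so easy to reason about. Ignoring JSX for a moment, you can think of each one as a pure mapping from props to an element tree. Here is a hypothetical plain-JS analogue (the el helper is a stand-in for React.createElement, not real React):

```javascript
// A stand-in for React.createElement: just build a plain tree description.
const el = (type, props, ...children) => ({ type, props, children });

// Whisky as a pure function of its props (mirrors the JSX version above).
const Whisky = ({ whisky }) =>
  el('div', null,
    el('img', { src: whisky.imageUrl }),
    el('h3', null, whisky.title));

// WhiskyGrid maps each whisky in the props to a Whisky "element".
const WhiskyGrid = ({ whiskies }) =>
  el('div', { display: 'grid' }, ...whiskies.map(w => Whisky({ whisky: w })));

const tree = WhiskyGrid({
  whiskies: [
    { id: 1, title: 'Lagavulin', imageUrl: 'a.jpg' },
    { id: 2, title: 'Talisker', imageUrl: 'b.jpg' },
  ],
});

console.log(tree.children.length); // 2 (one child tree per whisky)
console.log(tree.children[0].children[1].children[0]); // Lagavulin
```

Same props in, same tree out, every time; React's actual rendering adds diffing and DOM updates on top of this idea.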
Now let's take a look at our App.js:

import React, { Component } from 'react';
import { connect } from 'react-redux';
import { bindActionCreators } from 'redux';
import './App.css';
import { fetchWhiskies } from './actions';
import WhiskyGrid from './components/WhiskyGrid';

class App extends Component {
    render() {
        const { fetchWhiskies, isLoading, error, whiskies } = this.props;
        return (
            <div className="App">
                <button onClick={fetchWhiskies}>Fetch whiskies</button>
                {isLoading && <h1>Fetching data</h1>}
                {!isLoading && !error && <WhiskyGrid whiskies={whiskies} />}
                {error && <h1>{error}</h1>}
            </div>
        );
    }
}

const mapStateToProps = state => ({ ...state });

const mapDispatchToProps = dispatch => bindActionCreators({ fetchWhiskies }, dispatch);

export default connect(mapStateToProps, mapDispatchToProps)(App);

As you can see, there are a lot of modifications here. First we have to bind the Redux store and action creators to the component's props; we use the connect HOC from react-redux to do so. After that, we create a div with a button whose onClick is set to call the fetchWhiskies action creator, now bound to dispatch. Clicking the button will dispatch the FETCH_WHISKIES action, and our Redux Observable epic will pick it up, thus calling the API. Next we have a condition: if the isLoading property in the Redux store is true (FETCH_WHISKIES has been dispatched but has neither completed nor thrown an error), we show a "Fetching data" message. If the data is not loading and there is no error, we render the WhiskyGrid component and pass the whiskies from Redux as a prop. If error is not null, we render the error message.

Conclusion

Going reactive is not easy. It presents a completely different programming paradigm and forces you to think in a different manner. I will not say that functional is better than object-oriented, or that going reactive is the best. The best programming paradigm, in my opinion, is a combination of paradigms.
However, I do believe that Redux Observable provides a great alternative to other async Redux middleware, and once you get past the learning curve, you are gifted with an amazing, natural method of handling asynchronous events. If you have any questions please ask in the comments! If this gets enough interest we can look into delaying and cancelling actions. Cheers :)

Amazing article Andrej! I wanted to reach out and see if you were interested in co-publishing it on gitconnected (levelup.gitconnected.com/). We focus on web development topics, especially JavaScript, React, and Redux, and we have a very passionate reader base. We think that we could also help gain exposure for both yourself and your articles. Please let us know!

Need a bit of help with redux-observable; maybe someone can help and clear things up for me. For example: a component has internal state, and I want to toggle that inner state after a specific action. With thunk I was able to .then and toggle, but I'm not sure how to do it with observables.

Very good post, it was very helpful for me. Thanks

Very good post. Thanks

Very clear & well-written. Thanks for taking the time to share your thorough explanation, Andrej!
https://dev.to/andrejnaumovski/async-actions-in-redux-with-rxjs-and-redux-observable-efg
I have two sites: site1 and site2. I want to render some content from site2 on site1. The content is different depending on whether the user is logged in to site2. For example, display a Login button when the user is not logged in, and a My Account link when the user is logged in. I can inject content into the page on site1 by including a JavaScript generated from site2:

<script type="text/javascript" src="

This works because when the JavaScript is requested from site2, the session cookie is sent along with the request, which is used by site2 to generate different content depending on the session status. However, for IE 9, this scheme breaks when site1 and site2 are in different security zones and IE protected............

The XML schema for a contact might look like this:

<?xml version="1.0" encoding="UTF-8"?> <schema ...

With XML digital signatures, a Signature element is inserted inside the contact element after the contact file is signed, like this:

<?xml version="1.0" encoding="UTF-8" standalone="n...

which no longer validates against the original schema. The schema should be updated (with the addition of the digital signature namespace, schema import and Signature ref):

<?xml version="1.0" encoding="UTF-8"?> <schema ...
http://www.xinotes.net/notes/keywords/http/type/which/
In this article, we will take a look at two of the most popular web frameworks in Python: Django and Flask. We will cover how each of these frameworks compares when looking at their learning curves and how easy it is to get started with them. Next, we'll look at how these two stand against each other, concluding with when to use one or the other.

Getting Started

One of the easiest ways to compare two frameworks is by installing them and taking note of how easily a user can get started, which is exactly what we will do next. We will try setting up Django and Flask on a Linux machine and create an app with each to see how easy (or difficult) the process is.

Setting up Django

In this section, we will set up Django on a Linux-powered machine. The best way to get started with any Python framework is by using virtual environments. We will install it using pip.

$ sudo apt-get install python3-pip
$ pip3 install virtualenv
$ virtualenv --python=`which python3` ~/.virtualenvs/django_env

Note: If the pip3 command gives you an error, you may need to prefix it with sudo to make it work.

Once we're done setting up our virtual environment, which we've named django_env, we must activate it to start using it:

$ source ~/.virtualenvs/django_env/bin/activate

Once activated, we can finally install Django:

$ pip install Django

Suppose our project is called mysite. Make a new directory, enter it, and run the following commands:

$ mkdir mysite
$ cd mysite
$ django-admin startproject mysite

If you inspect the resulting project, your directory structure will be shown as:

mysite/
    manage.py
    mysite/
        __init__.py
        settings.py
        urls.py
        wsgi.py

Let's take a look at what is significant about each of the directories and files that were created.
- The root mysite/ directory is the container directory for our project
- manage.py is a command line tool that enables us to work with the project in different ways
- The inner mysite/ directory is the Python package of our project code
- mysite/__init__.py is a file which informs Python that the current directory should be considered a Python package
- mysite/settings.py contains the configuration properties for the current project
- mysite/urls.py is a Python file which contains the URL definitions for this project
- mysite/wsgi.py acts as an entry point for a WSGI web server that forwards requests to your project

From here, we can actually run the app using the manage.py tool. The following command does some system checks, checks for database migrations, and some other things before actually running your server:

$ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).

You have unapplied migrations; your app may not work properly until they are applied.
Run 'python manage.py migrate' to apply them.

September 20, 2017 - 15:50:53
Django version 1.11, using settings 'mysite.settings'
Starting development server at
Quit the server with CONTROL-C.

Note: Running your server in this way is meant for development only, not production environments.

To check out your app, head to the development server's address, where you should see a page saying "It worked!". But wait, you're still not done! To actually create any pages/functionality in your site, you need to create an app within your project. But why do you need an app? In Django, apps are web applications that do something, which could be a blog, a forum, or a commenting system. The project is a collection of your apps, as well as configuration for the apps and the entire website. So, to create your app, move into your project directory and run the following command:

$ cd mysite
$ python manage.py startapp myapp

This will create another directory structure where you can actually manage your models, views, etc.
manage.py
myapp/
    __init__.py
    admin.py
    apps.py
    migrations/
    models.py
    tests.py
    views.py
mysite/
    __init__.py
    settings.py
    urls.py
    wsgi.py

From here, you need to set up your views in views.py and URL routing in urls.py, which we'll save for another tutorial. But you get the point, right? It takes a few commands and quite a few files to get your Django project up and running.

Setting up Flask

Just like Django, we will use a virtual environment with Flask as well, so the commands for activating a virtual environment remain the same as before. After that, instead of installing Django, we'll install Flask instead:

$ pip install Flask

Once the installation completes, we can start creating our Flask application. Now, unlike Django, Flask doesn't have a complicated directory structure; the structure of your Flask project is entirely up to you. Borrowing an example from the Flask homepage, you can create a runnable Flask app from just a single file:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

And running the app is about as easy as setting it up:

$ FLASK_APP=hello.py flask run
 * Running on

Visiting the URL should display the text "Hello World!" in your browser. I'd encourage you to look for some sample apps on the Flask homepage to learn more. Learning by example is one of the best ways to get up and running quickly.

The framework that "wins" this area is really up to your needs and experience. Django may be more favorable for beginners since it makes decisions for you (i.e. how to structure your app), whereas in Flask you need to handle this yourself. On the other hand, Flask is simpler to get running since it requires very little to get going; an entire Flask app can be composed from a single file. The trade-offs really depend on what you need most.

Learning Curve

Regarding the learning curve, as we saw in the last section with Flask, it was very easy to get started.
The app doesn't require a complicated directory structure where you need to remember which directory/file does what. Instead, you can add files and directories as you go, according to your usage. This is what Flask, as a micro-framework for web development, is all about.

Django, on the other hand, has a somewhat steeper learning curve, since it is more "picky" about how things are set up and how they work. Because of this, you need to take more time learning how to compose modules and work within the confines of the framework. This isn't all bad, though, since it allows you to easily plug 3rd party components into your app without having to do any integration work.

Employability

Which of these frameworks will help you land a job? For many developers, this is one of the more important questions regarding certain libraries and frameworks: which will help me get hired?

Django has quite a few large companies on its resume, which is because many companies that use Python for web development tend to use (or at least started off with) Django to power their site. Django, being a full-fledged framework, is often used early on in development because you get many more resources and much more power with it out of the box.

Here are just a few companies that use (or have used) Django for their sites:

- Disqus
- NASA

Flask is a bit harder to gauge here, mostly because of the way it is used. Flask tends to be used more for microservices, which makes it harder to tell which companies are using it. Plus, companies with a microservice architecture are less likely to say their service is "powered by Flask", since they likely have many services potentially using many different frameworks.

There are, however, hints out there of who uses Flask, based on job postings, tech talks, blog posts, etc.
From those, we know that the following companies have used Flask somewhere in their backend infrastructure:

- Twilio
- Uber
- Mailgun

While Django may be more popular among companies, Flask is arguably more common among the more tech-focused ones, as they are more likely to use microservices, and therefore micro-frameworks like Flask.

Project Size and Scope

Our comparison of each framework can become very subjective thanks to many different factors, like project scope, developer experience, type of project, etc. If the project is fairly small and doesn't need all of the overhead that Django comes with, then Flask is the ideal choice to get started and get something done very quickly. However, if the project is larger in duration and scope, then Django is likely the way to go, as it already includes much of what you need.

This basically means that many common components of a web service/website either come with Django out of the box or are already available through 3rd party open source software. In some cases you can just create a Django project, plug in a bunch of components, create your views/templates, and you're done.

While we do praise Django for its extensibility, we can't ignore that Flask has some extensions of its own. While they aren't quite as big in scope as Django's (and many of these extensions come standard in Django), it's a step in the right direction. Django's add-on components range from something as big as a blog add-on to something as small as input-validation middleware. Most of Flask's extensions are small middleware components, which is still better than nothing and very helpful, considering the average size of Flask projects.

Limitations

Every piece of tech has its problems, and these frameworks are no different. So before you choose which to use, you might want to know the disadvantages each has, which we'll talk about in this section.

Django

So, what are the aspects of Django that work against it being selected as your framework of choice?
Django is a very large project. Once a developer, especially a beginner, starts learning Django, it's easy to get lost in the source code, the built-in features, and the components it provides, without ever using them in an app.

Django is a fairly large framework to deploy for simple use-cases, as it hides much of the control from you. If you want to use something that isn't "standard" in Django, then you have to put in some extra work to do so.

Understanding components in Django can be a little difficult and tricky at times, and can lead to tough decisions, like deciding whether an existing component will work for your use-case, or whether it'll end up causing you more work than it is worth.

Flask

Now that we've seen some of the problems with Django, let's not forget about Flask. Since the Flask framework is so small, there isn't a lot to complain about. Well, except for that fact right there: it's so small.

Flask is a micro-framework, which means it only provides the bare-bones functionality to get you started. This doesn't mean it can't be powerful and can't scale; it just means that you'll have to create much of the functionality of your service yourself. You'll need to handle integrating your database, data validation, file serving, etc. While this could be considered an advantage for those who want control over everything, it also means it'll take you longer to get set up with a fully-functional website.

Choosing Flask or Django

While it's easy to talk about what each framework does and doesn't do, let's try to make a more direct comparison of each, which we'll do in this section.

When simplicity is a factor, Flask is the way to go. It allows much more control over your app and lets you decide how you want to implement things in a project.
In contrast to this, Django provides a more inclusive experience, such as a default admin panel for your data, an ORM on top of your database, and protection against things like SQL injection, cross-site scripting, CSRF, etc.

If you put a lot of emphasis on community support, then Django is probably better in this regard, given its history. It has been around since 2005, whereas Flask was created in 2010. At the time of writing this article, Django has about 3.5x more questions/answers on Stack Overflow than Flask (about 2,600 Django questions to Flask's 750).

The Flask framework is relatively lightweight. In fact, it's almost 2.5x smaller than Django in terms of the amount of code. That's a big difference, especially if you need to understand the inner workings of your web framework. In this respect, Flask will be much easier for most developers to read and understand.

Flask should be selected if you need complete control over your app, over which ORM you want to use, and over which database you need to integrate with, along with excellent opportunities to learn more about web services. Django, on the other hand, is better when there is a more clear path to creating what you want, or you're creating something that has been done before. For example, a blog would be a good use-case for Django.

Learn More

Want to learn more about either of these frameworks? There are quite a few resources out there. Here are a few courses that I've found to be pretty helpful, and that will get you up to speed much quicker:

Python and Django Full Stack Web Developer Bootcamp
REST APIs with Flask and Python

Otherwise you can also get a great start by visiting each framework's respective website. Either way, the most important thing is to actually try them out, work through some examples, and decide on your own which is best for you.

Conclusion

In this article, we compared the two web frameworks, Django and Flask, by looking at their different properties and setting up a simple "Hello World!"
app with each one. You may find that if you're new to web development and decide to learn Django, it may take you a bit longer to truly understand what all of the underlying components do, and how to change them to actually do what you want. But there are many positives as well: once you become proficient with Django, it'll end up saving you a lot of time, given its huge list of components and vast community support.

A more advanced comparison of any frameworks can only be done with advanced use cases and scenarios. Just know that you can't really go wrong with either one, and learning either will set you up well for finding a job.

If you need a recommendation, then I'd personally go with Flask. By learning a framework that doesn't hide so much from you, you can learn much, much more. Once you have a better understanding of the core concepts of web development and HTTP, you can start to use add-ons that abstract this away from you. But having that solid foundation of understanding is more important early on, in my opinion.

Which framework do you use, and why? Let us know in the comments!
https://stackabuse.com/flask-vs-django/
How do I configure or add a voice command? I'll be using an Android phone to speak over it. I already tried adding a String item to my items and writing my rules, but nothing happens when I speak.

Openhab2 voice command

Here's an OH1 guide, but I believe you would follow the same principles in OH2.

I have done this already and nothing happens when I speak! I'm using an RPi3 with the latest Debian and OH2, and for my Android phone I'm using HABPanel with Google voice commands.

I'm tinkering with this at the moment myself. I use HABDroid on my phone (can't see HABPanel in the Android app store). For me, once the Google voice thing (which is SLOW) finishes, the HABDroid app shows the recognised phrase, and from there it's just following your voice command rule to see if it should be matching based on keywords in that phrase. Or is HABPanel just a web UI option for Chrome on Android? If so, that may be where it's failing: at the point of getting the Google-recognised phrase into openHAB.

I think I said that wrong. I'm using a phone website which I have saved to my home screen, and I'm trying to use voice commands from HABPanel! My bad about the wrong info...

This is what I have in my items:

Switch ColorLED { gpio="pin:12" }
Switch RoomLIGHT { gpio="pin:16" }
Switch LED { gpio="pin:20" }
Switch TestB { gpio="pin:21" }
Switch TestD { gpio="pin:26" }
Switch TestE { gpio="pin:19" }
String VoiceCommand

And this is what I have in my rules:

import org.openhab.model.script.actions.*
import org.openhab.core.library.types.*
import java.util.*

rule "test example rule name"
when
    Item VoiceCommand received command
then
    var String command = VoiceCommand.state.toString.toLowerCase
    logInfo("Voice.Rec", "VoiceCommand received " + command)
    if (command.contains("turn on Led") {
        LED.sendCommand(ON)
    } else if (command.contains("turn off led") {
        LED.sendCommand(OFF)
    }
end

I got this from searching online, and I have my OH2 website saved to my home screen from the phone browser so I can use HABPanel.
If it works from that, then the browser-based HABPanel is at fault (my guess).

That's what I have in my items:

    Switch ColorLED  { gpio="pin:12" }
    Switch RoomLIGHT { gpio="pin:16" }
    Switch LED       { gpio="pin:20" }
    Switch TestB     { gpio="pin:21" }
    Switch TestD     { gpio="pin:26" }
    Switch TestE     { gpio="pin:19" }
    String VoiceCommand

And that's what I have in my rules:

    import org.openhab.model.script.actions.*
    import org.openhab.core.library.types.*
    import java.util.*

    rule "test example rule name"
    when
        Item VoiceCommand received command
    then
        var String command = VoiceCommand.state.toString.toLowerCase
        logInfo("Voice.Rec", "VoiceCommand received " + command)
        if (command.contains("turn on Led") {
            LED.sendCommand(ON)
        } else if (command.contains("turn off led") {
            LED.sendCommand(OFF)
        }
    end

I got this from searching online, and I have my OH2 website saved to my home screen from the phone browser so I can use HABPanel.

Hi there, I know this is likely to be a bit late for you considering the date, but I will post here anyway as this thread is placed highly on Google. I was having the same issue as you and I think I was even following the same example. The way I got my setup to work was to add an additional ) in the rule. Here is my full rule:

    rule "Light Control"
    when
        Item VoiceCommand received command
    then
        var String command = VoiceCommand.state.toString.toLowerCase
        logInfo("Voice.Rec", "VoiceCommand received " + command)
        if (command.contains("lights on")) {
            sendCommand(Light, ON)
        } else if (command.contains("lights off")) {
            sendCommand(Light, OFF)
        }
    end

Hi, where did you get those images from? How do you make them? Thanks.

This is HABPanel, and the pictures I posted are just from the openHAB2 server, via HABPanel customize and edit.

I know it's "a bit" later but, as @lmarinen_946Matt said, this is the first post in a Google search, and these are my two cents.
It's necessary to enable/use Google Voice, otherwise you won't see the mic icon in the openHAB Android app, and therefore you won't be able to speak any command. Bye!

I still couldn't get it to work despite reading all of this. I am using the openHAB Android app and I can see the microphone icon in the top right corner. I can speak a command and can see that it was understood correctly. In the config, I have

    String VoiceCommand

and

    rule "Voice Control"
    when
        Item VoiceCommand received update
    then
        val txt = VoiceCommand.state.toString.toLowerCase
        logInfo("Test", "VoiceCommand received " + txt)
        say(txt)
    end

but this rule never triggers. I can trigger it by using Karaf to send an update to the item, like:

    smarthome:update VoiceCommand test

Please make sure the Default Human Language Interpreter is set to Rule-based Interpreter, and select the correct item here: Rule Voice Interpreter => Configure => select the correct item.

Nice, that worked!
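Pulling the thread's fixes together, a cleaned-up version of the original rule might look like the sketch below. This assumes an item named LED as in the earlier posts; note the extra closing parenthesis on each if, plain ASCII quotes (curly quotes will not parse in the rules DSL), and comparing only against lowercase phrases, since the first rule in the thread checked for "turn on Led" after lowercasing the command, which can never match:

```
rule "Voice Control"
when
    Item VoiceCommand received command
then
    // command is lowercased, so match against lowercase phrases only
    val command = VoiceCommand.state.toString.toLowerCase
    logInfo("Voice.Rec", "VoiceCommand received " + command)
    if (command.contains("turn on led")) {
        LED.sendCommand(ON)
    } else if (command.contains("turn off led")) {
        LED.sendCommand(OFF)
    }
end
```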
https://community.openhab.org/t/openhab2-voice-command/30184
React Video Thumbnail

Given a video url, attempt to generate a video thumbnail using the HTML Canvas element.

Note: the `<canvas>` element will only be able to generate a thumbnail if CORS allows it. If not, you may see a console error similar to:

```
DOMException: Failed to execute 'toDataURL' on 'HTMLCanvasElement': Tainted canvases may not be exported.
```

Please read about Cross-Origin Resource Sharing (CORS) if you would like more details on how it works.

Installation

```
npm install --save-dev react-video-thumbnail
```

OR

```
git clone
```

Usage

```
import VideoThumbnail from 'react-video-thumbnail'; // use npm published version
...
<VideoThumbnail
  videoUrl=""
  thumbnailHandler={(thumbnail) => console.log(thumbnail)}
  width={120}
  height={80}
/>
```

Properties

| Prop Name | Type | Default | Description |
| --- | --- | --- | --- |
| videoUrl (Required) | string | | The url of the video you want to generate a thumbnail from |
| cors | bool | false | Whether or not to set the crossOrigin attribute to anonymous |
| height | int | | Resize thumbnail to specified height |
| renderThumbnail | bool | true | Whether to render an image tag and show the thumbnail or not |
| snapshotAtTime | int | 2 | The second at which the component should snap the image |
| thumbnailHandler | func | | Callback function that takes the thumbnail url as an argument |
| width | int | | Resize thumbnail to specified width |

Note: the longer the snapshotAtTime, the more video data it may have to download.

Examples

Contributors

- mike trieu @brothatru
https://npmtrends.com/react-video-thumbnail
Created on 2008-04-15 11:57 by timehorse, last changed 2015-03-18 09:26 by abacabadabacaba.

I am very sorry to report (at least for me) that, as of this moment, item 9), although not yet complete, is stable and able to pass all the existing python regexp tests, yet still trails the old engine on speed. Because these tests are timed, I am using the timings from the first suite of tests to benchmark the old code against the new. Based on discussion with Andrew Kuchling, I have decided, for the sake of simplicity, that the "timing" of each version is the absolute minimum observed execution time, because that run would have had the most continuous CPU cycles and thus most closely represents the true execution time. Here is where we currently stand:

Old Engine: 6.574s
New Engine: 7.239s

This makes the old engine 665ms faster over the entire first test_re.py suite, or 9% faster than the new engine. It is this conclusion that saddens me, not the effort, which has been valuable in understanding the current engine. Indeed, I understand the current engine now well enough that I could proceed with the other modifications as-is rather than implementing them with the new engine. Mind you, I will likely not bring over the copious comments that the new engine received when I translated it to a form without C macros and gotos, as that would require too much effort IMHO. All that being said, and keeping in mind that I am not 100% satisfied with the new engine and may still be able to wring some timing out of it (not that I will spend much more time on this), here are the modifications so far for item 9) in _sre.c, plus some small modifications to sre_constants.h which are only there to get _sre.c to compile; normally sre_constants.h is generated by sre_constants.py, so this is not the final version of that file.
I also would have intended to make SRE_CHARSET and SRE_COUNT use lookup tables as well, but likely no others. I also want to move alloc_pos out of the self object and make it a parameter to ALLOC, and probably get rid of the op_code attribute, since it is only used in one place to save one subtract in a very rare case. But I want to resolve the 10% problem first, so I would appreciate it if people could look at the REMOVE_SRE_MATCH_MACROS section of code, compare it to the non-REMOVE_SRE_MATCH_MACROS version of SRE_MATCH, and see if you can suggest anything to make the former (new code) faster, to get me that elusive 10%.

Here is a patch to implement item 7). This simple patch adds (?P#...)-style comment support.

> These features are to bring the Regexp code closer in line with Perl 5.10

Why 5.1 instead of 5.8 or at least 5.6? Is it just a scope-creep issue?

> as well as add a few python-specific

because this also adds to the scope.

> 9) C-Engine speed-ups. ...
> a number of Macros are being eliminated where appropriate.

Be careful on those, particularly on str/unicode and different compile options.

> > These features are to bring the Regexp code closer in line
> > with Perl 5.10
> Why 5.1 instead of 5.8 or at least 5.6? Is it just a scope-creep issue?

5.10.0 comes after 5.8 and is the latest version (2007/12/18)! Yes, it is confusing.

Thanks Jim for your thoughts! Amaury has already explained about Perl 5.10.0. I suppose it's like Macintosh version numbering, since Mac Tiger went from version 10.4.9 to 10.4.10 and 10.4.11 a few years ago. Maybe we should call Python 2.6 Python 2.06 just in case. But 2.6 is the known last in the 2 series, so it's not a problem for us! :)

>> as well as add a few python-specific
> because this also adds to the scope.

At this point the only python-specific changes I am proposing would be items 2, 3 (discussed below), 5 (discussed below), 6 and 7.
6 is only a documentation change; the code is already implemented. 7 is just a better behavior. I think it is RARE that one compiles more than 100 unique regular expressions, but you never know, as projects tend to grow over time, and in the old code the 101st would be recompiled even if it was just compiled 2 minutes ago. The patch is available, so I leave it to the community to judge for themselves whether it is worth it, but as you can see, it's not a very large change.

>>.

Well, I think named matches are better than numbered ones, so I'd definitely go with 2. The problem with 2, though, is that it still leaves the rather typographically intense m.group(n), since I cannot write m.3. However, since capture groups are always numbered sequentially, it models a list very nicely. So I think for indexing by group number, the subscripting operator makes sense. I was not originally suggesting m['foo'] be supported, but I can see how that may come out of 3. But there is a restriction that python named matches must be valid python identifiers, and that strikes me as belonging to 2 more than 3, because 3 would not require such a restriction but 2 would. So at least I want 2, but it seems IMHO m[1] is better than m.group(1), and it is not in the least a hard or confusing way of retrieving the given group. Mind you, the Match object is a C struct with python bindings, and I'm not exactly sure how to add either feature to it, but I'm sure the C-API manual will help with that.

>>?

Well, Larry Wall and Guido agreed long ago that we, the python community, own all expressions of the form (?P...), and although it would be my preference to make (?#...) more in conformance with understanding parenthesis nesting, changing the logic behind THAT would make python non-standard. So as far as any conflicting design, we needn't worry. As for speed, this all occurs in the parser and does not affect the compiler or engine.
It occurs only after a (?P has been read, and then only as the last check before failure, so it should not be much slower except when the expression is invalid. The actual execution time to find the closing brace of (?P#...) is a bit slower than that for (?#...), but not by much. Verbose is generally a good idea for anything more than a trivial Regular Expression. However, it can have overhead if not included as the first flag: an expression is always checked for verbose post-compilation, and if it is encountered, the expression is compiled a second time, which is somewhat wasteful.

But the reason I like (?P#...) over (?#...) is that I think people would tend to assume r'He(?# 2 (TWO) ls)llo' should match "Hello", but it doesn't. That expression only matches "He ls)llo", so I created (?P#...) to make the comment match type more intuitive: r'He(?P# 2 (TWO) ls)llo' matches "Hello".

>> 9) C-Engine speed-ups. ...
>> a number of Macros are being eliminated where appropriate.
> Be careful on those, particularly on str/unicode and different
> compile options.

Will do; thanks for the advice! I have only observed the UNICODE flag controlling whether certain code is used (besides the ones I've added) and have tried to stay true to that when I encounter it. Mind you, unless I can get my extra 10%, it's unlikely I'd actually go with item 9 here, even if it is easier to read IMHO. However, I want to run the new engine proposal through gprof to see if I can track down some bottlenecks. At some point, I hope to get my current changes on Launchpad if I can get that working. If I do, I'll give a link to how people can check out my working code here as well.

Python 2.6 isn't the last, but Guido has said that there won't be a 2.10.
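The non-nesting behaviour of (?#...) is easy to check against today's re module. This quick sketch (my own illustrative pattern, not from the issue) shows that the comment ends at the first ')' regardless of any '(' inside it:

```python
import re

# The '(' inside the comment does not nest: the comment ends at the
# first ')', so this pattern is effectively just r'AB'.
assert re.match(r'A(?#com(ment)B', 'AB')

# Adding a "matching" second ')' leaves a stray ')' in the pattern,
# which is a syntax error rather than part of the comment.
try:
    re.compile(r'A(?#com(ment))B')
    balanced = True
except re.error:
    balanced = False
assert balanced is False
```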
> Match object is a C-struct with python binding
> and I'm not exactly sure how to add either feature to it

I may be misunderstanding -- isn't this just a matter of writing the function and setting it in the tp_as_sequence and tp_as_mapping slots?

> Larry Wall and Guido agreed long ago that we, the python
> community, own all expressions of the form (?P...)

Cool -- that reference should probably be added to the docs. For someone trying to learn or translate regular expressions, it helps to know that (?P ...) is explicitly a python extension (even if Perl adopts it later).

Definitely put the example in the doc: r'He(?# 2 (TWO) ls)llo' should match "Hello" but it doesn't. Maybe even without the change, as documentation of the current situation.

Does VERBOSE really have to be the first flag, or does it just have to be on the whole pattern instead of an internal switch? I'm not sure I fully understand what you said about template. Is this a special undocumented switch, or just an internal optimization mode that should be triggered whenever the repeat operators don't happen to occur?

I don't know anything about regexp implementation, but if you replace a switch-case with a function lookup table, it isn't surprising that the new version ends up slower. A local jump is always faster than a function call, because of the setup overhead and stack manipulation the latter involves. So you might try to do the cleanup while keeping the switch-case structure, if possible.
I am making my changes in a Bazaar branch hosted on Launchpad. It took me quite a while to get things set up more-or-less logically but there they are and I'm currently trying to re-apply my local changes up to today into the various branches I have. Each of the 11 issues I outlined originally has its own branch, with a root branch from which all these branches are derived to serve as a place for a) merging in python 2.6 alpha concurrent development (merges) and to apply any additional re changes that don't fall into any of the other categories, of which I have so far found only 2 small ones. Anyway, if anyone is interested in monitoring my progress, it is available at: I will still post major milestones here, but one can monitory day-to-day progress on Launchpad. Also on launchpad you will find more detail on the plans for each of the 11 modifications, for the curious. Thanks again for all the advice! I am finally making progress again, after a month of changing my patches from my local svn repository to bazaar hosted on launchpad.net, as stated in my last update. I also have more or less finished the probably easiest item, #5, so I have a full patch for that available now. First, though, I want to update my "No matter what" patch, which is to say these are the changes I want to make if any changes are made to the Regexp code. AFAIK if you have a regex with named capture groups there is no direct way to relate them to the capture group numbers. You could do (untested; Python 3 syntax): d = {v: k for k, v in match.groupdict()} for i in range(match.lastindex): print(i, match.group(i), d[match.group(i)]) One possible solution would be a grouptuples() function that returned a tuple of 3-tuples (index, name, captured_text) with the name being None for unnamed groups. Anyway, good luck with all your improvements, I will be especially glad if you manage to do (2) and (8) (and maybe (3)). 
Mark scribbled:
> One possible solution would be a grouptuples() function that returned
> a tuple of 3-tuples (index, name, captured_text) with the name being
> None for unnamed groups.

Hmm. Well, that's not a bad idea at all IMHO, and it would, AFAICT, probably be easier to do than (2). I would still do (2), but I will try to add that to one of the existing items or spawn another item for it, since it is kind of a distinct feature. My preference right now is to finish off the test cases for (7) because it is already coded, then finish the work on (1) as that was the original reason for modification, then move on to (2) and then (3) as they are related, and then I don't mind tackling (8), because I think that one shouldn't be too hard. Interestingly, the existing engine code (sre_parse.py) has a placeholder, commented out, for character classes, but it was never properly implemented. And I will warn that with Unicode, I THINK all the character classes exist as unicode functions or can be implemented as multiple unicode functions, but I'm not 100% sure; if I run into that problem, some character classes may initially be left out while I work on another item. Anyway, thanks for the input, Mark!

Well, it's time for another update on my progress. Some good news first: Atomic Grouping is now completed, tested and documented, and as stated above, is classified as issue2636-01 and related patches. Secondly, with caveats listed below, Named Match Group Attributes on a match object (item 2) is also more or less complete at issue2636-02 -- it only lacks documentation.

Now, I want to also update my list of items. We left off at 11: Other Perl-specific modifications. Since that time, I have spawned a number of other branches, the first of which (issue2636-12) I am happy to announce is also complete!

12) Implement the changes to the documentation of re as per Jim J. Jewett's suggestion from 2008-04-24 14:09. Again, this has been done.
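As an aside for readers on current Python: until native atomic groups are available, the standard trick is to emulate (?>...) with a capturing group inside a lookahead, since the engine never backtracks into a lookahead once it has succeeded. A quick sketch of the classic a*a example:

```python
import re

# Ordinary group: backtracking lets (a*) give back one 'a', so this matches.
assert re.match(r'(a*)a', 'aaa')

# Emulated atomic group (?>a*): the lookahead captures 'aaa' and is then
# locked in by the backreference, so the final 'a' can never match.
assert re.match(r'(?=(a*))\1a', 'aaa') is None
```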
14) As per PEP-3131 and the move to Python 3.0, python will begin to allow full UNICODE-compliant identifier names. Correspondingly, it would be the responsibility of this item to allow UNICODE names for match groups. This would allow retrieval of UNICODE names via the group* functions or, when combined with Item 3, the getitem handler (m[u'...']) (03+14), and the attribute name itself (e.g. getattr(m, u'...')) when combined with item 2 (02+14).

15) Change the Pattern_Type, Match_Type and Scanner_Type (experimental) to become richer Python Types. Specifically, add __doc__ strings to each of these types' methods and members.

16) Implement various FIXMEs.

16-1) Implement the FIXME such that if m is a MatchObject, del m.string will disassociate the original matched string from the match object; string would be the only member that would allow modification or deletion, and you will not be able to modify the m.string value, only delete it.

-----

Finally, I want to make a couple of notes about Item 2. Firstly, as noted in Item 14, I wish to add support for UNICODE match group names, and the current version of the C code would not allow that; it would only make sense to add UNICODE support if 14 is implemented, so adding support for UNICODE match object attributes would depend on both items 2 and 14. Thus, that would be implemented in issue2636-02+14. Secondly, there is a FIXME which I discussed in Item 16; I gave that problem its own item and branch. Also, as stated in Item 15, I would like to add more robust help code to the Match object and bind __doc__ strings to the fixed attributes. Although this would not directly affect the Item 2 implementation, it would probably involve moving some code around in its vicinity.

Personally, I like option a) because, if Item 3 is implemented, it makes a fairly useful shorthand for retrieving keyword names when a keyword is used for a name.
Also, we could put a deprecation warning in, to inform users that match group names that are keywords of the Match Object will eventually be disallowed. However, I don't support restricting the match group names any more than they already are (they must be a valid python identifier only), so again I would go with a) and nothing more, and that's what's implemented in issue2636-02.patch.

-----

Now, rather than posting umpteen patch files, I am posting one bz2-compressed tar of ALL patch files for all threads, where each file is of the form:

    issue2636(-\d\d|+\d\d)*(-only)?.patch

For instance, issue2636-01.patch is the p1 patch that is a difference between the current Python trunk and all that would need to be implemented to support Atomic Grouping / Possessive Qualifiers. Combined branches are joined with a PLUS ('+') and sub-branches concatenated with a DASH ('-'). Thus, "issue2636-01+09-01-01+10.patch" is a patch which combines the work from Item 1: Atomic Grouping / Possessive Qualifiers, the sub-sub-branch of Item 9: Engine Cleanups, and Item 10: Shared Constants. Item 9 has both a child and a grandchild: the child (09-01) is my proposed engine redesign with the single loop; the grandchild (09-01-01) is the redesign with the triple loop. Finally, the optional "-only" suffix means that the diff is against the core SRE modifications branch and thus does not include the core branch changes.

As noted above, Items 01, 02, 05, 07 and 12 should be considered more or less complete and ready for merging, assuming I don't identify in my implementation of the other items that I neglected something in these. The rest, including the combined items, are all provided in the given tarball.

Sorry, as I stated in the last post, I generated the patches and then realized that I was missing the documentation for Item 2, so I have updated the issue2636-02.patch file and am attaching that separately until the next release of the patch tarball.
issue2636-02-only.patch should be ignored; I will only regenerate it with the correct documentation in the next tarball release, so I can move on to either Character Classes or Relative Back-references. I want to pause Item 3 for the moment because 2, 3, 13, 14, 15 and 16 all seem closely related, and I need a break to allow my mind to wrap around the big picture before I try to tackle each one.

[snip]

>). :-)

[snip]

>.

I don't like the prefix ideas, and now that you've spelt it out, I don't like that sometimes m.foo will work and sometimes it won't. So I prefer m['foo'] to be the canonical way, because that guarantees your code is always consistent.

------------------------------------------------------------

BTW I wanted to do a simple regex to match a string that might or might not be quoted, and that could contain quotes (but not those used to delimit it). My first attempt was illegal:

    (?P<quote>['"])?([^(?=quote)])+(?(quote)(?=quote))

It isn't hard to work around, but it did highlight the fact that you can't use captures inside character classes. I don't know if Perl allows this; I guess if it doesn't, then Python shouldn't either, since GvR wants the engine to be Perl compatible. Thanks.

[snip]

It seems to me that both using a special prefix and adding an option add a lot of baggage and will increase the learning curve. The nice thing about (3) (even without slicing) is that it seems a very natural extension. But (2) seems magical (i.e., Perl-like rather than Pythonic), which I really don't like.

BTW I just noticed this:

    >>> "{0!r}".format(rx)
    '<_sre.SRE_Pattern object at 0x9ded020>'
    >>> "{0!s}".format(rx)
    '<_sre.SRE_Pattern object at 0x9ded020>'
    >>> "{0!a}".format(rx)
    '<_sre.SRE_Pattern object at 0x9ded020>'

That's fair enough, but maybe for !s the output should be rx.pattern? See also #3825.
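For what it's worth, a working variant of the optionally-quoted idea is possible with a conditional group and a backreference. This is only a simplified sketch (the body here is just \w+ rather than arbitrary text, so it sidesteps the capture-inside-character-class problem the post describes):

```python
import re

# (?P<quote>...)? optionally captures an opening quote; the conditional
# (?(quote)...) then demands the matching closing quote only if one was opened.
pat = re.compile(r'(?P<quote>["\'])?(?P<body>\w+)(?(quote)(?P=quote))')

assert pat.fullmatch('"hi"').group('body') == 'hi'
assert pat.fullmatch("'hi'").group('body') == 'hi'
assert pat.fullmatch('hi').group('body') == 'hi'
# A mismatched or unclosed quote does not fullmatch.
assert pat.fullmatch('"hi') is None
```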
Update 16 Sep 2008: Based on the work for issue #3825, I would like to simply update the item list as follows:

1) Atomic Grouping / Possessive Qualifiers (see also Issue #433030) [Complete]
2) Match group names as attributes (e.g. match.foo) [Complete save issues outlined above]
3) Match group indexing (e.g. match['foo'], match[3])
4) Perl-style back-references (e.g. compile(r'(a)\g{-1}')), and possibly adding the r'\k' escape sequence for keywords.
5) Parenthesis-Aware Python Comment (e.g. r'(?P#...)') [Complete]
6) Expose support for Template expressions (expressions without repeat operators), adding test cases and documentation for existing code.
7) Larger compiled Regexp cache (256 vs. 100) and reduced thrashing risk. [Complete]
8) Character Classes (e.g. r'[:alphanum:]')
9) Proposed Engine redesigns and cleanups (core item only contains cleanups and comments to the current design, but does not modify the design).
9-1) Single-loop Engine redesign that runs 8% slower than current. [Complete]
9-1-1) 3-loop Engine redesign that runs 10% slower than current. [Complete]
9-2) Matthew Barnett's Engine redesign as per issue #3825
10) Have all C-Python shared constants stored in 1 place (sre_constants.py) and generated by that into C constants (sre_constants.h). [Complete AFAICT]
11) Scan Perl 5.10.0 for other potential additions that could be implemented for Python.
12) Documentation suggestions by Jim J. Jewett [Complete]
13) Add a grouptuples method to the Match object (i.e. match.grouptuples() returns (<index>, <name or None>, <value>)) suitable for iteration.
14) UNICODE match group names, as per PEP-3131.
15) Add __doc__ strings and other Python niceties to the Pattern_Type, Match_Type and Scanner_Type (experimental).
16) Implement any remaining TODOs and FIXMEs in the Regexp modules.
16-1) Allow for the disassociation of a source string from a Match_Type, assuming this will still leave the object in a "reasonable" state.
17) Variable-length [Positive and Negative] Look-behind assertions, as described and implemented in Issue #3825.

---

Now, we have a combination of Items 1, 9-2 and 17 available in issue #3825, so for now, refer to that issue for the 01+09-02+17 combined solution. Eventually, I hope to merge the work between this issue and that one. I sadly admit I have made no progress on this since June, because of the difficulty of managing 30-some lines of development, some of which have complex diamond branching, e.g.:

    01 is the child of Issue2636
    09 is the child of Issue2636
    10 is the child of Issue2636
    09-01 is the child of 09
    09-01-01 is the child of 09-01
    01+09 is the child of 01 and 09
    01+10 is the child of 01 and 10
    09+10 is the child of 09 and 10
    01+09-01 is the child of 01 and 09-01
    01+09-01-01 is the child of 01 and 09-01-01
    09-01+10 is the child of 09-01 and 10
    09-01-01+10 is the child of 09-01-01 and 10
It gets confused when it sees the same changes applied in a previous merge applied again, instead of simply realizing that the change in one since last merge is EXACTLY the same change in the other since last merge so effectively there is nothing to do; instead, Bazaar gets confused and starts treating code that did NOT change since last merge as if it was changed and thus tries to role back the 01+09+10-specific changes rather than doing nothing and generates a conflict. Oh, that I could only have a version control system that understood the kind of complex branching that I require! Anyway, that's the state of things; this is me, signing out! Comparing item 2 and item 3, I think that item 3 is the Pythonic choice and item 2 is a bad idea. Item 4: back-references in the pattern are like \1 and (?P=name), not \g<1> or \g<name>, and in the replacement string are like \g<1> and \g<name>, not \1 (or (?P=name)). I'd like to suggest that back-references in the pattern also include \g<1> and \g<name> and \g<-1> for relative back-references. Interestingly, Perl names groups with (?<name>...) whereas Python uses (?P<name>...). A permissible alternative? Thanks for weighing in Matthew! Yeah, I do get some flack for item 2 because originally item 3 wasn't supposed to cover named groups but on investigation it made sense that it should. I still prefer 2 over-all but the nice thing about them being separate items is that we can accept 2 or 3 or both or neither, and for the most part development for the first phase of 2 is complete though there is still IMHO the issue of UNICODE name groups (visa-vi item 14) and the name collision problem which I propose fixing with an Attribute / re.A flag. So, I think it may end up that we could support both 3 by default and 2 via a flag or maybe 3 and 2 both but with 2 as is, with name collisions hidden (i.e. 
if you have r'(?P<string>...)' as your capture group, typing m.string will still give you the original comparison string, as per the current python documentation) but have collision-checking via the Attribute flag so that with r'(?A)(?P<string>...)' would not compile because string is a reserved word. Your interpretation of 4 matches mine, though, and I would definitely suggest using Perl's \g<-n> notation for relative back-references, but further, I was thinking, if not part of 4, part of the catch-all item 11 to add support for Perl's (?<name>...) as a synonym for Python's (?P<name>...) and Perl's \k<name> for Python's (?P=name) notation. The evolution of Perl's name group is actually interesting. Years ago, Guido had a conversation with Larry Wall about using the (?P...) capture sequence for python-specific Regular Expression blocks. So Python went ahead and implemented named capture groups. Years later, the Perl folks thought named capture groups were a neat idea and adapted them in the (?<...>...) form because Python had restricted the (?P...) notation to themselves so they couldn't use our even if they wanted to. Now, though, with Perl adapting (?<...>...), I think it inevitable that Java and even C++ may see this as the defacto standard. So I 100% agree, we should consider supporting (?<name>...) in the parser. Oh, and as I suggested in Issue 3825, I have these new item proposals: Item 18: Add a re.REVERSE, re.R (?r) flag for reversing the direction of the String Evaluation against a given Regular Expression pattern. See issue 516762, as implemented in Issue 3825. Item 19: Make various in-line flags positionally dependant, for example (?i) makes the pattern before this case-sensitive but after it case-insensitive. See Issue 433024, as implemented in Issue 3825. Item 20: All the negation of in-line flags to cancel their effect in conditionally flagged expressions for example (?-i). See Issue 433027, as implemented in Issue 3825. 
Item 21: Allow for scoped flagged expressions, i.e. (?i:...), where the flag(s) is applied to the expression within the parenthesis. See Issue 433028, as implemented in Issue 3825. Item 22: Zero-width regular expression split: when splitting via a regular expression of Zero-length, this should return an expression equivalent to splitting at each character boundary, with a null string at the beginning and end representing the space before the first and after the last character. See issue 3262. Item 23: Character class ranges over case-insensitive matches, i.e. does "(?i)[9-A]" contain '_' , whose ord is greater than the ord of 'A' and less than the ord of 'a'. See issue 5311. And I shall create a bazaar repository for your current development line with the unfortunately unwieldy name of lp:~timehorse/python/issue2636-01+09-02+17+18+19+20+21 as that would, AFAICT, cover all the items you've fixed in your latest patch. Anyway, great work Matthew and I look forward to working with you on Regexp 2.7 as you do great work! Regarding item 22: there's also #1647489 ("zero-length match confuses re.finditer()"). This had me stumped for a while, but I might have a solution. I'll see whether it'll fix item 22 too. I wasn't planning on doing any more major changes on my branch, just tweaking and commenting and seeing whether I've missed any tricks in the speed stakes. Half the task is finding out what's achievable, and how! Though I can't look at the code at this time, I just want to express how good it feels that you both are doing these great things for regular expressions in Python! Especially atomic grouping is something I've often wished for when writing lexers for Pygments... Keep up the good work! Good catch on issue 1647489 Matthew; it looks like this is where that bug fix will end up going. But, I am unsure if the solution for this issue is going to be the same as for 3262. 
I think the solution here is to add an internal flag that will keep track of whether the current character had previously participated in a zero-width match, and thus not allow any subsequent zero-width matches at that position beyond the first, while at the same time not consuming any characters on a zero-width match. Thus, I have allocated this fix as Item 24, but it may later be merged with 22 if the solutions turn out to be more or less the same, likely via a 22+24 thread. The main difference, though, as I see it, is that the change in 24 may be considered a bug fix, whereas the general consensus on 22 is that it is more of a feature request; given Guido's acceptance of a flag-based approach, I suggest we allocate re.ZEROWIDTH, re.Z and (?z) flags to turn on the behaviour you and I expect, but I still think that would be best as a 2.7 / 3.1 solution. I would also like to add a from __future__ import ZeroWidthRegularExpressions or some such to make this the default behaviour, so that by version 3.2 it may indeed be considered the default.

Anyway, I've allocated all the new items in the Launchpad repository, so feel free to go to and install Bazaar for Windows so you can download any of the individual item development threads and try them out for yourself. Also, please consider setting up a free Launchpad account of your very own so that I can perhaps create a group that would allow us to better share development. Thanks again, Matthew, for all your greatly appreciated contributions!

I've moved all the development branches to the ~pythonregexp2.7 team so that we can work collaboratively. You just need to install Bazaar, join, upload your public SSH key and then request to be added to the pythonregexp2.7 team. At that point, you can check out any code via:

bzr co lp:~pythonregexp2.7/python/issue2636-*

This should make co-operative development easier.

Just out of interest, is there any plan to include #1160 while we're at it?
I've enumerated the current list of item numbers at the official Launchpad page for this issue. There you will find links to each development branch associated with each item, where a broader description of each issue may be found. I will no longer enumerate the entire list here as it has grown too long to keep repeating; please consult that web page for the most up-to-date list of items we will try to tackle in the Python Regexp 2.7 update. Also, anyone wanting to join the development team who already has a Launchpad account can just go to the Python Regexp 2.7 web site above and request to join. You will need Bazaar to check out, pull or branch code from the repository, which is available at.

Good catch, Matthew, and if you spot any other outstanding regular expression issues feel free to mention them here. I'll give issue 1160 an item number of 25, and think all we need to do here is change SRE_CODE to be typedefed to an unsigned long and change the repeat count constants (which would be easier if we assume item 10: shared constants).

For reference, these are all the regex-related issues that I've found (including this one!):

id : activity : title
#2636 : 25/09/08 : Regexp 2.7 (modifications to current re 2.2.2)
#1160 : 25/09/08 : Medium size regexp crashes python
#1647489 : 24/09/08 : zero-length match confuses re.finditer()
#3511 : 24/09/08 : Incorrect charset range handling with ignore case flag?
#3825 : 24/09/08 : Major reworking of Python 2.5.2 re module
#433028 : 24/09/08 : SRE: (?flag:...) is not supported
#433027 : 24/09/08 : SRE: (?-flag) is not supported.
#433024 : 24/09/08 : SRE: (?flag) isn't properly scoped
#3262 : 22/09/08 : re.split doesn't split with zero-width regex
#3299 : 17/09/08 : invalid object destruction in re.finditer()
#3665 : 24/08/08 : Support \u and \U escapes in regexes
#3482 : 15/08/08 : re.split, re.sub and re.subn should support flags
#1519638 : 11/07/08 : Unmatched Group issue - workaround
#1662581 : 09/07/08 : the re module can perform poorly: O(2**n) versus O(n**2)
#3255 : 02/07/08 : [proposal] alternative for re.sub
#2650 : 28/06/08 : re.escape should not escape underscore
#433030 : 17/06/08 : SRE: Atomic Grouping (?>...) is not supported
#1721518 : 24/04/08 : Small case which hangs
#1693050 : 24/04/08 : \w not helpful for non-Roman scripts
#2537 : 24/04/08 : re.compile(r'((x|y+)*)*') should fail
#1633953 : 23/02/08 : re.compile("(.*$){1,4}", re.MULTILINE) fails
#1282 : 06/01/08 : re module needs to support bytes / memoryview well
#814253 : 11/09/07 : Grouprefs in lookbehind assertions
#214033 : 10/09/07 : re incompatibility in sre
#1708652 : 01/05/07 : Exact matching
#694374 : 28/06/03 : Recursive regular expressions
#433029 : 14/06/01 : SRE: posix classes aren't supported

Hmmm. Well, some of those are already covered:

#2636 : self
#1160 : Item 25
#1647489 : Item 24
#3511 : Item 23
#3825 : Item 9-2
#433028 : Item 21
#433027 : Item 20
#433024 : Item 19
#3262 : Item 22
#3299 : TBD
#3665 : TBD
#3482 : TBD
#1519638 : TBD
#1662581 : TBD
#3255 : TBD
#2650 : TBD
#433030 : Item 1
#1721518 : TBD
#1693050 : TBD
#2537 : TBD
#1633953 : TBD
#1282 : TBD
#814253 : TBD (but I think you implemented this, didn't you Matthew?)
#214033 : TBD
#1708652 : TBD
#694374 : TBD
#433029 : Item 8

I'll have to get nosy and go over the rest of these to see if any of them have already been solved, like the duplicate test case issue from a while ago, but someone forgot to close them. I'm thinking specifically of the '\u' escape sequence one.

#814253 is part of the fix for variable-width lookbehind.
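For context on why variable-width look-behind (#814253 and the lookbehind items above) keeps coming up: the stdlib re engine only accepts fixed-width look-behind, as this small sketch shows.

```python
import re

# Fixed-width look-behind is fine in the stdlib engine.
m = re.search(r"(?<=ab)c", "abc")
print(m.group())   # 'c'

# Variable-width look-behind is rejected at compile time.
try:
    re.compile(r"(?<=a+)c")
except re.error as exc:
    print("rejected:", exc)
```

Lifting this restriction is exactly what the variable-length look-behind work in this thread targets.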
BTW, I've just tried a second time to register with Launchpad, but still no reply. :-(

Yes, I see in your rc2+2 diff it was added into that. I will have to allocate a new number for that fix, though, as technically it's a different feature than variable-length look-behind. For now I'm having a hard time merging your diffs in with my code base. Lots and lots of conflicts, alas. BTW, what UID did you try to register under at Launchpad? Maybe I can see if it's registered but just forgetting to send you e-mail.

Tried bazaar@mrabarnett.plus.com twice, no reply. Succeeded with mrabarnett@freeuk.com.

Thanks, Matthew. You are now part of the pythonregexp2.7 team. I want to handle integrating branch 01+09-02+17 myself for now, and the other branches will need to be renamed because I need to add Item 26: Capture Groups in Look-Behind expressions, which would mean the order of your patches is:

01+09-02+17: regex_2.6rc2.diff, regex_2.6rc2+1.diff
01+09-02+17+26: regex_2.6rc2+2.diff
01+09-02+17+18+26: regex_2.6rc2+3.diff, regex_2.6rc2+4.diff
01+09-02+17+18+19+20+21+26: regex_2.6rc2+5, regex_2.6rc2+6

It is my intention, therefore, to check a version of each of these patches in to their corresponding repository, sequentially, starting with 0, which is what I am working on now. I am worried about a straight copy to each thread, though, as there are some basic cleanups provided through the core issue2636 patch, the item 1 patch and the item 9 patch. The best way to see what these changes are is to download and look at the issue2636-01+09.patch file or to type the following into Bazaar:

bzr diff --old lp:~pythonregexp2.7/python/base --new lp:~pythonregexp2.7/python/issue2636+01+09

which is more up-to-date than my June patches -- I really need to regenerate those!

I've been completely unable to get Bazaar to work with Launchpad: authentication errors and bzrlib.errors.TooManyConcurrentRequests.

Matthew, did you upload a public SSH key to your Launchpad account?
You're on MS Windows, right? I can try and do an install on an MS Windows XP box or two I have lying around and see how that works, but we should also try and solve this vexing thing I've noticed about Windows development: Windows cannot understand Unix-style file permissions, so when I check out Python on Windows and then check it back in, EVERY Python and C file is "changed" by virtue of its permissions having changed. I would hope there's some way to tell Bazaar to ignore 'permissions' changes, because I know our edits really have nothing to do with that. Anyway, I'll try a few things vis-à-vis Windows to see if I get a similar problem; there's also the forum where you can post your Bazaar issues and see if the community can help. Search previous questions or click the "Ask a question" button and type your subject. Launchpad's UI is even smart enough to scan your question title for similar ones, so you may be able to find a solution right away that way. I use the Launchpad Answers section all the time and have found it usually is a great way of getting help.

I have it working finally!

Great, Matthew!! Now, I'm still in the process of setting up branches related to your work; generally they should be created from a core and set of features implemented. For example: to get from version 2 to version 3 of your engine, I had to first check out lp:~pythonregexp2.7/python/issue2636-01+09-02+17 and then "push" it back onto Launchpad as lp:~pythonregexp2.7/python/issue2636-01+09-02+17+26. This way the check-in logs become coherent. So, please hold off on checking your code in until I have your current patch-set checked in, which I should finish by today; I also need to rename some of the projects based on the fact that you also implemented item 26 in most of your patches. Actually, I keep a general to-do list of what I am up to on the whiteboard, which you can also edit, if you want to see what I'm up to.
But I'll try to have that list complete by today, fingers crossed! In the meantime, would you mind seeing if you are getting the file permissions issue by doing a checkout, pull or branch and then calling "bzr stat" to see whether Bazaar has marked your entire project for check-in because the permissions changed? Thanks and congratulations again!

I did a search on the permissions problem:.

Thanks, Matthew. My reading of that answer is that you should be okay because you, I assume, installed the Windows-native package rather than the Cygwin one that I first tested. I think the problem is specific to Cygwin, as well as to the circumstances described in the article. Still, it should be quite easy to verify if you just check out Python and then do a stat, as this will show all files whose permissions have changed as well as general changes. Unfortunately, I am still working on setting up those branches, but once I finish documenting each of the branches, I should proceed more rapidly.

Phew! Okay, all your patches have been applied as I said in a previous message, and you should now be able to check out lp:~pythonregexp2.7/python/issue2636+01+09-02+17+18+19+20+21+24+26, where you can then apply your latest known patch (rc2+7) to add a fix for the findall / finditer bug. However, please review my changes to:

a) lp:~pythonregexp2.7/python/issue2636-01+09-02+17
b) lp:~pythonregexp2.7/python/issue2636-01+09-02+17+26
c) lp:~pythonregexp2.7/python/issue2636-01+09-02+17+18+26
d) lp:~pythonregexp2.7/python/issue2636-01+09-02+17+18+19+20+21+26

to make sure my merges are what your code snapshots should be. I did get one conflict with patch 5, IIRC, where a reverse attribute was added to the SRE_STATE struct; I get a weird grouping error when running the tests for (a) and (b), which I think is a typo; a compile error regarding the aforementioned missing reverse attribute from patch 3 or 4 in (c); and the SRE_FLAG_REVERSE seems to have been lost in (d) for some reason.
Also, if you feel like tackling any other issues, whether they have numbers or not, and implementing them in your current development line, please let me know so I can get all the documentation and development branches set up. Thanks and good luck!

I haven't yet found out how to turn on compression when getting the branches, so I've only looked at lp:~pythonregexp2.7/python/issue2636+01+09-02+17+18+19+20+21+24+26. I did see that the SRE_FLAG_REVERSE flag was missing. BTW, I ran re.findall(r"(?m)^(.*re\..+\\m)$", text) where text was 67MB of emails. Python v2.5.2 took 2.4secs and the new version 5.6secs. Ouch! I added 4 lines to _sre.c and tried again. 1.1secs. Nice! :-)

Good work, Matthew. Now, another Bazaar hint, IMHO, is one of my favourite commands: switch. I generally develop all in one directory, rather than getting a new directory for each branch. One does have to be VERY careful to type "bzr info" to make sure the branch you're editing is the one you think it is! But with "bzr switch", you do a differential branch switch that allows you to change your development branch quickly and painlessly. This assumes you did a "bzr checkout" and not a "bzr pull". If you did a pull, you can still turn this into a "checkout", where all VCS actions are mirrored on the server, by using the 'bind' command. Make sure you push your branch first. You don't need to worry about all this "bind"ing, "push"ing and "pull"ing if you choose checkout, but OTOH, if your connection is over-all very slow, you may still be better off with a "pull"ed branch rather than a checkout.

Anyway, good catch on those 4 lines, and I'll see if I can get your earlier branches up to date.

Matthew, I've traced down the patch failures in my merges, and now each of the 4 versions of code on Launchpad should compile, though the first 2 do not pass all the negative look-behind tests; your later 2 do. Any chance you could back-port that fix to the lp:~pythonregexp2.7/python/issue2636-01+09-02+17 branch?
If you can, I can propagate that fix to the higher levels pretty quickly.

issue2636-01+09-02+17_backport.diff is the backport fix. Still unable to compress the download, so that's >200MB each time!

The explanation of the zero-width bug is incorrect. What happens is this: the functions for finditer(), findall(), etc, perform searches and want the next one to continue from where the previous match ended. However, if the match was actually zero-width, then the next search would start from where the previous search _started_, and it would be stuck forever. Therefore, after a zero-width match the caller of the search consumes a character. Unfortunately, that can result in a character being 'missed'. The bug in re.split() is also the result of an incorrect fix to this zero-width problem. I suggest that the regex code should include the fix for the zero-width split bug; we can have code to turn it off unless a re.ZEROWIDTH flag is present, if that's the decision.

The patch issue2636+01+09-02+17+18+19+20+21+24+26_speedup.diff includes some speedups.

I've found an interesting difference between Python and Perl regular expressions:

In Python:
    \Z matches at the end of the string
In Perl:
    \Z matches at the end of the string or before a newline at the end of the string
    \z matches at the end of the string

Perl v5.10 offers the ability to have duplicate capture group numbers in branches. For example, (?|(a)|(b)) would number both of the capture groups as group 1. Something to include?

I've extended the group referencing.
It now has:

Forward group references, e.g. (\2two|(one))+

\g-type group references (n is name or number):
    \g<n> (Python re replacement string)
    \g{n} (Perl)
    \g'n' (Perl)
    \g"n" (because ' and " are interchangeable)
    \gn (n is single digit) (Perl)

Relative group references (n is number):
    \g<+n>
    \g<-n>
    \g{+n} (Perl)
    \g{-n} (Perl)

\k-type group references (n is group name):
    \k<n> (Perl)
    \k{n} (Perl)
    \k'n' (Perl)
    \k"n" (because ' and " are interchangeable)

Further to msg74203, I can see no reason why we can't allow duplicate capture group names if the groups are on different branches and are thus mutually exclusive. For example: (?P<name>a)|(?P<name>b). Apart from this, I think that duplicate names should continue to raise an exception.

I've been trying, and failing, to understand the state of play with this bug. The most recent upload is issue2636+01+09-02+17+18+19+20+21+24+26_speedup.diff, but I can't seem to apply that to anything. Nearly every hunk fails when I try against 25-maint, 26-maint or trunk. How does one apply this? Do I need to apply mrabarnett's patches from bug 3825?

issue2636-features.diff is based on Python 2.6. It includes:

Named Unicode characters, e.g. \N{LATIN CAPITAL LETTER A}
Unicode character properties, e.g. \p{Lu} (uppercase letter) and \P{Lu} (not uppercase letter)
Other character properties not restricted to Unicode, e.g. \p{Alnum} and \P{Alnum}
Issue #3511 : Incorrect charset range handling with ignore case flag?
Issue #3665 : Support \u and \U escapes in regexes
Issue #1519638 : Unmatched Group issue - workaround
Issue #1693050 : \w not helpful for non-Roman scripts

The next 2 seemed a good idea at the time. :-)

Octal escape \onnn
Extended hex escape \x{n}

I'm glad to see that the unmatched group issue is finally being addressed. Thanks!

> Named Unicode characters eg \N{LATIN CAPITAL LETTER A}

These descriptions are not as stable as, say, Unicode code point values or language names. Are you sure it is a good idea to depend on them not being adjusted in the future?
It's certainly nice and self-documenting, but it doesn't seem better from a future-proofing point of view than \u0041. Do other languages implement this? Russ

Python 2.6 does (and probably Python 3.x, although I haven't checked):

>>> u"\N{LATIN CAPITAL LETTER A}"
u'A'

If it's good enough for Python's Unicode string literals then it's good enough for Python's re module. :-)

In fact, it works in Python 2.4, 2.5, 2.6 and 3.0 from my rather limited testing. In Python 2.4:

>>> u"\N{LATIN CAPITAL LETTER A}"
u'A'
>>> u"\N{MUSICAL SYMBOL DOUBLE SHARP}"
u'\U0001d12a'

In Python 3.0:

>>> "\N{LATIN CAPITAL LETTER A}"
'A'
>>> ord("\N{MUSICAL SYMBOL DOUBLE SHARP}")
119082

issue2636-features-2.diff is based on Python 2.6. Bugfix. No new features.

Besides the fact that this is probably great work, I really wonder who will have enough time and skills to review such a huge patch... :-S In any case, some recommendations:

- please provide patches against trunk; there is no way such big changes will get committed against 2.6, which is in maintenance mode
- avoid, as far as possible, making changes in style, whitespace or indentation; this will make the patch slightly smaller and cleaner
- avoid C++-style comments (use /* ... */ instead)
- don't hesitate to add extensive comments and documentation about what you've added

Once you think your patch is ready, you may post it to, in the hope that it makes reviewing easier.

One thing I forgot:
- please don't make lines longer than 80 characters :-)

Once the code has settled down, it would also be interesting to know whether performance has changed compared to the previous implementation.

issue2636-features-3.diff is based on the 2.x trunk. It includes:

Added comments
Restricted line lengths to no more than 80 characters
Added common POSIX character classes like [[:alpha:]]
Added further checks to reduce unnecessary backtracking

I've decided to remove \onnn and \x{n} because they aren't supported elsewhere in the language.
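A historical footnote to the \N{...} discussion above: the escape was eventually accepted into the stdlib re module itself (Python 3.8+), so patterns can use the same notation as string literals directly.

```python
import re

# \N{...} inside a pattern (stdlib re, Python 3.8+), mirroring the
# string-literal escape discussed above.
m = re.match(r"\N{LATIN SMALL LETTER A}+", "aaab")
print(m.group())   # 'aaa'
```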
issue2636-features-4.diff includes:

Bugfixes
msg74203: duplicate capture group numbers
msg74904: duplicate capture group names

issue2636-features-5.diff includes:

Bugfixes
Added \G anchor (from Perl). \G is the anchor at the start of a search, so re.search(r'\G(\w)') is the same as re.match(r'(\w)'). re.findall normally performs a series of searches, each starting where the previous one finished, but if the pattern starts with \G then it's like a series of matches:

>>> re.findall(r'\w', 'abc def')
['a', 'b', 'c', 'd', 'e', 'f']
>>> re.findall(r'\G\w', 'abc def')
['a', 'b', 'c']

Notice how it failed to match at the space, so no more results.

issue2636-features-6.diff includes:

Bugfixes
Added group access via subscripting:

>>> m = re.search("(\D*)(?<number>\d+)(\D*)", "abc123def")
>>> len(m)
4
>>> m[0]
'abc123def'
>>> m[1]
'abc'
>>> m[2]
'123'
>>> m[3]
'def'
>>> m[1 : 4]
('abc', '123', 'def')
>>> m[ : ]
('abc123def', 'abc', '123', 'def')
>>> m["number"]
'123'

I don't think it will be possible to accept these patches in their current form and way of presentation. I randomly picked issue2636-features-2.diff, and see that it contains lots of style and formatting changes, which is completely taboo for this kind of contribution. I propose to split up the patches into separate tracker issues, one issue per proposed new feature. No need to migrate all changes to new issues - start with the one single change that you think is already complete and whose acceptance is likely without debate. Leave a note in this issue about what change has been moved to what issue. For each such new issue, describe what precisely the patch is supposed to do. Make sure it is complete with respect to this specific change, and remove any code not contributing to the change. Also, procedurally, it is not quite clear to me who is contributing these changes: Jeffrey C. Jacobs, or Matthew Barnett. We will need copyright forms from the original contributor.
Martin and Matthew, I've been far too busy in the new year to keep up with all your updates to this issue, but since Martin wanted some clarification on direction and copyright: Matthew and I are co-developers, but there is clear delineation between each of our work, where the patches uploaded by Matthew (mrabarnett) were uploaded by him and are totally a product of his work. The ones uploaded by me are more complicated, as I have always intended this to be a piecemeal project, not one patch fixes all, which is why I created the Bazaar repository hierarchy () with 36 or so branches of mostly independent development at various stages of completion.

Here is where the copyrights get more complicated, but not much so. As I said, there are branches where multiple issues are combined (with the plus operator (+)). In general, I consider primary development the single-number branch and only create combined branches where I feel there may be a cross-dependency between one branch and the other. Working this way is VERY time consuming: one spends more time merging branches than actually developing. Matthew, on the other hand, has worked fairly linearly, so his branches generally have long number trains to indicate all the issues solved in each. What's more, the last time I updated the repository was last summer, so all of Matthew's latest patches have not been catalogued and documented. But the attribution of what is there is more or less clear-cut: thanks to Matthew's diligent work, his branches always contain his first contribution, the new RegExp engine, thread 09-02. So any items which contain ...+09-02+... are pretty much Matthew's work, and the rest are mine.

All that said, I personally like having all this development in one place, but also like having the separate branch development model I've set up in Bazaar. If new issues are created from this one, I would thus hope they would still follow the outline specified on the Launchpad page.
I prefer keeping everything in one issue, though, as IMHO it makes things easier to keep track of.

As for the stuff I've worked on, I first should forewarn that there is a root patch at () and as issue2636.patch in the tar.bz2 patch library I posted last June. This patch contains various code cleanups and most notably a realignment of the documentation to follow the 72-column rule. I know Python's documentation is supposed to be 80-column, but some of the lines were going out even past that, and by making it 72 it allows for incremental expansion before having to reformat any lines. However, textually, the issue2636 version of re.rst is no different from the last version it's based off of, which I verified by generating Sphinx hierarchies for both versions. I therefore suggest this as the only change which is 'massive restructuring', as it does not affect the actual documentation; it just makes it more legible in reStructuredText form. This and other suggested changes in the root issue2636 thread are intended to be applied if at least 1 of the other issues is accepted, and as such it is the root branch of every other branch. Understanding that even these small changes may not in fact be acceptable, I have always generated 2 sets of patches for each issue: one diff'ed against the Python snapshot stored in base () and one diff'ed against the issue2636 root, so if the changes in the issue2636 root are nonetheless unacceptable, they can easily be disregarded.

Now, with respect to work ready for analysis and merging prepared by me, I have 4 threads ready for analysis, with documentation updated and test cases written and passing:

1: Atomic Grouping / Possessive Quantifiers
5: Added a Python-specific RegExp comment group, (?P#...), which supports parenthetical nesting (see the issue for details)
7: Better caching algorithm for the RegExp compiler, with more entries in the cache and reduced possibility of thrashing.
12: Clarify the Python documentation for RegExp comments; this was only a change in re.rst.

The branches 09-01 and 09-01-01 are engine redesigns that I used to better understand the current RegExp engine, but neither is faster than the existing engine, so they will probably be abandoned. 10 is also nearly complete and affects the implementation of 01 (whence 01+10) if accepted, but I have not done a final analysis to determine if any other variables can be consolidated to be defined in only one place. Thread 2 is in a near-complete form, but has been snagged by a decision as to what the interface to it should be -- see the discussion above and specifically and. The stand-alone patch by me is the latest version and implements the version called (a) in those notes. I prefer to implement (e).

I don't think I'd had a chance to do any significant work on any of the other threads; I got really bogged down with changing thread 2 as described above, trying to maintain threads for Matthew and just performing all those merges in Bazaar! So that's the news from me, and nothing new to contribute at this time, but if you want separate, piecemeal solutions, feel free to crack open the repository and grab them for at least items 1, 5, 7 and 12.

> I've been far too busy in the new year to keep up with all your updates
> to this issue, but since Martin wanted some clarification on direction
> and copyright,

Thanks for the clarification. So I think we should focus on Matthew's patches first, and come back to yours when you have time to contribute them.

Fortunately, I think Matthew here DOES have a lot of potential to have everything wrapped up by then, but, to summarize everyone's concern, we really would like to be able to examine each change incrementally, rather than as a whole.
So, for the purposes of this, I would recommend that you, Matthew, make a version of your new engine WITHOUT any atomic groups, variable-length look-behind / look-ahead assertions, reverse string scanning, positional, negated or scoped inline flags, group key indexing or any other feature described in the various issues, and that we then evaluate, purely on the merits of the engine itself, whether it is worth moving to that engine, and having made that decision, officially move all work to that design if warranted. Personally, I'd like to see that 'pure' engine for myself, and maybe we can all develop an appropriate benchmark suite to test it fairly against the existing engine.

I also think we should consider things like presentation (are all lines terminated by column 80?) and the number of comments. The existing engine is fine in the line length department, but VERY deficient WRT comments and readability, the latter of which it sacrifices for speed (as well as being retrofitted for iteration rather than recursion). I'm no fan of switch-case, but I found that by turning the various case statements into bite-sized functions and adding many, MANY comments, the code became MUCH more readable at the minor cost of speed. As I think speed trumps readability (though not blindly), I abandoned my work on the engines, but do feel that if we are going to keep the old engine, I should try and adapt my comments to the old framework to make the current code a bit easier to understand, since the framework is more or less the same code as in the existing engine, just re-arranged.

I think all of the things you've added to your engine, Matthew, can, with varying levels of difficulty, be implemented in the existing Regexp engine, though I'm not suggesting that we start that effort. Simply, let's evaluate fairly whether your engine is worth the switch-over. Personally, I think the engine has some potential -- though not much better than current WRT readability -- but we've only heard anecdotal evidence of its superior speed.
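A minimal sketch of the kind of benchmark harness being proposed, using only the stdlib; the pattern and corpus here are placeholders, not an agreed test suite.

```python
import re
import timeit

# Hypothetical micro-benchmark: time one compiled pattern over a synthetic
# corpus. A real suite would sweep many constructs and input sizes, and
# would run the same cases against both engines.
corpus = "alpha beta 123 gamma " * 1000
pattern = re.compile(r"\b\d+\b")

def run():
    return len(pattern.findall(corpus))

matches = run()
seconds = timeit.timeit(run, number=100)
print(matches, "matches per pass;", round(seconds, 3), "s for 100 passes")
```

The same run() body could be pointed at any candidate engine exposing the re API, which is what makes a standalone regex package convenient to benchmark.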
Even if the engine isn't faster, developing speed benchmarks that fairly gauge any potential new engine would be handy for the next person to have a great idea for a rewrite, so perhaps while you peruse the stripped-down version of your engine, the rest of us can work on modifying regex_tests.py, test_re.py and re_tests.py in Lib/test specifically for the purpose of benchmarking. If we can focus on just these two issues ('pure' engine and fair benchmarks), I think I can devote some time to the latter, as I've dealt a lot with benchmarking (WRT the compiler-cache) and test cases, and I hope to be a bit more active here.

3.1 will actually be released, if all goes well, before July of this year. The first alpha was released a couple of days ago. The goal is to fix most deficiencies of the 3.0 release. See for the planned release schedule.

Thanks, Antoine! Then I think for the most part any changes to Regexp will have to wait for 3.2 / 2.7.

An additional feature that could be borrowed, though in slightly modified form, from Perl is case-changing controls in replacement strings. Roughly, the idea is to add these forms to the replacement string:

\g<1> provides capture group 1
\u\g<1> provides capture group 1 with the first character in uppercase
\U\g<1> provides capture group 1 with all the characters in uppercase
\l\g<1> provides capture group 1 with the first character in lowercase
\L\g<1> provides capture group 1 with all the characters in lowercase

In Perl, titlecase is achieved by using both \u and \L, and the same could be done in Python:

\u\L\g<1> provides capture group 1 with the first character in uppercase after putting all the characters in lowercase

although internally it would do proper titlecase. I'm suggesting restricting the action to only the following group. Note that this is actually syntactically unambiguous.
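For comparison, the effect of the proposed \u/\U/\l/\L controls is already achievable in stdlib re with a callable replacement; a sketch (not part of any patch in this thread):

```python
import re

text = "hello world"

# Equivalent of \U\g<1>: uppercase the whole captured group.
upper = re.sub(r"(\w+)", lambda m: m.group(1).upper(), text)
print(upper)   # 'HELLO WORLD'

# Equivalent of \u\L\g<1> (titlecase): lowercase, then capitalize.
title = re.sub(r"(\w+)", lambda m: m.group(1).lower().capitalize(), text)
print(title)   # 'Hello World'
```

The open question is purely whether this deserves dedicated replacement-string syntax, since the callable form is more verbose but needs no parser changes.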
Frankly, I don't really like that idea; I think it muddles up the RE syntax to have such a group-modifying operator, and it seems rather unpythonic: the existing way to do this -- use .upper(), .lower() or .title() to format the groups in a match object as necessary -- seems to be much more readable and reasonable in this sense. I think the proposed changes look good, but I agree that the focus should be on breaking up the megapatch into more digestible feature additions, starting from the barebones engine. Until that's done, I doubt *anyone* will want to review it, let alone merge it into the main Python distribution. So, I think we should hold off on any new features until this raft of changes can be properly broken up, reviewed and (hopefully) merged in.

Ah, too Perlish! :-) Another feature request that I've decided not to consider any further is recursive regular expressions. There are other tools available for that kind of thing, and I don't want the re module to go the way of Perl 6's rules; such things belong elsewhere, IMHO.

Patch issue2636-patch-1.diff contains a stripped-down version of my regex engine and the other changes that are necessary to make it work.

FYI - I can't compile issue2636-patch-1.diff when applied to trunk (2.7) using gcc 4.0.3; many errors.

Try issue2636-patch-2.diff.

Thanks for this great work! Does Regexp 2.7 include Unicode script support? Perl and Ruby support it and it's pretty handy.

It includes Unicode character properties, but not Unicode script identification, because the Python Unicode database contains the former but not the latter. Although scripts could be added to the re module, IMHO their proper place is in the Unicode database, from which the re module could access them.

is a patch that adds Unicode script info to the Unicode database.

issue2636-20090726.zip is a new implementation of the re engine.
It replaces re.py, sre.py, sre_constants.py, sre_parse.py and sre_compile.py with a new re.py, and replaces sre_constants.h, sre.h and _sre.c with _re.h and _re.c. The internal engine no longer interprets a form of bytecode but instead follows a linked set of nodes, and it can work breadth-wise as well as depth-first, which makes it perform much better when faced with one of those 'pathological' regexes. It supports scoped flags, variable-length lookbehind, Unicode properties, named characters, atomic groups and possessive quantifiers, and will handle zero-width splits correctly when the ZEROWIDTH flag is set. There are a few more things to add, like allowing indexing for capture groups, and further speed improvements might be possible (at worst it's roughly the same speed as the existing re module). I'll be adding some documentation about how it works and the slight differences in behaviour later.

Sounds like this is an awesome piece of work! Since the patch is obviously a very large piece and will be hard to review, may I suggest releasing the new engine as a standalone package and spreading the word, so that people can stress-test it? By the time 2.7 is ready to release, if it has had considerable exposure to the public, that will help acceptance greatly. The Unicode script identification might not be hard to add to unicodedata; maybe Martin can do that?

issue2636-20090727.zip contains regex.py, _regex.h, _regex.c and also _regex.pyd (for Python 2.6 on Windows). For Windows machines just put regex.py and _regex.pyd into Python's Lib\site-packages folder. I've changed the name so that it won't hide the re module.

Agreed, a standalone release combined with a public announcement about its availability is a must if we want to get any sort of widespread testing. It'd be great if we had a fully characterized set of tests for the behavior of the existing engine... but we don't. So widespread testing is important.
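The atomic groups mentioned above can be approximated even with the stdlib re engine of that era, using the well-known lookahead-plus-backreference trick: re never backtracks back into a lookaround, so text captured inside one is consumed irrevocably. A small sketch of the semantics:

```python
import re

# Emulating an atomic group with stdlib re: the lookahead captures
# greedily, and the backreference then consumes exactly that text.
# Because the engine does not backtrack into lookarounds, the captured
# run cannot be given back -- which is atomic-group behaviour.

# Ordinary greedy group: backtracking releases one 'a' for the tail.
print(bool(re.fullmatch(r"(a+)a", "aaaa")))        # True

# Emulated atomic (?>a+): the whole run is consumed, the tail fails.
print(bool(re.fullmatch(r"(?=(a+))\1a", "aaaa")))  # False
```

Possessive quantifiers like a++ are just shorthand for wrapping the quantified item in an atomic group, so the same trick covers them too.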
We have lengthy sets of tests in Lib/test/regex_tests.py and Lib/test/test_re.py. While widespread testing of a standalone module would certainly be good, I doubt that will exercise many corner cases and the more esoteric features. Most actual code probably uses relatively few regex pattern constructs.

issue2636-20090729.zip contains regex.py, _regex.h, _regex.c, which will work with Python 2.5 as well as Python 2.6, and also 2 builds of _regex.pyd (for Python 2.5 and Python 2.6 on Windows). This version supports accessing the capture groups by subscripting the match object, for example:

>>> m = regex.match("(?<foo>.)(?<bar>.)", "abc")
>>> len(m)
3
>>> m[0]
'ab'
>>> m[1 : 3]
['a', 'b']
>>> m["foo"]
'a'

Unfortunately I found a bug in regex.py, caused when I made it compatible with Python 2.5. :-( issue2636-20090729.zip is now corrected.

Apparently Perl has a quite comprehensive set of tests at . If we want the engine to be Perl-compatible, it might be a good idea to reuse (part of) their tests (if their license allows it).

Problem is a memory leak from repeated calls of e.g. compiled_pattern.search(some_text). The Task Manager performance panel shows increasing memory usage with regex but not with re. It appears to be cumulative, i.e. changing to another pattern or text doesn't release memory. Environment: Python 2.6.2, Windows XP SP3, latest (29 July) regex zip file.
Example:

8<-- regex_timer.py
import sys
import time

if sys.platform == 'win32':
    timer = time.clock
else:
    timer = time.time

module = __import__(sys.argv[1])
count = int(sys.argv[2])
pattern = sys.argv[3]
expected = sys.argv[4]
text = 80 * '~' + 'qwerty'
rx = module.compile(pattern)
t0 = timer()
for i in xrange(count):
    assert rx.search(text).group(0) == expected
t1 = timer()
print "%d iterations in %.6f seconds" % (count, t1 - t0)
8<---

Here are the results of running this (plus the observed difference between peak memory usage and base memory usage):

dos-prompt>\python26\python regex_timer.py regex 1000000 "~" "~"
1000000 iterations in 3.811500 seconds [60 Mb]
dos-prompt>\python26\python regex_timer.py regex 2000000 "~" "~"
2000000 iterations in 7.581335 seconds [128 Mb]
dos-prompt>\python26\python regex_timer.py re 2000000 "~" "~"
2000000 iterations in 2.549738 seconds [3 Mb]

This happens on a variety of patterns: "w", "wert", "[a-z]+", "[a-z]+t", ...

issue2636-20090804.zip is a new version of the regex module. The memory leak has been fixed.

First, many thanks for this contribution; it's great that the re module gets updated in such a comprehensive way! I'd like to report an issue with the current version (issue2636-20090804.zip). Using an empty string as the search pattern ends up consuming system resources, and the function doesn't return anything nor raise an exception or crash (within the several minutes I tried).

The current re engine simply returns the empty matches on all character boundaries in this case.
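Task Manager aside, the kind of growth reported above can also be checked in-process. A sketch using tracemalloc (the pattern and iteration counts are illustrative, and stdlib re is used here since it is the non-leaking baseline):

```python
import re
import tracemalloc

# Run the same search workload twice and compare heap snapshots; a
# leaking engine would show steady growth between the two runs,
# while a well-behaved one shows only snapshot bookkeeping noise.
text = 80 * '~' + 'qwerty'
rx = re.compile('~')

tracemalloc.start()
for _ in range(100000):
    rx.search(text)
snap1 = tracemalloc.take_snapshot()
for _ in range(100000):
    rx.search(text)
snap2 = tracemalloc.take_snapshot()
tracemalloc.stop()

growth = sum(stat.size_diff for stat in snap2.compare_to(snap1, 'lineno'))
print("bytes grown between runs:", growth)
```

This is portable across platforms, unlike eyeballing a process monitor.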
I use Win XPh SP3; the behaviour is the same on Python 2.5.4 and 2.6.2. It should be reproducible with the following simple code:

>>> import re
>>> import regex
>>> re.findall("", "abcde")
['', '', '', '', '', '']
>>> regex.findall("", "abcde")
_

regards, vbr

Adding to vbr's report: [2.6.2, Win XP SP3]
(1) the bug mallocs memory inside the loop
(2) it also happens with regex.findall for the patterns 'a{0,0}' and '\B'
(3) regex.sub('', 'x', 'abcde') has a similar problem, BUT 'a{0,0}' and '\B' appear to work OK.

issue2636-20090810.zip should fix the empty-string bug.

issue2636-20090810#2.zip has some further improvements and bugfixes.

I'd like to confirm that the above reported error is fixed in issue2636-20090810#2.zip. While testing the new features a bit, I noticed some irregularity in handling the Unicode character properties; I tried randomly some of those mentioned at regular-expressions.info/unicode.html using the simple findall like above. It seems that only the short abbreviated forms of the properties are supported; the long variants are handled in different ways. Namely, property names containing whitespace or other non-letter characters cause a probably unexpected exception:

>>> regex.findall(ur"\p{Ll}", u"abcDEF")
[u'a', u'b', u'c']  # works ok

\p{LowercaseLetter} isn't supported, but seems to be handled, as it throws "error: undefined property name" at the end of the traceback.
For \p{Lowercase Letter}, \p{Lowercase_Letter} and \p{Lowercase-Letter} the traceback probably isn't expected:

>>> regex.findall(ur"\p{Lowercase_Letter}", u"abcDEF")
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Python25\lib\regex.py", line 194, in findall
    return _compile(pattern, flags).findall(string)
  File "C:\Python25\lib\regex.py", line 386, in _compile
    parsed = _parse_pattern(source, info)
  File "C:\Python25\lib\regex.py", line 465, in _parse_pattern
    branches = [_parse_sequence(source, info)]
  File "C:\Python25\lib\regex.py", line 477, in _parse_sequence
    item = _parse_item(source, info)
  File "C:\Python25\lib\regex.py", line 485, in _parse_item
    element = _parse_element(source, info)
  File "C:\Python25\lib\regex.py", line 610, in _parse_element
    return _parse_escape(source, info, False)
  File "C:\Python25\lib\regex.py", line 844, in _parse_escape
    return _parse_property(source, ch == "p", here, in_set)
  File "C:\Python25\lib\regex.py", line 983, in _parse_property
    if info.local_flags & IGNORECASE and not in_set:
NameError: global name 'info' is not defined
>>>

Of course, arbitrary strings other than property names are handled identically. The Python 2.6.2 version behaves the same as 2.5.4. vbr

For each of these discrepancies that you're finding, please consider submitting them as patches that add a unittest to the existing test suite. Otherwise their behavior guarantees will be lost regardless of whether the suite in this issue is adopted. Thanks! I'll happily commit any passing re module unittest additions.

issue2636-20090810#3.zip adds more Unicode character properties such as "\p{Lowercase_Letter}", and also Unicode script ranges. In addition, the 'findall' method now accepts an 'overlapped' argument for finding overlapped matches.
For example:

>>> regex.findall(r"(..)", "abc")
['ab']
>>> regex.findall(r"(..)", "abc", overlapped=True)
['ab', 'bc']

Sorry for the dumb question, which may also suggest that I'm unfortunately unable to contribute at this level (with zero knowledge of C and only a "working" knowledge of Python): where can I find the sources for tests etc., and how are they eventually to be submitted? Is some other account needed besides the one for bugs.python.org? Anyway, the long character properties now work in the latest version issue2636-20090810#3.zip.

In the mentioned overview there is a statement about the property names: "You may omit the underscores or use hyphens or spaces instead." While I'm not sure that it is a good thing to have that many variations, they should probably be handled in the same way. Now, whitespace (and also non-ASCII characters) in the property name seems to confuse the parser: these pass silently (don't match anything) and don't throw an exception like "undefined property name". cf.

>>> regex.findall(ur"\p{Dummy Property}", u"abcDEF")
[]
>>> regex.findall(ur"\p{DümmýPrópërtý}", u"abcDEF")
[]
>>> regex.findall(ur"\p{DummyProperty}", u"abcDEF")
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "regex.pyc", line 195, in findall
  File "regex.pyc", line 563, in _compile
  File "regex.pyc", line 642, in _parse_pattern
  File "regex.pyc", line 654, in _parse_sequence
  File "regex.pyc", line 662, in _parse_item
  File "regex.pyc", line 787, in _parse_element
  File "regex.pyc", line 1021, in _parse_escape
  File "regex.pyc", line 1159, in _parse_property
error: undefined property name 'DummyProperty'
>>>

vbr

Take a look at the dev FAQ, linked from. The tests are in Lib/test in a distribution installed from source, but ideally you would be (anonymously) pulling the trunk from SVN (when it is back) and creating your patches with respect to that code as explained in the FAQ.
You would be adding unit test code to Lib/test/test_re.py, though it looks like re_tests.py might be an interesting file to look at as well. As the dev docs say, anyone can contribute, and writing tests is a great way to start, so please don't feel like you aren't qualified to contribute; you are. If you have questions, come to #python-dev on freenode.

What is the expected timing comparison with re? Running the Aug10#3 version on Win XP SP3 with Python 2.6.3, I see regex typically running at only 20% to 50% of the speed of re in ASCII mode, with not-very-atypical tests (find all Python identifiers in a line, failing search for a Python identifier in an 80-byte text). Is the supplied _regex.pyd from some sort of debug or unoptimised build? Here are some results:

dos-prompt>\python26\python -mtimeit -s"import re as x;r=x.compile(r'[A-Za-z_][A-Za-z0-9_]+');t=' def __init__(self, arg1, arg2):\n'" "r.findall(t)"
100000 loops, best of 3: 5.32 usec per loop
dos-prompt>\python26\python -mtimeit -s"import regex as x;r=x.compile(r'[A-Za-z_][A-Za-z0-9_]+');t=' def __init__(self, arg1, arg2):\n'" "r.findall(t)"
100000 loops, best of 3: 12.2 usec per loop
dos-prompt>\python26\python -mtimeit -s"import re as x;r=x.compile(r'[A-Za-z_][A-Za-z0-9_]+');t='1234567890'*8" "r.search(t)"
1000000 loops, best of 3: 1.61 usec per loop
dos-prompt>\python26\python -mtimeit -s"import regex as x;r=x.compile(r'[A-Za-z_][A-Za-z0-9_]+');t='1234567890'*8" "r.search(t)"
100000 loops, best of 3: 7.62 usec per loop

Here's the worst case that I've found so far:

dos-prompt>\python26\python -mtimeit -s"import re as x;r=x.compile(r'z{80}');t='z'*79" "r.search(t)"
1000000 loops, best of 3: 1.19 usec per loop
dos-prompt>\python26\python -mtimeit -s"import regex as x;r=x.compile(r'z{80}');t='z'*79" "r.search(t)"
1000 loops, best of 3: 334 usec per loop

See Friedl: "length cognizance". Corresponding figures for match() are 1.11 and 8.5. </lurk>

Re: timings. Thanks for the info, John.
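As an aside, the shell one-liners above can also be written against the timeit API, which makes explicit that the pattern is compiled once, in setup, and only the matching is timed. A sketch (the numbers printed are machine-dependent, so none are shown):

```python
import timeit

# Mirror of the first -mtimeit command above: compile once in the
# setup string, then time only findall.  A regex-like module could
# be swapped in for 're' to compare engines on identical input.
setup = (
    "import re\n"
    "r = re.compile(r'[A-Za-z_][A-Za-z0-9_]+')\n"
    "t = ' def __init__(self, arg1, arg2):\\n'"
)
elapsed = timeit.timeit("r.findall(t)", setup=setup, number=100000)
print("100000 loops took %.3f seconds" % elapsed)
```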
First of all, I really like those tests; could you please submit a patch or other document so that we can combine them into the Python test suite? The Python test suite, which can be run as part of 'make test' (or, IIRC, there is a way to run JUST the 2 re test suites, which I seem to have senior moment'd), includes a built-in timing output over some of the tests, though I don't recall which ones were being timed: standard cases or pathological (rare) ones. Either way, we should include some timings of a standard nature in the test suite to make Matthew's and any other developer's work easier. So, John, if you are not familiar with the test suite, I can look into adding the specific cases you've developed into the test suite so we can have a more representative timing of things. (Though I'm not sure what timeit is doing: if it invokes a new instance of Python each time, it is recompiling each time; if it is reusing the instance, it is only compiling once.)

Having not looked at Matthew's regex code recently (nice name, BTW), I don't know if it also contains the compiled-expression cache, in which case adding it in might help timings. Originally, the cache worked by storing ~100 entries and cleared itself when full; I have a modification which increases this to 256 (IIRC) and only removes the 128 oldest, to prevent thrashing at the boundary, which I think is better if only for a particular pathological case.

In any case, don't despair at these numbers, Matthew: you have a lot of time and potentially a lot of ways to make your engine faster by the time the 2.7 alpha is coined. But also be forewarned, because, knowing what I know about the current re engine and what it is further capable of, I don't think your regex will be replacing re in 2.7 if it isn't at least as fast as the existing engine for some standard set of agreed-upon tests, no matter how many features you can add.
I have no doubt that, with a little extra monkey grease, we could implement all the new features in the existing engine. I don't want to have to reinvent the wheel, of course, and if Matthew's engine can pick up some speed, everybody wins! So keep up the good work, Matthew; it's greatly appreciated! Thanks all! Jeffrey.

<lurk> > They don't. The pattern is compiled only once. Please take a look at

Mea culpa, and my apologies. The '-s' option in John's expressions is indeed executed only once -- those are one-time setup lines. The final quoted expression is what's run multiple times. In other words, improving caching in regex will not help. >sigh< Thanks, Antoine! Jeffrey.

FYI, Unladen Swallow includes several regex benchmark suites: a port of V8's regex benchmarks (regex_v8); some of the regexes used when tuning the existing sre engine 7-8 years ago (regex_effbot); and a regex_compile benchmark that tests regex compilation time. See for more details, including how to check out and run the benchmark suite. You'll need to modify your experimental Python build to have "import re" import the proposed regex engine, rather than _sre. The benchmark command would look something like `./perf.py -r -b regex /control/python /experiment/python`, which will run all the regex benchmarks in rigorous mode. I'll be happy to answer any questions you have about our benchmarks. I'd be very interested to see how the proposed regex engine performs on these tests.

I've made an installable package of Matthew Barnett's patch. It may get this to a wider audience. Next I'll look at incorporating Andrew Kuchling's suggestion of the re tests from CPython.

Hi, I've noticed 3 differences between the re and regex engines. I don't know if they are intended or not, but thought it best to mention them. (I used the issue2636-20090810#3.zip version.)
Python 2.6.2 (r262:71600, Apr 20 2009, 09:25:38)
[GCC 4.3.2 20081105 (Red Hat 4.3.2-7)] on linux2
IDLE 2.6.2
>>> import re, regex
>>> ############################################################ 1 of 3
>>> re1 = re.compile(r"""
    (?!<\w)(?P<name>[-\w]+)=
    (?P<quote>(?P<single>')|(?P<double>"))?
    (?P<value>(?(single)[^']+?|(?(double)[^"]+?|\S+)))
    (?(quote)(?P=quote))
    """, re.VERBOSE)
>>> re2 = regex.compile(r"""
    (?!<\w)(?P<name>[-\w]+)=
    (?P<quote>(?P<single>')|(?P<double>"))?
    (?P<value>(?(single)[^']+?|(?(double)[^"]+?|\S+)))
    (?(quote)(?P=quote))
    """, re.VERBOSE)
>>> re1.findall(text)
[('border', "'", "'", '', '1')]
>>> re2.findall(text)
[]
>>> re1.findall(text)
[('border', '', '', '', '1>')]
>>> re2.findall(text)
[]
>>> ############################################################ 2 of 3
>>> re1 = re.compile(r"""^[ \t]*
    (?P<parenthesis>\()?
    [- ]?
    (?P<area>\d{3})
    (?(parenthesis)\))
    [- ]?
    (?P<local_a>\d{3})
    [- ]?
    (?P<local_b>\d{4})
    [ \t]*$
    """, re.VERBOSE)
>>> data = ("179-829-2116", "(187) 160 0880", "(286)-771-3878", "(291) 835-9634",
            "353-896-0505", "(555) 555 5555", "(555) 555-5555", "(555)-555-5555",
            "555 555 5555", "555 555-5555", "555-555-5555", "601 805 3142",
            "(675) 372 3135", "810 329 7071", "(820) 951 3885", "942 818-5280",
            "(983)8792282")
>>> for d in data:
        ans1 = re1.findall(d)
        ans2 = re2.findall(d)
        print "re=%s rx=%s %d" % (ans1, ans2, ans1 == ans2)

re=[('', '179', '829', '2116')] rx=[('', '179', '829', '2116')] 1
re=[('(', '187', '160', '0880')] rx=[] 0
re=[('(', '286', '771', '3878')] rx=[('(', '286', '771', '3878')] 1
re=[('(', '291', '835', '9634')] rx=[] 0
re=[('', '353', '896', '0505')] rx=[('', '353', '896', '0505')] 1
re=[('(', '555', '555', '5555')] rx=[] 0
re=[('(', '555', '555', '5555')] rx=[] 0
re=[('(', '555', '555', '5555')] rx=[('(', '555', '555', '5555')] 1
re=[('', '555', '555', '5555')] rx=[] 0
re=[('', '555', '555', '5555')] rx=[] 0
re=[('', '555', '555', '5555')] rx=[('', '555', '555', '5555')] 1
re=[('', '601', '805', '3142')] rx=[] 0
re=[('(', '675', '372', '3135')] rx=[] 0
re=[('', '810', '329', '7071')] rx=[] 0
re=[('(', '820', '951', '3885')] rx=[] 0
re=[('', '942', '818', '5280')] rx=[] 0
re=[('(', '983', '879', '2282')] rx=[('(', '983', '879', '2282')] 1
>>> ############################################################ 3 of 3
>>> re1 = re.compile(r"""
    <img\s+[^>]*?src=(?:(?P<quote>["'])(?P<qimage>[^\1>]+?)
    (?P=quote)|(?P<uimage>[^"' >]+))[^>]*?>""", re.VERBOSE)
>>> re2 = regex.compile(r"""
    <img\s+[^>]*?src=(?:(?P<quote>["'])(?P<qimage>[^\1>]+?)
    (?P=quote)|(?P<uimage>[^"' >]+))[^>]*?>""", re.VERBOSE)
>>>
<img alt="picture" src="Big C.png" other="xyx">
<img src=icon.png alt=icon>
<img src="I'm here!.jpg" alt="aren't I?">"""
>>> data = data.split("\n")
>>> data = [x.strip() for x in data]
>>> for d in data:
        ans1 = re1.findall(d)
        ans2 = re2.findall(d)
        print "re=%s rx=%s %d" % (ans1, ans2, ans1 == ans2)

re=[("'", 'a.png', '')] rx=[("'", 'a.png', '')] 1
re=[('"', 'b.png', '')] rx=[('"', 'b.png', '')] 1
re=[('"', 'Big C.png', '')] rx=[('"', 'Big C.png', '')] 1
re=[('', '', 'icon.png')] rx=[('', '', 'icon.png alt=icon')] 0
re=[('"', "I'm here!.jpg", '')] rx=[('"', "I'm here!.jpg", '')] 1

I'm sorry I haven't had the time to try to minimize the examples, but I hope that at least they will prove helpful. Number 3 looks like a problem with non-greedy matching; I don't know about the others.

Simplification of Mark's first two problems:

Problem 1: looks like regex's negative lookahead assertion is broken

>>> re.findall(r'(?!a)\w', 'abracadabra')
['b', 'r', 'c', 'd', 'b', 'r']
>>> regex.findall(r'(?!a)\w', 'abracadabra')
[]

Problem 2: in VERBOSE mode, regex appears to be ignoring spaces inside character classes

>>> import re, regex
>>> pat = r'(\w)([- ]?)(\w{4})'
>>> for data in ['abbbb', 'a-bbbb', 'a bbbb']:
...     print re.compile(pat).findall(data), regex.compile(pat).findall(data)
...     print re.compile(pat, re.VERBOSE).findall(data), regex.compile(pat, regex.VERBOSE).findall(data)
...
[('a', '', 'bbbb')] [('a', '', 'bbbb')]
[('a', '', 'bbbb')] [('a', '', 'bbbb')]
[('a', '-', 'bbbb')] [('a', '-', 'bbbb')]
[('a', '-', 'bbbb')] [('a', '-', 'bbbb')]
[('a', ' ', 'bbbb')] [('a', ' ', 'bbbb')]
[('a', ' ', 'bbbb')] []

HTH, John

issue2636-20090815.zip fixes the bugs found in msg91598 and msg91607.
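Problem 2 above hinges on a rule worth restating: under VERBOSE, whitespace *inside* a character class is still significant, so [- ]? must keep matching a literal space. The expected stdlib behaviour, for reference:

```python
import re

# Under re.VERBOSE, whitespace outside character classes is ignored,
# but a space inside [...] remains a real character to match.
pat = r'(\w) ([- ]?) (\w{4})'   # spaces between the groups are ignored
rx = re.compile(pat, re.VERBOSE)

print(rx.findall('a bbbb'))   # the class still matches the space
print(rx.findall('a-bbbb'))   # and the hyphen
```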
The regex engine currently lacks some of the optimisations that the re engine has, but I've concluded that even with them, the extra work that the engine needs to do to make it easy to switch to breadth-wise matching when needed is slowing it down too much (if it's matching only depth-first then it can save only the changes to the 'context', but if it's matching breadth-wise then it needs to duplicate the entire 'context'). I'm therefore seeing whether I can have 2 engines internally, one optimised for depth-first and the other for breadth-wise, and switch from the former to the latter if matching is taking too long.

Matthew's 20090815.zip attachment is now on PyPI. This one, having a more complete MANIFEST, will build for people other than me.

I'd like to add some detail to the previous msg91473. The current behaviour of the character properties looks a bit surprising sometimes:

>>> regex.findall(ur"\p{UppercaseLetter}", u"QW\p{UppercaseLetter}as")
[u'Q', u'W', u'U', u'L']
>>> regex.findall(ur"\p{Uppercase Letter}", u"QW\p{Uppercase Letter}as")
[u'\\p{Uppercase Letter}']
>>> regex.findall(ur"\p{UppercaseÄÄÄLetter}", u"QW\p{UppercaseÄÄÄLetter}as")
[u'\\p{Uppercase\xc4\xc4\xc4Letter}']
>>> regex.findall(ur"\p{UppercaseQQQLetter}", u"QW\p{UppercaseQQQLetter}as")
Traceback (most recent call last):
  File "<pyshell#34>", line 1, in <module>
    regex.findall(ur"\p{UppercaseQQQLetter}", u"QW\p{UppercaseQQQLetter}as")
  ...
  File "C:\Python26\lib\regex.py", line 1178, in _parse_property
    raise error("undefined property name '%s'" % name)
error: undefined property name 'UppercaseQQQLetter'
>>>

i.e. potential property names consisting only of ASCII letters (+ _, -) are looked up and either used or an error is raised; other names (containing whitespace or non-ASCII letters) aren't treated as a special expression, hence they either match their literal value or simply don't match (without errors). Is this the intended behaviour?
I am not sure whether it is defined somewhere, or whether there are some de-facto standards for this... I guess the space in the property names might be allowed (unless there are some implications for the parser...); otherwise the fallback handling of invalid property names as normal strings is probably the expected way. vbr

issue2636-20100116.zip is a new version of the regex module. I've given up on the breadth-wise matching - it was too difficult finding a pattern structure that would work well for both depth-first and breadth-wise. It probably still needs some tweaks and tidying up, but I thought I might as well release something!

issue2636-20100204.zip is a new version of the regex module. I've added splititer and added a build for Python 3.1.

Hi, thanks for the update! Just for the unlikely case it hasn't been noticed so far: using Python 2.6.4 or 2.5.4 with the regex build issue2636-20100204.zip I am getting the following easy-to-fix error:

Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import regex
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\regex.py", line 2003
    print "Header file written at %s\n" % os.path.abspath(header_file.name))
                                                                           ^
SyntaxError: invalid syntax

After removing the extra closing paren in regex.py, line 2003, everything seems OK.
vbr

I'd like to add another issue I encountered with the latest version of regex - issue2636-20100204.zip. It seems that there is an error in handling some quantifiers in Python 2.5. On

Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32

I get e.g.:

>>> regex.findall(ur"q*", u"qqwe")
Traceback (most recent call last):
  File "<pyshell#35>", line 1, in <module>
    regex.findall(ur"q*", u"qqwe")
  File "C:\Python25\lib\regex.py", line 213, in findall
    return _compile(pattern, flags).findall(string, overlapped=overlapped)
  File "C:\Python25\lib\regex.py", line 633, in _compile
    p = _regex.compile(pattern, info.global_flags | info.local_flags, code, info.group_index, index_group)
RuntimeError: invalid RE code

There is the same error for other possibly "infinite" quantifiers like "q+", "q{0,}" etc., with their non-greedy and possessive variants. On Python 2.6 and 3.1 all these patterns work without errors. vbr

issue2636-20100210.zip is a new version of the regex module. The reported bugs appear to be fixed now.
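For reference on the property-name question raised earlier: Unicode's own recommendation (UAX #44, "loose matching") is that case, whitespace, hyphens and underscores in property names are all insignificant. A sketch of that normalisation step (the function is illustrative, not the module's actual parser):

```python
import re

# UAX #44 loose matching: strip whitespace, hyphens and underscores,
# then fold case, so all spellings of a property name compare equal.
def normalize_property_name(name):
    return re.sub(r"[\s_\-]+", "", name).lower()

for form in ("Lowercase_Letter", "Lowercase Letter",
             "Lowercase-Letter", "LowercaseLetter"):
    print(normalize_property_name(form))   # lowercaseletter, four times
```

With this in place, "Ll" would remain a separate alias lookup, but every long-form spelling collapses to one key.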
Thanks for the quick update; I confirm the fix for both issues. Just another finding (while testing the behaviour mentioned previously - msg91917): the property-name normalisation seems to be much more robust now, but I just encountered an encoding error using a rather artificial input (in Python 2.5, 2.6):

>>> regex.findall(ur"\p{UppercaseÄÄÄLetter}", u"QW\p{UppercaseÄÄÄLetter}as")
Traceback (most recent call last):
  File "<pyshell#4>", line 1, in <module>
    regex.findall(ur"\p{UppercaseÄÄÄLetter}", u"QW\p{UppercaseÄÄÄLetter}as")
  File "C:\Python25\lib\regex.py", line 213, in findall
    return _compile(pattern, flags).findall(string, overlapped=overlapped)
  File "C:\Python25\lib\regex.py", line 599, in _compile
    parsed = _parse_pattern(source, info)
  File "C:\Python25\lib\regex.py", line 690, in _parse_pattern
    branches = [_parse_sequence(source, info)]
  File "C:\Python25\lib\regex.py", line 702, in _parse_sequence
    item = _parse_item(source, info)
  File "C:\Python25\lib\regex.py", line 710, in _parse_item
    element = _parse_element(source, info)
  File "C:\Python25\lib\regex.py", line 837, in _parse_element
    return _parse_escape(source, info, False)
  File "C:\Python25\lib\regex.py", line 1098, in _parse_escape
    return _parse_property(source, info, in_set, ch)
  File "C:\Python25\lib\regex.py", line 1240, in _parse_property
    raise error("undefined property name '%s'" % name)
error: <unprintable error object>
>>>

Not sure how this would be fixed (i.e. whether the error message should be changed to unicode, if applicable). Not surprisingly, in Python 3.1 there is a correct message at the end: regex.error: undefined property name 'UppercaseÄÄÄLetter' vbr

I've been aware for some time that exception messages in Python 2 can't be Unicode, but I wasn't sure which encoding to use, so I've decided to use that of sys.stdout. It appears to work OK in IDLE and at the Python prompt. issue2636-20100211.zip is the new version of the regex module.
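A sketch of the kind of fallback being weighed here, written in Python 3 form with an explicit encode step. The helper and its fall-back-to-repr choice are my own assumptions, not the module's actual code:

```python
# Hypothetical safe formatter for the "undefined property name" error:
# try the target encoding, and fall back to a repr-based message when
# the name contains characters that encoding cannot represent.
def safe_message(name, encoding="ascii"):
    try:
        return ("undefined property name '%s'" % name).encode(encoding).decode(encoding)
    except UnicodeEncodeError:
        return "undefined property name %r" % name

print(safe_message("LowercaseLetter"))           # encodes cleanly
print(safe_message("Uppercase\xc4Letter"))       # falls back to repr form
```

The point is simply that the formatter should never raise while reporting someone else's error, which is what the cp1250 and PseudoFileOut tracebacks below show happening.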
issue2636-20100217.zip is a new version of the regex module. It includes a fix for issue #7940.

I've packaged this latest revision and uploaded it to PyPI.

The main text at appears to have lost its backslashes, for example:

The Unicode escapes uxxxx and Uxxxxxxxx are supported.

instead of:

The Unicode escapes \uxxxx and \Uxxxxxxxx are supported.

I just tested the fix for unicode tracebacks and found some possibly weird results (not sure how/whether it should be fixed, as these inputs are indeed rather artificial...). (Win XPp SP3 Czech, Python 2.6.4.) Using the cmd console, the output is fine (for the characters it can accept and display):

>>> regex.findall(ur"\p{InBasicLatinĚ}", u"aé")
Traceback (most recent call last):
  ...
  File "C:\Python26\lib\regex.py", line 1244, in _parse_property
    raise error("undefined property name '%s'" % name)
regex.error: undefined property name 'InBasicLatinĚ'
>>>

(Same result for other distorted "property names" containing e.g. ěščřžýáíéúůßäëiöüîô ...)

However, in IDLE the output differs depending on the characters present:

>>> regex.findall(ur"\p{InBasicLatinÉ}", u"ab c")

yields the expected

...
  File "C:\Python26\lib\regex.py", line 1244, in _parse_property
    raise error("undefined property name '%s'" % name)
error: undefined property name 'InBasicLatinÉ'

but

>>> regex.findall(ur"\p{InBasicLatinĚ}", u"ab c")
Traceback (most recent call last):
  ...
  File "C:\Python26\lib\regex.py", line 1244, in _parse_property
    raise error("undefined property name '%s'" % name)
  File "C:\Python26\lib\regex.py", line 167, in __init__
    message = message.encode(sys.stdout.encoding)
  File "C:\Python26\lib\encodings\cp1250.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode character u'\xcc' in position 37: character maps to <undefined>

which might be surprising, as cp1250 should be able to encode "Ě"; maybe there is some intermediate ascii step?
Using the wxPython PyShell I get its specific encoding error:

regex.findall(ur"\p{InBasicLatinÉ}", u"ab c")
Traceback (most recent call last):
  ...
  File "C:\Python26\lib\regex.py", line 1102, in _parse_escape
    return _parse_property(source, info, in_set, ch)
  File "C:\Python26\lib\regex.py", line 1244, in _parse_property
    raise error("undefined property name '%s'" % name)
  File "C:\Python26\lib\regex.py", line 167, in __init__
    message = message.encode(sys.stdout.encoding)
AttributeError: PseudoFileOut instance has no attribute 'encoding'

(The same for \p{InBasicLatinĚ} etc.) In Python 3.1 in IDLE, all of these exceptions are displayed correctly, also in other scripts or with special characters. Maybe in Python 2.x e.g. repr(...) of the unicode error messages could be used in order to avoid these problems, but I don't know what the conventions are in these cases.

Another issue I found here (unrelated to tracebacks) is backslashes or punctuation (except the handled -_) in the property names, which just leads to failed matches and no exceptions about unknown property names:

regex.findall(u"\p{InBasic.Latin}", u"ab c")
[]

I was also surprised by the added pos/endpos parameters, as I used flags as a non-keyword third parameter for the re functions in my code (probably my fault ...):

re.findall(pattern, string, flags=0)
regex.findall(pattern, string, pos=None, endpos=None, flags=0, overlapped=False)

(Is there a specific reason for this order, or could it be changed to maintain compatibility with the current re module?) I hope at least some of these remarks make some sense; thanks for the continued work on this module! vbr

issue2636-20100218.zip is a new version of the regex module. I've added '.' to the permitted characters when parsing the name of a property. The name itself is no longer reported in the error message.
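One way for callers to stay insulated from signature differences like the pos/endpos one above is to pass the optional arguments by keyword, which works regardless of positional order. Illustrated here with stdlib re:

```python
import re

# Passing flags by keyword instead of by position keeps calls working
# even when a function grows extra positional parameters (such as the
# pos/endpos pair discussed above).
print(re.findall(r"[ab]", "aB", flags=re.I))   # ['a', 'B']
```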
I've also corrected the positions of the 'pos' and 'endpos' arguments:

regex.findall(pattern, string, flags=0, pos=None, endpos=None, overlapped=False)

Thanks for fixing the argument positions; unfortunately, it seems there might be some other problem that makes my code work differently than the builtin re: it seems that in character classes the ignorecase flag is ignored somehow:

>>> regex.findall(r"[ab]", "aB", regex.I)
['a']
>>> re.findall(r"[ab]", "aB", re.I)
['a', 'B']
>>>

(The same with the flag set in the pattern.) Outside of the character class the case seems to be handled normally, or am I missing something? vbr

issue2636-20100219.zip is a new version of the regex module. The regex module should give the same results as the re module for backwards compatibility. The ignorecase bug is now fixed. This new version releases the GIL when matching on str and bytes (str and unicode in Python 2.x).

On 17 February 2010 19:35, Matthew Barnett <report@bugs.python.org> wrote:
> The main text at appears to have lost its backslashes, for example:
>
> The Unicode escapes uxxxx and Uxxxxxxxx are supported.
>
> instead of:
>
> The Unicode escapes \uxxxx and \Uxxxxxxxx are supported.

Matthew,

As you no doubt realised, that text is read straight from the Features.txt file. PyPI interprets it as reStructuredText, which uses \ as an escape character in various cases. Do you intentionally write Features.txt as reStructuredText? If so, here is a patch that escapes the \ characters as appropriate; otherwise I'll work out how to make PyPI read it as plain text.

Regards, Alex
--
Alex Willmer <alex@moreati.org.uk>

To me the extension .txt means plain text. Is there a specific extension for reStructuredText, e.g. .rst?

issue2636-20100222.zip is a new version of the regex module. This new version adds reverse searching. The 'features' now come in reStructuredText (.rst) and HTML.

Is the issue2636-20100222.zip archive supposed to be complete?
I can't find the rst or html "features", nor, more importantly, the py and pyd files for the particular versions.

Anyway, I just skimmed through the regular-expressions.info documentation and found that most features which I missed in the builtin re version seem to be present in the regex module; a few possibly notable exceptions being some Unicode features: support for Unicode script properties might be needlessly complex (maybe unless is implemented).

On the other hand, \X for matching any single grapheme might be useful; according to the mentioned page, the currently working equivalent would be

\P{M}\p{M}*

However, I am not sure about the compatibility concerns; it is possible that the modifier characters as a part of graphemes might cause some discrepancies in the text indices etc.

A feature where I personally (currently) can't find a use case is \G and continuing matches (but no doubt there would be some cases for this).

regards, vbr

I don't know what happened there. I didn't notice that the zip file was way too small. Here's a replacement (still called issue2636-20100222.zip).

Unicode script properties are already included, at least those whose definitions at

I haven't noticed \X before. I'll have a look at it.

As for \G, .findall performs searches normally, but when using \G it effectively performs contiguous matches only, which can be useful when you need it!

OK, you've convinced me, \X is supported. :-)

issue2636-20100223.zip is a new version of the regex module.

On 22 Feb 2010, at 21:24, Matthew Barnett <report@bugs.python.org> wrote:
> issue2636-20100222.zip is a new version of the regex module.
>
> This new version adds reverse searching.
>
> The 'features' now come in ReStructuredText (.rst) and HTML

Thank you Matthew. My laptop is out of action, so it will be a few days before I can upload a new version to PyPI. If you would prefer to have control of the PyPI package, or to share control, please let me know.
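The \P{M}\p{M}* equivalence mentioned above can be sketched in pure Python with unicodedata. This is a simplified approximation of \X (a base character plus any trailing combining marks), not the full Unicode grapheme-cluster rules:

```python
import unicodedata

def graphemes(text):
    """Split text into simple grapheme clusters: a non-mark character
    followed by any combining marks (the \\P{M}\\p{M}* approximation
    discussed above -- not the full grapheme-cluster algorithm)."""
    clusters = []
    for ch in text:
        # Categories Mn, Mc and Me form the Unicode 'Mark' group (M).
        if clusters and unicodedata.category(ch).startswith('M'):
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters

# 'c' + COMBINING ACUTE ACCENT stays together as one cluster.
print(graphemes("ab c\u0301d"))  # ['a', 'b', ' ', 'c\u0301', 'd']
```

This also shows the index discrepancy the poster worries about: one cluster can span several code points, so cluster counts and string indices diverge.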
Alex

Wow, that's what can be called rapid development :-), thanks very much!

I didn't notice before that \G had been implemented already. \X works fine for me; it also maintains the input string indices correctly. We can use Unicode character properties (\p{Letter}) and Unicode block properties (\p{inBasicLatin}); the script properties like \p{Latin} or \p{IsLatin} return "undefined property name". I guess this would require access to the respective information in unicodedata, where it isn't available now (there also seem to be many more scripts than those mentioned at regular-expressions.info, cf. (under "# Script (sc)")).

vbr

issue2636-20100224.zip is a new version of the regex module. It includes support for matching based on Unicode scripts as well as on Unicode blocks and properties.

Thanks, it's indeed a very nice addition to the library... Just a marginal remark: it seems that in the script names some non-BMP characters are also covered, whereas the Unicode block ranges cover only the BMP. Am I missing something more complex, as to why the 10000..10FFFF ranges weren't included in _BLOCKS? Maybe building these ranges is expensive, in contrast to rare uses of these properties? (Not that I am able to reliably test it on my "narrow" Python build on Windows, but currently, obviously, e.g. \p{InGothic} gives "undefined property name" whereas \p{Gothic} is accepted.)

vbr

It was more of an oversight. issue2636-20100225.zip now contains the full list of both blocks and scripts.

issue2636-20100226.zip is a new version of the regex module. It now supports the branch reset (?|...|...), enabling the different branches of an alternation to reuse group numbers.
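A \p{In...} block lookup like the _BLOCKS table discussed above boils down to a binary search over sorted codepoint ranges. The sketch below uses a tiny, hand-picked subset of the Unicode Blocks.txt data purely for illustration; the real table has hundreds of entries, including the astral 10000..10FFFF ranges at issue:

```python
import bisect

# Illustrative subset of Unicode block ranges (start, end, name).
# This is NOT the regex module's actual _BLOCKS data.
_BLOCKS = [
    (0x0000, 0x007F, "Basic Latin"),
    (0x0370, 0x03FF, "Greek and Coptic"),
    (0x10330, 0x1034F, "Gothic"),     # a non-BMP (astral) block
]
_STARTS = [b[0] for b in _BLOCKS]

def block_name(codepoint):
    """Return the block containing codepoint, or None if uncovered."""
    i = bisect.bisect_right(_STARTS, codepoint) - 1
    if i >= 0 and codepoint <= _BLOCKS[i][1]:
        return _BLOCKS[i][2]
    return None

print(block_name(ord("A")))   # Basic Latin
print(block_name(0x10337))    # Gothic
```

Including astral ranges costs nothing extra at match time; each lookup is still a single bisect, which supports the "oversight" explanation.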
On 26 February 2010 03:20, Matthew Barnett <report@bugs.python.org> wrote:
> Added file:

This is now uploaded to PyPI
--
Alex Willmer <alex@moreati.org.uk>

I just noticed a corner case with the newly introduced grapheme matcher \X, if it is used in a character set:

>>> regex.findall("\X", "abc")
['a', 'b', 'c']
>>> regex.findall("[\X]", "abc")
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "regex.pyc", line 218, in findall
  File "regex.pyc", line 1435, in _compile
  File "regex.pyc", line 2351, in optimise
  File "regex.pyc", line 2705, in optimise
  File "regex.pyc", line 2798, in optimise
  File "regex.pyc", line 2268, in __hash__
AttributeError: '_Sequence' object has no attribute '_key'

It obviously doesn't make much sense to use this universal literal in a character class (the same with "." in its metacharacter role), and also doesn't mention this possibility; but the error message could probably be more descriptive, or the pattern might match "X", or "\" and "\X" (?).

I was originally thinking about the possibility of combining positive and negative character classes, where e.g. \X would be a kind of base; I am not aware of any re engine supporting this, but I eventually found a Unicode guideline for regular expressions which also covers this:

It is also a bit surprising that these are all included in Basic Unicode Support: Level 1 (even with arbitrary unions, intersections, differences ...); it suggests that there is probably no implementation available (AFAIK) - even on this basic level, according to this guideline.

Among other features on this level, the section seems useful, especially the handling of the characters beyond \uffff, also in the form of surrogate pairs as single characters. This might be useful on the narrow Python builds, but it is possible that there would be an incompatibility with the handling of these data in "narrow" Python itself.
Just some suggestions or rather remarks, as you have already implemented many advanced features and are also considering some different approaches ... :-)

vbr

\X shouldn't be allowed in a character class because it's equivalent to \P{M}\p{M}*. It's a bug, now fixed in issue2636-20100304.zip.

I'm not convinced about the set intersection and difference stuff. Isn't that overdoing it a little? :-)

Actually I had that impression too, but I was mainly surprised that these requirements are on the lowest level of Unicode support. Anyway, maybe the relevance of these guidelines for real libraries is lower than I expected. Probably the simpler cases are adequately handled with lookarounds, e.g.

(?:\w(?<!\p{Greek}))+

and the complex examples like symmetric differences seem to be beyond the normal scope of re anyway.

Personally, I would find the surrogate handling more useful, but I see that it isn't actually the job of the re library, given that the narrow build of Python doesn't support indexing, slicing, or len of these characters either...

vbr

issue2636-20100305.zip is a new version of the regex module. Just a few tweaks.

I've adapted the Python 2.6.5 test_re.py as follows:

 from test.test_support import verbose, run_unittest
-import re
-from re import Scanner
+import regex as re
+from regex import Scanner

and run it against regex-2010305. Three tests failed, and the report is attached.

Does regex.py have its own test suite (which also includes tests for all the problems reported in the last few messages)? If so, the new tests could be merged into re's test_re. This will simplify the testing of regex.py and will improve the test coverage of re.py, possibly finding new bugs. It will also be useful to check if the two libraries behave in the same way.

I am not sure about the test suite for this regex module, but it seems to me that many of the problems reported here probably don't apply to the current builtin re, as they are connected with the new features of regex.
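The lookaround workaround quoted above generalizes: a zero-width lookahead effectively intersects two character classes, and a negative lookahead computes a difference, even in the stdlib re module. A small sketch (patterns and inputs are illustrative):

```python
import re

# Intersection via a zero-width lookahead: a character must satisfy
# [a-z] (the lookahead) AND [^f-z] (the consuming class).
print(re.findall(r"(?=[a-z])[^f-z]", "abcfgz"))  # ['a', 'b', 'c']

# Difference via a negative lookahead: word characters that are
# not digits.
print(re.findall(r"(?!\d)\w", "a1b2_"))          # ['a', 'b', '_']
```

This is clumsier than the set-operation syntax UTS #18 asks for, but it covers the simpler cases the poster mentions.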
After the suggestion in msg91462, I briefly checked the re test suite and found it very comprehensive, given the feature set. Of course, most/all? re tests should apply to regex, but probably not vice versa.

vbr

issue2636-20100323.zip is a new version of the regex module. It now includes a test script. Most of the tests come from the existing test scripts.

issue2636-20100331.zip is a new version of the regex module. It includes speed-ups and a minor bugfix.

issue2636-20100413.zip is a new version of the regex module. It includes additional speed-ups.

On 13 April 2010 03:21, Matthew Barnett <report@bugs.python.org> wrote:
> issue2636-20100413.zip is a new version of the regex module.

Matthew,

When I run test_regex.py, 6 tests are failing, with Python 2.6.5 on Ubuntu Lucid and my setup.py. Attached is the output; do all the tests pass in your build?

Alex

Yes, it passed all the tests, although I've since found a minor bug that isn't covered/caught by them, so I'll need to add a few more tests.

Anyway, do:

regex.match(ur"\p{Ll}", u"a")
regex.match(ur'(?u)\w', u'\xe0')

really return None? Your results suggest that they won't. I downloaded Python 2.6.5 (I was using Python 2.6.4) just in case, but it still passes (WinXP, 32-bit).

On 13 April 2010 18:10, Matthew Barnett <report@bugs.python.org> wrote:
> Anyway, do:
>
> regex.match(ur"\p{Ll}", u"a")
> regex.match(ur'(?u)\w', u'\xe0')
>
> really return None? Your results suggest that they won't.

Python 2.6.5 (r265:79063, Apr 3 2010, 01:56:30)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import regex
>>> regex.__version__
'2.3.0'
>>> print regex.match(ur"\p{Ll}", u"a")
None
>>> print regex.match(ur'(?u)\w', u'\xe0')
None

I thought it might be a 64-bit issue, but I see the same result in a 32-bit VM. That leaves my build process. Attached is the setup.py and build output; unicodedata_db.h was taken from the Ubuntu source deb for Python 2.6.5.
issue2636-20100414.zip is a new version of the regex module. I think I might have identified the cause of the problem, although I still haven't been able to reproduce it, so I can't be certain.

Oops, forgot the file! :-)

On 14 April 2010 00:33, Matthew Barnett <report@bugs.python.org> wrote:
> I think I might have identified the cause of the problem, although I still haven't been able to reproduce it, so I can't be certain.

Performed 76
Passed

Looks like you got it.

I just noticed a somewhat strange behaviour in matching character sets or alternations which contain some more "advanced" Unicode characters together with some "simpler" ones. The former seem to be ignored and not matched (the original re engine matches all of them); (Win XPh SP3 Czech, Python 2.7; regex issue2636-20100414):

>>> print u"".join(regex.findall(u".", u"eèéêëēěė"))
eèéêëēěė
>>> print u"".join(regex.findall(u"[eèéêëēěė]", u"eèéêëēěė"))
eèéêëē
>>> print u"".join(regex.findall(u"e|è|é|ê|ë|ē|ě|ė", u"eèéêëēěė"))
eèéêëē
>>> print u"".join(re.findall(u"[eèéêëēěė]", u"eèéêëēěė"))
eèéêëēěė
>>> print u"".join(re.findall(u"e|è|é|ê|ë|ē|ě|ė", u"eèéêëēěė"))
eèéêëēěė

Even stranger, if the pattern contains only these "higher" Unicode characters, everything works OK:

>>> print u"".join(regex.findall(u"ē|ě|ė", u"eèéêëēěė"))
ēěė
>>> print u"".join(regex.findall(u"[ēěė]", u"eèéêëēěė"))
ēěė

The characters in question are some accented Latin letters (here in ascending codepoints), but it can be other scripts as well.

>>> print regex.findall(u".", u"eèéêëēěė")
[u'e', u'\xe8', u'\xe9', u'\xea', u'\xeb', u'\u0113', u'\u011b', u'\u0117']

The threshold isn't obvious to me; at first I thought the characters represented as Unicode escapes were problematic, whereas those with hexadecimal escapes were OK; however, ē - u'\u0113' seems OK too.
(Python 3.1 behaves identically:

>>> regex.findall("[eèéêëēěė]", "eèéêëēěė")
['e', 'è', 'é', 'ê', 'ë', 'ē']
>>> regex.findall("[ēěė]", "eèéêëēěė")
['ē', 'ě', 'ė']

)

vbr

issue2636-20100706.zip is a new version of the regex module. I've added your examples to the unit tests. The module now passes.

Keep up the good work! :-)

Matthew, I'd like to see at least some of these features in 3.2, but ISTM that after more than 2 years this issue is not going anywhere. Is the module still under active development? Is it "ready"? Is it waiting for reviews and to be added to the stdlib? Is it waiting for more people to test it on PyPI? If the final goal is adding it to the stdlib, are you planning to add it as a new module or to replace the current 're' module? (Or is 'regex' just the 're' module with improvements that could be merged?) Another alternative would be to split it into smaller patches (ideally one per feature) and integrate them one by one, but IIRC several of the patches depend on each other, so it can't be done easily. Unless there is already a plan about this (and I'm not aware of it), I'd suggest bringing this up on python-dev and deciding what to do with the 'regex' module.

I've packaged Matthew's latest revision and uploaded it to PyPI. This version will build for Python 2 and Python 3; parallel installs will coexist on the same machine.

I started with trying to modify the existing re module, but I wanted to make too many changes, so in the end I decided to make a clean break and start on a new implementation which was compatible with the existing re module and which could replace the existing implementation, even under the same name. Apart from the recent bug fix, I haven't done any further work on it since April because I think it's pretty much ready.

So, if it's pretty much ready, do you think it could be included already in 3.2? Before anything else is done with it, it should probably be announced in some way.
I'm not sure if anyone has opened any of these zip files, reviewed anything, ran anything, or if anyone even knows this whole thing has been going on.

The file at: was downloaded 75 times, if that's any help. (Now reset to 0 because of the bug fix.)

If it's included in 3.2 then there's the question of whether it should replace the re module and be called "re".

If it's backward-compatible with the 're' module, all the tests of the test suite pass and it just improves it and adds features, I don't see why not. (That's just my personal opinion though; other people might (and probably will) disagree.) Try to send an email to python-dev and see what they say.

My only additional opinion is that re is very much used in deployed Python applications and was written not just for correctness but also speed. As such, regex should be benchmarked fairly to show that it is commensurately speedy. I wouldn't personally object to a slightly slower module, though not one that is noticeably slower; and if it can be proven faster in the average case, that's one more check in the box for favorable inclusion.

Thanks for the prompt fix! It would indeed be nice to see this enhanced re module in the standard library, e.g. in 3.2, but I also really appreciate that multiple 2.x versions are supported as well (as my current main usage of this library involves a py2-only wx GUI). As for the usage statistics, I for one have always downloaded the updates from here rather than PyPI, but maybe that is not the regular case.

FWIW, I'd love to see the updated regex module in 3.2. Please do bring it up on python-dev.

Looking at the latest module on PyPI, I noted that the regex.py file is very long (~3500 lines), even though it is quite compressed (e.g. no blank lines between methods). It would be good to split it up. This would also remove the need for underscore-prefixing most of the identifiers, since they would simply live in another (private) module.
Things like the _create_header_file function should be put into utility scripts. The C file is also very long, but I think we all know why :)

It would also be nice to see some performance comparisons -- where is the new engine faster, where does it return matches while re just loops forever, and where is the new engine slower?

On 6 July 2010 18:03, Matthew Barnett <report@bugs.python.org> wrote:
> The file at was downloaded 75 times, if that's any help. (Now reset to 0 because of the bug fix.)

Each release was downloaded between 50 and 100 times. Matthew, let me know if you'd like control of the package, or maintainer access. Other than the odd tweet I haven't publicized the releases.

As a crude guide of the speed difference, here's Python 2.6:

                        re           regex
bm_regex_compile.py     86.53secs    260.19secs
bm_regex_effbot.py      13.70secs      8.94secs
bm_regex_v8.py          15.66secs      9.09secs

Note that compiling regexes is a lot slower. I concentrated my efforts on the matching speed because regexes tend to be compiled only once, so it's not as important. Matching speed should _at worst_ be comparable.

On the PyPI page, in the "Subscripting for groups" bullet it gives this pattern:

r"(?<before>.*?)(?<num>\\d+)(?<after>.*)"

Shouldn't this be:

r"(?P<before>.*?)(?P<num>\\d+)(?P<after>.*)"

Or has a new syntax been introduced?

If you do:

>>> import regex as re
>>> dir(re)

you get over 160 items, many of which begin with an underscore and so are private. Couldn't __dir__ be reimplemented to eliminate them? (I know that the current re module's dir() also returns private items, but I guess this is a legacy of not having the __dir__ special method?)

I was wrong about r"(?<name>.*)". It is valid in the new engine. And the PyPI docs do say so immediately _following_ the example. I've tried all the examples in "Programming in Python 3, second edition" using "import regex as re" and they all worked.
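The stdlib (?P<name>...) spelling from the pattern quoted above works as expected; a quick check with the builtin re (single backslashes here, since it's a raw string, and the input "abc123def" is just a made-up example):

```python
import re

# Named groups with the stdlib (?P<name>...) syntax; the regex module
# additionally accepts the shorter (?<name>...) spelling.
m = re.match(r"(?P<before>.*?)(?P<num>\d+)(?P<after>.*)", "abc123def")
print(m.group("num"))   # '123'
print(m.groupdict())    # {'before': 'abc', 'num': '123', 'after': 'def'}
```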
Mark, __dir__ as a special method only works when defined on types, so you'd have to use a module subclass for the "regex" module :) As I already suggested, it is probably best to move most of the private stuff into a separate module, and only import the really needed entry points into the regex module.

issue2636-20100709.zip is a new version of the regex module. I've moved most of the regex module's Python code into a private module.

The most recent version on PyPI (20100709) seems to be missing _regex_core from py_modules in setup.py. Currently import regex fails, unable to locate _regex_core.

On 13 July 2010 22:34, Jonathan Halcrow <report@bugs.python.org> wrote:
> The most recent version on pypi (20100709) seems to be missing _regex_core from py_modules in setup.py.

Sorry, my fault. I've uploaded a corrected version.

issue2636-20100719.zip is a new version of the regex module. Just a few more tweaks for speed.

Thanks for the update; just a small observation regarding some character ranges and ignorecase, probably irrelevant, but a difference from the current re anyway:

>>> zero2z = u"0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz"
>>> re.findall("(?i)[X-d]", zero2z)
[]
>>> regex.findall("(?i)[X-d]", zero2z)
[u'A', u'B', u'C', u'D', u'X', u'Y', u'Z', u'[', u'\\', u']', u'^', u'_', u'`', u'a', u'b', u'c', u'd', u'x', u'y', u'z']
>>> re.findall("(?i)[B-d]", zero2z)
[u'B', u'C', u'D', u'b', u'c', u'd']
>>> regex.findall("(?i)[B-d]", zero2z)
', u'[', u'\\', u']', u'^',']

It seems that the re module is building the character set using a case-insensitive "alphabet" in some way. I guess the behaviour of re is buggy here, while regex is OK (tested on py 2.7, Win XPp).

vbr

This has already been reported in issue #3511.

issue2636-20100725.zip is a new version of the regex module. More tweaks for speed.
                        re           regex
bm_regex_compile.py     87.05secs    278.00secs
bm_regex_effbot.py      14.00secs      6.58secs
bm_regex_v8.py          16.11secs      6.66secs

On 25 July 2010 03:46, Matthew Barnett <report@bugs.python.org> wrote:
> issue2636-20100725.zip is a new version of the regex module.

This is now packaged and uploaded to PyPI

Does 'regex' implement "default word boundaries" (see #7255)?

No. Wouldn't that break compatibility with 're'?

What about a regex flag? Like regex.W or (?w)?

That's a possibility. I must admit that I don't understand it well enough to implement it (the OP said "I don't believe that the algorithm for this is a whole lot more complicated"), and I don't have a need for it myself, but if someone would like to provide some code for it, even if it's in the form of a function written in Python:

def at_default_word_boundary(text, pos):
    ...

then I'll see what I can do! :-)

Wishlist item: could you give the regex and match classes nicer names, so that they can be referenced as `regex.Pattern` (or `regex.Regex`) and `regex.Match`?

issue2636-20100814.zip is a new version of the regex module. I've added default Unicode word boundaries and renamed the Pattern and Match classes. Over to you, Alex. :-)

On 14 August 2010 21:24, Matthew Barnett <report@bugs.python.org> wrote:
> Over to you, Alex. :-)

Et voilà, an exciting Saturday evening.

Matthew, I'm currently keeping regex in a private bzr repository. Do you have yours in source control? If so/not, could we make yours/mine public, and keep everything in one repository?
--
Alex Willmer <alex@moreati.org.uk>

issue2636-20100816.zip is a new version of the regex module. Unfortunately I came across a bug in the handling of sets. More unit tests added.

issue2636-20100824.zip is a new version of the regex module. More speedups. Getting towards Perl speed now, depending on the regex. :-)

issue2636-20100912.zip is a new version of the regex module. More speedups. I've been comparing the speed against Perl wherever possible.
In some cases Perl is lightning fast, probably because regex is built into the language and it doesn't have to parse method arguments (for some short regexes a large part of the processing time is spent in PyArg_ParseTupleAndKeywords!). In other cases, where it has to use Unicode codepoints outside the 8-bit range, or character properties such as \p{Alpha}, its performance is simply appalling! :-)

(?flags) are still scoping by default... a new flag to activate that behavior would really be helpful :)

Another flag? Hmm. How about this instead: if a scoped flag appears at the end of a regex (and would therefore normally have no effect) then it's treated as though it's at the start of the regex. Thus:

foo(?i)

is treated like:

(?i)foo

Not that my opinion matters, but for what it is worth, I find it rather unusual to have to use special flags to get "normal" (for some definition of normal) behaviour, while keeping the defaults buggy in some way (like ZEROWIDTH). I would think the backwards compatibility would not be needed under these circumstances - in such probably marginal cases (or is setting global flags at the end, or elsewhere than at the beginning of the pattern, that frequent?). It seems that with many new features and enhancements for previously "impossible" patterns, chances are that code using regular expressions in a more advanced way might benefit from reviewing the patterns (where the flags for "historical" behaviour could also be adjusted if really needed).

Anyway, thanks for further improvements! (Although it broke my custom function previously misusing the internal data of the regex module for getting the Unicode script property (currently unavailable via unicodedata) :-).
Best regards, vbr

The tests for re include these regexes:

a.b(?s)
a.*(?s)b

I understand what Georg said previously about some people preferring to put them at the end, but I personally wouldn't do that, because some regex implementations support scoped inline flags, although others, like re, don't. I think that second regex is a bit perverse, though! :-)

On the other matter, I could make the Unicode script and block available through a couple of functions if you need them, e.g.:

# Using Python 3 here
>>> regex.script("A")
'Latin'
>>> regex.block("A")
'BasicLatin'

Matthew, I understand why you want to have these flags scoped, and if you designed a regex dialect from scratch, that would be the way to go. However, if we want to integrate this into Python 3.2 or 3.3, this is an absolute killer if it's not backwards compatible. I can live with behavior changes that really are bug fixes, and of course with new features that were invalid syntax before, but this is changing an aspect that was designed that way (as the test case shows), and that is really not going to happen without an explicit new flag. Special-casing the "flags at the end" case is too magical to be of any help.

It will be hard enough to get your code into Python -- it is a huge new codebase for an absolutely essential module. I'm nevertheless optimistic that it is going to happen at some point or other. Of course, you would have to commit to maintaining it within Python for the foreseeable future.

The "script" and "block" functions really belong in unicodedata; you'll have to coordinate that with Marc-Andre.

@Vlastimil: backwards compatibility is needed very much here. Nobody wants to review all their regexes when switching from Python 3.1 to Python 3.2. Many people will not care about the improved engine; they just expect their regexes to work as before, and that is a perfectly fine attitude.
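Scoped (group-local) inline flags of the kind debated here did eventually reach the stdlib: modern re (Python 3.6+) accepts (?flags:...) groups, which limit a flag to part of the pattern instead of making it global. A sketch of the difference:

```python
import re

# A local flag group applies only inside the group: 'a' is matched
# case-insensitively, while 'b' stays case-sensitive.
print(re.findall(r"(?i:a)b", "Ab ab AB"))  # ['Ab', 'ab']

# A global inline flag applies to the whole pattern.
print(re.findall(r"(?i)ab", "Ab ab AB"))   # ['Ab', 'ab', 'AB']
```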
Thank you both for the explanations; I somehow suspected there would be some strong reasoning for the conservative approach with regard to backward compatibility.

Thanks for the block() and script() offer, Matthew, but I believe this might clutter the interface of the module, while it belongs somewhere else. (Personally, I just solved this need by directly grabbing using regex :-)

It might be part of the problem for unicodedata that this is another data file than UnicodeData.txt (which is the only one used currently, IIRC). On the other hand, it might be worthwhile to synchronise these features with such updates in unicodedata (block, script, Unicode range; maybe the full names of the character properties might be added too). As Unicode 6.0 is about to come at the end of September, this might also reduce the effort of upgrading it for regex.

Do you think it would be appropriate/realistic to create a feature request in the bug tracker on enhancing unicodedata? (Unfortunately, I must confess I am unable to contribute code in this area; without C knowledge I always failed to find any useful data in the optimised sources of unicodedata, hence I rather directly scanned the online data files.)

vbr

OK, so would it be OK if there was, say, a NEW (N) flag which made the inline flags (?flags) scoped and allowed splitting on zero-width matches?

Just another rather marginal finding; differences between regex and re:

>>> regex.findall(r"[\B]", "aBc")
['B']
>>> re.findall(r"[\B]", "aBc")
[]

(Python 2.7 ... on win32; regex - issue2636-20100912.zip)

I believe regex is more correct here, as uppercase \B doesn't have a special meaning within a set (unlike backspace \b), hence it should be treated as B; but I wanted to mention it as a difference, just in case it would matter.
I also noticed another case where regex is more permissive:

>>> regex.findall(r"[\d-h]", "ab12c-h")
['1', '2', '-', 'h']
>>> re.findall(r"[\d-h]", "ab12c-h")
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "re.pyc", line 177, in findall
  File "re.pyc", line 245, in _compile
error: bad character range
>>>

However, there might be an issue in negated sets, where the negation seems to apply only to the first shorthand literal; the rest is taken positively:

>>> regex.findall(r"[^\d-h]", "a^b12c-h")
['-', 'h']

Cf. also a simplified pattern, where re seems to work correctly:

>>> regex.findall(r"[^\dh]", "a^b12c-h")
['h']
>>> re.findall(r"[^\dh]", "a^b12c-h")
['a', '^', 'b', 'c', '-']
>>>

Or maybe, regardless of the order - in the presence of shorthand literals and normal characters in negated sets, those normal characters are matched positively:

>>> regex.findall(r"[^h\s\db]", "a^b 12c-h")
['b', 'h']
>>> re.findall(r"[^h\s\db]", "a^b 12c-h")
['a', '^', 'c', '-']
>>>

Also related to character sets, but possibly different - maybe adding a (redundant) character also belonging to the shorthand in a negated set seems to somehow confuse the parser:

>>> regex.findall(r"[^b\w]", "a b")
[]
>>> re.findall(r"[^b\w]", "a b")
[' ']
>>> regex.findall(r"[^b\S]", "a b")
[]
>>> re.findall(r"[^b\S]", "a b")
[' ']
>>> regex.findall(r"[^8\d]", "a 1b2")
[]
>>> re.findall(r"[^8\d]", "a 1b2")
['a', ' ', 'b']
>>>

I didn't find any relevant tracker issues; sorry if I missed some... I initially wanted to provide test code additions, but as I am not sure about the intended output in all cases, I am leaving it in this form.

vbr

issue2636-20100913.zip is a new version of the regex module. I've removed the ZEROWIDTH flag and added the NEW flag, which turns on the new behaviour such as splitting on zero-width matches and positional flags. If the NEW flag isn't turned on then the inline flags are global, like in the re module.

You were right about those bugs in the regex module, Vlastimil.
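Splitting on zero-width matches, one of the behaviours the NEW flag enables, can be illustrated with the modern stdlib re module, which also began allowing it from Python 3.7 on:

```python
import re

# Split on a zero-width lookahead: the string is cut before every
# uppercase letter without consuming any characters.
print(re.split(r"(?=[A-Z])", "CamelCaseWord"))
# ['', 'Camel', 'Case', 'Word']
```

Older re versions refused such patterns in split(), which is exactly the compatibility concern the NEW flag works around.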
:-( I've left the permissiveness of the sets in, at least for the moment, or until someone complains about it!

Incidentally:

>>> re.findall(r"[\B]", "aBc")
[]
>>> re.findall(r"[\c]", "aBc")
['c']

so it is a bug in the re module (it's putting a non-word-boundary in a set).

issue2636-20100918.zip is a new version of the regex module. I've added 'pos' and 'endpos' arguments to regex.sub and regex.subn and refactored a little. I can't think of any other features that need to be added or see any more speed improvements. Have I missed anything important? :-)

I like the idea of the general "new" flag introducing the reasonable, backwards-incompatible behaviour; one doesn't have to remember a list of non-standard flags to get these features.

While I recognise that the module probably can't work correctly with wide Unicode characters on a narrow Python build (py 2.7, win XP in this case), I noticed a difference from re in this regard (it might be based on the absence of the wide Unicode literal in the latter):

re.findall(u"\\U00010337", u"a\U00010337bc")
[]
re.findall(u"(?i)\\U00010337", u"a\U00010337bc")
[]
regex.findall(u"\\U00010337", u"a\U00010337bc")
[]
regex.findall(u"(?i)\\U00010337", u"a\U00010337bc")
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Python27\lib\regex.py", line 203, in findall
    return _compile(pattern, flags).findall(string, pos, endpos,
  File "C:\Python27\lib\regex.py", line 310, in _compile
    parsed = parsed.optimise(info)
  File "C:\Python27\lib\_regex_core.py", line 1735, in optimise
    if self.is_case_sensitive(info):
  File "C:\Python27\lib\_regex_core.py", line 1727, in is_case_sensitive
    return char_type(self.value).lower() != char_type(self.value).upper()
ValueError: unichr() arg not in range(0x10000) (narrow Python build)

I.e. re fails to match this pattern (as it actually looks for "U00010337"), and regex doesn't recognise the wide Unicode character as a surrogate pair either, but it also raises an error from narrow unichr.
Not sure whether/how it should be fixed, but the difference based on the i-flag seems unusual. Of course it would be nice if surrogate pairs were interpreted, but I can imagine that it would open a whole can of worms, as this is not thoroughly supported in the builtin unicode either (len, indices, slicing).

I am trying to make wide Unicode characters somehow usable in my app, mainly with hacks like an extended unichr:

("\U" + hex(67)[2:].zfill(8)).decode("unicode-escape")

or likewise for ord:

surrog_ord = (ord(first) - 0xD800) * 0x400 + (ord(second) - 0xDC00) + 0x10000

Actually, using regex, one can work around some of these limitations of len, index or slice using a list form of the string containing surrogates:

>>> regex.findall(ur"(?s)(?:\p{inHighSurrogates}\p{inLowSurrogates})|.", u"ab𐌷𐌸𐌹cd")
[u'a', u'b', u'\U00010337', u'\U00010338', u'\U00010339', u'c', u'd']

but apparently things like wide Unicode literals or character sets (even extending the shorthands like \w etc.) are much more complicated.

regards, vbr

I use Python 3, where len("\U00010337") == 2 on a narrow build. Yes, wide Unicode on a narrow build is a problem:

>>> regex.findall("\\U00010337", "a\U00010337bc")
[]
>>> regex.findall("(?i)\\U00010337", "a\U00010337bc")
[]

I'm not sure how (or whether!) to handle surrogate pairs. It _would_ make things more complicated. I suppose the moral is that if you want to use wide Unicode then you really should use a wide build.

Well, of course, the surrogates probably shouldn't be handled separately in one module independently of the rest of the standard library. (I actually don't know of such a narrow implementation, although it is mentioned in those Unicode guidelines.) The main surprise on my part was due to the compile error rather than an empty match, as was the case with re; but now I see that it is a consequence of the newly introduced wide Unicode notation; the matching behaviour changed consistently.
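The surrogate-pair arithmetic quoted above can be written out as a pair of helper functions; on Python 3 (where chr/ord cover the full range) the round trip is easy to verify. These helpers are illustrative, not part of the regex module:

```python
def to_surrogate_pair(codepoint):
    """Encode an astral codepoint (> 0xFFFF) as a UTF-16 surrogate pair."""
    assert codepoint > 0xFFFF
    offset = codepoint - 0x10000
    high = 0xD800 + (offset >> 10)     # top 10 bits of the offset
    low = 0xDC00 + (offset & 0x3FF)    # bottom 10 bits of the offset
    return high, low

def from_surrogate_pair(high, low):
    """The inverse -- the same arithmetic as the surrog_ord hack above."""
    return (high - 0xD800) * 0x400 + (low - 0xDC00) + 0x10000

# U+10337 (one of the Gothic letters from the examples above)
# round-trips through its surrogate pair.
pair = to_surrogate_pair(0x10337)
print([hex(v) for v in pair])   # ['0xd800', '0xdf37']
assert from_surrogate_pair(*pair) == 0x10337
```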
(For my part, the workarounds I found seem to be sufficient in the cases where I work with wide unicode; most likely I am not going to compile a wide unicode build on windows myself in the near future :-)
vbr

issue2636-20101009.zip is a new version of the regex module. It appears from a posting in python-list and a closer look at the docs that string positions in the 're' module are limited to 32 bits, even on 64-bit builds. I think it's because of things like:

Py_BuildValue("i", ...)

where 'i' indicates the size of a C int, which, at least in Windows compilers, is 32 bits in both 32-bit and 64-bit builds. The regex module shared the same problem. I've changed such code to:

Py_BuildValue("n", ...)

and so forth, which indicates Py_ssize_t. Unfortunately I'm not able to confirm myself that this will fix the problem on 64 bits.

I tried to give the 64-bit version a try, but I might have encountered more general difficulties. I tested this on Windows 7 Home Premium (Czech); the system is 64-bit (or I've hoped so so far :-), according to System info: x64-based PC. I installed Python 2.7 from the Windows X86-64 installer, which ran ok, but the header in the python shell contains "win32":

Python 2.7 (r27:82525, Jul 4 2010, 07:43:08) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

Consequently, after copying the respective files from issue2636-20101009.zip I get an import error:

>>> import regex
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python_64bit_27\lib\regex.py", line 253, in <module>
    from _regex_core import *
  File "C:\Python_64bit_27\lib\_regex_core.py", line 53, in <module>
    import _regex
ImportError: DLL load failed: %1 nenÝ platnß aplikace typu Win32.
>>>

(The last part of the message is in Czech with broken diacritics: %1 is not a valid Win32 application.) Is there something I can do in this case?
I'd think the installer would refuse to install 64-bit software on a 32-bit OS or 32-bit architecture, or am I missing something obvious from the naming peculiarities x64, 64bit etc.? That being said, I probably don't need to use the 64-bit version of python; obviously, it isn't the wide unicode build mentioned earlier, hence

>>> len(u"\U00010333")  # is still:
2
>>>

And I currently don't have special memory requirements, which might be better addressed on a 64-bit system. If there is something I can do to test regex in this environment, please let me know. On the same machine the 32-bit version is ok:

Python 2.7 (r27:82525, Jul 4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import regex
>>>

regards
vbr

Vlastimil, what makes you think that issue2636-20101009.zip is a 64-bit version? I can only find 32-bit DLLs in it.

Well, it seemed so to me too; I happened to read the last post from Matthew, msg118243, in the sense that he made some updates which need testing on a 64-bit system (I am unsure whether hardware architecture, OS type, python build or something else was meant); but then it would have to have been separated as a new directory in issue2636-20101009.zip, which is not the case. More generally, I was somehow confused about the "win32" in the shell header in the mentioned install.
vbr

I am not able to build or test a 64-bit version. The update was to the source files to ensure that if it is compiled for 64 bits then the string positions will also be 64-bit. This change was prompted by a poster who tried to use the re module of a 64-bit Python build on a 30GB memmapped file but found that the string positions were still limited to 32 bits. It looked like a 64-bit build of the regex module would have the same limitation.

Sorry for the noise, it seems I can go back to the 32-bit python for now then...
vbr

Do we expect this to work on 64-bit Linux and python 2.6.5?
I've compiled and run some of my code through this, and there seem to be issues with non-greedy quantifier matching (at least relative to the old re module):

$ cat test.py
import re, regex
text = "(MY TEST)"
regexp = '\((?P<test>.{0,5}?TEST)\)'
print re.findall(regexp, text)
print regex.findall(regexp, text)
$ python test.py
['MY TEST']
[]

python 2.7 produces the same results for me. However, making the quantifier greedy (removing the '?') gives the same result for both re and regex modules.

That's a bug. I'll fix it as soon as I've reinstalled the SDK. <sigh/>

issue2636-20101029.zip is a new version of the regex module. I've also added to the unit tests.

Here's another inconsistency (same setup as before, running issue2636-20101029.zip code):

$ cat test.py
import re, regex
text = "\n S"
regexp = '[^a]{2}[A-Z]'
print re.findall(regexp, text)
print regex.findall(regexp, text)
$ python test.py
[' S']
[]

I might flush out some more as I exercise this over the next few days.

issue2636-20101030.zip is a new version of the regex module. I've also added yet more to the unit tests.

And another (with issue2636-20101030.zip):

$ cat test.py
import re, regex
text = "XYABCYPPQ\nQ DEF"
regexp = 'X(Y[^Y]+?){1,2}(\ |Q)+DEF'
print re.findall(regexp, text)
print regex.findall(regexp, text)
$ python test.py
[('YPPQ\n', ' ')]
[]

issue2636-20101030a.zip is a new version of the regex module. This bug was a bit more difficult to fix, but I think it's OK now!

Here's one that really falls in the category of "don't do that"; but I found this because I was limiting the system recursion level to somewhat less than the standard 1000 (for other reasons), and I had some shorter duplicate patterns in a big regex.
Here is the simplest case to make it blow up with the standard recursion settings:

$ cat test.py
import re, regex
regex)'
re.compile(regexp)
regex.compile(regexp)
$ python test.py
<snip big traceback except for last few lines>
  File "/tmp/test/src/lib/_regex_core.py", line 2024, in optimise
    subpattern = subpattern.optimise(info)
  File "/tmp/test/src/lib/_regex_core.py", line 1552, in optimise
    branches = [_Branch(branches)]
RuntimeError: maximum recursion depth exceeded

And another, a bit less pathological, testcase. Sorry for the ugly testcase; it was much worse before I boiled it down :-)

$ cat test.py
import re, regex
text = "\nTest\nxyz\nxyz\nEnd"
regexp = '(\nTest(\n+.+?){0,2}?)?\n+End'
print re.findall(regexp, text)
print regex.findall(regexp, text)
$ python test.py
[('\nTest\nxyz\nxyz', '\nxyz')]
[('', '')]

issue2636-20101101.zip is a new version of the regex module. I hope it's finally fixed this time! :-)

OK, I think this might be the last one I will find for the moment:

$ cat test.py
import re, regex
text = "test?"
regexp = "test\?"
sub_value = "result\?"
print repr(re.sub(regexp, sub_value, text))
print repr(regex.sub(regexp, sub_value, text))
$ python test.py
'result\\?'
'result?'

issue2636-20101102.zip is a new version of the regex module.
Spoke too soon, although this might be a valid divergence in behavior:

$ cat test.py
import re, regex
text = "test: 2"
print regex.sub('(test)\W+(\d+)(?:\W+(TEST)\W+(\d))?', '\\2 \\1, \\4 \\3', text)
print re.sub('(test)\W+(\d+)(?:\W+(TEST)\W+(\d))?', '\\2 \\1, \\4 \\3', text)
$ python test.py
2 test,
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    print re.sub('(test)\W+(\d+)(?:\W+(TEST)\W+(\d))?', '\\2 \\1, \\4 \\3', text)
  File "/usr/lib64/python2.7/re.py", line 151, in sub
    return _compile(pattern, flags).sub(repl, string, count)
  File "/usr/lib64/python2.7/re.py", line 278, in filter
    return sre_parse.expand_template(template, match)
  File "/usr/lib64/python2.7/sre_parse.py", line 787, in expand_template
    raise error, "unmatched group"
sre_constants.error: unmatched group

Another, with backreferences:

import re, regex
text = "TEST, BEST; LEST ; Lest 123 Test, Best"
regexp = "(?i)(.{1,40}?),(.{1,40}?)(?:;)+(.{1,80}).{1,40}?\\3(\ |;)+(.{1,80}?)\\1"
print re.findall(regexp, text)
print regex.findall(regexp, text)
$ python test.py
[('TEST', ' BEST', ' LEST', ' ', '123 ')]
[('T', ' BEST', ' ', ' ', 'Lest 123 ')]

There seems to be a bug in the handling of numbered backreferences in sub() in issue2636-20101102.zip. I believe it would be a fairly new regression, as it would have been noticed rather soon. (Tested on Python 2.7; winXP.)

>>> re.sub("([xy])", "-\\1-", "abxc")
'ab-x-c'
>>> regex.sub("([xy])", "-\\1-", "abxc")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\regex.py", line 176, in sub
    return _compile(pattern, flags).sub(repl, string, count, pos, endpos)
  File "C:\Python27\lib\regex.py", line 375, in _compile_replacement
    compiled.extend(items)
TypeError: 'int' object is not iterable
>>>
vbr

Sorry for the noise; please forget my previous msg120215. I somehow managed to keep an older version of _regex_core.py along with the new regex.py in the Lib directory, which are obviously incompatible.
After updating the files, the mentioned examples work correctly.
vbr

issue2636-20101102a.zip is a new version of the regex module. msg120204 relates to issue #1519638 "Unmatched group in replacement". In 'regex' an unmatched group is treated as an empty string in a replacement template. This behaviour is more in keeping with regex implementations in other languages. msg120206 was caused by not all group references being made case-insensitive when they should be.

issue2636-20101106.zip is a new version of the regex module. Fix for issue 10328, which regex also shared.

The re module throws an exception for re.compile(r'[\A\w]'). The latest regex doesn't, but I don't think the pattern is matching correctly. Shouldn't findall(r'[\A]\w', 'a b c') return ['a'] and findall(r'[\A\s]\w', 'a b c') return ['a', ' b', ' c']?

Python 2.6.6 (r266:84292, Sep 15 2010, 16:22:56) [GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> for s in [r'\A\w', r'[\A]\w', r'[\A\s]\w']: print re.findall(s, 'a b c')
...
['a']
[]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/re.py", line 177, in findall
    return _compile(pattern, flags).findall(string)
  File "/usr/lib/python2.6/re.py", line 245, in _compile
    raise error, v # invalid expression
sre_constants.error: internal: unsupported set operator
>>> import regex
>>> for s in [r'\A\w', r'[\A]\w', r'[\A\s]\w']: print regex.findall(s, 'a b c')
...
['a']
[]
[' b', ' c']

It looks like a similar problem to msg116252 and msg116276.

On Thu, Nov 11, 2010 at 10:20 PM, Vlastimil Brom <report@bugs.python.org> wrote:
> Maybe I am missing something, but the result in regex seems ok to me:
> \A is treated like A in a character set;

I think it's me who missed something. I'd assumed that all backslash patterns (including \A for beginning of string) maintain their meaning in a character class. AFAICT that assumption was wrong.
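The "unmatched group" divergence discussed above comes down to how a group that did not participate in the match expands in the replacement template. With the stdlib re module, the regex-module behaviour (unmatched group becomes an empty string) can be emulated with a callable replacement; a sketch under that assumption:

```python
import re

text = "test: 2"
pattern = r'(test)\W+(\d+)(?:\W+(TEST)\W+(\d))?'

def repl(m):
    # m.group(i) is None for a group that did not participate in the
    # match; substitute '' instead of letting the template machinery
    # raise "unmatched group".
    g = lambda i: m.group(i) or ''
    return '%s %s, %s %s' % (g(2), g(1), g(4), g(3))

# Groups 3 and 4 are unmatched here, so they expand to empty strings.
print(re.sub(pattern, repl, text))
```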
I'd have liked to suggest updating the underlying unicode data to the latest standard 6.0, but it turns out it might be problematic with cross-version compatibility; according to the clarification, the 3.x versions are going to be updated, while it is not allowed in the 2.x series. I guess it would cause maintenance problems (as the needed properties are not available via unicodedata). Anyway, while I'd like the recent unicode data to be supported (new characters, ranges, scripts, and corrected individual properties...), I'm much happier that there is support for the 2.x series in regex...
vbr

issue2636-20101113.zip is a new version of the regex module. It now supports Unicode 6.0.0.

Thank you very much! A quick test with my custom unicodedata with 6.0 on py 2.7 seems ok. I hope there won't be problems with "cooperation" of the more recent internal data with the original 5.2 database in python 2.x releases.
vbr

issue2636-20101120.zip is a new version of the regex module. The match object now supports additional methods which return information on all the successful matches of a repeated capture group. The API was inspired by that of .Net.

issue2636-20101121.zip is a new version of the regex module. The captures didn't work properly with lookarounds or atomic groups.

Forgive me if this is just a stupid oversight. I'm a linguist and use UTF-8 for "special" characters for linguistics data. This often includes multi-codepoint Unicode character sequences that are composed as one grapheme. For example the í̵ (if it's displaying correctly for you) is a LATIN SMALL LETTER I WITH STROKE \u0268 combined with COMBINING ACUTE ACCENT \u0301. E.g. a word I'm parsing: jí̵-e-gɨ

I was pretty excited to find out that this regex library implements the grapheme match \X (equivalent to \P{M}\p{M}*).
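The \P{M}\p{M}* idea (a base character followed by any number of combining marks) can be sketched without a regex engine at all, using the stdlib unicodedata module. The function name is made up, and this handles only combining marks, not the full Unicode grapheme-cluster rules that \X implements:

```python
import unicodedata

def graphemes(s):
    # Group each base character with the combining marks (Unicode
    # general category M*) that follow it, mirroring \P{M}\p{M}*.
    clusters = []
    for ch in s:
        if clusters and unicodedata.category(ch).startswith('M'):
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters

word = 'j\u0268\u0301-e-g\u0268'   # jí̵-e-gɨ from the message above
print(graphemes(word))
```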
For the above example I needed to evaluate which sequences of characters can occur across syllable boundaries (here the hyphen "-"), so I'm aiming for:

í̵-e
e-g

When regex couldn't get any better, you awesome developers implemented an overlapped=True flag with findall and finditer.

Python 3.1.2 (r312:79147, May 19 2010, 11:50:28) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
>>> import regex
>>> s
'jí̵-e-gɨ'
>>> m = regex.compile("(\X)(-)(\X)")
>>> m.findall(s, overlapped=False)
[('í̵', '-', 'e')]

But these results are weird to me:

>>> m.findall(s, overlapped=True)
[('í̵', '-', 'e'), ('í̵', '-', 'e'), ('e', '-', 'g'), ('e', '-', 'g'), ('e', '-', 'g')]

Why the extra matches? At first I figured this had something to do with the overlapping match of the grapheme, since it's multiple characters. So I tried it without the grapheme match:

>>> m = regex.compile("(.)(-)(.)")
>>> m.findall(s2, overlapped=False)
[('a', '-', 'b'), ('d', '-', 'e')]

That's right. But with overlap...

>>> m.findall(s2, overlapped=True)
[('a', '-', 'b'), ('b', '-', 'c'), ('b', '-', 'c'), ('d', '-', 'e'), ('d', '-', 'e'), ('d', '-', 'e'), ('e', '-', 'f'), ('e', '-', 'f')]

Those 'extra' matches are confusing me. 2x b-c, 3x d-e, 2x e-f? Or even more simply:

>>> m.findall(s2, overlapped=False)
[('a', '-', 'b')]
>>> m.findall(s2, overlapped=True)
[('a', '-', 'b'), ('b', '-', 'c'), ('b', '-', 'c')]

Thanks!

Please don't change the type; this issue is about the feature request of adding this regex engine to the stdlib. I'm sure Matthew will get back to you about your question.

issue2636-20101123.zip is a new version of the regex module. Oops, sorry, the weird behaviour of msg122221 was a bug. :-(

issue2636-20101130.zip is a new version of the regex module. Added 'special_only' keyword parameter (default False) to regex.escape. When True, regex.escape escapes only 'special' characters, such as '?'.

issue2636-20101207.zip is a new version of the regex module.
It includes additional checks against pathological regexes.

Here is the terminal log of what happens when I try to install and then import regex. Any ideas what is going on?

$ python setup.py install
running install
running build
running build_py
creating build
creating build/lib.linux-i686-2.6
copying Python2/regex.py -> build/lib.linux-i686-2.6
copying Python2/_regex_core.py -> build/lib.linux-i686-2.6
running build_ext
building '_regex' extension
creating build/temp.linux-i686-2.6
creating build/temp.linux-i686-2.6/Python2
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.6 -c Python2/_regex.c -o build/temp.linux-i686-2.6/Python2/_regex.o
Python2/_regex.c:109: warning: ‘struct RE_State’ declared inside parameter list
Python2/_regex.c:109: warning: its scope is only this definition or declaration, which is probably not what you want
Python2/_regex.c:110: warning: ‘struct RE_State’ declared inside parameter list
Python2/_regex.c:538: warning: initialization from incompatible pointer type
Python2/_regex.c:539: warning: initialization from incompatible pointer type
Python2/_regex.c:679: warning: initialization from incompatible pointer type
Python2/_regex.c:680: warning: initialization from incompatible pointer type
Python2/_regex.c:1217: warning: initialization from incompatible pointer type
Python2/_regex.c:1218: warning: initialization from incompatible pointer type
Python2/_regex.c: In function ‘try_match’:
Python2/_regex.c:3153: warning: passing argument 1 of ‘state->encoding->at_boundary’ from incompatible pointer type
Python2/_regex.c:3153: note: expected ‘struct RE_State *’ but argument is of type ‘struct RE_State *’
Python2/_regex.c:3184: warning: passing argument 1 of ‘state->encoding->at_default_boundary’ from incompatible pointer type
Python2/_regex.c:3184: note: expected ‘struct RE_State *’ but argument is of type ‘struct RE_State *’
Python2/_regex.c: In function ‘search_start’:
Python2/_regex.c:3535: warning: assignment from incompatible pointer type
Python2/_regex.c:3581: warning: assignment from incompatible pointer type
Python2/_regex.c: In function ‘basic_match’:
Python2/_regex.c:3995: warning: assignment from incompatible pointer type
Python2/_regex.c:3996: warning: assignment from incompatible pointer type
Python2/_regex.c: At top level:
Python2/unicodedata_db.h:241: warning: ‘nfc_first’ defined but not used
Python2/unicodedata_db.h:448: warning: ‘nfc_last’ defined but not used
Python2/unicodedata_db.h:550: warning: ‘decomp_prefix’ defined but not used
Python2/unicodedata_db.h:2136: warning: ‘decomp_data’ defined but not used
Python2/unicodedata_db.h:3148: warning: ‘decomp_index1’ defined but not used
Python2/unicodedata_db.h:3333: warning: ‘decomp_index2’ defined but not used
Python2/unicodedata_db.h:4122: warning: ‘comp_index’ defined but not used
Python2/unicodedata_db.h:4241: warning: ‘comp_data’ defined but not used
Python2/unicodedata_db.h:5489: warning: ‘get_change_3_2_0’ defined but not used
Python2/unicodedata_db.h:5500: warning: ‘normalization_3_2_0’ defined but not used
Python2/_regex.c: In function ‘basic_match’:
Python2/_regex.c:4106: warning: ‘info.captures_count’ may be used uninitialized in this function
Python2/_regex.c:4720: warning: ‘info.captures_count’ may be used uninitialized in this function
Python2/_regex.c: In function ‘splitter_split’:
Python2/_regex.c:8076: warning: ‘result’ may be used uninitialized in this function
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions build/temp.linux-i686-2.6/Python2/_regex.o -o build/lib.linux-i686-2.6/_regex.so
running install_lib
copying build/lib.linux-i686-2.6/_regex.so -> /usr/local/lib/python2.6/dist-packages
copying build/lib.linux-i686-2.6/_regex_core.py -> /usr/local/lib/python2.6/dist-packages
copying build/lib.linux-i686-2.6/regex.py -> /usr/local/lib/python2.6/dist-packages
byte-compiling /usr/local/lib/python2.6/dist-packages/_regex_core.py to _regex_core.pyc
byte-compiling /usr/local/lib/python2.6/dist-packages/regex.py to regex.pyc
running install_egg_info
Writing /usr/local/lib/python2.6/dist-packages/regex-0.1.20101123.egg-info

$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import regex
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.6/dist-packages/regex-0.1.20101207-py2.6-linux-i686.egg/regex.py", line 273, in <module>
    from _regex_core import *
  File "/usr/local/lib/python2.6/dist-packages/regex-0.1.20101207-py2.6-linux-i686.egg/_regex_core.py", line 54, in <module>
    import _regex
ImportError: /usr/local/lib/python2.6/dist-packages/regex-0.1.20101207-py2.6-linux-i686.egg/_regex.so: undefined symbol: max

issue2636-20101210.zip is a new version of the regex module. I've extended the additional checks of the previous version. It has been tested with Python 2.5 to Python 3.2b1.

issue2636-20101224.zip is a new version of the regex module. Case-insensitive matching is now faster. The matching functions and methods now accept a keyword argument to release the GIL during matching to enable other Python threads to run concurrently:

matches = regex.findall(pattern, string, concurrent=True)

This should be used only when it's guaranteed that the string won't change during matching. The GIL is always released when working on instances of the builtin (immutable) string classes because that's known to be safe.

I would like to start reviewing this code, but dated zip files on a tracker make a very inefficient VC setup. Would you consider exporting your development history to some public VC system?

+1 on VC.

I've been trying to push the history to Launchpad, completely without success; it just won't authenticate (no such account, even though I can log in!). I doubt that the history would be much use to you anyway.
I suspect it would help if there are more changes, though.

I believe that to push to launchpad you have to upload an ssh key. Not sure why you'd get "no such account", though. Barry would probably know :)

It does have an SSH key. It's probably something simple that I'm missing. I think that the only change I'm likely to make is to a support script I use; it currently uses hard-coded paths, etc, to do its magic. :-)

Testing issue2636-20101224.zip: nested modifiers seem to hang the regex compilation when used in a non-capturing group, e.g.:

re.compile("(?:(?i)foo)")
or
re.compile("(?:(?u)foo)")

No problem on the stock Python 2.6.5 regex engine. The unnested version of the same regex compiles fine.

issue2636-20101228.zip is a new version of the regex module. Sorry for the delay, the fix took me a bit longer than I expected. :-)

Another re.compile performance issue (I've seen a couple of others, but I'm still trying to simplify the test-cases):

re.compile("(?ui)(a\s?b\s?c\s?d\s?e\s?f\s?g\s?h\s?i\s?j\s?k\s?l\s?m\s?n\s?o\s?p\s?q\s?r\s?s\s?t\s?u\s?v\s?w\s?y\s?z\s?a\s?b\s?c\s?d)")

completes in around 0.01s on my machine using the Python 2.6.5 standard regex library, but takes around 12 seconds using issue2636-20101228.zip.

issue2636-20101228a.zip is a new version of the regex module. It now compiles the pattern quickly.

Thanks, issue2636-20101228a.zip also resolves the compilation speed issues I had on other (very) complex regexes.

Found this one:

re.search("(X.*?Y\s*){3}(X\s*)+AB:", "XY\nX Y\nX Y\nXY\nXX AB:")

produces a search hit with the stock python 2.6.5 regex library, but not with issue2636-20101228a.zip.

re.search("(X.*?Y\s*){3,}(X\s*)+AB:", "XY\nX Y\nX Y\nXY\nXX AB:")

matches on both, however.
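The expected (stock) behaviour for the first pattern above can be checked with the stdlib module: backtracking lets the repeated group absorb enough of the string that the rest of the pattern can still match.

```python
import re

# Stdlib behaviour for the diverging pattern: the lazy .*? inside the
# {3} repetition backtracks until "(X\s*)+AB:" can match the tail
# "XX AB:", so a match is found.
m = re.search(r"(X.*?Y\s*){3}(X\s*)+AB:", "XY\nX Y\nX Y\nXY\nXX AB:")
print(m is not None)
```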
Here is a somewhat crazy pattern (slimmed down from something much larger and more complex, which didn't finish compiling even after several minutes):

re.compile("(?:(?:[23][0-9]|3[79]|0?[1-9])(?:[Aa][Aa]|[Aa][Aa]|[Aa][Aa])??(?:])??)\W*(?:[79][0-9]|2[0-4]|\d)(?:[\.:Aa])?(?:[0-5][0-9])\W*(?:(?:[Aa]{3}(?:[Aa]{3})?|[Aa]{3}(?:[Aa](?:[Aa]{3})?)?|[Aa]{3}(?:[Aa]{5}[Aa])?|[Aa]{3}(?:[Aa](?:[Aa]{4})?)?|[Aa]{3}(?:[Aa]{3})?|[Aa]{3}(?:[Aa]{5})?|[Aa]{3}(?:[Aa]{3})?)|(?:[Aa][Aa](?:[Aa](?:[Aa]{3})?)?|[Aa][Aa](?:[Aa](?:[Aa](?:[Aa](?:[Aa]{3})?)?)?)?|[Aa][Aa](?:[Aa](?:[Aa](?:[Aa]{4})?)?)?|[Aa][Aa](?:[Aa](?:[Aa]{3}(?:[Aa](?:[Aa]{3})?)?)?)?|[Aa][Aa](?:[Aa](?:[Aa](?:[Aa]{3})?)?)?|[Aa][Aa](?:[Aa](?:[Aa](?:[Aa]{3})?)?)?|[Aa]{3}(?:[Aa](?:[Aa](?:[Aa]{4})?)?)?|[Aa][Aa](?:[Aa](?:[Aa](?:[Aa]{3})?)?)?))\s*(\-\s*)?(?:(?:[23][0-9]|3[79]|0?[1-9])(?:[Aa][Aa]|[Aa][Aa]|[Aa][Aa])??(?:(?:[\-\s\.,>/]){0])??)(?:(?:(?:[\-\s\.,>/]){0,4}?)(?:(?:68)?[7-9]\d|(?:2[79])?\d{2}))?\W*(?:[79][0-9]|2[0-4]|\d)(?:[\.:Aa])?(?:[0-5][0-9])")

Runs about 10.5 seconds on my machine with issue2636-20101228a.zip, less than 0.03 seconds with the stock Python 2.6.5 regex engine.

issue2636-20101229.zip is a new version of the regex module. It now compiles the pattern quickly.
More an observation than a bug: I understand that we're trading memory for performance, but I've noticed that the peak memory usage is rather high, e.g.:

$ cat test.py
import os
import regex as re

def resident():
    for line in open('/proc/%d/status' % os.getpid(), 'r').readlines():
        if line.startswith("VmRSS:"):
            return line.split(":")[-1].strip()

cache = {}
print resident()
for i in xrange(0, 1000):
    cache[i] = re.compile(str(i)+"(abcd12kl|efghlajsdf|ijkllakjsdf|mnoplasjdf|qrstljasd|sdajdwxyzlasjdf|kajsdfjkasdjkf|kasdflkasjdflkajsd|klasdfljasdf)")
print resident()

Execution output on my machine (Linux x86_64, Python 2.6.5):

4328 kB
32052 kB

with the standard regex library:

3688 kB
5428 kB

So it looks like around 16x the memory per pattern vs the standard regex module. Now, the example is pretty silly, and the difference is even larger for more complex regexes. I also understand that once the patterns are GC-ed, python can reuse the memory (pymalloc doesn't return it to the OS, unfortunately). However, I have some applications that use large numbers (many thousands) of regexes and need to keep them cached (compiled) indefinitely (especially because compilation is expensive). This causes some pain (long story).

I've played around with increasing RE_MIN_FAST_LENGTH, and it makes a significant difference, e.g. with RE_MIN_FAST_LENGTH = 10:

4324 kB
25976 kB

In my use-cases, having a larger RE_MIN_FAST_LENGTH doesn't make a huge performance difference, so that might be the way I'll go.

issue2636-20101230.zip is a new version of the regex module. I've delayed the building of the tables for fast searching until their first use, which, hopefully, will mean that fewer will actually be built.

Yeah, issue2636-20101230.zip DOES reduce memory usage significantly (30-50%) in my use cases; however, it also tanks performance overall by 35% for me, so I'll prefer to stick with issue2636-20101229.zip (or some variant of it). Maybe a regex compile-time option, although that's not necessary.
Thanks for the effort.

re.search('\d{4}(\s*\w)?\W*((?!\d)\w){2}', "9999XX")

matches on the stock 2.6.5 regex module, but not on issue2636-20101230.zip or issue2636-20101229.zip (which I've fallen back to for now).

Another one that diverges between stock regex and issue2636-20101229.zip:

re.search('A\s*?.*?(\n+.*?\s*?){0,2}\(X', 'A\n1\nS\n1 (X')

As belopolsky said... *please* move this development into version control. Put it up in an Hg repo on code.google.com or put it on GitHub. *Anything* other than repeatedly posting entire zip file source code drops to a bug tracker.

Hearty +1. I have the hope of putting this in 3.3, and for that I'd like to see how the code matures, which is much easier when in version control.

The project is now at:

Unfortunately it doesn't have the revision history. I don't know why not.

Do you have it in any kind of repository at all? Even a private SVN repo or something like that?

msg124904: It would, of course, be slower on first use, but I'm surprised that it's (that much) slower afterwards.

msg124905, msg124906: I have those matching now.

msg124931: The sources are in TortoiseBzr, but I couldn't upload, so I exported to TortoiseSVN. Even after much uninstalling and reinstalling (and reboots) I never got TortoiseSVN to work properly, so I switched to TortoiseHg. The sources are now at:

Thanks for putting up the hg repo, it makes it much easier to follow. Getting back to the performance regression I reported in msg124904: I've verified that if I take the hg commit 7abd9f9bb1 and back out the guards changes manually, while leaving the FAST_INIT changes in, the performance is back to normal on my full regression suite (i.e. the 30-40% penalty disappears). I've repeated my tests a few times to make sure I'm not mistaken, since the guards change doesn't look like it should impact performance much, but it does.
I've attached the diff that restored the speed for me (as usual, using Python 2.6.5 on Linux x86_64).

BTW, now that we have the code on google code, can we log individual issues over there? It might make it easier for those interested to follow certain issues than trying to comb through every individual detail in this super-issue-thread...?

Why not? :-)

Just to check, does this still work with your changes of msg124959?

regex.search(r'\d{4}(\s*\w)?\W*((?!\d)\w){2}', "9999XX")

For me it fails to match!

You're correct, after the change:

regex.search(r'\d{4}(\s*\w)?\W*((?!\d)\w){2}', "9999XX")

doesn't match (i.e. as before commit 7abd9f9bb1). I was, however, just trying to narrow down which part of the code change killed the performance on my regression tests :-)

Happy new year to all out there. I've just done a bug fix. The issue is at:

BTW, Jacques, I trust that your regression tests don't test how long a regex takes to fail to match, because a bug could cause such a non-match to occur too quickly, before the regex has tried all that it should! :-)

The regex 0.1.20110106 package fails to install with Python 2.6, due to the use of 2.7 string formatting syntax in setup.py:

print("Copying {} to {}".format(unicodedata_db_h, SRC_DIR))

This line should be changed to:

print("Copying {0} to {1}".format(unicodedata_db_h, SRC_DIR))

Reference:

That line crept in somehow. As it's been there since the 2010-12-24 release and you're the first one to have a problem with it (and you've already fixed it), it looks like a new upload isn't urgently needed (I don't have any other changes to make at present).

I've reduced the size of some internal tables.

Could you add me as a member or admin on the mrab-regex-hg project? I've got a few things I want to fix in the code as I start looking into the state of this module. gpsmith at gmail dot com is my google account. There are some fixes in the upstream python that haven't made it into this code that I want to merge in, among other things.
I may also add a setup.py file and some scripts to make building and testing this standalone easier.

... Okay. Can you push your setup.py and README and such as well? Your pypi release tarballs should match the hg repo and ideally include a mention of what hg revision they are generated from. :)
-gps

I've fixed the problem with iterators for both Python 3 and Python 2. They can now be shared safely across threads. I've updated the release on PyPI.

I'm having a problem using the current version (0.1.20110504) with python 2.5 on OSX 10.5. When I try to import regex I get the following import error:

dlopen(<snipped>/python2.5/site-packages/_regex.so, 2): Symbol not found: _re_is_same_char_ign
  Referenced from: <snipped>/python2.5/site-packages/_regex.so
  Expected in: dynamic lookup

It seems that _regex_unicode.c is missing from setup.py; adding it to ext_modules fixes my previous issue.

Issues with Regexp should probably be handled on the Regexp tracker.

I apologize if this is the wrong place for this message. I did not see the link to a separate list. First let me explain what I am trying to accomplish. I would like to be able to take an unknown regular expression that contains both named and unnamed groups and tag their location in the original string where a match was found. Take the following redundantly simple example:

>>> a_string = r"This is a demo sentence."
>>> pattern = r"(?<a_thing>\w+) (\w+) (?<another_thing>\w+)"
>>> m = regex.search(pattern, a_string)

What I want is a way to insert named/numbered tags into the original string, so that it looks something like this:

r"<a_thing>This</a_thing> <2>is</2> <another_thing>a</another_thing> demo sentence."
The syntax doesn't have to be exactly like that, but you get the idea: I have inserted the names and/or indices of the groups into the original string, around the span that the groups occupy. This task is exceedingly difficult with the current implementation, unless I am missing something obvious. We could call the groups by index, the groups as a tuple, or the groupdict:

>>> m.group(1)
'This'
>>> m.groups()
('This', 'is', 'a')
>>> m.groupdict()
{'another_thing': 'a', 'a_thing': 'This'}

If all I wanted was to tag the groups by index, it would be a simple function. I would be able to call m.spans() for each index in the length of m.groups() and insert the <> and </> tags around the right indices. The hard part is finding out how to find the spans of the named groups. Do any of you have a suggestion?

It would make more sense from my perspective if each group was an object that had its own .span property. It would work like this with the above example:

>>> first = m.group(1)
>>> first.name()
'a_thing'
>>> second = m.group(2)
>>> second.name()
None
>>>

You could still call .spans() on the Match object itself, but it would query its child group objects for the data. Overall I think this would be a much more Pythonic approach, especially given that you have added subscripting and key lookup. So instead of this:

>>> m['a_thing']
'This'
>>> type(m['a_thing'])
<type 'str'>

You could have:

>>> m['a_thing']
'This'
>>> type(m['a_thing'])
<'regex.Match.Group object'>

With the noted benefit of this:

>>> m['a_thing'].span()
(0, 4)
>>> m['a_thing'].index()
1
>>>

Maybe I'm missing a major point or functionality here, but I've been poring over the docs and don't currently think what I'm trying to achieve is possible. Thank you for taking the time to read all this.
-Alec

The new regex implementation is hosted here:

The span of m['a_thing'] is m.span('a_thing'), if that helps.
The named groups are listed on the pattern object, which can be accessed via m.re:

    >>> m.re
    <_regex.Pattern object at 0x0161DE30>
    >>> m.re.groupindex
    {'another_thing': 3, 'a_thing': 1}

so you can use that to create a reverse dict to go from the index to the name or None. (Perhaps the pattern object should have such a .group_name attribute.)

Thanks, Matthew. I did not realize I could access either of those. I should be able to build a helper function now to do what I want.

I'm not sure if this belongs here, or on the Google code project page, so I'll add it in both places :)

Feature request: please change the NEW flag to something else. In five or six years (give or take), the re module will be long forgotten, compatibility with it will not be needed, so-called "new" features will no longer be new, and the NEW flag will just be silly. If you care about future compatibility, some sort of version specification would be better, e.g. "VERSION=0" (current re module), "VERSION=1" (this regex module), "VERSION=2" (next generation). You could then default to VERSION=0 for the first few releases, and potentially change to VERSION=1 some time in the future. Otherwise, I suggest swapping the sense of the flag: instead of "re behaviour unless NEW flag is given", I'd say "re behaviour only if OLD flag is given". (Old semantics will, of course, remain old even when the new semantics are no longer new.)
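The groupindex hint is enough to build the tagging helper the original poster asked for. Below is a sketch assuming non-overlapping, left-to-right groups, written against the stdlib re module with its (?P<name>...) spelling; tag_groups is a hypothetical helper, not part of either module:

```python
import re

def tag_groups(m):
    """Wrap each matched group's span in <name>...</name> or <i>...</i> tags."""
    index_to_name = {v: k for k, v in m.re.groupindex.items()}  # reverse dict
    text, out, last = m.string, [], 0
    for i in range(1, m.re.groups + 1):
        start, end = m.span(i)
        if start < 0:            # group did not participate in the match
            continue
        label = index_to_name.get(i, str(i))
        out.append(text[last:start])
        out.append("<%s>%s</%s>" % (label, text[start:end], label))
        last = end
    out.append(text[last:])
    return "".join(out)

m = re.search(r"(?P<a_thing>\w+) (\w+) (?P<another_thing>\w+)",
              "This is a demo sentence.")
print(tag_groups(m))
# <a_thing>This</a_thing> <2>is</2> <another_thing>a</another_thing> demo sentence.
```

Nested groups would need closing tags sorted before opening tags at the same offset, but for the flat case above this is all that groupindex leaves to do.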
I tried to run a test suite of 3kloc (not just about regex, but regexes were used in several places) and I had only one failure:

    >>> s = u'void foo ( type arg1 [, type arg2 ] )'
    >>> re.sub('(?<=[][()]) |(?!,) (?!\[,)(?=[][(),])', '', s)
    u'void foo(type arg1 [, type arg2])'
    >>> regex.sub('(?<=[][()]) |(?!,) (?!\[,)(?=[][(),])', '', s)
    u'void foo ( type arg1 [, type arg2 ] )'

Note that when the two patterns are used independently they both yield the same result on re and regex, but once they are combined the result is different:

    >>> re.sub('(?<=[][()]) ', '', s)
    u'void foo (type arg1 [, type arg2 ])'
    >>> regex.sub('(?<=[][()]) ', '', s)
    u'void foo (type arg1 [, type arg2 ])'
    >>> re.sub('(?!,) (?!\[,)(?=[][(),])', '', s)
    u'void foo( type arg1 [, type arg2])'
    >>> regex.sub('(?!,) (?!\[,)(?=[][(),])', '', s)
    u'void foo( type arg1 [, type arg2])'

The regex module supports nested sets and set operations, e.g. r"[[a-z]--[aeiou]]" (the letters from 'a' to 'z', except the vowels). This means that a literal '[' in a set needs to be escaped. For example, the re module sees "[][()]..." as:

    [      start of set
    ]      literal ']'
    [()    literals '[', '(', ')'
    ]      end of set
    ...    ...

but the regex module sees it as:

    [      start of set
    ]      literal ']'
    [()]   nested set [()]
    ...    ...

Thus:

    >>> s = u'void foo ( type arg1 [, type arg2 ] )'
    >>> regex.sub(r'(?<=[][()]) |(?!,) (?!\[,)(?=[][(),])', '', s)
    u'void foo ( type arg1 [, type arg2 ] )'
    >>> regex.sub('(?<=[]\[()]) |(?!,) (?!\[,)(?=[]\[(),])', '', s)
    u'void foo(type arg1 [, type arg2])'

If it can't parse it as a nested set, it tries again as a non-nested set (like re), but there are bound to be regexes where it could be either.

Thanks for the explanation, but isn't this a backward incompatible feature? I think it should be enabled only when the re.NEW flag is passed. The idiom [][...] is also quite common, so I think it might break different programs if regex has a different behavior.

> Thanks for the explanation, but isn't this a backward incompatible
> feature?
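The suggested escape is also a no-op in the stdlib re module, where a backslash before '[' inside a character class is simply redundant. So the escaped spelling is a single pattern that behaves the same under both engines — a portability check using only re:

```python
import re

s = 'void foo ( type arg1 [, type arg2 ] )'

# Ambiguous under regex's nested-set syntax: '[][()]' starts with ']' literal,
# then '[()]' can be read as a nested set.
ambiguous = r'(?<=[][()]) |(?!,) (?!\[,)(?=[][(),])'

# Unambiguous everywhere: the inner '[' is escaped, so no nested set is possible.
portable = r'(?<=[]\[()]) |(?!,) (?!\[,)(?=[]\[(),])'

# In stdlib re the two spellings are equivalent.
assert re.sub(ambiguous, '', s) == re.sub(portable, '', s)
print(re.sub(portable, '', s))  # void foo(type arg1 [, type arg2])
```

Escaping literal '[' inside classes is therefore a cheap way to future-proof patterns against the nested-set semantics discussed in this thread.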
> I think it should be enabled only when the re.NEW flag is passed.
> The idiom [][...] is also quite common, so I think it might break
> different programs if regex has a different behavior.

As someone said, I'd rather have a re.COMPAT flag. re.NEW will look silly in a few years. Also, we can have a warning about unescaped brackets during a transitional period. However, it really needs the warning to be enabled by default, IMO.

Changing the name of the flag is fine with me.

Having a warning for unescaped brackets that trigger set operations might also be a solution (once escaped they will still work on the old re). Maybe the same could also be done for scoped flags. FWIW I tried to come up with a simpler regex that makes some sense and triggers unwanted set operations and I didn't come up with anything except:

    >>> regex.findall('[[(]foo[)]]', '[[foo] (foo)]')
    ['f', 'o', 'o', '(', 'f', 'o', 'o', ')']
    >>> re.findall('[[(]foo[)]]', '[[foo] (foo)]')
    ['(foo)]']

(but this doesn't make too much sense). Complex regexes will still break though, so the issue needs to be addressed somehow.

I think I need a show of hands.

Should the default be old behaviour (like re) or new behaviour? (It might be old now, new later.)

Should there be a NEW flag (as at present), or an OLD flag, or a VERSION parameter (0=old, 1=new, 2=?)?

> I think I need a show of hands.
>
> Should the default be old behaviour (like re) or new behaviour? (It
> might be old now, new later.)
>
> Should there be a NEW flag (as at present), or an OLD flag, or a
> VERSION parameter (0=old, 1=new, 2=?)?

VERSION might be best, but then it should probably be a separate argument rather than a flag. "old now, new later" doesn't solve the issue unless we have a careful set of warnings to point out problematic regexes.

On 1 September 2011 16:12, Matthew Barnett <report@bugs.python.org> wrote:
> Matthew Barnett <python@mrabarnett.plus.com> added the comment:
>
> I think I need a show of hands.
For my part, I recommend literal flags, i.e. re.VERSION222, re.VERSION300, etc. Then you know exactly what you're getting, and although it may be confusing, we can then slowly deprecate re.VERSION222 so that people can get used to the new syntax.

Returning to lurking on my own issue. :)

In order to replace the re module, regex must have the same behavior (except for bugs, where the correct behavior is most likely preferred, even if it's different). Having re.OLD and warnings active by default in 3.3 (and possibly 3.4) should give enough time to fix the regexes if/when necessary (either by changing the regex or by adding the re.OLD flag manually). In 3.4 (or 3.5) we can then change the default behavior to the new semantics. In this way we won't have to keep using the re.NEW flag on every regex. I'm not sure if a version flag is useful, unless you are planning to add more incompatible changes. Also, each new version *flag* means one more path to add/maintain in the code. Having a simple .regex_version attribute might be a more practical (albeit less powerful) solution.

Matthew Barnett wrote:
> Matthew Barnett <python@mrabarnett.plus.com> added the comment:
>
> I think I need a show of hands.
>
> Should the default be old behaviour (like re) or new behaviour? (It might be old now, new later.)
>
> Should there be a NEW flag (as at present), or an OLD flag, or a VERSION parameter (0=old, 1=new, 2=?)?

I prefer Antoine's suggested spelling, COMPAT, rather than OLD. How would you write the various options? After the transition it is easy:

    # Get backwards-compatible behaviour:
    compile(string, COMPAT)
    compile(string, VERSION0)

    # Get regex non-compatible behaviour:
    compile(string)  # will be the default in the future
    compile(string, VERSION1)

But what about during the transition, when backwards-compatible behaviour is the default? There needs to be a way to turn compatibility mode off, not just turn it on.
    # Get backwards-compatible behaviour:
    compile(string)  # will be the default for a release or two
    compile(string, COMPAT)
    compile(string, VERSION0)

    # Get regex non-compatible behaviour:
    compile(string, VERSION1)

So I guess my preference is VERSION0 and VERSION1 flags, even if there is never going to be a VERSION2.

A new set of "features" flags might be an alternative approach. It would also make it possible to add new features that are not backward compatible and that can be turned on explicitly with their own flag. It would be fine for me if I had to turn on explicitly e.g. nested sets if/when I'll need to use them, and keep having the "normal" behavior otherwise. OTOH there are three problems with this approach:

1) it's not compatible with regex (I guess people will use the external module in Python <3.3 and the included one in 3.3+, probably expecting the same semantics). This is also true with the OLD/COMPAT flag though;
2) it might require other inline feature-flags;
3) the new set of flags might be added to the other flags or be separate, so e.g. re.compile(pattern, flags=re.I|re.NESTEDSETS) or re.compile(pattern, flags=re.I, features=re.NESTEDSETS). I'm not sure it's a good idea to add another arg though.

Matthew, is there a comprehensive list of all the bugfixes/features that have a different behavior from re? We should first check what changes are acceptable and what aren't, and depending on how many and what they are, we can then decide what is the best approach (a catch-all flag or several flags to change the behavior, transition period + warning before setting it as default, etc.)

Ezio Melotti wrote:
> Ezio Melotti <ezio.melotti@gmail.com> added the comment:
> ...

I think this is adding excessive complexity. Please consider poor Matthew's sanity (or whoever ends up maintaining the module long term), not to mention that of the users of the module.
I think it is reasonable to pick a *set* of features as a whole: "I want the regex module to behave exactly the same as the re module" or "I don't care about the re module, give me all the goodies offered by the regex module", but I don't think it is reasonable to expect to pick and choose individual features: "I want zero-width splits but not nested sets or inline flags, and I want the locale flag to act like the re module, and ASCII characters to be treated just like in Perl, but non-ASCII characters to be treated just like grep, and a half double decaff half-caf soy mocha with a twist of lemon with a dash of half-fat unsweetened whipped cream on the side." <wink>

If you don't want a feature, don't use it. "Feature flags" leads to a combinatorial explosion that makes comprehensive testing all but impossible. If you have four features A...D, then for *each* feature you need sixteen tests:

    A with flags 0000
    A with flags 0001
    A with flags 0010
    A with flags 0011
    [...]
    A with flags 1111

to ensure that there are no side-effects from turning features off. The alternative is hard-to-track-down bugs: "this regular expression returns the wrong result, but only if you have flags A, B and G turned on and C and F turned off."

> I think this is adding excessive complexity.

It really depends on how many incompatible features there are, and how difficult it is to turn them on/off.

> I think it is reasonable to pick a *set* of features as a whole

It's probably more practical, but otherwise I'm not sure why you would want to activate 2-5 unrelated features that might require you to rewrite your regex (assuming you are aware of what the features are, what their side effects are, and how to fix your regex) just because you need one. The idea is to make the transition smoother and not have a pre-regex world and an incompatible post-regex world, divided by a single flag.

> If you don't want a feature, don't use it.
With only one flag you are forced to enable all the new features, including the ones you don't want.

> "Feature flags" leads to a combinatorial explosion that makes
> comprehensive testing all but impossible.

We already have several flags and the tests are working fine. If the features are orthogonal they can be tested independently.

> The alternative is hard-to-track-down bugs:
> "this regular expression returns the wrong result, but only if you
> have flags A, B and G turned on and C and F turned off."

What about: "X works, Y works, and X|Y works, but when I use the NEW flag to enable an inline flag, X|Y stops working while X and Y keep working" (hint: the NEW flag also enabled nested sets -- see msg143333). I'm not saying that having multiple flags is the best solution (or even a viable one), but it should be considered depending on how many incompatible features there are and what they are.

I'd agree with Steven (msg143377) and others that there probably shouldn't be a large library-specific set of new tags just for "housekeeping" purposes between re and regex. I would personally prefer that these tags also be settable in the pattern (?...), which would probably be problematic with versioned flags. Although I am trying to take advantage of the new additions, if applicable, I agree that there should be a possibility to use regex in an unreflected way with the same behaviour as re (maybe except for the fixes of what will be agreed on to be a bug (enough)). On the other hand, it seems to me that the enhancements/additions can be enabled at once, as a user upgrading the regexes for the new library consciously (or a new user not knowing re) can be supposed to know the new features and their implications. I guess it is mostly trivially possible to fix/disambiguate the problematic patterns, e.g. by escaping.
As for setting the new/old behaviour, would there be a possibility to distinguish it just by importing (possibly through some magic, without the need to duplicate the code)?

    import re_in_compat_mode as re

vs:

    import re_with_all_the_new_features as re

Unfortunately, I have no idea whether this is possible or viable... With this option, the (user) code update could be just the change of the imports instead of adding the flags to all relevant places (and taking them away as redundant, as the defaults evolve with the versions...). However, it is not clear how this "aliasing" would work out with regard to the transition; maybe the long differentiated "module" names could be kept and the meaning of "import re" would change, along with the previous warnings, in some future version.

just a few thoughts...
vbr

Being able to set which behavior you want in a (?XXX) flag at the start of the regex is valuable so that applications that take a regex can support the new syntax automatically when the Python version they are running on is updated. The (?XXX) should override whatever re.XXX flag was provided to re.compile(). Notice I said XXX. I'm not interested in a naming bikeshed other than agreeing with the fact that NEW will seem quaint 10 years from now, so it's best to use non-temporal names. COMPAT, VERSION2, VERSION3, WITH_GOATS, PONY, etc. are all non-temporal and do allow us to change the default away from "old" behavior at a future date beyond 3.3.

So, VERSION0 and VERSION1, with "(?V0)" and "(?V1)" in the pattern?

If these are the only 3 non-backward compatible features and the nested set one is moved under the NEW flag, I guess the approach might work without having per-feature flags. The "NEW" could be kept for compatibility with regex (if necessary), possibly aliasing it with VERSION1 or whatever name wins the bikeshed.
If you want to control that at import time, maybe a

    from __future__ import new_re_semantics

could be used instead of a flag, but I'm not sure about that.

Although V1, V2 is less wordy, technically the current behavior is version 2.2.2, so logically this should be re.VERSION222 vs. re.VERSION3 vs. re.VERSIONn, with corresponding "(?V222)", "(?V3)" and future "(?Vn)". But that said, I think 2.2.2 can be shorthanded to 2, so basically start counting from there.

Not that it matters in any way, but if the regex semantics have to be distinguished via "non-standard" custom flags, I would prefer even less wordy flags, possibly such that the short forms for the in-pattern flag setting would be one-letter (such as all the other flags) and preferably some with underlying plain English words as a base, to get some mnemonics (which I don't see in the numbered versions requiring one to keep track of the rather internal library versioning). Unfortunately, it might be difficult to find suitable names, given the objections expressed against the already discussed ones. (For what it is worth, I thought e.g. of [t]raditional and [e]nhanced, but these also suffer from some of the mentioned disadvantages...)

vbr

Matthew Barnett wrote:
> So, VERSION0 and VERSION1, with "(?V0)" and "(?V1)" in the pattern?

Seems reasonable to me. +1

Not sure if this is better as a separate feature request or a comment here, but... the new version of .NET includes an option to specify a time limit on evaluation of regexes (not sure if this is a feature in other regex libs). This would be useful especially when you're executing regexes configured by the user and you don't know if/when they might go exponential. Something like this maybe:

    # Raises an re.Timeout if not complete within 60 seconds
    match = myregex.match(mystring, maxseconds=60.0)

So, to my reading of the compatibility PEP this cannot be added wholesale, unless there is a pure Python version as well.
However, if it replaced re (read: patched) it would be valid.

On Sun, Jan 29, 2012 at 1:26 AM, Nick Coghlan <report@bugs.python.org> wrote:
> Nick Coghlan <ncoghlan@gmail.com> added the comment:
> ...
> ----------
> nosy: +ncoghlan
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________

I created a new sandbox branch to integrate regex into CPython, see "remote repo" field. I mainly had to adapt the test suite to use unittest.

Alex has a valid point in relation to PEP 399, since, like lzma, regex will be coming in under the "special permission" clause that allows the addition of C extension modules without pure Python equivalents. Unlike lzma, though, the new regex engine isn't a relatively simple wrapper around an existing library - supporting the new API features on other implementations is going to mean a substantial amount of work. In practice, I expect that a pure Python implementation of a regular expression engine would only be fast enough to be usable on PyPy. So while we'd almost certainly accept a patch that added a parallel Python implementation, I doubt it would actually help Jython or IronPython all that much - they're probably going to need versions written in Java and C# to be effective (as I believe they already have for the re module).

> In practice, I expect that a pure Python implementation of a regular expression engine would only be fast enough to be usable on PyPy.

Not sure why this is necessarily true. I'd expect a pure-Python implementation to be maybe 200 times as slow. Many queries (those on relatively short strings that backtrack little) finish within microseconds. On this scale, a couple of orders of magnitude is not noticeable by humans (unless it adds up), and even where it gets noticeable, it's better than having nothing at all or a non-working program (up until a point).
    python -m timeit -n 1000000 -s "import re; x = re.compile(r'.*<\s*help\s*>([^<]*)<\s*/\s*help.*>'); data = ' '*1000 + '< help >' + 'abc'*100 + '</help>'" "x.match(data)"
    1000000 loops, best of 3: 3.27 usec per loop

Well, REs are very often used to process large chunks of text by repeated application. So if the whole operation takes 0.1 or 20 seconds you're going to notice :). I agree that a Python implementation wouldn't be useful for some cases. On the other hand, I believe it would be fine (or at least tolerable) for some others. I don't know the ratio between the two.

See, there are regex benchmarks there.

> I agree that a Python implementation wouldn't be useful for some
> cases. On the other hand, I believe it would be fine (or at least
> tolerable) for some others. I don't know the ratio between the two.

I think the ratio would be something like 2% tolerable :)

As I said to Ezio and Georg, I think adding the regex module needs a PEP, even if it ends up non-controversial.

I've just uploaded regex into Debian: this will hopefully give some more eyes looking at the module and reporting some feedback.

I've been working through the "known crashers" list in the stdlib. The recursive import one was fixed with the migration to importlib in 3.3, and the compiler one will be fixed in 3.3.1 (with an enforced nesting limit). One of those remaining is actually a pathological failure in the re module rather than a true crasher (i.e. it doesn't segfault, and in 2.7 and 3.3 you can interrupt it with Ctrl-C): I mention it here as another problem that adopting the regex module could resolve (as regex promptly returns None for this case).

Will we actually get regex into the standard library on this pass?
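The pathological failure mentioned above is catastrophic backtracking, and it can be reproduced safely at small input sizes. The pattern below is a classic textbook illustration of the effect, not necessarily the exact stdlib crasher from the thread:

```python
import re
import time

# Each extra 'x' roughly doubles the number of ways the engine can split the
# run of x's between the two x+ groups before the final 'y' fails to match.
pattern = re.compile(r'(x+x+)+y')

start = time.perf_counter()
result = pattern.match('x' * 15)   # small enough to finish almost instantly
elapsed = time.perf_counter() - start

print(result, round(elapsed, 4))
# At 'x' * 30 or so, the same call would run for a very long time in re;
# the thread notes that regex returns None promptly for the stdlib crasher.
```

Because the match can only ever fail (there is no 'y' in the input), all of that work is wasted backtracking, which is exactly why a user-supplied regex plus attacker-controlled input is a denial-of-service risk.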
Even with in-principle approval from Guido, this idea still depends on volunteers to actually write up a concrete proposal as a PEP (which shouldn't be too controversial, given Guido already OK'ed the idea) and then do the integration work to incorporate the code, tests and docs into CPython (not technically *hard*, but not trivial either). "pip install regex" starts looking fairly attractive at that point :)

Here is my (slowly implemented) plan:

0. Recommend regex as an advanced replacement for re (issue22594).
1. Fix all obvious bugs in the re module if this doesn't break backward compatibility (issue12728, issue14260, and many already closed issues).
2. Deprecate and then forbid behavior which looks like a bug, doesn't match regex in V1 mode and can't be fixed without breaking backward compatibility (issue22407, issue22493, issue22818).
3. Unify minor details with regex (issue22364, issue22578).
4. Fork regex and drop all advanced nonstandard features (such as fuzzy matching). Too many features make learning and using the module harder. They should be in an advanced module (regex).
5. Write benchmarks which cover all corner cases and compare re with regex case by case. Optimize the slower module. Currently re is faster than regex for all simple examples which I tried (maybe as a result of issue18685), but in the total results of benchmarks (msg109447) re is slower.
6. Maybe implement some standard features which were rejected in favor of this issue (issue433028, issue433030). re should conform to at least Level 1 of UTS #18 ().

In the best case, in 3.7 or 3.8 we could replace re with a simplified regex. Or by that time re will be free from bugs and warts.

> Here is my (slowly implemented) plan:

Exciting. Perhaps you should post your plan on python-dev. In any case, huge thanks for your work on the re module.

> Exciting. Perhaps you should post your plan on python-dev.

Thank you Antoine. I think all interested core developers are already aware of this issue.
A disadvantage of posting on python-dev is that this would require manually copying links and maybe titles of all mentioned issues, while here they are available automatically. Oh, I'm lazy.

So you are suggesting to fix bugs in re to make it closer to regex, and then replace re with a forked subset of regex that doesn't include advanced features, or just to fix/improve re until it matches the behavior of regex? If you are suggesting the former, I would also suggest checking the coverage and bringing it as close as possible to 100%.

> So you are suggesting to fix bugs in re to make it closer to regex, and then
> replace re with a forked subset of regex that doesn't include advanced
> features, or just to fix/improve re until it matches the behavior of regex?

Depends on what will be easier. Maybe some bugs are so hard to fix that replacing re with regex is the only solution. But if the fixed re will be simpler and faster than a lightened regex and will contain all necessary features, there will be no need for the replacement. Currently the code of regex looks more high level and better structured, but the code of re looks simpler and is much smaller. In any case, the closer re and regex are, the easier the migration will be.

Ok, regardless of what will happen, increasing test coverage is a worthy goal. We might start by looking at the regex test suite to see if we can import some tests from there.

Thanks for pushing this one forward, Serhiy! Your approach sounds like a fine plan to me. If I recall, I started this thread with a plan to update re itself with implementations of various features listed in the top post. If you look at the list of files uploaded by me, there are some complete patches for re to add various features like atomic grouping. If we wish to bring re to the regex standard, we could therefore start with those features.
Well, I found a bug in this module on Python 2.7(.5) on Windows 7 64-bit: when you try to compile a regex with the flags V1|DEBUG, the module crashes as if it wanted to call a builtin called "ascii". The bug happened to me several times, but this is the regexp when the last one happened. I hope it's fixed; I really love the module and found it very useful to have PCRE regexes in Python.

@Mateon1: "I hope it's fixed"? Did you report it?

Well, I am reporting it here, is this not the correct place? Sorry if it is.

The page on PyPI says where the project's homepage is located:

The bug was fixed in the last release.
http://bugs.python.org/issue2636
Syntactic parsing is a technique by which segmented, tokenized, and part-of-speech tagged text is assigned a structure that reveals the relationships between tokens governed by syntax rules, e.g. by grammars. Consider the sentence: The factory employs 12.8 percent of Bradford County. A syntax parse produces a tree that might help us understand that the subject of the sentence is “the factory”, the predicate is “employs”, and the target is “12.8 percent”, which in turn is modified by “Bradford County”. Syntax parses are often a first step toward deep information extraction or semantic understanding of text. Note however, that syntax parsing methods suffer from structural ambiguity, that is the possibility that there exists more than one correct parse for a given sentence. Attempting to select the most likely parse for a sentence is incredibly difficult. The best general syntax parser that exists for English, Arabic, Chinese, French, German, and Spanish is currently the blackbox parser found in Stanford’s CoreNLP library. This parser is a Java library, however, and requires Java 1.8 to be installed. Luckily it also comes with a server that can be run and accessed from Python using NLTK 3.2.3 or later. 
Once you have downloaded the JAR files from the CoreNLP download page and installed Java 1.8 as well as pip installed nltk, you can run the server as follows:

    import os
    from nltk.parse.corenlp import CoreNLPServer

    # The server needs to know the location of the following files:
    #   - stanford-corenlp-X.X.X.jar
    #   - stanford-corenlp-X.X.X-models.jar
    STANFORD = os.path.join("models", "stanford-corenlp-full-2018-02-27")

    # Create the server
    server = CoreNLPServer(
        os.path.join(STANFORD, "stanford-corenlp-3.9.1.jar"),
        os.path.join(STANFORD, "stanford-corenlp-3.9.1-models.jar"),
    )

    # Start the server in the background
    server.start()

The server needs to know the location of the JAR files you downloaded, either by adding them to your Java $CLASSPATH or, like me, storing them in a models directory that you can access from your project. When you start the server, it runs in the background, ready for parsing.

To get constituency parses from the server, instantiate a CoreNLPParser and parse raw text as follows:

    from nltk.parse.corenlp import CoreNLPParser

    parser = CoreNLPParser()
    parse = next(parser.raw_parse("I put the book in the box on the table."))

If you're in a Jupyter notebook, the tree will be drawn as above. Note that the CoreNLPParser can take a URL to the CoreNLP server, so if you're deploying this in production, you can run the server in a docker container, etc. and access it for multiple parses. The raw_parse method expects a single sentence as a string; you can also use the parse method to pass in tokenized and tagged text using other NLTK methods.

Parses are also handy for identifying questions:

    next(parser.raw_parse("What is the longest river in the world?"))

Note the SBARQ representing the question; this data can be used to create a classifier that can detect what type of question is being asked, which can then in turn be used to transform the question into a database query!
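Acting on that SBARQ label programmatically doesn't require walking the Tree object at all: you can scan the bracketed string form of a parse (what str(tree) gives you in NLTK). The parse string and the is_question helper below are illustrative sketches, not part of NLTK:

```python
import re

def constituent_labels(bracketed_parse):
    """Extract phrase labels (ROOT, SBARQ, NP, ...) from a bracketed parse string."""
    return re.findall(r'\(([A-Z]+[A-Z$]*)', bracketed_parse)

def is_question(bracketed_parse):
    # SBARQ = direct wh-question; SQ = inverted yes/no question clause
    return bool({'SBARQ', 'SQ'} & set(constituent_labels(bracketed_parse)))

# A simplified parse of the kind CoreNLP might produce for the example question:
question = ("(ROOT (SBARQ (WHNP (WP What)) "
            "(SQ (VBZ is) (NP (DT the) (JJS longest) (NN river))) (. ?)))")
statement = "(ROOT (S (NP (DT The) (NN factory)) (VP (VBZ employs))))"

print(is_question(question))   # True
print(is_question(statement))  # False
```

A real question-type classifier would look deeper (e.g. at the wh-word under WHNP/WHADVP), but a flat label scan like this is often enough for routing.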
I should also point out why we're using next(): the parser actually returns a generator of parses, starting with the most likely. By using next, we're selecting only the first, most likely parse.

Constituency parses are deep and contain a lot of information, but often dependency parses are more useful for text analytics and information extraction. To get a Stanford dependency parse with Python:

    from nltk.parse.corenlp import CoreNLPDependencyParser

    parser = CoreNLPDependencyParser()
    parse = next(parser.raw_parse("I put the book in the box on the table."))

Once you're done parsing, don't forget to stop the server!

    # Stop the CoreNLP server
    server.stop()

To ensure that the server is stopped even when an exception occurs, you can also use the CoreNLPServer context manager as follows:

    jars = (
        "stanford-corenlp-3.9.1.jar",
        "stanford-corenlp-3.9.1-models.jar"
    )

    with CoreNLPServer(*jars):
        parser = CoreNLPParser()
        text = "The runner scored from second on a base hit"
        parse = next(parser.parse_text(text))
        parse.draw()

Note that the parse_text function in the above code allows a string to be passed that might contain multiple sentences and returns a parse for each sentence it segments. Additionally, the tokenize and tag methods can be used on the parser to get the Stanford part-of-speech tags from the text. Unfortunately there isn't much documentation on this, but for more check out the NLTK CoreNLP API documentation.
https://bbengfort.github.io/2018/06/corenlp-nltk-parses/
Adapter Design Pattern

Let's learn about the Adapter Design Pattern with an example in Java.

What is Adapter Design Pattern?

The adapter pattern bridges the gap between two incompatible classes or interfaces. It is one of the structural design patterns described in the Gang of Four book. For example, take a look at the analogy of Android and iPhone charging cables. Out of the box they are not interchangeable. You can't use a USB Type-C cable on the Lightning port of an iPhone. At this point, you are left with two options: get a new Lightning cable, or get a USB-C to Lightning port adapter. That is exactly what this design pattern tries to achieve. Let's convert this analogy into code.

Implementing Adapter pattern

The adapter pattern has three major players. They are:

- Client – A class or program that intends to use different types of Adaptee. In our case, it is the phone.
- Adaptee – A class that does a specific piece of work. It doesn't matter how it does it. In our case, it's the different types of chargers.
- Adapter – The bridge that converts one type of Adaptee into another. The AndroidToAppleChargerAdapter is an example of this kind.

Client classes

First, let's define both Android and Apple phone implementations. These classes use different charger implementations without any common interface/classes. With the help of the adapter pattern, we can write adapter Java classes that convert one type into another.
public class AndroidPhone {

    private AndroidCharger charger;

    public void plugAndroidCharger(AndroidCharger charger) {
        this.charger = charger;
    }

    public void charge() {
        // AndroidCharger exposes chargeAndroidPhones(), not charge()
        this.charger.chargeAndroidPhones();
    }
}

public class ApplePhone {

    private AppleCharger charger;

    public void plugAppleCharger(AppleCharger charger) {
        System.out.println("Charger plugged into your Apple Phone");
        this.charger = charger;
    }

    public void charge() {
        System.out.println("Charging your Apple phone");
        this.charger.chargeApplePhones();
    }
}

As you see here, both these classes require a specific type of charger. If we supply an Android charger to the iPhone, it will be incompatible.

Adaptee classes

Now, let's take a look at the charger classes.

public class AndroidCharger {
    public void chargeAndroidPhones() {
        System.out.println("Charging your phone using Android charger");
    }
}

public class AppleCharger {
    public void chargeApplePhones() {
        System.out.println("Charging your phone using Apple charger");
    }
}

Although they both serve the same purpose, their implementations are different (at least the method names don't match in this example). This is where the adapter design pattern comes into play.
public class AndroidToAppleChargerAdapter extends AppleCharger {

    private final AndroidCharger androidCharger;

    public AndroidToAppleChargerAdapter(AndroidCharger androidCharger) {
        this.androidCharger = androidCharger;
    }

    public void chargeApplePhones() {
        System.out.println("You are using an AndroidToAppleChargerAdapter");
        androidCharger.chargeAndroidPhones(); //android charger doesn't know that it is charging an Apple device.
    }
}

As you can see, this implementation is a valid AppleCharger. It even has the chargeApplePhones method that an ApplePhone can use. But internally, it uses an AndroidCharger to perform the actual charging. And in fact, the Android charger doesn't even know that it is charging an Apple phone.

Adapter Pattern in Action

As we now have all the client, adapter and adaptee classes, let's test this implementation with a sample program.

public class AdapterPatternExample {

    public static void main(String[] args) {
        ApplePhone applePhone = new ApplePhone();
        AndroidCharger androidCharger = new AndroidCharger();
        System.out.println("We have an apple phone and an android charger...!");

        AppleCharger androidToAppleChargerAdapter = new AndroidToAppleChargerAdapter(androidCharger); //Adapter Pattern
        System.out.println("Created an Apple Charger by converting an android charger");

        applePhone.plugAppleCharger(androidToAppleChargerAdapter);
        applePhone.charge();
    }
}

Here, we have an ApplePhone object and an AndroidCharger object, and we know for sure that these two implementations are not compatible. So we use the adapter design pattern in Java to make things work. The adapter is a type of AppleCharger, but under the hood it's an Android charger. The output reflects exactly that.

Advantages

The adapter pattern has two major advantages:

- The adapter pattern is helpful when you have incompatible classes, as we have seen before.
In particular, there may be classes you want to use that come from a third-party library, so wrapping them inside an adapter can be a good idea.
- With this design pattern, you can reuse code without rewriting the whole component.

Disadvantages

There are a couple of pitfalls you should be aware of when using this pattern.

- You would require as many adapters as the client needs. This may not be a problem for small applications, but make sure you are not using too many adapters.
- As adapters act as intermediate components, there may be some performance overhead.

As always, you can find the code for these examples in our GitHub repository.
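Beyond the class-adapter above (which extends AppleCharger), the Gang of Four also describe an object-adapter style, where client code targets a shared interface and the adapter merely wraps the adaptee. A minimal, self-contained sketch of that variant; all names here are illustrative and not from the article:

```java
// Object-adapter variant: adapt via a common interface instead of subclassing.
interface Charger {
    String charge();           // returns a message describing the charge action
}

class UsbCCharger {
    String chargeOverUsbC() {  // adaptee with its own, incompatible method name
        return "charging over USB-C";
    }
}

// The adapter implements the target interface and delegates to the adaptee.
class UsbCToChargerAdapter implements Charger {
    private final UsbCCharger usbC;

    UsbCToChargerAdapter(UsbCCharger usbC) {
        this.usbC = usbC;
    }

    @Override
    public String charge() {
        // The adaptee is unaware it is being used through the Charger interface.
        return "adapter: " + usbC.chargeOverUsbC();
    }
}
```

The advantage of this shape is that clients program against Charger and never see the adaptee's method names, which keeps third-party types out of your public API.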
https://springhow.com/adapter-pattern/
ConvertTo Extension Method

I've been revamping an OLD (read: legacy ASP and HTML) project to add in some .NET reporting functionality. Unfortunately, rewriting the entire application isn't slated for another year (though I'm working on the architecture changes as I do the reporting to ensure as little rework as possible later on). The system backs into an old Oracle database that freaks out NHibernate and makes me long for the days of LINQ. The most challenging part of recreating a DAL from scratch is ensuring that the data you're using matches the strongly typed objects you're populating. In Oracle's case, NUMBERs become Int32s, NUMBERs become booleans (where appropriate), VARCHARs become strings, and so on. (Oracle not having a boolean type continues to drive me insane.) So, in most cases, the DAL code that populates objects looks something similar to this:

Id = row["section_id"] != DBNull.Value ? Convert.ToInt32(row["section_id"]) : 0;

Even the shorthand for the if/else is three lines. Booleans are even more mind-numbing because they're stored as numbers and, in my experience, I can't convert from a number to an Int32 to a boolean all in one line. So, the code looks like:

if (row["child"] != DBNull.Value)
{
    int childSections = Convert.ToInt32(row["child"].ToString().Trim());
    section.HasChildren = Convert.ToBoolean(childSections);
}

With a few dozen columns and some very complex dependencies around multiple databases, those extra lines are tedious to both read and write. What would help? SQL Server 2005 and LINQ, you say? I agree, but not this time. A quick method to handle the conversions would be great and, since we're looking at .NET 3.5, perhaps even an extension method.

public static T ConvertTo<T>(this object value) where T : IConvertible
{
    return (T)Convert.ChangeType(value, typeof(T));
}

The first iteration works like a champ in most all situations.
Because we're using a generic constraint of IConvertible, we don't have to worry about passing classes or types that do not support conversion. Now, my multiple lines of code for conversion look like:

Id = row["question_id"].ConvertTo<int>(),
Text = row["question_displaylabel"].ConvertTo<string>(),
…

Much better, much cleaner, and easier for someone new to the code to read and understand what's going on. Unfortunately, there's a mishap with that original code. At least from Oracle (where I've done my testing), it cannot convert the NCHAR fields properly to a boolean. First, I must convert to an integer, then a boolean (just as I had to do before). Thankfully, with the extension method, I can write that code once and reuse it throughout the project. The final (for now) version of the extension method is:

public static T ConvertTo<T>(this object value) where T : IConvertible
{
    if (typeof(T) == typeof(bool))
        return (T)Convert.ChangeType(Convert.ToInt32(value), typeof(T));

    return (T)Convert.ChangeType(value, typeof(T));
}

UPDATE: Final doesn't last very long when you're dinking. The DBNull error caught me off guard as the unit tests ran (thank goodness for those). If DBNull occurred, I wanted to simply return the default value of the type specified. For a string, "", for an int, 0, etc. Thankfully, the default keyword is there to help! The new method checks for DBNull and, if found, returns the default value for the specified type.

public static T ConvertTo<T>(this object value) where T : IConvertible
{
    if (value == DBNull.Value)
        return default(T);

    if (typeof(T) == typeof(bool))
        return (T)Convert.ChangeType(Convert.ToInt32(value), typeof(T));

    return (T)Convert.ChangeType(value, typeof(T));
}

Now, those pesky nulls in Oracle won't catch us up.

Slick extension method dude. Thanks; it's working out really well. I replaced several return methods in one of our primary framework libraries at work and the performance is pretty darned nice. It also reads a LOT better.
I’m sure I’ll find another snafu, like the DBNull, eventually, but it’s a good work in progress.
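For readers coming from the JVM side, the same "fold the null check into the conversion" idea can be sketched in Java. Java has neither extension methods nor DBNull, so the helper below is a hypothetical stand-in with a sentinel object; it mirrors the behavior of the final C# version above, not its syntax:

```java
class DbConvert {
    // Sentinel playing the role of C#'s DBNull.Value (illustrative only).
    static final Object DB_NULL = new Object();

    // Null/DB_NULL folds to 0, like default(int) in the C# version.
    static int toInt(Object value) {
        if (value == null || value == DB_NULL) return 0;
        return Integer.parseInt(value.toString().trim());
    }

    // Oracle-style boolean: stored as a number, nonzero means true,
    // mirroring the Convert.ToInt32 detour in the post.
    static boolean toBool(Object value) {
        if (value == null || value == DB_NULL) return false;
        return toInt(value) != 0;
    }

    // Null/DB_NULL folds to "", as the post intends for strings.
    static String toStr(Object value) {
        if (value == null || value == DB_NULL) return "";
        return value.toString();
    }
}
```

Usage is the same one-liner shape as the C# extension: `int id = DbConvert.toInt(row.get("section_id"));`.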
https://tiredblogger.wordpress.com/2008/04/15/convertto-extension-method/
Creating an Android NDK Extension

As of Elements 9.1, you can use Elements for Android development not only with the regular Java-based Android SDK, but also to build CPU-native Android NDK-based extensions for your apps.

Background: What is the NDK?

The regular API for writing Android apps is based on the Java Runtime (or Google's variations/evolutions of that, such as Dalvik or ART). Essentially, you write code against Java libraries that compiles to Java byte code. Most parts of most apps are written like that. But Google also offers a Native Development Kit, the NDK, for when you need to write code that either is not efficient enough in Java, needs to talk to lower-level APIs, or uses, for example, OpenGL/ES. In the past you would have to step down to C or C++ for that, but now you can use Oxygene, C#, Swift or the Java language to write these extensions, same as you do for the main app.

NDK with Elements in Action

Let's take a look at adding an NDK extension to an Android app with Elements. First, let's create a regular Android app using the "Android Application" template, for example by following the regular First Android App Tutorial. You can use any language you like. This part of the app will be JVM based, as most Android apps are. The app you just created already contains a MainActivity, and we'll now extend it to call out to the NDK extension we'll write, to do something simple – like obtaining a string – and then show the result. The Java app can talk to NDK extensions via JNI, and the way this works is by simply declaring a placeholder method on your Java class that acts as a stand-in for the native implementation.
You do this by adding a method declaration such as this to the MainActivity class (shown in Oxygene, C#, Swift and Java, respectively):

class method HelloFromNDK: String; external;

public static extern string HelloFromNDK();

public static __external func HelloFromNDK() -> String

public static native string HelloFromNDK()

The external / extern / native keyword tells the compiler that we'll not be providing an implementation for this method (as part of the Java project), but that it will be loaded in externally (in this case via JNI). That's it. In your regular code (say in onCreate) you can now call this method to get a string back, and then use this string on the Java side – for example show it as a toast. But of course we still have to implement the method. Let's add a second, NDK extension project to the solution. In this second project, you can now implement the native method, which is as simple as adding a new global method and exporting it, via the JNIExport attribute, under the Java class name it belongs to:

{$GLOBALS ON}

[JNIExport(ClassName := 'org.me.androidapp.MainActivity')]
method HelloFromNDK(env: ^JNIEnv; this: jobject): jstring;
begin
  result := env^^.NewStringUTF(env, 'Helloooo-oo!');
end;

#pragma globals on

[JNIExport(ClassName = "org.me.androidapp.MainActivity")]
public jstring HelloFromNDK(JNIEnv *env, jobject thiz)
{
    return (*env)->NewStringUTF(env, "Mr, Jackpots!");
}

@JNIExport(ClassName = "org.me.androidapp.MainActivity")
public func HelloFromNDK(env: UnsafePointer<JNIEnv>!, this: jobject!) -> jstring! {
    return (*(*env)).NewStringUTF(env, "Jade give two rides!")
}

Note that we have now left Java (as JVM/Dalvik) land for this code. This is code that will compile to CPU-native ARM or Intel code, and that uses more C-level APIs such as zero-terminated strings, "glibc" and lower-level operating system APIs (Android, at this level, essentially is Linux). Of course you do have full access to Island's object model for writing object-oriented code here, and you can use Elements RTL as well.
Since this code will be called from Java, JNI provides some helper types and parameters to help us interact with the Java runtime. This includes the env object that you can use to reach back into the Java level – for example to instantiate a Java-level string, as NewStringUTF does above.

Making the Connection

Build this project, and we're almost set; there are only two little things left to do.

First, we need to link the two projects together, so that the native extension gets bundled within the Java app. If you are using EBuild, that is easy: simply add a project reference to the NDK extension to your main app, for example by dragging the NDK project onto the main project in Fire – the build chain will do the rest. If you are still using MSBuild/xbuild, locate the "Android Native Libraries Folder" project setting in the main app, and manually point it to the output folder of the NDK project (you will want to use the folder that contains the per-architecture subfolders, e.g. "Bin/Debug/Android").

Second, in your Java code, somewhere before you first call the NDK function (for example at the start of onCreate), add the following line of code to load in the native library:

System.loadLibrary('hello-ndk');

System.loadLibrary("hello-ndk");

System.loadLibrary("hello-ndk")

System.loadLibrary("hello-ndk");

And that's it. You can now build both apps, deploy and run the Android app, and see your NDK extension in action!

See Also

- Java Native Interface (JNI)
- JNIExportAspect
- Island/Android Platform
- Java/Android Platform
- First Android App Tutorial
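One thing worth knowing on the Java side: JNI resolves the declared native method against an exported symbol derived from the class name (here Java_org_me_androidapp_MainActivity_HelloFromNDK, with dots turned into underscores). If the library is missing, mis-named, or built for the wrong ABI, the first touch raises UnsatisfiedLinkError, so loading defensively can help during bring-up. A small hypothetical sketch (the helper class and the fallback behavior are illustrative, not part of any Android API):

```java
class NativeLoader {
    // Attempts to load the given native library; returns true on success.
    // The runtime maps the name to a platform file, e.g. "hello-ndk"
    // becomes libhello-ndk.so on Android/Linux.
    static boolean tryLoad(String libraryName) {
        try {
            System.loadLibrary(libraryName);
            return true;
        } catch (UnsatisfiedLinkError e) {
            // Typical causes: the .so is missing from the APK's ABI folder
            // (armeabi-v7a vs arm64-v8a), or the library name has a typo.
            return false;
        }
    }
}
```

In a real app you would log the failure or disable the native code path rather than silently continue.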
https://docs.elementscompiler.com/Tutorials/Platforms/AndroidNDKExtension/
Joined: 2/29/2016 Last visit: 1/14/2020 Posts: 73 Rating: (0)

Hello gentlemen,

I want to use the Siemens function uaUtilityEnumRecords as described in the following FAQ: Link. It is used to reduce the code. The FAQ describes the function when used in a button, but I want to create a (global) C-script action. I'm struggling with one part of the function uaUtilityEnumRecords --> BOOL (UserFunc)(UAHARCHIVE* phUA, void* pUserData). According to the FAQ, this means:

Name of a callback function. The function is called for each entry in the query if you transfer a filter. The function is called once for the entire archive if you do not transfer a filter. The function must be of the type BOOL (UserFunc)(UAHARCHIVE* phUA, void* pUserData).

But I really don't know what that means... What should I insert here?

What I want to do is the following: wait for a trigger from the AS and filter a user archive to search for a material ID (which is also a tag). When the search is complete, it must set a tag -> MaterialFound (for example) and also output the archive fields that belong to the found ID to the output tags. How can I accomplish this? I know how I can create the string for the filter and also how to get the input tag. But how do I know when the result of the filter search is complete, and how do I get those variables?

Below is the code I made, but it's not working. I get the following error: "#E440:uaUtilityEnumRecords error in User Funcion !!" How can I get the code to work?
#include "apdefap.h"
int gscAction( void )
{
    char* TagUA = NULL;
    char Filter[50], UA[64];
    char* Result = NULL;
    char* Result2 = NULL;
    long RFID;
    char productname[100];
    BOOL TagMaster, TagRequest;

    // Define tags
    #define TagFilter "REGISTRATION/BADGE_CTRL.QDriver_ID"
    // Feedback tags
    #define TagUAOwnerName "REGISTRATION/BADGE_CTRL.QUA_D_Name"

    TagUA = "DRIVERS"; // name of user archive
    if (TagUA != NULL)
    {
        strncpy(UA, TagUA, 64);
    }

    // Read trigger bits
    //TagMaster = GetTagBit("@RM_MASTER");
    TagRequest = GetTagBit("REGISTRATION/BADGE_CTRL.QSearch_Driver");

    if (TagRequest) // && TagMaster
    {
        // Get filter
        RFID = GetTagDWord(TagFilter);
        printf("Name User Archive: %s \r\n", UA);
        printf("Filterstring: %d \r\n", RFID);

        // Filter
        sprintf(Filter, "Driver_ID = %d", RFID);
        //
        if (uaUtilityEnumRecords(UA, Filter, NULL, GetNameCB, productname))
        {
            printf("Filter: %s \r\n", Filter);
            printf("Result: %s \r\n", productname);

My function "GetNameCB" is as follows (taken from the FAQ), but I don't know exactly what it does. Also, how can I transfer the tags from the found record ID to PLC tags?

#include "apdefap.h"
BOOL GetNameCB( UAHARCHIVE* phArchive, void* pUserData )
{
    if( !uaArchiveMoveFirst( *phArchive ) )
        return FALSE;
    return uaArchiveGetFieldValueString( *phArchive, 1, pUserData, 100 );
}

Inserted right link. Does anybody know the direction I have to look in? The FAQ is not that detailed and I'm still stuck at this function. Thx in advance!

Joined: 7/22/2010 Last visit: 2/13/2020 Posts: 126 (3)
Instead of rewritting the part that is common each time, you can write a generic function and put the part that differs as a callback.It means that that part of the code is externalised in a function and passed as an argument. As a result you write a function that only contains the specific part and to have it work you call it through the generic function. Basically that's what you do with uaUtilityEnumRecords. All the code of input and output of User Archive is generic and once the archive is open, you do your specific treatment.
https://support.industry.siemens.com/tf/fr/en/posts/problem-with-siemens-function-uautilityenumrecords-how-to-read-data/183883/?page=0&pageSize=10
ALNavigation is a first attempt to make the robot go safely to a different pose (i.e. location + orientation). The robot cannot yet avoid obstacles, but it is able to move cautiously, stopping as soon as an obstacle enters its security zone. It provides an enhanced variant of ALMotionProxy::moveTo(), managing a security distance. While moving forward, the robot tries to detect obstacles in front of it, using its bumpers and sonars. As soon as an obstacle enters its security area, the robot stops. The ALNavigationProxy::setSecurityDistance() allows you to set the radius of the semicircle in front of the robot where obstacles are detected. Default value: 0.40 m. It is centered on FRAME_ROBOT. Note that the center of FRAME_ROBOT is not on the surface of the robot, so the distance between, for example, the foot of the robot and an obstacle will be smaller than the security distance. The security distance must be positive; if you try to set a negative distance, it will be set to 0.0 m. The navigator has a status to define its current state. See the ALNavigation API for ways to access it. The most straightforward way to start using ALNavigation is:

from naoqi import ALProxy

# Set here your robot's IP.
ip = "<your_robot_ip_address>"
navigationProxy = ALProxy("ALNavigation", ip, 9559)

# No specific move config.
navigationProxy.moveTo(1.0, 0.0, 0.0)
navigationProxy.moveTo(1.0, 0.0, 0.0, [])

# To do 6 cm steps instead of 4 cm.
navigationProxy.moveTo(1.0, 0.0, 0.0, [["MaxStepX", "0.06"]])

# Will stop at 0.5m (FRAME_ROBOT) instead of 0.4m away from the obstacle.
navigationProxy.setSecurityDistance(0.5)
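To make the geometry described above concrete (a semicircle of the configured radius in front of the robot, centered on FRAME_ROBOT, with negative distances coerced to 0.0 m), here is a small illustrative sketch in Java. It is not part of the NAOqi API; the class and method names are invented:

```java
class SecurityZone {
    private final double radius;

    SecurityZone(double requestedRadius) {
        // Negative distances are coerced to 0.0, as the documentation states.
        this.radius = Math.max(0.0, requestedRadius);
    }

    // An obstacle at (x, y) in FRAME_ROBOT (x forward, in metres) triggers
    // a stop when it lies in the front half-plane within the radius.
    boolean blocks(double x, double y) {
        return x >= 0.0 && Math.hypot(x, y) <= radius;
    }

    double radius() { return radius; }
}
```

Note the caveat from the text: the radius is measured from the FRAME_ROBOT origin, so the clearance between the robot's physical surface (e.g. its foot) and the obstacle is smaller than this radius.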
http://doc.aldebaran.com/1-14/naoqi/motion/alnavigation.html
I want to know how I can use VSAM (Virtual Storage Access Method) files. I heard that we have to install Attunity software – is that right? And can anyone tell me the exact procedure for using VSAM files in SSIS? VSAM (Virtual Storage Access Method) is an access method for IBM's mainframe operating system, MVS, now called z/OS. So if anyone has any idea about this, please let me know.

Answer 1

Hi Priyanka, You may try using the OLE DB provider for AS/400 and VSAM to access VSAM. For more information, please see: Thanks, Jin Chen

Answer 2...

Answer 3... Priyanka.Sutrave

Answer 4

I got a new assignment in South Africa in which I need to extract VSAM files (low volume) to an MIS server (SQL Server 2005) using SSIS. To let you know, I am not a mainframe or SSIS resource; I have experience with the Informatica ETL tool, so I can manage SSIS. Kindly let me know the way to go ahead.

Answer 5

Hi, I have a text file where the data is separated by commas. The first row in the data is text. The second row has integer, text, integer, integer. My problem is that I don't know how to read the text part. I tried using "char xx[100]" and reading, but then this always reads 100 elements; I want the text to be split by comma. The text values vary in length.
Here is the code:

int Read_xAbTbl(void)
{
    int iVal_0;
    int fVal_1;
    int iVal_2;
    int iVal_3;
    int iVal_4;
    double iVal_5;
    double iVal_6;
    double iVal_7;
    double iVal_8;
    char sVal[100];
    int x, ab_version;
    char chrX[1000];
    int ret = 0;
    int i = 1;
    FILE * infile;
    char temp_name[1000];

    strcpy(temp_name, cRunLocation);
    strcat(temp_name, "xGmabVerTbl.csv");
    infile = fopen(temp_name, "r");
    if(infile == NULL)
    {
        LogFile<<"Error - unsuccessful open of file xGmabVerTbl.csv"<<endl;
        exit(0); //Do not necessarily have to read file
    }
    else
        LogFile<<"Successfully opened file xGmabVerTbl.csv"<<endl;

    ret = fscanf(infile,"%s",chrX); //Skip past the header
    while(1)
    {
        ret = fscanf(infile,"%d,%s,%lf,%d,%d,%d,%d,%d,%d",
            &iVal_0,&sVal,&fVal_1,&iVal_2,&iVal_3,&iVal_4,&iVal_5,&iVal_6,&iVal_7);
        ab_version = iVal_0;
        x = ab_version; //VERSION INDEX
        SL_AbLoopVerArr[1] = 2;
        AbFee[x] = fVal_1;
        AbStepUpYr[x][1] = iVal_2;
        AbStepUpYr[x][2] = iVal_3;
        AbStepUpYr[x][3] = iVal_4;
        if(iVal_5 == 1)
            AbBenSw[x] = TRUE;
        else
            AbBenSw[x] = FALSE;
        AbMatYr[x] = iVal_6; //AB Maturity Year
        if(iVal_7 == 1)
            AbStepAnnualSw[x] = TRUE;
        else
            AbStepAnnualSw[x] = FALSE;
        if(ret == EOF)
            break;
    }
    fclose(infile);
    return 0;
}

Thanks
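A side note on the question above: the root of the problem is that fscanf's %s does not stop at a comma (it reads until whitespace), so mixed text/number rows are easier to handle by splitting the whole line first and converting each piece. A hedged Java sketch of that approach; the three-field layout here is assumed for illustration, the real file has more columns:

```java
class CsvRow {
    final int version;
    final String name;
    final double fee;

    CsvRow(int version, String name, double fee) {
        this.version = version;
        this.name = name;
        this.fee = fee;
    }

    // Split the line on commas first, then convert each piece; text fields
    // of any length are handled without fixed-size buffers.
    static CsvRow parse(String line) {
        String[] fields = line.split(",");
        return new CsvRow(
            Integer.parseInt(fields[0].trim()),
            fields[1].trim(),
            Double.parseDouble(fields[2].trim()));
    }
}
```

One caveat: this naive split breaks on quoted fields that themselves contain commas; those need a real CSV parser.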
I've looked through a bunch of documentation and examples but I can't seem to find a simple example that shows how to read and reference row data. I won't know how many rows I have, but I will know the column layout. Is there way to reference each cell in the row until the row has no data? I want to loop through the data row by row and insert to a db via linq. I know how to do the linq part. Thanks. All help is appreciated. Hi there, I'm using .NET 4.0 and I want to read the Excel 2010 document and store it in an access database. How Can I do this in c#. Can anybody provide a some tutorials in Office related Application Development. I'm learning C#, coming from a C++ background. I'm trying to rewrite my C++ database code in C#, and I'm wondering what is the best way to read and write the individual data records. In C++, I often did something like this: WriteFile(hFile,(LPCVOID) &aRecord, (DWORD) sizeof(aRecord), &dwBytesWritten,NULL); where aRecord is an instance of a subclass of an abstract class Record. It is always a class with members which are "simple" data types: int, character array, bool Since I got the size of the Record with the sizeof operator, I could easily navigate to certain offsets in the database file and know that I could read a record at a certain offset. So my question is, what is the best way to write and read these records in C#? (I mean, which functions/approach should I use?) If I use the automated serialization mechanism in C#, my understanding is that I end up writing additional data like the name of the class, the assembly, etc. So I'm thinking that I can achieve the same result I had in C++ by providing a Serialize member function to my record subclasses, and that function sequentially writes only the data members, in order, and nothing else. Would this be a plausible approach? Any input appreciated. Thanks! 
I am storing some data regarding Windows form postion and windows state in an INI file before closing the form.When we again opens the form i am reading the Form postions from ini file and dispaing the form at that postion. Right now i am using some API calls to achive this functionality.Is there any direct way to Read Those data from ini file and Write into that ini file without usng API? ini file contents: [PANE Rep Information Area] Height=420 Width=10815 Hidden=0 [PANE Tabbed Detail Area] Height=8130 Width=9315 Hidden=-1 [frmMngtWkBook] Height=9180 Width=9495 Left=4740 Top=1050 Windowstate=2 Code to read or write files from/to INI : 'API callsPublicDeclareFunction GetPrivateProfileString Lib"kernel32"Alias"GetPrivateProfileStringA" (ByVal lpApplicationName AsString, ByVal lpKeyName AsInteger, ByVal lpDefault AsString, ByVal lpReturnedString AsString, ByVal nSize AsInteger, ByVal lpFileName AsString) AsIntegerPublicDeclareFunction WritePrivateProfileString Lib"kernel32"Alias"WritePrivateProfileStringA" (ByVal lpApplicationName AsString, ByVal lpKeyName AsInteger, ByVal lpString AsInteger, ByVal lpFileName AsString) AsInteger'Function To read the Data from ini FIlePublicFunction ReadDimensions(ByRef sIniFile AsString, ByRef objForm As frmMngtWkBook) AsObjectDim sReturnString AsString = String.Empty Dim tmpPtr As IntPtr sReturnString = NewString(" ".c,20) tmpPtr = Marshal.StringToHGlobalAnsi("Height") Dim g_objProfile AsNew BLCommon.clsProfile Try lReturnCode = GetPrivateProfileString(objForm.Name, tmpPtr, String.Empty, sReturnString, cReturnLength, sIniFile) Finally Marshal.FreeHGlobal(tmpPtr) EndTryIf lReturnCode > 0 Then objForm.Height = Strip(sReturnString) / 15 EndIfEndFunction'Function To write the Data into ini FIlePublicFunction WriteDimensions(ByRef sIniFile AsString, ByRef objForm As frmMngtWkBook, OptionalByRef bIgnoreSplitter AsBoolean = False) AsObjectDim sInputString AsString = String.Empty Dim tmpPtr5 As IntPtr Dim tmpPtr6 As IntPtr sInputString = 
(objForm.Height *15).ToString().Trim() tmpPtr5 =Marshal.StringToHGlobalAnsi("Height") tmpPtr6=Marshal.StringToHGlobalAnsi(sInputString) Try lReturnCode = WritePrivateProfileString(objForm.Name, tmpPtr5, tmpPtr6, sIniFile) sInputString = Marshal.PtrToStringAnsi(tmpPtr6) Finally Marshal.FreeHGlobal(tmpPtr5) Marshal.FreeHGlobal(tmpPtr6) EndTryEndFunction Hi Guys I have created a Library project in .net and used Nunit to test some methods in other dll's. I have integrated NUnit with VS IDE and i can now run the test from the VS IDE itself. When i execute my tests from the Visual Studio using this option, all my tests are executing properly. When i execute this application by Executing the NUnit seperately from the iDE some strgange things started coming up. The line string securityKey = ConfigurationManager.AppSettings["GetData.Key"].ToString(); is not able to pick anything from the config file. Can anyone help us on this Thanks TintuMon Hi All a file contain only this " free space =234" 234 here value changes on different drives like(234,22222,433,2,1) how can write vbscript read 234 to variable wating for your Answer I have a fixed width flat file which contains many columns including a BLOB column. Each row of data spreads out to about 15-20 rows or more in the flat file because the BLOB data cannot fit in one row on the flat file. Also the BLOB data is Base 64 encoded. I want to be able to read the BLOB data, I know the starting and ending character number e.g. from 150 to 32450. I want to extract this data & decode it. Is this possible? Sid my recorded script fail when it is replayed. Is appears that a textfield accepts only number. 
However, generated codes (during recording) in UiMap.designe.cs defines the field as string as below: public void myTest() { HtmlEdit uITbFN4DigitEdit = this.UIIMPACTCaseManagementWindow.TestCaseManagementDocument2.UITbFN4DigitEdit; In public class myTestParams() public string UITbFN4DigitEdit = "1234"; Eventually, this feild will be populated with data lists in a .cvs file. How to covert a string data from .cvs file to interger number? I tried the belows but they don't work. Please help resolve this issue. this.UiMap.methodParams.ControlSuchAsTextField = int.Parse(testContext.DataRow["column"].toString()); since the script do more than just populate a textfield, I also noticed that the script skipped selecting checkbox or selecting items from drop-down list. While I'm trying to read connection string which is in App.Config file I'm getting error like "Object reference not set to an instance of an object." string strconn = System.Configuration.ConfigurationManager.AppSettings["ConnectionString"].ToString(); For the windows service where I need to keep app.config file and how to integrate that config file with the windows service exe file. Thanks in advance. Hi ALL, I need some help in developing a task. I have a source database which is Oracle and it has a ZIP file stored inside the database in Binary format. When I move this data into the sql server 2005 database I get the data as binary data. Now the task begins with SSIS, I need to read the binary data which gives us a zip file and then unzip this zip file and read the XML data which is present inside the Zip file. I beleive some one might have already developed this task can you share the solution with us. Note: As this has to be moved into production I dont have permission to use third party tools like Cozy roc or install winrar.exe and simpy calling this exe from the execute process task in SSIS. 
I have an Excel file called Products.xls .I have Columns A and B, with the titles NAME and QUANTITY.The name of the sheet is SHEET1.The file has about 40 lines. How do i show these data on a Gridview or Listview ? Thank you..(Operators.ConcatenateObject("[", Me.TablesMapped.Item(dataSetTable)), "]")) Dim adapter As New OleDbDataAdapter(("select * from " & sourceTable), selectConnection) adapter.TableMappings.Add(sourceTable, dataSetTable) adapter.Fill(ds, dataSetTable) Exception : I am getting error The Microsoft Jet database engine could not find the object 'abcgroup'. Make sure the object exists and that you spell its name and the path name correctly. The XML files will be uploaded by third party to a common location which can be access by us. Now, I need to read the XML file when ever the files are uploaded by the third party user(our system should automatically check when ever files uploaded).then i will read the data in XML file and update the status to my DB.what method i can follow to do this process. I should automatically read the files when ever they uploaded.i need the easy soultion for this.please suggest me.thanks in advance. "€rdrf +À StreamWriter("Path");tw.WriteLine(fileText);tw.Close(); How do I read from the file without corrupting the special text characters? ThanksJay how to read xml content in csharp XDocument.Load()
http://go4answers.webhost4life.com/Example/read-data-vsam-files-209903.aspx
Running and debugging a .NET project in Rider

As developers, a good share of our time is spent debugging the software we are writing. The debugger is an invaluable tool not only to track down bugs, but also to help us understand what a piece of code is doing when it's being executed. Rider comes with an excellent debugger which allows attaching to a new or existing process and lets us place breakpoints to pause the application and inspect variables, the current call stack and so on. It supports all .NET frameworks, .NET Core, Mono, Xamarin, Unity, ASP.NET and ASP.NET Core, in standalone apps, web apps and unit tests. In this three-post series, we'll look deeper at what we can do with Rider's debugger and how it can help us debug our code as efficiently as possible. We'll focus on debugging .NET code, but do know a lot of these features are also available for debugging JavaScript and TypeScript.

In this series: The table of contents will be updated as we progress.

Keyboard shortcuts described in this post are based on the Visual Studio keymap. This post will start at the very beginning: how can we debug code? How can we inspect variables, step through/step over code and control how it's being executed?

Debugging 101

Let's start with exploring the debugger using a simple application.
In the application, we load a file containing a list of people and the company they work for, loop over them and print the output to the console:

internal class Program
{
    public static void Main(string[] args)
    {
        var json = File.ReadAllText("people.json");
        var people = JsonConvert.DeserializeObject<List<Person>>(json);

        PrintPeople(people);

        Console.WriteLine("Press <enter> to quit.");
        Console.ReadLine();
    }

    private static void PrintPeople(List<Person> people)
    {
        foreach (var person in people)
        {
            PrintPerson(person);
        }
    }

    private static void PrintPerson(Person person)
    {
        var name = person.Name;
        Console.WriteLine(name);
    }
}

Very often, debugging involves inspecting the content of variables at a particular point during execution. To inspect a variable at such a point, we need to tell the debugger at which point to pause, or break, the running application: a breakpoint. Let's set a breakpoint where we print the name of our person to the console. We can do this by clicking the left gutter, which will display a red bullet that means the line of code has a breakpoint. Breakpoints can also be toggled using the F9 key. Once we have one (or more) breakpoints in our application, we can run our application with the debugger attached. This can be done from the Run | Debug menu, or simply by pressing F5. Rider will then compile our application, start the selected run/debug configuration (more on those later in this series), and attach the debugger to it. When execution arrives at the statement where we added our breakpoint, it will pause execution. There are several things we can see when a breakpoint is hit:

- The statement that is about to be executed is highlighted in the editor.
- The editor displays the values of assigned variables inline, next to our code. For example, we see the method argument person contains a DebuggingDemo.Person, and name is "Maarten Balliauw".
- The Debug tool window at the bottom shows the current call stack on the left, and variables that are in scope on the right.

The Debug tool window

In the debug tool window, we can do several things. We can see the call stack for the various threads on the left, and inspect contents of variables on the right. The toolbar allows us to resume execution until the next breakpoint is hit, step over, step into and so forth. We'll cover these in a bit.

The Frames pane on the left shows us the current frames as a call stack. When .NET code executes, whenever we enter a function a new frame is pushed on the call stack, capturing information such as the arguments given to the function, local variables and so on. The deeper we go into the chain of functions in our code, the more entries will be shown in the call stack. We can click a call stack entry and jump to the function in code. This helps us understand how we arrived at our breakpoint. Note that the variables displayed inline and in the variables tool window will also show the value for variables at that point in the stack.

From the variables pane, we can drill into the object structure. For example, for our Person object we can expand the Company property and look into the Company's Name property.

Sometimes it can also be useful to update the value of a variable at runtime, to see how the application behaves or to reproduce a potential bug. We can do this from the variables pane as well, using the Set Value… context menu (F2). After updating the value, the debugger will display the new value inline and in tool windows, and our application will also make use of it:

From the tool window we can also add a watch. A watch allows us to inspect a specific variable or expression. For example, we could add a watch for person.Email, which would display that specific property value in the variables pane. We can also watch expressions and, for example, add a check to verify a given condition is true.
This can help us figure out the value of a variable or evaluate an expression while stepping through code, making it easier to inspect the value. Note that code completion is provided when adding watches.

Rider intelligently visualizes the variable based on information available. It's also possible to annotate types with the DebuggerDisplay attribute, so we can control how a type or member is displayed inline with our code or in the debugger tool window. For example, for our Person class, we can add a DebuggerDisplay attribute which visualizes our type based on some of its properties:

    [DebuggerDisplay("{Name} ({Email})")]
    public class Person
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public Company Company { get; set; }
    }

Once added, the debugger will make use of this attribute to determine the format of data displayed while debugging. In both the editor and the variables window, we now see the name and e-mail of our person instead of the type name as seen in earlier screenshots:

Stepping through code

While debugging, we can execute code statement by statement, allowing us to inspect what happens in our application at every statement. The debug tool window has several toolbar buttons (with keyboard shortcuts) that let us step through our code. Commonly used actions are:

- Resume program (F5) – resumes execution until the next breakpoint is hit (or the program terminates).
- Step over (F10) – executes the method but does not step into its body.
- Step into (F11) – starts executing the method and steps into its body.

There are a couple more ways of stepping through our code. When we've stepped into a function, we may want to resume execution of the function, return to the parent frame in the call stack and then pause execution. This can be done using Step out (Shift+F11).
Using Run to cursor (Ctrl+F10), we can set the cursor on a specific statement and have Rider's debugger resume execution to that cursor position, ignoring existing breakpoints on the way there.

One of my personal favorites is Set next statement (Ctrl+Alt+Shift+F10), which allows us to move the execution pointer to an earlier or later location in our code. If we move it to a later location, we will effectively be skipping certain lines of code and preventing them from being executed. If we move it to an earlier location, execution will resume from that location. This lets us, for example, repeat a certain action in code without having to restart the debugger. We'll place a breakpoint at the end of our method, and use Set next statement (Ctrl+Alt+Shift+F10) to run it again from the start.

In this post, we've seen the basics of how we can use Rider to debug a .NET project. We've seen what a breakpoint is, and how we can step through our code, take a look at the current call stack and inspect variables that are in scope.

In our next post, we'll go deeper into the concept of run/debug configurations, and how they can help us start a debugging session. We'll also look into debugging unit tests and how we can attach to any running .NET process and see what's going on in there. Stay tuned!

Download JetBrains Rider and give it a try! We'd love to hear your feedback.

30 Responses to Running and debugging a .NET project in Rider

Bas du Pre says: August 21, 2017
Yet still, Rider doesn't support the current stable .NET Core release (2.0).

Maarten Balliauw says: August 21, 2017
It does partially; .NET Core 2.0 support will be improved in the next EAP (which will be released soon).

Bas du Pre says: August 21, 2017
Well, debugging is entirely broken… I hope this gets fixed very soon. Doesn't feel like Rider is top priority for you guys. Please convince me I'm wrong!
🙂

Kirill Rakhman says: August 21, 2017
The very first Rider stable version was just released, I wouldn't be too harsh on JB.

Truth Teller says: August 22, 2017
You're being a complete asshole.

Bas du Pré says: August 23, 2017
Yes, because if I pay for a premium product, I expect it to work with the platforms it promises me it works on.

Truth Teller 2 says: January 26, 2018
You're still being an asshole.

Demetrios Seferlis says: August 21, 2017
What about .NET Core Angular SPA TypeScript debugging? Is this possible?

Maarten Balliauw says: August 21, 2017
That is definitely possible using the above techniques for .NET Core, and JavaScript debugging for the Angular side of things –

Christo Zietsman says: August 21, 2017
I can call Debugger.Launch() from code, which then pops up a window to choose an instance of Visual Studio to continue debugging. How can I choose Rider instead?

Artem Bukhonov says: August 21, 2017
Right now it's not possible to choose Rider in this dialog, because the dialog comes from a tool shipped with VS and we haven't found any API to add Rider into it. But you may use a workaround: when the VS dialog is shown, switch to Rider and attach to your process, then just click Cancel in the VS dialog and you will be stopped at Debugger.Launch() in Rider. Hope it will help you.

Mathieu Gueydan says: January 17, 2019
Thanks. This helped.

The Morning Brew - Chris Alcock » The Morning Brew #2410 says: August 22, 2017
[…] Running and debugging a .NET project in Rider – Maarten Balliauw […]

Dev says: August 22, 2017
Will you port to Rider?

Matt Ellis says: August 22, 2017
We'd love to, to be honest, as we very much like this extension. However, I'm not sure this is going to be possible. One of the big problems is that this extension replaces ReSharper's own tooltips with custom WPF versions, and this won't work in Rider's JVM-based front end, and definitely won't work cross-platform. There are a couple of other issues, too, as we're still working on an SDK and how we can best integrate Rider and ReSharper plugins, but hopefully we'll get something worked out. We would like to see richer tooltips though.

Dew Drop - August 23, 2017 (#2546) - Morning Dew says: August 23, 2017
[…] Running and debugging a .NET project in Rider and Run/debug configurations in Rider (Maarten Balliauw) […]

Compelling Sunday – 21 Posts on Programming and QA says: August 27, 2017
[…] Running and debugging a .NET project in Rider – Maarten Balliauw (.NET Tools Blog) […]

ice1000 says: September 13, 2017
When can I see spell checking in Rider? This is a very important feature of JB IDEs to me! (BTW, Rider is awesome! 😀)

Maarten Balliauw says: September 13, 2017
Would you mind opening an issue in our tracker for this, with some more info? Thanks!

Jura Gorohovsky says: September 13, 2017
We really hope that Rider gets spell checking by the end of the year.

alper says: November 1, 2017
Visual Studio shows DataTable rows in a grid during debugging. Is there a way or plugin that does the same thing in Rider?

Maarten Balliauw says: November 1, 2017
I just logged an issue for this – – feel free to track/upvote.

Fasda says: December 15, 2017
In a really big project, how can I set Event Listener Breakpoints like in the Chrome debugger? Or, being more specific: on all Click events?

Maarten Balliauw says: December 16, 2017
This is currently not possible. Please file a feature request at – it sounds like a nice thing to have.

Fasda says: December 16, 2017
Thanks!! And sorry for the double comment. I will be filing the feature request then 🙂

Peter says: February 5, 2018
I'm unable to debug Razor (cshtml) files under .NET Core 2.0 projects in the latest version of Rider 2017.3 (Build #RD-173.3994.1125, built on December 26, 2017). Is this supposed to be supported? What should I do to be able to debug my own cshtml files? See error:

Maarten Balliauw says: February 6, 2018
That should just work, but indeed it does not. Have posted a bug report for this here:

Peter says: February 6, 2018
Thanks mate 🙂 Waiting for fix.

Mark Bailey says: April 26, 2018
Is there a way to change the debugging view of a type that is not in your source code? Like using DebuggerDisplay(), but it would display a given type in a certain way wherever it is used in your current assembly?

Maarten Balliauw says: April 27, 2018
Right now there is not; I created a feature request for you:
SVGvalidation

From W3C Wiki

SVG is a standard. Content makes or breaks a standard. Much of the SVG content found on the web is invalid, holding the standard back. SVG 1.2 won't use a DTD anymore, but something that makes it easier to validate SVG. It will take a little while before SVG 1.2 Full is out, and a whole lot longer before everybody and everything has upgraded. We can't wait that long.

What are the challenges and solutions/workarounds:

- validator.w3.org needs a DOCTYPE, while some SVG viewers 'prefer' none. There's an option to enforce a doctype, however. This validator doesn't detect many errors.
- jiggles.w3.org/svgvalidator is better at detecting errors, but only works on SVG 1.0 content.
- Having to figure out what version you're dealing with and finding the right validator URL for it is a hassle. The power of validator.w3.org is that it's one easy URL to remember. It's easy to automatically check the version attribute of the SVG content and accordingly send it off to the right validator.
- Much SVG content uses more namespaces than just those of SVG, XLink, XML and XML Namespaces alone. (Inkscape uses it to store some application-specific information (see the "inkscape:" prefix occurrences), openclipart.org stores the license of the content, and as the web becomes more of a semantic web, RDF and other notations will make 'extra' namespaces within SVG content even more common.) Though SVG viewers should just ignore those 'foreign' namespace content bits, the validator does not.
A 'solution' is filtering out those bits, for example with some SAX parsing.

- Next to detecting errors against the standard, the/a validator could have extras:
  - guessing the fix: for example "viewbox is an invalid attribute, viewBox however is"
  - viewer-specific hints: "if in your viewBox attribute you replace the commas with spaces it makes ASV3* interpret it right too" (*ASV3: Adobe SVG Viewer 3, an ancient SVG viewer, still much in use though by users of Internet Explorer (the only big browser not having SVG support 'out of the box'))

OSDL (Open Source Developers Lab) is working on getting SVG validation as part of a bigger testing framework.
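The namespace-filtering workaround described above can be sketched in a few lines. This is purely an illustrative sketch, not code from any actual validator: it uses Python's standard xml.etree module rather than SAX, and the set of "known" namespaces is an assumption for the example.

```python
import xml.etree.ElementTree as ET

# Namespaces a validator actually knows about; everything else
# (Inkscape's, RDF's, ...) counts as "foreign" and is filtered out.
KNOWN = {
    "http://www.w3.org/2000/svg",
    "http://www.w3.org/1999/xlink",
    "http://www.w3.org/XML/1998/namespace",
}

def foreign(qname):
    """True if a '{uri}local' qualified name is in an unknown namespace."""
    return qname.startswith("{") and qname.split("}")[0][1:] not in KNOWN

def strip_foreign(elem):
    """Remove attributes and child elements from foreign namespaces."""
    for name in list(elem.attrib):
        if foreign(name):
            del elem.attrib[name]
    for child in list(elem):
        if foreign(child.tag):
            elem.remove(child)
        else:
            strip_foreign(child)
    return elem

svg = """<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
     inkscape:version="0.91">
  <rect width="10" height="10" inkscape:label="box"/>
</svg>"""

cleaned = ET.tostring(strip_foreign(ET.fromstring(svg)), encoding="unicode")
print(cleaned)
```

The cleaned document keeps the SVG content itself but no longer carries the Inkscape-specific bits, so it can be sent to a validator that only understands the SVG namespaces.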
By Evan Miller
August 10, 2016

C has a reputation for being inflexible. But did you know you can change the argument order of C functions if you don't like them?

    #include <math.h>
    #include <stdio.h>

    double DoubleToTheInt(double base, int power) {
        return pow(base, power);
    }

    int main() {
        // cast to a function pointer with arguments reversed
        double (*IntPowerOfDouble)(int, double) =
            (double (*)(int, double))&DoubleToTheInt;

        printf("(0.99)^100: %lf \n", DoubleToTheInt(0.99, 100));
        printf("(0.99)^100: %lf \n", IntPowerOfDouble(100, 0.99));
    }

The code above never actually defines the function IntPowerOfDouble — because there is no function IntPowerOfDouble. It's a variable that points to DoubleToTheInt, but with a type that says it likes its integer arguments to come before its doubles.

You might expect IntPowerOfDouble to take its arguments in the same order as DoubleToTheInt, but cast the arguments to a different type, or something like that. But that's not what happens. Try it out — you'll see the same result value printed on both lines.

    emiller@gibbon ~> clang something.c
    emiller@gibbon ~> ./a.out
    (0.99)^100: 0.366032
    (0.99)^100: 0.366032

Now try changing all the int arguments to float — you'll see that FloatPowerOfDouble does something even stranger. That is,

    double DoubleToTheFloat(double base, float power) {
        return pow(base, power);
    }

    int main() {
        double (*FloatPowerOfDouble)(float, double) =
            (double (*)(float, double))&DoubleToTheFloat;

        printf("(0.99)^100: %lf \n", DoubleToTheFloat(0.99, 100));   // OK
        printf("(0.99)^100: %lf \n", FloatPowerOfDouble(100, 0.99)); // Uh-oh...
    }

Produces:

    (0.99)^100: 0.366032
    (0.99)^100: 0.000000

The value on the second line is "not even wrong" — if it were merely a matter of argument reversal, we'd expect the answer to be 100^0.99 ≈ 95.5 and not zero. What's going on?
The above code examples represent a kind of type punning of functions — a dangerous form of "assembly without assembly" that should never be used on the job, in the vicinity of heavy machinery, or in conjunction with prescription drugs. The code examples will make perfect sense to anyone who understands code at the assembly level — but are likely to be baffling to everyone else. I cheated a little bit above — I assumed you're running code on a 64-bit machine.

That's Not My Signature

But in both Windows and Unix, the basic algorithm works like this:

- Floating-point arguments are placed, in order, into SSE registers, labeled XMM0, XMM1, etc.
- Integer and pointer arguments are placed, in order, into general registers, labeled RDX, RCX, etc.

Let's briefly look at how arguments are passed to the function DoubleToTheInt. The function signature is:

    double DoubleToTheInt(double base, int power);

When the compiler encounters DoubleToTheInt(0.99, 100), it lays out the registers like this (I'm using the Windows calling convention for simplicity):

If the function were instead:

    double DoubleToTheDouble(double base, double power);

The arguments would be laid out like this:

Now you might have an inkling of why the little trick at the beginning worked. Consider the function signature:

    double IntPowerOfDouble(int y, double x);

Called as IntPowerOfDouble(100, 0.99), the compiler will lay out the registers thus:

In other words — exactly the same as DoubleToTheInt(0.99, 100)! That is,

    double functionA(double a, double b, float c, int x, int y, int z);

will have the same register layout as:

    double functionB(int x, double a, int y, double b, int z, float c);

And the same register layout as:

    double functionC(int x, int y, int z, double a, double b, float c);

In all three cases the register allocation will be:

Note that double-precision and single-precision arguments both occupy the XMM registers — but they are not ABI-compatible with each other.
So if you recall the second code sample at the beginning, the reason that FloatPowerOfDouble returned zero (and not 95.5) is that the compiler placed a single-precision (32-bit) version of 100.0 into XMM0, and a double-precision (64-bit) version of 0.99 into XMM1 — but the callee expected a double-precision number in XMM0 and a single-precision number in XMM1.

You may have noticed the ??? in the above diagrams. These register values are undefined — they could have any value from previous computations. The callee doesn't care what's in them, and is free to write over them during its own computations.

This raises an interesting possibility — in addition to calling a function with arguments in a different order, we can also call a function with a different number of arguments than it expects. There are a couple of reasons we might want to do something crazy like that.

Dial 1-800-I-Really-Enjoy-Type-Punning

Try this:

    #include <math.h>
    #include <stdio.h>

    double DoubleToTheInt(double x, int y) {
        return pow(x, y);
    }

    int main() {
        double (*DoubleToTheIntVerbose)(
            double, double, double, double,
            int, int, int, int) =
            (double (*)(double, double, double, double,
                        int, int, int, int))&DoubleToTheInt;

        printf("(0.99)^100: %lf \n", DoubleToTheIntVerbose(
            0.99, 0.0, 0.0, 0.0, 100, 0, 0, 0));
        printf("(0.99)^100: %lf \n", DoubleToTheInt(0.99, 100));
    }

It should come as no surprise that both lines return the same result — all the arguments fit into registers, and the register layout is the same.

Now here's where the fun part comes in. We can define a new "verbose" function type that can be used to call many different kinds of functions, provided the arguments fit into registers and the functions have the same return type.
    #include <math.h>
    #include <stdio.h>

    typedef double (*verbose_func_t)(double, double, double, double,
                                     int, int, int, int);

    int main() {
        verbose_func_t verboseSin = (verbose_func_t)&sin;
        verbose_func_t verboseCos = (verbose_func_t)&cos;
        verbose_func_t verbosePow = (verbose_func_t)&pow;
        verbose_func_t verboseLDExp = (verbose_func_t)&ldexp;

        printf("Sin(0.5) = %lf\n", verboseSin(0.5, 0.0, 0.0, 0.0, 0, 0, 0, 0));
        printf("Cos(0.5) = %lf\n", verboseCos(0.5, 0.0, 0.0, 0.0, 0, 0, 0, 0));
        printf("Pow(0.99, 100) = %lf\n", verbosePow(0.99, 100.0, 0.0, 0.0, 0, 0, 0, 0));
        printf("0.99 * 2^12 = %lf\n", verboseLDExp(0.99, 0.0, 0.0, 0.0, 12, 0, 0, 0));
    }

The type compatibility is handy because we could, for instance, build a simple calculator that dispatches to arbitrary functions that take and return doubles:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef double (*four_arg_func_t)(double, double, double, double);

    int main(int argc, char **argv) {
        four_arg_func_t verboseFunction = NULL;
        if (strcmp(argv[1], "sin") == 0) {
            verboseFunction = (four_arg_func_t)&sin;
        } else if (strcmp(argv[1], "cos") == 0) {
            verboseFunction = (four_arg_func_t)&cos;
        } else if (strcmp(argv[1], "pow") == 0) {
            verboseFunction = (four_arg_func_t)&pow;
        } else {
            return 1;
        }

        double xmm[4];
        int i;
        for (i = 2; i < argc; i++) {
            xmm[i-2] = strtod(argv[i], NULL);
        }

        printf("%lf\n", verboseFunction(xmm[0], xmm[1], xmm[2], xmm[3]));
        return 0;
    }

Testing it out:

    emiller@gibbon ~> clang calc.c
    emiller@gibbon ~> ./a.out pow 0.99 100
    0.366032
    emiller@gibbon ~> ./a.out sin 0.5
    0.479426
    emiller@gibbon ~> ./a.out cos 0.5
    0.877583

It is not exactly a competitive threat to Mathematica, but you might imagine a more sophisticated version with a table of function names that map to function pointers — the calculator could be updated with new functions just by updating the table, rather than invoking the new functions explicitly in code.

Another application involves JIT compilers.
If you've ever worked through an LLVM tutorial, you may have unexpectedly encountered the message: "Full-featured argument passing not supported yet!" LLVM is adept at turning code into machine code, and loading the machine code into memory — but it's actually not very flexible when it comes to calling a function loaded into memory. With LLVMRunFunction, you can call main()-like functions (integer arg, pointer arg, pointer arg, integer return value), but not a whole lot else. Most tutorials recommend wrapping your compiled function in a function that looks like main(), stuffing all of your parameters behind a pointer argument, and using the wrapper function to pull the arguments out from behind the pointer and call the real function.

But with our newfound knowledge of X86 registers, we can simplify the ceremony, getting rid of the wrapper function in many cases. Rather than checking the provided function against a finite list of C-callable function signatures (int main(), int main(int), int main(int, void *), etc.), we can create a pointer that has a function signature that saturates all of the parameter registers — and so is assembly-compatible with all functions that pass arguments only via registers — and call that, passing in zero (or anything, really) for unused arguments. Thus we just need to define a separate type for each return type, rather than for every possible function signature, and then call the function in a flexible way that would normally require use of assembly.

I'll show you one last trick before locking up the liquor cabinet. Try to figure out how this code works:

    double NoOp(double a) {
        return a;
    }

    int main() {
        double (*ReturnLastReturnValue)() = (double (*)())&NoOp;

        double value = pow(0.99, 100.0);
        double other_value = ReturnLastReturnValue();

        printf("Value: %lf Other value: %lf\n", value, other_value);
    }

(You might want to read up on your calling conventions first...)
Some Assembly Required

If you ever ask a question on a programmer forum about assembly language, the usual first answer is something like:

    You don't need to know assembly — leave assembly to the genius Ph.D. compiler writers. Also, please keep your hands where I can see them.

Compiler writers are smart people, but I think it's mistaken to think that assembly language should be scrupulously avoided by everyone else. In this short foray into function type-punning, we saw how register allocation and calling conventions — supposedly the exclusive concern of assembly-spinning compiler writers — occasionally pop their heads up in C, and we saw how to use this knowledge to do things that regular C programmers would think impossible.

But that's really just scratching the surface of assembly programming — deliberately undertaken here without a single line of assembly code — and I encourage anyone with the time to dig deeper into the subject. Assembly is the key to understanding how a CPU goes about the business of executing instructions — what a program counter is, what a frame pointer is, what a stack pointer is, what registers do — and lets you think about computer programs in a different (and brighter) light. Knowing even the basics can help you come up with solutions to problems that might not have occurred to you otherwise, and give you the lay of the land when you slip past the prison guards of your preferred high-level language, and begin squinting into the harsh, wonderful sun.
I need to highlight the current key pressed in my C# application. To get the keyboard key codes during KeyDown and KeyUp events, I tried returning the key codes as

    protected override void OnKeyDown(KeyEventArgs keyEvent)
    {
        // Gets the key code
        lblKeyCode.Text = "KeyCode: " + keyEvent.KeyCode.ToString();
    }

The other way I tried was to check whether each key that I need is pressed:

    namespace test
    {
        public partial class Form1 : Form
        {
            Button enterBtn;

            public Form1()
            {
                InitializeComponent();

                // Create a button control and subscribe to its KeyDown event.
                enterBtn = new Button();
                this.Controls.Add(enterBtn);
                enterBtn.KeyDown += new KeyEventHandler(enterBtn_KeyDown);
            }

            private void enterBtn_KeyDown(object sender, KeyEventArgs e)
            {
                if (sender == enterBtn)
                {
                    enterBtn.BackColor = Color.Purple;
                    e.Handled = true;
                }
            }
        }
    }

but still the button on Form1 did not highlight. Help please :(
JWT Parsing with Javascript

The previous two articles showed how to create a login page using AWS Cognito, and how to break down the JSON Web Token it produces. This article follows on from both of these, and shows how we can programmatically parse the JWT using Javascript.

Parsing the JWT with Javascript

I want to be able to validate the JWT and get its content in a variety of ways (for example, in server side code, and in an AWS Lambda). I am therefore going to use Javascript as my programming language.

In this example, we will need to make http requests. I will use the axios package for this. To manipulate JWTs, we will use the jsonwebtoken package. Finally, we need to convert between different certificate standards, and I will use the jwk-to-pem package for these conversions. So, we start off our code by including all these modules:

    // Libraries we need
    const axios = require("axios");
    const jwkToPem = require("jwk-to-pem");
    const jwt = require("jsonwebtoken");

Next, since we are working with AWS, we need to set up some variables in order to access resources. These are the region, the poolId (from our Cognito user pool), and the URL for the issuer of our JWT (again, you can see this in the previous article):

    // User pool information
    const region = "us-east-1";
    const poolId = "us-east-1_MsHScNijB";
    const issuer = "." + region + ".amazonaws.com/" + poolId;

In order to verify the JWT we receive, we need to specify the hashing algorithm (just like we did earlier in the online JWT debugger), and also specify the issuer we are expecting (from the variable we've just defined):

    // When verifying, we expect to use RS256, and that our issuer is correct
    const verificationOptions = { algorithms: ["RS256"], issuer };

Eventually, we will pass our JWT in as a parameter, but for now we will just set it in a variable (truncated in this example):

    // Get our token
    const token = "eyJraWQiOi...";

To validate the JWT we received, we run through a number of steps:

- Get the public keys for our user pool
- Find the specific key used for this JWT
- Verify the token (does the signature match, and is the issuer correct)
- Check that the token is of the correct type (we want an access token)

If we pass all the above steps, we know our JWT is good, and we can let the user do something. If we wished to, we could add additional checks to provide finer-grained access, but that can wait for another day.

The Javascript code to achieve this is structured using promises. This gives us readable code, which mirrors the above list:

    // The validation process
    function validate(token, doSomething) {
        getKeys(region, poolId)
            .then(indexKeys)
            .then(findVerificationKey(token))
            .then(verifyToken(token))
            .then(checkTokenUse)
            .then(doSomething)
            .catch(handleError);
    }

For the purposes of this example, our do something is just printing out the result. We also handle errors by just printing out the error message:

    const doSomething = console.info;

    // Error handling
    function handleError(err) {
        console.error(err);
    }

So, we now have the structure of our validator. We will define these functions as we progress through this article, but first a brief interlude.

Retrieving the Public Keys

As just mentioned, the public keys we need are available from our issuer URL. The path we need in the URL is /.well-known/jwks.json.
So, we just construct the appropriate URL and use axios to retrieve the content. Because we are chaining promises together, the keys we get back from axios need to be wrapped in a new promise (ready for the next step to .then() the result):

    // Get our keys
    function getKeys(region, poolId) {
        let jwksUrl = issuer + "/.well-known/jwks.json";
        return axios.get(jwksUrl)
            .then(response => Promise.resolve(response.data.keys));
    }

This returns an array of keys associated with our Cognito user pool:

    [ { alg: 'RS256',
        e: 'AQAB',
        kid: 'cOC7t2DqhGQ6nW0C6PLUJmFjbJxKdfvTYDaNtrXKVvw=',
        kty: 'RSA',
        n: 'onEegrGePE6RXVwyr4QE...',
        use: 'sig' },
      { alg: 'RS256',
        e: 'AQAB',
        kid: 'oasMMVu5r1YbNMG+sI0/LgSTTG283WYO0vSQjl6gMVs=',
        kty: 'RSA',
        n: 'kOVH_KT2QChe6pKxPHMF...',
        use: 'sig' } ]

One of the challenges of working with keys and certificates is that there are multiple file formats for storing them. The JWT validation library we are using wants the keys in a format known as PEM. So, we are going to take each of the keys in the array, and convert them to PEM format.

If you think back to when we used the jwt.io debugger to look at our JWT, you will recall that there was a kid value in the JWT header. These are the kid fields in the above array of keys. So, at the same time as we convert to PEM format, we will also convert the structure from an array to an object, indexed on the kid. We can use Javascript's reduce() function to achieve this:

    // Index keys by "kid", and convert to PEM
    function indexKeys(keyList) {
        let result = keyList.reduce((keys, jwk) => {
            keys[jwk.kid] = jwkToPem(jwk);
            return keys;
        }, {});
        return Promise.resolve(result);
    }

This gives us an object of this format:

    { 'cOC7t2DqhGQ6nW0C6PLUJmFjbJxKdfvTYDaNtrXKVvw=': '-----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAonEegrGePE6RXVwyr4QE
    ...
    aSa0u4UBk81FjlRCMFbqDRwJ+Cu41a35cLbt7D28TYGX7LGiGAgBIzTqXXWwnWAe
    DQIDAQAB
    -----END PUBLIC KEY-----',
      'oasMMVu5r1YbNMG+sI0/LgSTTG283WYO0vSQjl6gMVs=': '-----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkOVH/KT2QChe6pKxPHMF
    ...-----' }

Verify Access from the JWT

Now we've prepped for verification, we can move on to the actual verification step itself. The first thing we need to do is identify the specific PEM used in the token. If you recall, this is in the JWT header, so we simply decode the JWT and look for the kid field in the header.

If we think back to how we've constructed this as a chain of promises, the parameter for this function needs to be the list of PEMs from the previous step. We also want to pass in our JWT (so we can get the kid), so we need to use a curried function to achieve this. What we pass back from the function is the specific PEM we've found (wrapped in a promise):

    // Now we need to decode our token, to find the verification key
    function findVerificationKey(token) {
        return (pemList) => {
            let decoded = jwt.decode(token, {complete: true});
            return Promise.resolve(pemList[decoded.header.kid]);
        };
    }

This gives us something like:

    -----BEGIN PUBLIC KEY-----
    MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkOVH/KT2QChe6pKxPHMF
    y/EmVjjBAZq9hdnvIDzbepFF0SXRfvcpvYp/eHF1wBXO8pZvsFC9PNpdYgAo/1jb
    P2lPkqvkz3rysvwjMCwbMySfctCSFIH3qeE3awWTBp5+vOsmfrlQqCV/lIbsqp9d
    uDcECEnD1Ow/KD/wxSRmDfdlBQqCCqve8YYQZX9RCDuj6PbwQINhqgxN4Whrj9XY
    7bYGIln9K/uZK/Osc9fee+PgC17ElHpxsWmaIhWz9Iutc0cRLe10D7feazJwN+Ge-----

Now we've got our PEM, we can validate the token.
This will check the signature, the timestamp, and the issuer:

    // Verify our token
    function verifyToken(token) {
        return (pem) => {
            let verified = jwt.verify(token, pem, verificationOptions);
            return Promise.resolve(verified);
        };
    }

This function then passes out the payload from the JWT, and we know that if we get here, it has been verified:

    { sub: 'f283baed-e6e8-4723-ac0f-69443f8cf08c',
      event_id: 'c0b91359-6688-11e8-be39-ed012137f487',
      token_use: 'access',
      scope: 'openid',
      auth_time: 1527959817,
      iss: '',
      exp: 1527963417,
      iat: 1527959817,
      version: 2,
      jti: '7e0aca62-0d7b-483a-ac23-2d597cb1349d',
      client_id: '4ep3ec3eat8jq0qeb1bf2ftt7v',
      username: 'ian' }

If we want additional validation, we can apply that ourselves. Here we are checking that we have been given an access token, but if we had other properties in here, we could check them similarly:

    // Check that we are using the token to establish access
    function checkTokenUse(verifiedToken) {
        if (verifiedToken.token_use === "access") {
            return Promise.resolve(verifiedToken);
        }
        throw new Error("Expected access token, got: " + verifiedToken.token_use);
    }

That's all the pieces in place, and we can just invoke our top-level function:

    // Run it!
    validate(token, doSomething);

To make this useful externally, we need to export our validate() function, instead of calling it (so we can require it when we want to use it):

    // Make validation available externally
    module.exports = validate;

If you want to take a look at the code discussed here, it's in my Github repo: ianfinch/jwt-parsing-example/tree/4cff3a4384223782d565d475dd5f72ec0708e011

The next article will demonstrate how we can combine the Cognito User Pool login with this validation code, in a NodeJS API server.
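As an aside, the jwt.decode() call used to find the kid relies on nothing more than base64url encoding; no cryptography is needed just to read the header. A minimal dependency-free sketch (the token built here is purely illustrative, not a real Cognito token):

```javascript
// Read the header from a JWT without any library.
// A JWT is three base64url segments joined by dots: header.payload.signature
function decodeHeader(token) {
    const headerB64 = token.split(".")[0];
    // base64url -> base64: swap the URL-safe characters back
    const b64 = headerB64.replace(/-/g, "+").replace(/_/g, "/");
    return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}

// Build an illustrative token (header and payload are made up here)
const demoHeader = Buffer.from(JSON.stringify({ alg: "RS256", kid: "example-key-id" }))
    .toString("base64")
    .replace(/=+$/, "");
const demoToken = demoHeader + ".e30.fake-signature";

console.log(decodeHeader(demoToken).kid);
```

Decoding is not verifying, of course: anyone can forge a header, so jwt.verify() against the matching PEM is still what establishes trust.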
Cool! And if the user chooses never to raise IndexError, they've got a keen way to model infinite lazy lists, too.

Question: I assume the following wouldn't work as is, but do you have in mind a way to make it work (special method name, whatever)?

class GenInts:
    def __init__(self, n):
        self.n = n
    def next(self):
        n = self.n
        try:
            self.n = n + 1
        except OverflowError:
            self.n = long(n) + 1
        return n

from2 = GenInts(2)
for prime in filter(no_proper_divisors, from2.next()):
    print 'they never seem to end!', prime

And if you can make that work, what would happen if I made similar changes in this:

i = GenInts(1)
j = GenInts(1)
for mystery_tuple in map(lambda x, y: (x, y), i.next(), j.next()):
    print mystery_tuple

I.e., once there's a generator-like object, there are subtle design questions about how and when its generator-like behavior is triggered.

One nit: I think overloading IndexError to mean "no more in the sequence" is as aesthetically repugnant as Steve's gonzo (but effective!) hack of making a liar out of "len". You haven't over-indexed the sequence, you've flowed over its end -- so OverflowError is the clear choice.

seriously-it-should-be-a-new-exception-and-not-called-"...Error"-at-all-ly y'rs - tim

Tim Peters tim@ksr.com
not speaking for Kendall Square Research Corp
Declaration of an identifier referring to an object or function is often referred to, for short, as simply a declaration of an object or function.

foo.h

#ifndef FOO_DOT_H /* This is an "include guard" */
#define FOO_DOT_H /* prevents the file from being included twice. */
                  /* Including a header file twice causes all kinds */
                  /* of interesting problems.*/

/**
 * This is a function declaration.
 * It tells the compiler that the function exists somewhere.
 */
void foo(int id, char *name);

#endif /* FOO_DOT_H */

foo.c

#include "foo.h" /* Always include the header file that declares something
                  * in the C file that defines it. This makes sure that the
                  * declaration and definition are always in-sync. Put this
                  * header first in foo.c to ensure the header is self-contained.
                  */
#include <stdio.h>

/**
 * This is the function definition.
 * It is the actual body of the function which was declared elsewhere.
 */
void foo(int id, char *name)
{
    fprintf(stderr, "foo(%d, \"%s\");\n", id, name);
    /* This will print how foo was called to stderr - standard error.
     * e.g., foo(42, "Hi!") will print `foo(42, "Hi!")` */
}

main.c

#include "foo.h"

int main(void)
{
    foo(42, "bar");
    return 0;
}

Compile and Link

First, we compile both foo.c and main.c to object files. Here we use the gcc compiler; your compiler may have a different name and need other options.

$ gcc -Wall -c foo.c
$ gcc -Wall -c main.c

Now we link them together to produce our final executable:

$ gcc -o testprogram foo.o main.o

Use of global variables is generally discouraged. It makes your program more difficult to understand, and harder to debug. But sometimes using a global variable is acceptable.

global.h

#ifndef GLOBAL_DOT_H /* This is an "include guard" */
#define GLOBAL_DOT_H

/**
 * This tells the compiler that g_myglobal exists somewhere.
 * Without "extern", this would create a new variable named
 * g_myglobal in _every file_ that included it. Don't miss this!
 */
extern int g_myglobal; /* _Declare_ g_myglobal, that is promise it will be
                        * _defined_ by some module. */

#endif /* GLOBAL_DOT_H */

global.c

#include "global.h" /* Always include the header file that declares something
                     * in the C file that defines it. This makes sure that the
                     * declaration and definition are always in-sync. */

int g_myglobal; /* _Define_ g_myglobal. As it lives in global scope it gets
                 * initialised to 0 on program start-up. */

main.c

#include "global.h"

int main(void)
{
    g_myglobal = 42;
    return 0;
}

See also How do I use extern to share variables between source files?.

Typedefs are declarations which have the keyword typedef in front, before the type. E.g.:

typedef int (*(*t0)())[5];

(You can technically put the typedef after the type too, like this: int typedef (*(*t0)())[5]; but this is discouraged.)

The above declaration declares an identifier for a typedef name. You can use it like this afterwards:

t0 pf;

Which will have the same effect as writing:

int (*(*pf)())[5];

As you can see, the typedef name "saves" the declaration as a type to use later for other declarations. This way you can save some keystrokes. Also, as a declaration using typedef is still a declaration, you are not limited to the above example:

t0 (*pf1);

Is the same as:

int (*(**pf1)())[5];

The "right-left" rule is a completely regular rule for deciphering C declarations. It can also be useful in creating them.

Read the symbols as you encounter them in the declaration...

*   as "pointer to"          - always on the left side
[]  as "array of"            - always on the right side
()  as "function returning"  - always on the right side

How to apply the rule

STEP 1

Find the identifier. This is your starting point. Then say to yourself, "identifier is." You've started your declaration.

STEP 2

Look at the symbols on the right of the identifier. If, say, you find () there, then you know that this is the declaration for a function. So you would then have "identifier is function returning".
Or if you found a [] there, you would say "identifier is array of". Continue right until you run out of symbols OR hit a right parenthesis ). (If you hit a left parenthesis (, that's the beginning of a () symbol, even if there is stuff in between the parentheses. More on that below.)

STEP 3

Look at the symbols to the left of the identifier. If it is not one of our symbols above (say, something like "int"), just say it. Otherwise, translate it into English using the table above. Keep going left until you run out of symbols OR hit a left parenthesis (.

Now repeat steps 2 and 3 until you've formed your declaration. Here are some examples:

int *p[];

First, find the identifier:

int *p[];
     ^
"p is"

Now, move right until out of symbols or right parenthesis hit.

int *p[];
      ^^
"p is array of"

Can't move right anymore (out of symbols), so move left and find:

int *p[];
    ^
"p is array of pointer to"

Keep going left and find:

int *p[];
^^^
"p is array of pointer to int".

(or "p is an array where each element is of type pointer to int")

Another example:

int *(*func())();

Find the identifier.

int *(*func())();
       ^^^^
"func is"

Move right.

int *(*func())();
           ^^
"func is function returning"

Can't move right anymore because of the right parenthesis, so move left.

int *(*func())();
      ^
"func is function returning pointer to"

Can't move left anymore because of the left parenthesis, so keep going right.

int *(*func())();
              ^^
"func is function returning pointer to function returning"

Can't move right anymore because we're out of symbols, so go left.

int *(*func())();
    ^
"func is function returning pointer to function returning pointer to"

And finally, keep going left, because there's nothing left on the right.

int *(*func())();
^^^
"func is function returning pointer to function returning pointer to int".

As you can see, this rule can be quite useful.
You can also use it to sanity check yourself while you are creating declarations, and to give you a hint about where to put the next symbol and whether parentheses are required.

Some declarations look much more complicated than they are due to array sizes and argument lists in prototype form. If you see [3], that's read as "array (size 3) of...". If you see (char *,int) that's read as "function expecting (char *,int) and returning...". Here's a fun one:

int (*(*fun_one)(char *,double))[9][20];

I won't go through each of the steps to decipher this one.

"fun_one is pointer to function expecting (char *,double) and returning pointer to array (size 9) of array (size 20) of int."

As you can see, it's not as complicated if you get rid of the array sizes and argument lists:

int (*(*fun_one)())[][];

You can decipher it that way, and then put in the array sizes and argument lists later.

Some final words: It is quite possible to make illegal declarations using this rule, so some knowledge of what's legal in C is necessary. For instance, if the above had been:

int *((*fun_one)())[][];

it would have read "fun_one is pointer to function returning array of array of pointer to int". Since a function cannot return an array, but only a pointer to an array, that declaration is illegal.

Illegal combinations include:

[]()  - cannot have an array of functions
()()  - cannot have a function that returns a function
()[]  - cannot have a function that returns an array

In all the above cases, you would need a set of parentheses to bind a * symbol on the left between these () and [] right-side symbols in order for the declaration to be legal.
Here are some more examples:

Legal

int i;            an int
int *p;           an int pointer (ptr to an int)
int a[];          an array of ints
int f();          a function returning an int
int **pp;         a pointer to an int pointer (ptr to a ptr to an int)
int (*pa)[];      a pointer to an array of ints
int (*pf)();      a pointer to a function returning an int
int *ap[];        an array of int pointers (array of ptrs to ints)
int aa[][];       an array of arrays of ints
int *fp();        a function returning an int pointer
int ***ppp;       a pointer to a pointer to an int pointer
int (**ppa)[];    a pointer to a pointer to an array of ints
int (**ppf)();    a pointer to a pointer to a function returning an int
int *(*pap)[];    a pointer to an array of int pointers
int (*paa)[][];   a pointer to an array of arrays of ints
int *(*pfp)();    a pointer to a function returning an int pointer
int **app[];      an array of pointers to int pointers
int (*apa[])[];   an array of pointers to arrays of ints
int (*apf[])();   an array of pointers to functions returning an int
int *aap[][];     an array of arrays of int pointers
int aaa[][][];    an array of arrays of arrays of int
int **fpp();      a function returning a pointer to an int pointer
int (*fpa())[];   a function returning a pointer to an array of ints
int (*fpf())();   a function returning a pointer to a function returning an int

Illegal

int af[]();       an array of functions returning an int
int fa()[];       a function returning an array of ints
int ff()();       a function returning a function returning an int
int (*pfa)()[];   a pointer to a function returning an array of ints
int aaf[][]();    an array of arrays of functions returning an int
int (*paf)[]();   a pointer to an array of functions returning an int
int (*pff)()();   a pointer to a function returning a function returning an int
int *afp[]();     an array of functions returning int pointers
int afa[]()[];    an array of functions returning an array of ints
int aff[]()();    an array of functions returning functions returning an int
int *fap()[];     a function returning an array of int pointers
int faa()[][];    a function returning an array of arrays of ints
int faf()[]();    a function returning an array of functions returning an int
int *ffp()();     a function returning a function returning an int pointer

Source:
https://sodocumentation.net/c/topic/3729/declarations
Re: i disagree

- From: "SuperGumby [SBS MVP]" <not@xxxxxxxxxxx>
- Date: Thu, 17 May 2007 12:40:46 +1000

SEE, even when I don't know the first thing about what I'm talkin' about, I'm still right. FIGJAM!! :-)

"Costas" <cpstechgroup@xxxxxxxxx> wrote in message news:4E9C5802-7E87-4110-AEC4-BA12E0238FE9@xxxxxxxxxxxxxxxx

I had to try it myself on a Windows Server 2003 R2 with SP2 and SBS 2003 R2 with SP2. It seems that the distinction is between 'domain-based multiple, stand-alone single' and you are correct in your interpretation. I always thought that you could create one namespace per computer, and even when I saw the KB article earlier, that's how I interpreted it. But after trying it, it seems it doesn't work like that.

"SuperGumby [SBS MVP]" <not@xxxxxxxxxxx> wrote in message news:e4LuF1BmHHA.4240@xxxxxxxxxxxxxxxxxxxxxxx

it's something I know nothing about but the way I read the article is

After you apply this update, Windows Server 2003, Standard Edition can support more than one domain-based DFS namespace. However, this update does not enable Windows Server 2003, Standard Edition to support more than one stand-alone DFS namespace.

Domain member, multiple. Standalone server, single.

or is the distinction domain vs standalone DFS? _can_ you have a 'standalone' DFS on a domain member and/or DC?

but like I say, I've played with this _nil_

"Costas" <cpstechgroup@xxxxxxxxx> wrote in message news:2115A672-9A44-46C8-86EE-0C8D09BB82E5@xxxxxxxxxxxxxxxx

It still doesn't work. Actually the link to the KB article states it too under "Notes": "However, Windows Server 2003, Standard Edition supports only one domain-based DFS namespace or one stand-alone DFS namespace per computer."

Just to clarify things... There can be multiple DFS namespaces pointing to different computers, but only 'one' per computer, when the O/S is Windows Server Standard.
Costas

"Kyle Blake" <KyleBlake@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:F7B02911-EFAB-42E6-9E1F-AA005F36333F@xxxxxxxxxxxxxxxx

Based on this link:;en-us;903651

I will be trying this on a Windows 2003 R2 Server with SP2 next week.
Introduction

Working with a GUI in a Java application is a rewarding technique, because it is up to you how the data is presented in GUI mode.

Java Programming IDE:

For this Java application we will use the NetBeans IDE, because I have experience with NetBeans, but you can use any other IDE, such as Eclipse or DrJava, according to your own experience. Creating a project in Eclipse is just as simple as creating one in NetBeans 7.3 IDE, which we are about to do.

Creating a Project on NetBeans IDE:

Figure 1: Opening NetBeans 7.3 IDE

Right-click on the NetBeans IDE icon and select "Run as administrator"; a message box will appear, and after you click Yes, the NetBeans IDE will open as shown in (Figure 1). Right-clicking is not strictly necessary: you can also open it from the Start menu by clicking the NetBeans IDE entry in the Programs list.

Figure 2: NetBeans 7.3 IDE is now fully ready to create a project for any Java application

In (Figure 2) we click on the File menu in the menu bar and then select the New Project option; we can also use the shortcut keys for this option, Ctrl+Shift+N. A new window will then appear, from which we will choose the project type for this JavaIPApplication. As software developers we should remember the shortcut keys of any application; since there is a standard set of shortcuts, it is not a difficult job to remember them.

Figure 3: Select a new project type, using Categories and Projects

In this window (Figure 3), we select Java from the Categories list and Java Application from the Projects list. After selecting the given options, click the Next button. There are many other options in the two lists (Categories & Projects), but for this application we will just work with the first option from each list.
Figure 4: New Java Application window, where we give a name to our application

In (Figure 4), give a name to your project (I named it "JavaIPApplication"), then click the Finish button. Most importantly, if you want to change the main class name of this application, do it now, because Java applications are commonly saved under the main class name. You can also uncheck the last option, "Create Main Class". If you uncheck it, you will have to create the class yourself; it will not be created automatically by NetBeans 7.3 IDE.

Figure 5: Ready to start writing the Java code for JavaIPApplication

As you can see in (Figure 5), NetBeans 7.3 IDE is fully ready for us to start programming. First of all we will import some classes that we need in this application. Notice in (Figure 5) that some Java code has already been written automatically; this is because we left the "Create Main Class" option checked in (Figure 4).

Listing 1: Import Java classes

import javax.swing.JOptionPane;
import java.net.*;

By importing these two things into our project, we can use "import javax.swing.JOptionPane;" to design our application in GUI mode, and "import java.net.*;" to look up the IP address of any website. So let's start programming in Java.

Listing 2: Java code for JavaIPApplication.java:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package javaipapplication;

//importing the swing class to use JOptionPane for the GUI mode
import javax.swing.JOptionPane;
//importing the networking classes so we can get the IP address via the InetAddress class
import java.net.*;

/**
 *
 * @author FAISAL ABDULLAH
 */
public class JavaIPApplication {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        // TODO code application logic here
        //declaring a variable for taking input from the user
        String WebsiteName = JOptionPane.showInputDialog("Enter Website Name To get its IP:");
        try {
            //declaring "address" as an object of the InetAddress class to get the IP address of the given website name
            InetAddress address = InetAddress.getByName(WebsiteName);
            //using JOptionPane to show a message as output with the found IP address of the website
            JOptionPane.showMessageDialog(null, "The IP Address is " + address, "JavaIPFinder", JOptionPane.PLAIN_MESSAGE);
        } catch (UnknownHostException e) {
            //using JOptionPane to show a message as output when the IP address of the given website cannot be found
            JOptionPane.showMessageDialog(null, "Sorry!!! Could not find the IP of " + WebsiteName, "JavaIPFinder", JOptionPane.PLAIN_MESSAGE);
        }
    }
}

Running the Application:

Once we have designed the JavaIPApplication, we run it.
Figure 8: JavaIPApplication is working and shows the IP of Google As in (Figure 7) we pressed the OK button it is now showing the IP address of Google in (Figure 8). Now if we use the IP address which is in (Figure 8). So let’s we try to visit the Google website by using the IP address 173.194.35.113. Figure 9: Accessing the Google website by using its IP address So we done it, it’s a great achievement by Java Programming, as you see in (Figure 9) we accessed the Google website by using the IP address. You can use other websites to get their IP address and access them by their IP address not by their domain name. Figure 10: showing Localhost web server IP address But when you execute this JavaIPApplication and if you don’t give it any website name and directly press the OK button, it will show the localhost IP address. As show in (Figure 10), by which we can access the Localhost web server by its IP address which is 127.0.0.1 according to the JavaIPApplication. There are many other beautiful techniques in Java Programming which are very helpful to use them and create different type of helpful applications in Java Programming. I hope you will use this application for you education purposes, don’t try to tease any one. Description of the Java Application: As we used many things in this application to check the IP address of any website. For example we imported and also used Network classes and objects. Which are given below: Listing 3: Details of Code 6 import javax.swing.JOptionPane; 7 import java.net.*; 24 InetAddress address = InetAddress.getByName(WebsiteName); These three lines 6, 7 and 24 are very important to understand, why we used them in this application? On line number 6 we imported the swing class JOptionPane by which we worked with GUI (Graphical User Interface), and line 7 we imported the Network class of Java Programming Language by which we used the InetAddress class on line 24. 
On line 22 we declared a String variable named "WebsiteName", and used the JOptionPane method showInputDialog() to get input from the user, i.e.:

22 String WebsiteName = JOptionPane.showInputDialog("Enter Website Name To get its IP:");

On line 24 we used the networking class InetAddress and created an object we named "address" (you can change its name as you want). We then called the InetAddress member function getByName(), passing the WebsiteName variable declared on line 22 as its argument, i.e.:

24 InetAddress address = InetAddress.getByName(WebsiteName);

Once all this is done, we use the JOptionPane member function showMessageDialog() to output the IP address if it is found. We also used exception handling for this purpose, i.e. try{} and catch(){}: when the IP of the given website is found, it is shown in the output; if the IP address cannot be found, the message from the catch(){} handler is shown instead.

Conclusion:

In this article we learnt how to get the IP address of any website using Java, and we also learnt how to use several Java classes along the way. I hope you enjoyed this article and learnt something from it.
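For readers who want to try the same InetAddress lookup without the Swing dialogs, here is a console-only sketch. The IpLookup class and its lookup method are my own names, not from the article; the resolution call is the same getByName() used above:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class IpLookup {

    // Resolve a host name to its textual IP address, or report failure.
    public static String lookup(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return "not found: " + host;
        }
    }

    public static void main(String[] args) {
        // Default to localhost so the sketch also works offline.
        String host = args.length > 0 ? args[0] : "localhost";
        System.out.println(host + " -> " + lookup(host));
    }
}
```

Using getHostAddress() returns just the numeric address, whereas printing the InetAddress object directly (as the article does) prints it in "hostname/address" form.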
Hi. Beginner here. I have a question about a Java program on manipulation of photos. I want to learn how to rotate a photo 90 degrees, rotate it 180 degrees, enlarge it by doubling its width and length, and stitch photos together. So far I have written a code segment on how to copy a photo:

public class PhotoTools {

    public static Photograph copy(Photograph photo) {
        Photograph copy_of_photo = new Photograph(photo.getWidth(), photo.getHeight()); /* create a new photo with the same dimensions */
        for (int row = 0; row < photo.getWidth(); row++) {      // 2 loops going through each column then row
            for (int col = 0; col < photo.getHeight(); col++) {
                Pixel copy = photo.getPixel(row, col);          // get the pixel in the photo
                copy_of_photo.setPixel(row, col, copy);         // copy the pixel to the new photo
            }
        }
        return copy_of_photo; // return the new copied photo
    }
}

---------------------------------------------------------------------------

Please do not give me actual code. I was hoping I could get hints on ways to approach the other methods. Thank you for your time.
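For anyone reading this thread later (the original poster asked for hints only, so skip this if you want to work it out yourself): the crux of the rotation methods is an index mapping. Here is a generic sketch on plain int matrices, deliberately avoiding the Photograph/Pixel API from the assignment:

```java
import java.util.Arrays;

public class RotateSketch {

    // Rotate a rows x cols matrix 90 degrees clockwise:
    // element (r, c) moves to (c, rows - 1 - r) in a cols x rows result.
    static int[][] rotate90(int[][] src) {
        int rows = src.length, cols = src[0].length;
        int[][] dst = new int[cols][rows];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                dst[c][rows - 1 - r] = src[r][c];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] a = { {1, 2, 3}, {4, 5, 6} };
        System.out.println(Arrays.deepToString(rotate90(a)));
        // Rotating twice gives the 180-degree version.
        System.out.println(Arrays.deepToString(rotate90(rotate90(a))));
    }
}
```

The 180-degree and doubling methods follow the same pattern with different mappings: 180 degrees sends (r, c) to (rows - 1 - r, cols - 1 - c), and doubling sends each source cell to a 2x2 block of the destination.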
The Raw data structure: continuous data¶

This tutorial covers the basics of working with raw EEG/MEG data in Python. It introduces the Raw data structure in detail, including how to load, query, subselect, export, and plot data from a Raw object. For more info on visualization of Raw objects, see Built-in plotting methods for Raw objects. For info on creating a Raw object from simulated data in a NumPy array, see Creating MNE's data structures from scratch.

As usual we'll start by importing the modules we need:

import os
import numpy as np
import matplotlib.pyplot as plt
import mne

As mentioned in the introductory tutorial, MNE-Python data structures are based around the .fif file format from Neuromag. This tutorial uses an example dataset in .fif format, so here we'll use the function mne.io.read_raw_fif() to load the raw data; there are reader functions for a wide variety of other data formats as well.

There are also several other example datasets that can be downloaded with just a few lines of code. Functions for downloading example datasets are in the mne.datasets submodule; here we'll use mne.datasets.sample.data_path() to download the "Sample" dataset, which contains EEG, MEG, and structural MRI data from one subject performing an audiovisual experiment. When it's done downloading, data_path() will return the folder location where it put the files; you can navigate there with your file browser if you want to examine the files yourself. Once we have the file path, we can load the data with read_raw_fif(). This will return a Raw object, which we'll store in a variable called raw.
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)

As you can see above, read_raw_fif() automatically displays some information about the file it's loading. For example, here it tells us that there are three "projection items" in the file along with the recorded data; those are SSP projectors calculated to remove environmental noise from the MEG signals, and are discussed in the tutorial Background on projectors and projections. In addition to the information displayed during loading, you can get a glimpse of the basic details of a Raw object by printing it:

print(raw)

Out:

<Raw | sample_audvis_raw.fif, n_channels x n_times : 376 x 166800 (277.7 sec), ~3.7 MB, data not loaded>

By default, the mne.io.read_raw_* family of functions will not load the data into memory (instead the data on disk are memory-mapped, meaning the data are only read from disk as-needed). Some operations (such as filtering) require that the data be copied into RAM; to do that we could have passed the preload=True parameter to read_raw_fif(), but we can also copy the data into RAM at any time using the load_data() method. However, since this particular tutorial doesn't do any serious analysis of the data, we'll first crop() the Raw object to 60 seconds so it uses less memory and runs more smoothly on our documentation server.

raw.crop(tmax=60)

Out:

Reading 0 ... 36037 = 0.000 ... 60.000 secs...

Querying the Raw object¶

We saw above that printing the Raw object displays some basic information like the total number of channels, the number of time points at which the data were sampled, total duration, and the approximate size in memory. Much more information is available through the various attributes and methods of the Raw class.
Some useful attributes of Raw objects include a list of the channel names (ch_names), an array of the sample times in seconds (times), and the total number of samples (n_times); a list of all attributes and methods is given in the documentation of the Raw class.

The Raw.info attribute¶

There is also quite a lot of information stored in the raw.info attribute, which stores an Info object that is similar to a Python dictionary (in that it has fields accessed via named keys). Like Python dictionaries, raw.info has a .keys() method that shows all the available field names; unlike Python dictionaries, printing raw.info will print a nicely-formatted glimpse of each field's data. See The Info data structure for more on what is stored in Info objects, and how to interact with them.

n_time_samps = raw.n_times
time_secs = raw.times
ch_names = raw.ch_names
n_chan = len(ch_names)  # note: there is no raw.n_channels attribute
print('the (cropped) sample data object has {} time samples and {} channels.'
      ''.format(n_time_samps, n_chan))
print('The last time sample is at {} seconds.'.format(time_secs[-1]))
print('The first few channel names are {}.'.format(', '.join(ch_names[:3])))
print()  # insert a blank line in the output

# some examples of raw.info:
print('bad channels:', raw.info['bads'])  # chs marked "bad" during acquisition
print(raw.info['sfreq'], 'Hz')            # sampling frequency
print(raw.info['description'], '\n')      # miscellaneous acquisition info
print(raw.info)

Out:

the (cropped) sample data object has 36038 time samples and 376 channels.
The last time sample is at 60.000167471573526 seconds.
The first few channel names are MEG 0113, MEG 0112, MEG 0111.

bad channels: ['MEG 2443', 'EEG 053']
600.614990234375 Hz
acquisition (megacq) VectorView system at NMR-MGH

<Info | 24 non-empty fields
    acq_pars : str | 13886 items
    bads : list | MEG 2443, EEG 053
    ch_names : list | MEG 0113, MEG 0112, MEG 0111, MEG 0122, MEG 0123, ...
    chs : list | 376 items (GRAD: 204, MAG: 102, STIM: 9, EEG: 60, EOG: 1)
    comps : list | 0 items
    custom_ref_applied : bool | False
    description : str | 49 items
    dev_head_t : Transform | 3 items
    dig : Digitization | 146 items (3 Cardinal, 4 HPI, 61 EEG, 78 Extra)
    events : list | 1 items
    experimenter : str | 3 items
    file_id : dict | 4 items
    highpass : float | 0.10000000149011612 Hz
    hpi_meas : list | 1 items
    hpi_results : list | 1 items
    lowpass : float | 172.17630004882812 Hz
    meas_date : tuple | 2002-12-03 19:01:10 GMT
    meas_id : dict | 4 items
    nchan : int | 376
    proc_history : list | 0 items
    proj_id : ndarray | 1 items
    proj_name : str | 4 items
    projs : list | PCA-v1: off, PCA-v2: off, PCA-v3: off
    sfreq : float | 600.614990234375 Hz
    acq_stim : NoneType
    ctf_head_t : NoneType
    dev_ctf_t : NoneType
    device_info : NoneType
    gantry_angle : NoneType
    helium_info : NoneType
    hpi_subsystem : NoneType
    kit_system_id : NoneType
    line_freq : NoneType
    subject_info : NoneType
    utc_offset : NoneType
    xplotter_layout : NoneType
>

Note

Most of the fields of raw.info reflect metadata recorded at acquisition time, and should not be changed by the user. There are a few exceptions (such as raw.info['bads'] and raw.info['projs']), but in most cases there are dedicated MNE-Python functions or methods to update the Info object safely (such as add_proj() to update raw.info['projs']).

Time, sample number, and sample index¶

One method of Raw objects that is frequently useful is time_as_index(), which converts a time (in seconds) into the integer index of the sample occurring closest to that time. The method can also take a list or array of times, and will return an array of indices.
It is important to remember that there may not be a data sample at exactly the time requested, so the number of samples between time = 1 second and time = 2 seconds may be different than the number of samples between time = 2 and time = 3:

print(raw.time_as_index(20))
print(raw.time_as_index([20, 30, 40]))
print(np.diff(raw.time_as_index([1, 2, 3])))

Out:

[12012]
[12012 18018 24024]
[601 600]

Modifying Raw objects¶

Raw objects have a number of methods that modify the Raw instance in-place and return a reference to the modified instance. This can be useful for method chaining (e.g., raw.crop(...).filter(...).pick_channels(...).plot()) but it also poses a problem during interactive analysis: if you modify your Raw object for an exploratory plot or analysis (say, by dropping some channels), you will then need to re-load the data (and repeat any earlier processing steps) to undo the channel-dropping and try something else. For that reason, the examples in this section frequently use the copy() method before the other methods being demonstrated, so that the original Raw object is still available in the variable raw for use in later examples.

Selecting, dropping, and reordering channels¶

Altering the channels of a Raw object can be done in several ways.
As a first example, we'll use the pick_types() method to restrict the Raw object to just the EEG and EOG channels:

eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True)
print(len(raw.ch_names), '→', len(eeg_and_eog.ch_names))

Out:

376 → 60

Similar to the pick_types() method, there is also the pick_channels() method to pick channels by name, and a corresponding drop_channels() method to remove channels by name:

raw_temp = raw.copy()
print('Number of channels in raw_temp:')
print(len(raw_temp.ch_names), end=' → drop two → ')
raw_temp.drop_channels(['EEG 037', 'EEG 059'])
print(len(raw_temp.ch_names), end=' → pick three → ')
raw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061'])
print(len(raw_temp.ch_names))

Out:

Number of channels in raw_temp:
376 → drop two → 374 → pick three → 3

If you want the channels in a specific order (e.g., for plotting), reorder_channels() works just like pick_channels() but also reorders the channels; for example, here we pick the EOG and frontal EEG channels, putting the EOG first and the EEG in reverse order:

channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']
eog_and_frontal_eeg = raw.copy().reorder_channels(channel_names)
print(eog_and_frontal_eeg.ch_names)

Out:

['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']

Changing channel name and type¶

You may have noticed that the EEG channel names in the sample data are numbered rather than labelled according to a standard nomenclature such as the 10-20 or 10-05 systems, or perhaps it bothers you that the channel names contain spaces. It is possible to rename channels using the rename_channels() method, which takes a Python dictionary to map old names to new names. You need not rename all channels at once; provide only the dictionary entries for the channels you want to rename.
Here’s a frivolous example, renaming a single channel: raw.rename_channels({'EOG 061': 'blink detector'}) This next example replaces spaces in the channel names with underscores, using a Python dict comprehension: print(raw.ch_names[-3:]) channel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names} raw.rename_channels(channel_renaming_dict) print(raw.ch_names[-3:]) Out: ['EEG 059', 'EEG 060', 'blink detector'] ['EEG_059', 'EEG_060', 'blink_detector'] If for some reason the channel types in your Raw object are inaccurate, you can change the type of any channel with the set_channel_types() method. The method takes a dictionary mapping channel names to types; allowed types are ecg, eeg, emg, eog, exci, ias, misc, resp, seeg, stim, syst, ecog, hbo, hbr. A common use case for changing channel type is when using frontal EEG electrodes as makeshift EOG channels: raw.set_channel_types({'EEG_001': 'eog'}) print(raw.copy().pick_types(meg=False, eog=True).ch_names) Out: ['EEG_001', 'blink_detector'] Selection in the time domain¶ If you want to limit the time domain of a Raw object, you can use the crop() method, which modifies the Raw object in place (we’ve seen this already at the start of this tutorial, when we cropped the Raw object to 60 seconds to reduce memory demands). crop() takes parameters tmin and tmax, both in seconds (here we’ll again use copy() first to avoid changing the original Raw object): raw_selection = raw.copy().crop(tmin=10, tmax=12.5) print(raw_selection) Out: <Raw | sample_audvis_raw.fif, n_channels x n_times : 376 x 1503 (2.5 sec), ~8.0 MB, data loaded> crop() also modifies the first_samp and times attributes, so that the first sample of the cropped object now corresponds to time = 0.
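The crop output above (1503 samples described as "2.5 sec", with a max time slightly over 2.5 s) follows from snapping tmin and tmax to sample boundaries. The arithmetic can be sketched in plain Python; the sampling frequency below is the sample dataset's rate (assumed here), and the rounding mimics, but is not guaranteed to match, MNE's internals:

```python
# Why cropping from tmin=10 to tmax=12.5 yields 1503 samples and a max
# time of ~2.50077 s rather than exactly 2.5 s. sfreq is the (assumed)
# sampling rate of the MNE sample dataset.
sfreq = 600.614990234375

start = round(10.0 * sfreq)    # nearest sample to tmin -> 6006
stop = round(12.5 * sfreq)     # nearest sample to tmax -> 7508
n_samples = stop - start + 1   # both endpoints kept -> 1503

# After cropping, times are reset to start at 0, so the last time is:
tmax_actual = (n_samples - 1) / sfreq
print(n_samples, tmax_actual)  # slightly more than the requested 2.5 s span
```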
Accordingly, if you wanted to re-crop raw_selection from 11 to 12.5 seconds (instead of 10 to 12.5 as above) then the subsequent call to crop() should get tmin=1 (not tmin=11), and leave tmax unspecified to keep everything from tmin up to the end of the object: print(raw_selection.times.min(), raw_selection.times.max()) raw_selection.crop(tmin=1) print(raw_selection.times.min(), raw_selection.times.max()) Out: 0.0 2.500770084699155 0.0 1.5001290587975622 Remember that sample times don’t always align exactly with requested tmin or tmax values (due to sampling), which is why the max values of the cropped files don’t exactly match the requested tmax (see Time, sample number, and sample index for further details). If you need to select discontinuous spans of a Raw object — or combine two or more separate Raw objects — you can use the append() method: raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1) # 0.1 seconds raw_selection2 = raw.copy().crop(tmin=40, tmax=41.1) # 1.1 seconds raw_selection3 = raw.copy().crop(tmin=50, tmax=51.3) # 1.3 seconds raw_selection1.append([raw_selection2, raw_selection3]) # 2.5 seconds total print(raw_selection1.times.min(), raw_selection1.times.max()) Out: 0.0 2.5041000049184614 Extracting data from Raw objects¶ So far we’ve been looking at ways to modify a Raw object. This section shows how to extract the data from a Raw object into a NumPy array, for analysis or plotting using functions outside of MNE-Python. To select portions of the data, Raw objects can be indexed using square brackets. However, indexing Raw works differently than indexing a NumPy array in two ways: Along with the requested sample value(s) MNE-Python also returns an array of times (in seconds) corresponding to the requested samples. The data array and the times array are returned together as elements of a tuple. The data array will always be 2-dimensional even if you request only a single time sample or a single channel. 
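To make those two indexing rules concrete, here is a toy class (not MNE's implementation) whose square-bracket indexing behaves the same way: it returns a (data, times) tuple and keeps the data 2-dimensional even for a single channel. It only handles slices of samples, which is enough for the illustration:

```python
# Toy illustration of Raw-style indexing: obj[channels, samples] returns
# (data, times), and data stays 2-D even when one channel is requested.
class RawLike:
    def __init__(self, data, sfreq):
        self.data = data    # list of channels, each a list of samples
        self.sfreq = sfreq

    def __getitem__(self, item):
        ch, samp = item                  # e.g. raw_like[0, 1:3]
        if isinstance(ch, int):
            ch = slice(ch, ch + 1)       # keep the data array 2-D
        picked = [row[samp] for row in self.data[ch]]
        n0 = samp.start or 0
        times = [i / self.sfreq for i in range(n0, n0 + len(picked[0]))]
        return picked, times

raw_like = RawLike([[1, 2, 3, 4], [5, 6, 7, 8]], sfreq=2.0)
data, times = raw_like[0, 1:3]
print(data)   # [[2, 3]] — a single channel, but still 2-D
print(times)  # [0.5, 1.0]
```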
Extracting data by index¶ To illustrate the above two points, let’s select a couple seconds of data from the first channel: sampling_freq = raw.info['sfreq'] start_stop_seconds = np.array([11, 13]) start_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int) channel_index = 0 raw_selection = raw[channel_index, start_sample:stop_sample] print(raw_selection) Out: (array([[-3.85742192e-12, -3.85742192e-12, -9.64355481e-13, ..., 2.89306644e-12, 3.85742192e-12, 3.85742192e-12]]), array([10.99872648, 11.00039144, 11.0020564 , ..., 12.9933487 , 12.99501366, 12.99667862])) You can see that it contains 2 arrays. This combination of data and times makes it easy to plot selections of raw data (although note that we’re transposing the data array so that each channel is a column instead of a row, to match what matplotlib expects when plotting 2-dimensional y against 1-dimensional x): x = raw_selection[1] y = raw_selection[0].T plt.plot(x, y) Extracting channels by name¶ The Raw object can also be indexed with the names of channels instead of their index numbers. You can pass a single string to get just one channel, or a list of strings to select multiple channels. As with integer indexing, this will return a tuple of (data_array, times_array) that can be easily plotted. Since we’re plotting 2 channels this time, we’ll add a vertical offset to one channel so it’s not plotted right on top of the other one: channel_names = ['MEG_0712', 'MEG_1022'] two_meg_chans = raw[channel_names, start_sample:stop_sample] y_offset = np.array([5e-11, 0]) # just enough to separate the channel traces x = two_meg_chans[1] y = two_meg_chans[0].T + y_offset lines = plt.plot(x, y) plt.legend(lines, channel_names) Extracting channels by type¶ There are several ways to select all channels of a given type from a Raw object. 
The safest method is to use mne.pick_types() to obtain the integer indices of the channels you want, then use those indices with the square-bracket indexing method shown above. The pick_types() function uses the Info attribute of the Raw object to determine channel types, and takes boolean or string parameters to indicate which type(s) to retain. The meg parameter defaults to True, and all others default to False, so to get just the EEG channels, we pass eeg=True and meg=False: eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True) eeg_data, times = raw[eeg_channel_indices] print(eeg_data.shape) Out: (58, 36038) Some of the parameters of mne.pick_types() accept string arguments as well as booleans. For example, the meg parameter can take values 'mag', 'grad', 'planar1', or 'planar2' to select only magnetometers, all gradiometers, or a specific type of gradiometer. See the docstring of mne.pick_types() for full details. The Raw.get_data() method¶ If you only want the data (not the corresponding array of times), Raw objects have a get_data() method. Used with no parameters specified, it will extract all data from all channels, in a (n_channels, n_timepoints) NumPy array: data = raw.get_data() print(data.shape) Out: (376, 36038) If you want the array of times, get_data() has an optional return_times parameter: data, times = raw.get_data(return_times=True) print(data.shape) print(times.shape) Out: (376, 36038) (36038,) The get_data() method can also be used to extract specific channel(s) and sample ranges, via its picks, start, and stop parameters. The picks parameter accepts integer channel indices, channel names, or channel types, and preserves the requested channel order given as its picks parameter. 
first_channel_data = raw.get_data(picks=0) eeg_and_eog_data = raw.get_data(picks=['eeg', 'eog']) two_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'], start=1000, stop=2000) print(first_channel_data.shape) print(eeg_and_eog_data.shape) print(two_meg_chans_data.shape) Out: (1, 36038) (61, 36038) (2, 1000) Summary of ways to extract data from Raw objects¶ The following table summarizes the various ways of extracting data from a Raw object. Exporting and saving Raw objects¶ Raw objects have a built-in save() method, which can be used to write a partially processed Raw object to disk as a .fif file, such that it can be re-loaded later with its various attributes intact (but see Floating-point precision for an important note about numerical precision when saving). There are a few other ways to export just the sensor data from a Raw object. One is to use indexing or the get_data() method to extract the data, and use numpy.save() to save the data array: It is also possible to export the data to a Pandas DataFrame object, and use the saving methods that Pandas affords. The Raw object’s to_data_frame() method is similar to get_data() in that it has a picks parameter for restricting which channels are exported, and start and stop parameters for restricting the time domain. Note that, by default, times will be converted to milliseconds, rounded to the nearest millisecond, and used as the DataFrame index; see the scaling_time parameter in the documentation of to_data_frame() for more details. sampling_freq = raw.info['sfreq'] start_end_secs = np.array([10, 13]) start_sample, stop_sample = (start_end_secs * sampling_freq).astype(int) df = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample) # then save using df.to_csv(...), df.to_hdf(...), etc print(df.head()) Out: Converting "time" to "<class 'numpy.int64'>"... channel EEG_002 ... EEG_060 time ... 10000 38.478851 ... 69.522829 10001 36.128997 ... 70.692262 10003 35.894012 ... 
70.809205 10005 37.245177 ... 70.107545 10006 38.478851 ... 70.692262 [5 rows x 59 columns] Note When exporting data as a NumPy array or Pandas DataFrame, be sure to properly account for the unit of representation in your subsequent analyses. Total running time of the script: ( 0 minutes 23.376 seconds) Estimated memory usage: 209 MB
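The millisecond-rounded time index shown in the DataFrame output above (10000, 10001, 10003, 10005, ... — note the skipped values) can be approximated without pandas using only the standard library. The sampling frequency below is the sample dataset's (assumed) rate, and the channel values are fake, for illustration only:

```python
# Pandas-free sketch of to_data_frame()'s default time index: times are
# converted to milliseconds and rounded, so some ms values get skipped.
import csv, io

sfreq = 600.614990234375            # assumed sample-dataset rate
start_sample = round(10 * sfreq)    # sample index nearest t = 10 s

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['time_ms', 'EEG_002'])   # index column + one channel
for i, value in enumerate([38.47, 36.13, 35.89, 37.25, 38.48]):  # fake data
    t_ms = round((start_sample + i) / sfreq * 1000)  # ms-rounded index
    writer.writerow([t_ms, value])
print(buf.getvalue())
```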
https://mne.tools/stable/auto_tutorials/raw/plot_10_raw_overview.html
Hello, Here are the minutes of today's ESC meeting. See you soon, Sophie -------- Forwarded message -------- Subject: [Libreoffice-qa] minutes of ESC call ... Date: Thu, 12 Apr 2018 15:56:18 +0100 From: Michael Meeks <michael.me...@collabora.com> To: libreoffice-dev <libreoff...@lists.freedesktop.org>, libreoffice...@lists.freedesktop.org * Present: + Christian, Eike, Michael S, Lionel, Stephan, Xisco, Olivier, Caolan, Miklos, Heiko, Sophie, Michael W, Michael M, Kendy * Completed Action Items: + come up with slot count (Heiko, Markus, Thorsten) * Pending Action Items: * Release Engineering update (Christian) + 5.4.7 RC1 – status + tagged yesterday, Windows builds up-loaded + 6.0.4 – RC1 – due to tag next week. + 6.1.0 alpha1 – April 24th… Feature Freeze May 24th + Android + Online * Documentation (Olivier) + wiki page for direct gerrit edition + + useful for fixing typos and linguistic of help pages. + many patches from Hamburg Hackfest from SophiaS & co. + Patch to allow new Help to be installed as UNO package (extension) + + + review & submission of corrections to help-pages on-going + poor presence in guides meetings. * Timing – Thorsten requested: + new doodle poll; can we find a better time ? AI: + create the poll(Thorsten) * HSQLDB -> Firebird migration plan (Lionel, Miklos) + reverted initial removal of HSQLDB + based on feedback – Tomi will add a warning dialog (Miklos) + user can choose whether the migration is wanted or not. + would be good to know what the transition period should be too. + would be good to wait until firebird is present in LTS releases (Lionel) + before removing HSQLDB + how long ? (Miklos) + Ubuntu – every 2 years (Lionel) + say 2-3 years, if Debian. + other questions (Michael) + what is default ? + do people really exchange DB’s by E-mail ? + we migrate data – how much do we need it ? + eager to keep HSQLDB syntax / engine (Lionel) + for transition - something like a programming tool + 2-3 years looks good for Lionel.
+ when do we make firebird the default ? (Lionel) + can be for the next release when its ready. + need to get an overview of the bugs + Metabug: + may already be in a better state than HSQLDB + not opposed with this being a default for next release. + exchanging DB’s by E-mail can happen (Lionel) + know some groups that do. * UX Update (Heiko) + Bugzilla (topicUI) statistics 250(250) (topicUI) bugs open, 328(328) (needsUXEval) needs to be evaluated by the UXteam + Updates: BZ changes 1 week 1 month 3 months 12 months added 8(1) 16(-1) 41(-1) 117(0) commented 70(26) 219(27) 535(41) 2040(44) removed 1(1) 3(2) 3(2) 16(3) resolved 6(3) 18(3) 38(3) 182(2) + top 10 contributors: Tietze, Heiko made 114 changes in 1 month, and 860 changes in 1 year Thomas Lendo made 62 changes in 1 month, and 447 changes in 1 year Buovjaga made 40 changes in 1 month, and 234 changes in 1 year Dieter Praas made 36 changes in 1 month, and 135 changes in 1 year Xisco Faulí made 26 changes in 1 month, and 315 changes in 1 year Mehrbrodt, Samuel made 25 changes in 1 month, and 49 changes in 1 year Foote, V Stuart made 24 changes in 1 month, and 291 changes in 1 year Henschel, Regina made 19 changes in 1 month, and 129 changes in 1 year Kaganski, Mike made 11 changes in 1 month, and 44 changes in 1 year Faure, Jean-Baptiste made 10 changes in 1 month, and 21 changes in 1 year + Sifr for Hicontrast + this is there for a11y reasons + agreed by Hypra + Sifr is small compared to hicontrast (Michael) + so if you want to see things with hicontrast you need these + hicontrast for people with color / contrast disability (Kendy) + from what recall from Sifr – grey not high contrast. + Possibly solved by more modern hardware / shaders cf. The bug: + + was concerned too (Caolan) => turn it into an extension. + Badly readable styles at sidebar tdf#115507 and similar Closed as WFM now * Crash Reporting (Caolan) + 28(+22) import failure, 3(+0) export failures + big jump here. 
Old bug not showing up until std::unique_ptr + could have been a cause of random crashes; now fixed. + one outstanding assert caused by FastParser (Michael S) + NS handling in xmloff not quite right + only works with the normal namespace prefixes + prefix → integer → string; 2nd mapping is static + have CC’d Azorpid on it. + 7 (-25, +3) coverity issues + first results after generictypes mending + forcepoint round 6, still not complete. + one extra fixed, 2/3 remaining + a few bits to go through. + oss-fuzz reporting only small things. * Crash Reporting (Xisco) + + 732 (last 7 days) (up) + + 1071 (last 7 days) (-) + + 510 (last 7 days) (up) + + 656 (last 7 days) (-) + + 1874 (last 7 days) (down) + + 633 (last 7 days) (up) + so far crash numbers is ~30% of 6.0.2.1 – but less deployment. + got rid of top crash in previous release. + good to scale by number of downloads (Michael) + could compare versions based on # of crashes (Xisco) + per week at that release stage * GSoC Application (Thorsten, Heiko) + asked for 9-12 slots, got 11 slots. + need to fill slots with students & mentors + color-coded the voting document (cf. Above) + pre-filtering with ignoring at hack-fest (Thorsten) + a number with ‘ignored’ there – but do challenge that if things have changed / more data etc. + can put their status back to ignore if you feel strongly. + deadline is Monday to get things assigned. => Mentors need to accept students so we can assign them to slots. AI: + reach out to poke mentors to encourage button clicking (Heiko) * GSoC schedule + Student Project Selection: Tuesday, April 17th at 16:00 UTC + + * Hamburg Hackfest retrospective + beautiful location (Michael M) + great, thanks to organizers (Miklos) + first time for dedicated mentors – seemed to work well + agreed – very nice (Michael S) + achievements? - * Hackfests & Events + OSCAL is coming (Xisco) + May in Tirana – Heiko, Florian, Italo there. + perhaps a hack-fest there. + Turkey – OYLG ? 
May 12th-13th + UK Sheffield OggCamp – August (?) * mentoring/easyhack update committer... 1 week 1 month 3 months 12 months open 97(-11) 141(-46) 143(-51) 146(-51) reviews 769(221) 2004(336) 5278(184) 17801(314) merged 367(121) 1313(77) 3972(28) 12894(102) abandoned 21(5) 71(-3) 277(5) 847(1) own commits 300(49) 1203(-26) 4109(-57) 14091(-115) review commits 106(45) 291(47) 929(-17) 3066(34) contributor... 1 week 1 month 3 months 12 months open 45(15) 68(18) 68(15) 74(15) reviews 1073(217) 3397(15) 9703(-20) 30346(222) merged 35(-8) 142(-10) 566(4) 1763(14) abandoned 9(6) 33(5) 78(3) 337(3) own commits 28(-11) 115(4) 420(2) 1107(11) review commits 0(0) 0(0) 0(0) 0(0) + easyHack statistics: needsDevEval 38(38) needsUXEval 2(2) cleanup_comments 203(203) total 250(250) assigned 22(22) open 187(187) + top 5 contributors: Gelmini, Andrea made 22 patches in 1 month, and 299 patches in 1 year Johnny_M made 21 patches in 1 month, and 118 patches in 1 year himajin100000 made 14 patches in 1 month, and 14 patches in 1 year Sophia Schröder made 12 patches in 1 month, and 12 patches in 1 year Jim Raykowski made 7 patches in 1 month, and 48 patches in 1 year + top 5 reviewers: Pootle bot made 313 review comments in 1 month, and 1415 in 1 year Vajna, Miklos made 220 review comments in 1 month, and 1295 in 1 year Behrens, Thorsten made 176 review comments in 1 month, and 1354 in 1 year Grandin, Noel made 173 review comments in 1 month, and 1460 in 1 year Holešovský, Jan made 146 review comments in 1 month, and 1437 in 1 year + big CONGRATULATIONS to contributors who have at least 1 merged patch, since last report: Sophia Schröder sophia.schroe...@libreoffice.org Michael Stahl michael.st...@cib.de Kowther Hassan kowth...@gmail.com Nithin Kumar Padavu nithin...@gmail.com * Commit Access * Developer Certification (Stephan/Bjoern/Kendy/Thorsten) + sleep for 1 week. 
* Jenkins / CI update (Christian) from:Thu Apr 5 16:26:11 2018 master linux rel jobs: 171 ok: 169 ko: 2 fail ratio: 1.17 % break: 2 broken duration: 0.81% master linux dbg jobs: 84 ok: 83 ko: 1 fail ratio: 1.19 % break: 1 broken duration: 0.10% master mac rel jobs: 113 ok: 109 ko: 4 fail ratio: 3.54 % break: 3 broken duration: 1.86% master mac dbg jobs: 110 ok: 104 ko: 6 fail ratio: 5.45 % break: 3 broken duration: 9.11% master win rel jobs: 62 ok: 53 ko: 9 fail ratio: 14.52 % break: 8 broken duration:11.86% master win dbg jobs: 76 ok: 64 ko: 12 fail ratio: 15.79 % break: 10 broken duration:29.16% master win64 dbg jobs: 69 ok: 56 ko: 13 fail ratio: 18.84 % break: 12 broken duration:13.34% lo-5.3 mac jobs: 0 ok: 0 ko: 0 fail ratio: 0.00 % break: 0 broken duration: 0.00% lo-5.4 mac jobs: 0 ok: 0 ko: 0 fail ratio: 0.00 % break: 0 broken duration: 0.00% master gerrit lin jobs: 533 ok: 336 ko: 9 fail ratio: 1.69% time for ok: mean: 11 median: 9 master gerrit plg jobs: 542 ok: 309 ko: 46 fail ratio: 8.49% time for ok: mean: 26 median: 22 master gerrit win jobs: 553 ok: 291 ko: 110 fail ratio: 19.89% time for ok: mean: 62 median: 52 master gerrit mac jobs: 548 ok: 320 ko: 71 fail ratio: 12.96% time for ok: mean: 54 median: 39 master gerrit all jobs: 546 ok: 278 ko: 222 fail ratio: 40.66% time for ok: mean: 113 median: 104 + week fine until yesterday evening / morning. ~ 90 failures caused by bot issues + 21 mac: not logged in graphically + 24 win: lode client/server communication issue + 43 failure to checkout + also a failure to checkout + some file locking foo – in killed process case (?) * Budgeting (Thorsten) + needs making into a spreadsheet – real-life getting in the way. * l10n (Sophie) + change in help-content editing process brings some concerns + risk of XML errors. 
+ multiple changes in strings without l10n + Cloph added some XML integrity check on Jenkins + rest being discussed by Olivier & Mike – deferred to next staff call * QA update (Xisco) + lots of new 6.0.3 reports being handled. + UNCONFIRMED: 396 (+19) + enhancements: 39 (-1) + needsUXEval: 0 (-5) + haveBackTrace: 5 (+0) + needsDevAdvice: 29 (+2) + documentation: 0 (-2) + android: 12 (-2) + Most Pressing Bugs: + New: + Crash (fatal error) when attempting a mail merge print + + Szymon ? + Crash when showing Comment + + SOSAW080 - Armin ? + Crash on third file opening + + Image Handling Refactoring. Tomaž actively working on it. + Older: + Calc crashes when opening Function Wizard through Cmd-F2 shortcut + + bisected – Eike / Tor ? ... + Printing doesn't start in particular documents until show first + + Jan-Marek ? + Crash when asking subtotals on 2 groups with pre-sort area checked + + cf. + Ahmed looking into it. + Crash in: BitmapReadAccess::SetPixelForN24BitTcRgb with OpenGL + + Quikee to have a poke. + CRASH when adding paragraphs in a cell of a complex table structure + + Manfred Blume / Thorsten ? + CRASH: LibreOffice crashes while deleting half of the document + + Michael S’s assert catching badness ... + Fixed: + EDITING: Replication of frames when record changes (redlining) is on + + Thanks to Michael Stahl - Bug Inherited from OOo. + Crashed in Calc Macro (Basic) + + Thanks to Stephan Bergmann. 
* QA stats + +146 -27 (-185) overall) many thanks to the top bug squashers: QA Administrators 50 Xisco Faulí 14 Heiko Tietze 12 Buovjaga 11 Samuel Mehrbrodt (CIB) 6 V Stuart Foote 6 Dieter Praas 6 Regina Henschel 6 Telesto 5 Katarina Behrens (CIB) 4 + top 10 bugs reporters: Telesto 10 Buovjaga 4 Xisco Faulí 4 Luke 3 Regina Henschel 2 Aron Budea 2 Miklos Vajna 2 Mert Tumer 2 Gabor Kelemen 2 Thomas Lendo 2 + top 10 bugs fixers: Mehrbrodt, Samuel 6 Tietze, Heiko 6 Kaganski, Mike 3 Jim Raykowski 3 Tümer, Mert 3 Holešovský, Jan 2 Németh, László 2 McNamara, Caolán 2 Budea, Áron 2 Vajna, Miklos 2 + top 10 bugs confirmers: Xisco Faulí 20 Buovjaga 11 Tietze, Heiko 11 Raal 11 Alex Thurgood 7 Foote, V Stuart 5 Mehrbrodt, Samuel 3 Budea, Áron 3 Nabet, Julien 3 Vajna, Miklos 2 * Highest-Priority bugs (aka "MABs"): + 6.0 : 3/38 - 7 % (+2) 5.4 : 3/37 - 8 % (+0) 5.3 : 2/53 - 3 % (+0) 5.2 : 1/40 - 2 % (+0) 5.1 : 1/36 - 2 % (+0) 5.0 : 2/63 - 3 % (+0) 4.4 : 1/76 - 1 % (+0) 4.3 : 5/74 - 6 % (+0) 4.2 : 6/134 - 4 % (+0) 4.1 : 3/84 - 3 % (+0) 4.0 : 4/83 - 4 % (+0) old : 22/258 - 8 % (+0) * Bisected bugs open: keyword 'bisected' + more accurate - down to a single commit. + + 467/2064 458/2037 463/2029 460/2011 444/1981 445/1957 449/1940 done by: Xisco Faulí 18 Telesto 4 Raal 2 Budea, Áron 2 * Bibisected bugs open: keyword 'bibisected' + + 564/2691 552/2662 557/2652 555/2636 539/2608 539/2582 543/2563 done by: Xisco Faulí 18 Telesto 6 Raal 2 Budea, Áron 2 eisa01 1 * all bugs tagged with 'regression' + 933(+4) bugs open of 6974(+27) total 12(+2) high prio. 
done by: Xisco Faulí 9 Telesto 4 Cor Nouws 2 Budea, Áron 2 Mehrbrodt, Samuel 1 Adolfo Jayme Barrientos 1 Gerhard Weydt 1 Buovjaga 1 Timur 1 Raal 1 * ~Component count net * high severity regressions + Calc - 5(+1) Impress - 3(+0) Writer - 2(+1) framework - 1(+0) LibreOffice - 1(+0) by OS: Mac OS X - 1(+0) Linux - 1(+0) All - 8(+2) Windows - 2(+0) * ~Component count net * all regressions + Writer: other - 189(+5) Calc - 162(-1) Impress - 110(+1) Writer: docx filter - 73(-1) LibreOffice - 69(-2) UI - 46(+0) Writer: doc filter - 34(+0) graphics stack - 33(+0) Draw - 33(+1) Borders - 32(+0) Base - 31(+1) Crashes - 27(+1) Writer: perf - 27(-1) Writer: other filter - 26(+0) filters and storage - 26(+0) Chart - 19(-1) Printing and PDF export - 19(+1) BASIC - 17(+0) framework - 5(+0) sdk - 1(+0) Linguistic - 1(+0) Extensions - 1(+0) Installation - 1(+0)
https://www.mail-archive.com/qa@fr.libreoffice.org/msg07653.html
Hey guys, I’m curious if anyone has tried this before. I thought I looked into it, and forgot what the conclusions were. What is the possibility of being able to make TypeScript definitions for Haxe-generated JavaScript classes? Thanks! Actually yes. I’m using the Waud library in my TypeScript Angular project. I needed to write the type definition myself, and it actually works well. I once started this project but never finished it: Thanks guys Have either of you run into any nasty edge cases that wouldn’t be obvious otherwise? Of course, compile-time features like Haxe abstracts aren’t available, or macros, but generating new class instances and calling standard properties (what about get/set methods?) work? JavaScript doesn’t care if a property or a method is private or public, so to expose some privates I just defined their name in the Interface. Creating instances, using maps and even typedefs is easy. But I will add that I didn’t write this library, Adi wrote it and it was optimised for JavaScript. We’re doing it here to reuse internal Haxe libs with TS. We ran into some limits with @nadako’s lib but it could be fixed - we’ll get back to that. I have mixed feelings about generating d.ts files for every Haxe class; I don’t think it’s very elegant when used from JS import if you don’t have a flat class hierarchy. I’d try exposing an index.ts re-exporting useful top-level classes. Additionally producing a NPM module using Haxe JS with TS definitions is a tricky exercise which would be worth a good blog post.
I’m trying to get some basic TypeScript definitions working, based off the output I’m seeing from hxstdgen { "compilerOptions": { "target": "es5" }, "files": [ "openfl.d.ts", "Test.ts" ] } export namespace openfl.geom { class Rectangle { constructor(x: number, y?: number, width?: number, height?: number); bottom: number; height: number; left: number; right: number; top: number; width: number; x: number; y: number; clone(): openfl.geom.Rectangle; contains(x: number, y: number): boolean; } } import "./openfl"; class Test { public static embed (div:string) { alert (div); let rect = new openfl.geom.Rectangle (0, 0, 100, 100); console.log (rect); } } Test.embed ("openfl-content"); I’m getting the following error: Test.ts(11,18): error TS2552: Cannot find name ‘openfl’. Did you mean ‘open’? I know this is likely a beginner TypeScript question, but how do I use this in the browser? Importing doesn’t give you an openfl variable. import * as openfl from "./openfl"; To use in the browser: normally tsc should transform your Test.ts into Test.js - the import should have been translated into something like: const openfl = require('./openfl'); Now you need to bundle Test.js and openfl.js into a single JS file that you can reference in an HTML page - you can use Browserify to keep things simple. browserify src/Test.js -o www/index.js -d
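A minimal sketch of why hand-written definitions like the openfl.d.ts above can work at all: TypeScript's typing is structural, so a declaration written after the fact matches any object of the right shape coming out of Haxe-compiled JS. The Rectangle factory below is a hypothetical stand-in for Haxe output, not real OpenFL code:

```typescript
// Structural typing demo: this interface plays the role of a hand-written
// definition for a Haxe-generated class (hypothetical, for illustration).
interface Rectangle {
    x: number;
    y: number;
    width: number;
    height: number;
    clone(): Rectangle;
}

// Pretend this factory is what Haxe's JS output gives us at runtime:
function makeRect(x: number, y: number, w: number, h: number): Rectangle {
    return {
        x, y, width: w, height: h,
        clone(): Rectangle { return makeRect(x, y, w, h); },
    };
}

const r: Rectangle = makeRect(0, 0, 100, 100); // shape matches, so it type-checks
console.log(r.clone().width);
```

The same idea is why exposing "private" Haxe members works: listing them in the interface is enough, since the emitted JavaScript does not enforce visibility.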
https://community.haxe.org/t/using-haxe-javascript-from-typescript/216
Compiled AIR app: Error #1009 when dispatching events from mediators When running a compiled AIR application I get 1009 errors when dispatching events from the mediators. Is that problem familiar to anyone? Known solution? I'm using Flex sdk 4.5.1 and latest versions of RL/SS. Working on a windows machine. Typical button click handler from a test mediator causing this error when compiled: protected function onBtnClick(event:MouseEvent):void { trace('onBtnClick'); try { this.eventDispatcher.dispatchEvent(new Event('TEST')); } catch(error:Error) { Alert.show(error.message); // <<<-------- Displays Error #1009 when compiled } } Support Staff 1 Posted by Ondina D.F. on 26 Oct, 2011 01:04 PM Hi Jonas, One possibility could be that you have a button -let’s call it someButton- in your view and in the mediator’s onRegister you do something like this: eventMap.mapListener(view.someButton, MouseEvent.CLICK, onBtnClick); Let’s say you have 2 states in your view: stateOne and stateTwo, where stateOne is the first state, but someButton is available only in stateTwo. In this case the mediator won’t be able to add an event listener to the button because someButton is null in stateOne and you’d get a TypeError: Error #1009 already when the mediator would try to map the event. In this case your onBtnClick() method would never be called and the new Event(‘TEST’) never dispatched, of course. But if your onBtnClick() has been called, you should be able to dispatch the new Event(‘TEST’) without any problems. Are you sure the error doesn’t come from the eventMap? Maybe you should paste the entire error in here. I’m using FlashBuilder 4.5.1 AIR 3 SDK on Windows, robotlegs-framework-v1.5.2.swc.
I’ve tried your code and it worked: You don’t need to use the dispatcher like this: this.eventDispatcher.dispatchEvent(new Event('TEST')); You can simply use this convenience method: dispatch(new Event('TEST')); You also could do something like this: SomeView.mxml import yourpath.events.SomeEvent; private function onSomeButtonClicked():void { dispatchEvent(new SomeEvent(SomeEvent.DO_SOMETHING, payload)); } SomeMediator.as addViewListener(SomeEvent.DO_SOMETHING, dispatch); is the same as: eventMap.mapListener(view, SomeEvent.DO_SOMETHING, dispatch); or as: override public function onRegister():void { eventMap.mapListener(view, SomeEvent.DO_SOMETHING, onDoSomething); } protected function onDoSomething (event: SomeEvent):void { dispatch(event); } I hope this helps. Ondina 2 Posted by jonasnys on 26 Oct, 2011 02:00 PM Thank you, Ondina! It seems that the problem was caused because I didn't include the Inject and PostConstruct compiler directives... Now it works like a charm! Support Staff 3 Posted by Ondina D.F. on 26 Oct, 2011 03:44 PM Aah!! Now, it’s clear to me that my first question should have been: “Are you linking against the source? If so, then add -keep-as3-metadata+=Inject -keep-as3-metadata+=PostConstruct to the compiler arguments”, because you were referring to the “latest versions of RL/SS”, robotlegs and swiftsuspenders!! I just assumed you were using the robotlegs swc. My bad:) Ondina D.F. closed this discussion on 01 Nov, 2011 11:45 AM.
http://robotlegs.tenderapp.com/discussions/problems/410-compiled-air-app-error-1009-when-dispatching-events-from-mediators
In C# a complete program instruction is called a statement. Programs consist of sequences of C# statements. Each statement must end with a semicolon (;). For example: int x; // a statement x = 23; // another statement int y = x; // yet another statement C# statements are evaluated in order. The compiler starts at the beginning of a statement list and makes its way to the bottom. This would be entirely straightforward, and terribly limiting, were it not for branching. There are two types of branches in a C# program: unconditional branching and conditional branching. Program flow is also affected by looping and iteration statements, which are signaled by the keywords for, while, do, in, and foreach. Iteration is discussed later in this chapter. For now, let's consider some of the more basic methods of conditional and unconditional branching. An unconditional branch is created in one of two ways. The first way is by invoking a method. When the compiler encounters the name of a method, it stops execution in the current method and branches to the newly "called" method. When that method returns, execution picks up in the original method on the line just below the method call. Example 3-6 illustrates. Program flow begins in Main( ) and proceeds until SomeMethod( ) is invoked (invoking a method is sometimes referred to as "calling" the method). At that point, program flow branches to the method. When the method completes, program flow resumes at the next line after the call to that method. The second way to create an unconditional branch is with one of the unconditional branch keywords: goto, break, continue, return, or throw. Additional information about the first three jump statements is provided later in this chapter; the final statement, throw, is discussed in Chapter 11. A conditional branch is created by a conditional statement, which is signaled by keywords such as if, else, or switch.
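The listing for Example 3-6 did not survive in this copy. A minimal reconstruction of the kind of program the surrounding text describes (an assumption, not the book's exact code) might look like this:

```csharp
using System;

class Functions
{
    static void Main( )
    {
        Console.WriteLine("In Main! Calling SomeMethod( )...");
        SomeMethod( );   // unconditional branch: flow jumps into SomeMethod
        // when SomeMethod completes, flow resumes on the next line:
        Console.WriteLine("Back in Main( ).");
    }

    static void SomeMethod( )
    {
        Console.WriteLine("Greetings from SomeMethod!");
    }
}
```

Running a program like this would print the two Main( ) messages bracketing the SomeMethod( ) greeting, tracing exactly the branch-and-return flow the text describes.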
A conditional branch occurs only if the condition expression evaluates true. If...else statements branch based on a condition. The condition is an expression, tested in the head of the if statement. If the condition evaluates true, the statement (or block of statements) in the body of the if statement is executed. If statements may contain an optional else statement. The else statement is executed only if the expression in the head of the if statement evaluates false:

if (expression)
    statement1
[else
    statement2]

This is the kind of description of the if statement you are likely to find in your compiler documentation. It shows you that the if statement takes a Boolean expression (an expression that evaluates true or false) in parentheses, and executes statement1 if the expression evaluates true. Note that statement1 can actually be a block of statements within braces. You can also see that the else statement is optional, as it is enclosed in square brackets. Although this gives you the syntax of an if statement, an illustration will make its use clear. See Example 3-7.

using System;
class Values
{
    static void Main( )
    {
        int valueOne = 10;
        int valueTwo = 20;
        if ( valueOne > valueTwo )
        {
            Console.WriteLine(
                "ValueOne: {0} larger than ValueTwo: {1}",
                valueOne, valueTwo);
        }
        else
        {
            Console.WriteLine(
                "ValueTwo: {0} larger than ValueOne: {1}",
                valueTwo, valueOne);
        }

        valueOne = 30; // set valueOne higher
        if ( valueOne > valueTwo )
        {
            valueTwo = valueOne++;
            Console.WriteLine("\nSetting valueTwo to valueOne value, ");
            Console.WriteLine("and incrementing ValueOne.\n");
            Console.WriteLine("ValueOne: {0} ValueTwo: {1}", valueOne, valueTwo);
        }
        else
        {
            valueOne = valueTwo;
            Console.WriteLine("Setting them equal. ");
            Console.WriteLine("ValueOne: {0} ValueTwo: {1}", valueOne, valueTwo);
        }
    }
}

In Example 3-7, the first if statement tests whether valueOne is greater than valueTwo.
The relational operators such as greater than (>), less than (<), and equal to (==) are fairly intuitive to use. The test of whether valueOne is greater than valueTwo evaluates false (because valueOne is 10 and valueTwo is 20, so valueOne is not greater than valueTwo). The else statement is invoked, printing the statement:

ValueTwo: 20 larger than ValueOne: 10

The second if statement evaluates true and all the statements in the if block are evaluated, causing two lines to print:

Setting valueTwo to valueOne value,
and incrementing ValueOne.

ValueOne: 31 ValueTwo: 30

It is possible, and not uncommon, to nest if statements to handle complex conditions. For example, suppose you need to write a program to evaluate the temperature, and specifically to return the following types of information: if the temperature is 32 degrees or lower, the program should warn you about ice on the road, and if the temperature is exactly 32 degrees, the program should tell you that there may be ice patches. There are many good ways to write this program. Example 3-8 illustrates one approach, using nested if statements.

using System;
class Values
{
    static void Main( )
    {
        int temp = 32;
        if (temp <= 32)
        {
            Console.WriteLine("Warning! Ice on road!");
            if (temp == 32)
            {
                Console.WriteLine(
                    "Temp exactly freezing, beware of water.");
            }
            else
            {
                Console.WriteLine("Watch for black ice! Temp: {0}", temp);
            }
        }
    }
}

The logic of Example 3-8 is that it tests whether the temperature is less than or equal to 32. If so, it prints a warning:

if (temp <= 32)
{
    Console.WriteLine("Warning! Ice on road!");

The program then checks whether the temp is equal to 32 degrees. If so, it prints one message; if not, the temp must be less than 32 and the program prints the second message. Notice that this second if statement is nested within the first if, so the logic of the else is "since it has been established that the temp is less than or equal to 32, and it isn't equal to 32, it must be less than 32."
Nested if statements are hard to read, hard to get right, and hard to debug. When you have a complex set of choices to make, the switch statement is a more powerful alternative. The logic of a switch statement is "pick a matching value and act accordingly."

switch (expression)
{
    case constant-expression:
        statement
        jump-statement
    [default:
        statement]
}

As you can see, like an if statement, the expression is put in parentheses in the head of the switch statement. Each case statement then requires a constant expression; that is, a literal or symbolic constant or an enumeration. If a case is matched, the statement (or block of statements) associated with that case is executed. This must be followed by a jump statement. Typically, the jump statement is break, which transfers execution out of the switch. An alternative is a goto statement, typically used to jump into another case, as illustrated in Example 3-9.

using System;
class Values
{
    static void Main( )
    {
        const int Democrat = 0;
        const int LiberalRepublican = 1;
        const int Republican = 2;
        const int Libertarian = 3;
        const int NewLeft = 4;
        const int Progressive = 5;

        int myChoice = Libertarian;

        switch (myChoice)
        {
            case Democrat:
                Console.WriteLine("You voted Democratic.\n");
                break;
            case LiberalRepublican: // fall through
                //Console.WriteLine(
                //    "Liberal Republicans vote Republican\n");
            case Republican:
                Console.WriteLine("You voted Republican.\n");
                break;
            case NewLeft:
                Console.WriteLine("NewLeft is now Progressive");
                goto case Progressive;
            case Progressive:
                Console.WriteLine("You voted Progressive.\n");
                break;
            case Libertarian:
                Console.WriteLine("Libertarians are voting Republican");
                goto case Republican;
            default:
                Console.WriteLine("You did not pick a valid choice.\n");
                break;
        }
        Console.WriteLine("Thank you for voting.");
    }
}

In this whimsical example, we create constants for various political parties. We then assign one value (Libertarian) to the variable myChoice and switch on that value.
If myChoice is equal to Democrat, we print out a statement. Notice that this case ends with break. break is a jump statement that takes us out of the switch statement and down to the first line after the switch, on which we print "Thank you for voting." The value LiberalRepublican has no statement under it, and it "falls through" to the next statement: Republican. If the value is LiberalRepublican or Republican, the Republican statements execute. You can only "fall through" in this way if there is no body within the statement. If you uncomment the WriteLine( ) under LiberalRepublican, this program will not compile. If you do need a statement but you then want to execute another case, you can use the goto statement, as shown in the NewLeft case:

goto case Progressive;

It is not required that the goto take you to the case immediately following. In the next instance, the Libertarian choice also has a goto, but this time it jumps all the way back up to the Republican case. Because our value was set to Libertarian, this is just what occurs. We print out the Libertarian statement, go to the Republican case, print that statement, and then hit the break, taking us out of the switch and down to the final statement. The output for all of this is:

Libertarians are voting Republican
You voted Republican.

Thank you for voting.

Note the default case, excerpted from Example 3-9:

default:
    Console.WriteLine(
        "You did not pick a valid choice.\n");

If none of the cases match, the default case will be invoked, warning the user of the mistake. In the previous example, the switch value was an integral constant. C# offers the ability to switch on a string, allowing you to write:

case "Libertarian":

If the strings match, the case statement is entered. C# provides an extensive suite of iteration statements, including for, while, and do...while loops, as well as foreach loops (new to the C family but familiar to VB programmers).
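To make the string form concrete, here is a small self-contained sketch of my own, written in Java (which has allowed switching on strings since Java 7). Java has no goto case, so where the C# example jumps between cases, this sketch shares a label with an empty fall-through case instead:

```java
public class VoteSwitch {
    // Switch on a string: the case labels are string literals,
    // and matching works by string equality.
    static String describe(String choice) {
        switch (choice) {
            case "Democrat":
                return "You voted Democratic.";
            case "Libertarian": // empty case: falls through to share the next label
            case "Republican":
                return "You voted Republican.";
            default:
                return "You did not pick a valid choice.";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("Libertarian")); // prints "You voted Republican."
    }
}
```

As in C#, a case with no statements may fall through to the next label, while a case with a body must end in a jump statement (here, return).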
In addition, C# supports the goto, break, continue, and return jump statements. The goto statement is the seed from which all other iteration statements have been germinated. Unfortunately, it is a semolina seed, producer of spaghetti code and endless confusion. Most experienced programmers properly shun the goto statement, but in the interest of completeness, here's how you use it: create a label, then goto that label. The label is an identifier followed by a colon. The goto command is typically tied to a condition, as illustrated in Example 3-10.

using System;
public class Tester
{
    public static int Main( )
    {
        int i = 0;
    repeat:                    // the label
        Console.WriteLine("i: {0}", i);
        i++;
        if (i < 10)
            goto repeat;       // the dastardly deed
        return 0;
    }
}

If you were to try to draw the flow of control in a program that makes extensive use of goto statements, the resulting morass of intersecting and overlapping lines looks like a plate of spaghetti; hence the term "spaghetti code." It was this phenomenon that led to the creation of alternatives, such as the while loop. Many programmers feel that using goto in anything other than a trivial example creates confusion and difficult-to-maintain code.

The semantics of the while loop are "while this condition is true, do this work." The syntax is:

while (expression)
    statement

As usual, an expression is any statement that returns a value. While statements require an expression that evaluates to a Boolean (true/false) value, and the statement can, of course, be a block of statements. Example 3-11 updates Example 3-10, using a while loop.

using System;
public class Tester
{
    public static int Main( )
    {
        int i = 0;
        while (i < 10)
        {
            Console.WriteLine("i: {0}", i);
            i++;
        }
        return 0;
    }
}

The code in Example 3-11 produces results identical to the code in Example 3-10, but the logic is a bit clearer. The while statement is nicely self-contained, and it reads like an English sentence: "while i is less than 10, print this message and increment i."
Notice that the while loop tests the value of i before entering the loop. This ensures that the loop will not run if the condition tested is false; thus if i is initialized to 11, the loop will never run. There are times when a while loop might not serve your purpose. In certain situations, you might want to reverse the semantics from "run while this is true" to the subtly different "do this while this condition remains true." In other words, take the action, and then, after the action is completed, check the condition. For this you will use the do...while loop:

do
    statement
while (expression);

An expression is any statement that returns a value. Example 3-12 shows the do...while loop.

using System;
public class Tester
{
    public static int Main( )
    {
        int i = 11;
        do
        {
            Console.WriteLine("i: {0}", i);
            i++;
        } while (i < 10);
        return 0;
    }
}

Here i is initialized to 11 and the while test fails, but only after the body of the loop has run once. A careful examination of the while loop in Example 3-11 reveals a pattern often seen in iterative statements: initialize a variable (i = 0), test the variable (i < 10), execute a series of statements, and increment the variable (i++). The for loop allows you to combine all these steps in a single loop statement:

for ([initializers]; [expression]; [iterators])
    statement

The for loop is illustrated in Example 3-13.

using System;
public class Tester
{
    public static int Main( )
    {
        for (int i = 0; i < 100; i++)
        {
            Console.Write("{0} ", i);
            if (i % 10 == 0)
            {
                Console.WriteLine("\t{0}", i);
            }
        }
        return 0;
    }
}

The output begins:

0       0
1 2 3 4 5 6 7 8 9 ...

The for loop makes use of the modulus operator described later in this chapter. The value of i is printed until i is a multiple of 10:

if (i % 10 == 0)

A tab is then printed, followed by the value. Thus the 10s (20, 30, 40, etc.) are called out on the right side of the output.
The individual values are printed using Console.Write( ), which is much like WriteLine( ) but which does not enter a newline character, allowing the subsequent writes to occur on the same line. A few quick points to notice: in a for loop, the condition is tested before the statements are executed. Thus, in the example, i is initialized to zero, then it is tested to see if it is less than 100. Because i < 100 returns true, the statements within the for loop are executed. After the execution, i is incremented (i++). Note that the variable i is scoped to within the for loop (that is, the variable i is visible only within the for loop). Example 3-14 will not compile.

using System;
public class Tester
{
    public static int Main( )
    {
        for (int i = 0; i < 100; i++)
        {
            Console.Write("{0} ", i);
            if ( i % 10 == 0 )
            {
                Console.WriteLine("\t{0}", i);
            }
        }
        Console.WriteLine("\n Final value of i: {0}", i);
        return 0;
    }
}

The final WriteLine( ) fails, as the variable i is not available outside the scope of the for loop itself. The foreach statement is new to the C family of languages; it is used for looping through the elements of an array or a collection. Discussion of this incredibly useful statement is deferred until Chapter 9.

There are times when you would like to restart a loop without executing the remaining statements in the loop. The continue statement causes the loop to return to the top and continue executing. The obverse side of that coin is the ability to break out of a loop and immediately end all further work within the loop. For this purpose the break statement exists. Example 3-15 illustrates the mechanics of continue and break. This code, suggested to me by one of my technical reviewers, Donald Xie, is intended to create a traffic signal processing system. The signals are simulated by entering numerals and uppercase characters from the keyboard, using Console.ReadLine( ), which reads a line of text from the keyboard.
The algorithm is simple: receipt of a 0 (zero) means normal conditions, and no further action is required except to log the event. (In this case, the program simply writes a message to the console; a real application might enter a timestamped record in a database.) On receipt of an abort signal (here simulated with an uppercase "A"), the problem is logged and the process is ended. Finally, for any other event, an alarm is raised, perhaps notifying the police. (Note that this sample does not actually notify the police, though it does print out a harrowing message to the console.) If the signal is "X," the alarm is raised, but the while loop is also terminated.

using System;
public class Tester
{
    public static int Main( )
    {
        string signal = "0";      // initialize to neutral
        while (signal != "X")     // X indicates stop
        {
            Console.Write("Enter a signal: ");
            signal = Console.ReadLine( );

            // do some work here, no matter what signal you receive
            Console.WriteLine("Received: {0}", signal);

            if (signal == "A")
            {
                // faulty - abort signal processing
                // Log the problem and abort.
                Console.WriteLine("Fault! Abort\n");
                break;
            }

            if (signal == "0")
            {
                // normal traffic condition
                // log and continue on
                Console.WriteLine("All is well.\n");
                continue;
            }

            // Problem. Take action and then log the problem
            // and then continue on
            Console.WriteLine("{0} -- raise alarm!\n", signal);
        }
        return 0;
    }
}

Output:

Enter a signal: 0
Received: 0
All is well.

Enter a signal: B
Received: B
B -- raise alarm!

Enter a signal: A
Received: A
Fault! Abort

Press any key to continue

The point of this exercise is that when the A signal is received, the action in the if statement is taken and then the program breaks out of the loop, without raising the alarm. When the signal is 0, it is also undesirable to raise the alarm, so the program continues from the top of the loop.
When building iOS applications with Swift, you often need to use third party libraries as dependencies to avoid rewriting code that other developers have already created and shared. Swift Package Manager (SwiftPM) is a command-line tool for building, testing and managing Swift project dependencies. We will create a very basic Swift command line program that will parse some JSON using the popular SwiftyJSON framework to learn how to work with SwiftPM.

Making sure you have the right tools

You will need Xcode 8.0 or greater and Swift 3.0 or greater to use Swift Package Manager on OS X. Swift 3 is also available on Linux. To find out which version of Swift you're running, enter the following command in your terminal:

swift --version

Creating a Swift package

Open your terminal and create a new directory where you want this project to live. Now create a Swift package in this directory with the following command. This command will name your Swift package after your current directory. For future reference, I named mine SPMTest:

swift package init --type executable

The most important part of creating your Swift package is the Package.swift file. There are two important elements in the Package.swift file that you may wish to edit:

- The name element. It indicates the name of the project and the name of the executable file which will be generated when the project is built.
- The dependencies list. This indicates all of the subprojects that your application is dependent upon. Each item in this array consists of a ".Package" with a repository URL and a version.

Open Package.swift and add SwiftyJSON as a dependency:

import PackageDescription

let package = Package(
    name: "SPMTest",
    dependencies: [
        .Package(url: "", majorVersion: 3, minor: 1)
    ]
)

Now download the dependencies with the following command:

swift package fetch

All dependencies are downloaded into a Packages directory which the SPM will automatically create.
Using our installed dependencies

Now that SwiftyJSON is installed, we can use it as a framework. Create a file called main.swift in the Sources directory. SwiftPM looks at this file to decide what to do when we run our program. Add some basic JSON parsing code to Sources/main.swift:

import SwiftyJSON
import Foundation

let jsonString = "{\"name\": \"Sagnewshreds\"}"
if let dataFromString = jsonString.data(using: String.Encoding.utf8, allowLossyConversion: false) {
    let json = JSON(data: dataFromString)
    print(json["name"])
}

Now build the project by running the following command from the root directory of your project:

swift build

SPM will generate an executable for this project. From the root directory of your project, run this executable and hope the JSON is parsed correctly:

./.build/debug/SPMTest

You can also use SwiftPM to generate an Xcode project for you by running:

swift package generate-xcodeproj

After this, open your newly generated Xcode project:

open SPMTest.xcodeproj/

This allows you to reap all of the benefits that editing Swift in Xcode gives you, such as awesome autocompletion.

Moving on with your Swift projects

Although it is new, it is exciting that Swift now has an official package manager. If you are worried about whether it is ready for production applications or not, you may still want to use CocoaPods or Carthage until SwiftPM has been more widely adopted. If you want to see what kind of awesome packages you can use with SwiftPM, check out the Swift Modules website. I can't wait to see what you build. Feel free to reach out and share your experiences or ask any questions.

- Twitter: @Sagnewshreds
- Github: Sagnew
- Twitch (streaming live code): Sagnewshreds
Notes on Optimizing Clojure Code: Reflection

Clojure is a dynamic language. That's great for (some notion of) expressivity, but sometimes it can get in the way of performance. To get the best performance out of JVM Clojure, we have to understand how Clojure's brand of dynamic typing meshes with the JVM's. This is what this post is about.

The JVM

JVM bytecode is typed in roughly the same way that Java is. That is, when a method is invoked on an object, the JVM bytecode includes the full signature of that method, namely its name, the "target static type" on which it is (statically) invoked, and the (static) types of each of its arguments. At runtime, the JVM will check whether the given static type has a method with that name and those arguments (when loading the code) and whether the object it is invoked on (at invocation time, when running the code) actually implements that target static type. In this context, the "target static type" is one of interface, class, or abstract class.

This gives us some level of dynamism, because we can then pass to that bytecode any object that implements the target type, even one that did not exist when that bytecode itself was loaded. But it sure does sound expensive to have to scan the entire hierarchy of an object for each method invocation. And it would be, if not for the amazing HotSpot JIT. One of the things it's really good at is figuring out when this information does not need to be checked anymore. Basically, given a block of N instructions that each invoke a method (and would thus in principle all need to check if their argument matches their individual static target types), the JVM can figure out that a given concrete, runtime class can bypass all these checks and replace all of them with a single check at the start of that block. And then, for good measure, inline the specific implementation of all these methods for that one class. In practice, the "type checking" cost basically disappears.

What if we want more dynamism?
Languages like Ruby or Python can do things like "call a method called foo with two arguments on this object, and see what happens". They don't need to know what class that method comes from. Can the JVM do something like that? After all, there is JRuby (and there was Jython). Yes, it can, although if you're trying to do that from straight Java it's going to seem a lot more complex. That "invoke" bytecode I described earlier really does need to know the static target type, so it can't be used for this level of dynamism. What you end up needing to do is the following[1]:

- Call getClass on the object. Because that's a method on Object, and (almost) everything is an instance of Object, that always succeeds and can follow the pattern of having a well-defined static target type as explained above.
- We know from the signature of getClass that the result is an object of type Class, so we can now invoke methods on that as the static target type. We call getMethods.
- We now have an array of Method objects, each of which we can ask for its name as a string with getName, which we can compare with the method name we wanted to invoke. If there is none, we give up and throw some type of Throwable. If there is one or more, we keep going.
- We then need to check, for each matching name, the number of arguments the function expects versus the number of arguments that we are given. Note that at this level there is no variadic method, so every Method object has a well-defined number of expected arguments, which we can access with getParameterCount. (Java variadic functions are a Java compiler feature, not a bytecode-level one. At the bytecode level, "variadic" methods just take an array as their last argument.) If there is no matching method, we give up and throw; if there is at least one matching method, we keep going.
- We now need to check the type of each argument.
Now this part is a bit tricky, because Java makes the static type of the arguments part of a method's signature, but static and dynamic types don't always match (specifically, the dynamic type can be a subtype of the static type). Since we're coming from a dynamic language, it's fair to assume we don't have access to static types for the arguments, but if we did we could use that to select the best match. If we don't, we can either try to guess based on the dynamic types, or give up and throw an exception. Let's imagine we have somehow settled on a single method to call at this point. We're still not quite done.

- Finally, we can call the invoke method on that Method object, which will call the method. That's a variadic method, so we first have to collect all of the arguments into an array of Objects. We can assume that that method will call statically-typed methods from then on following the "fast" path described above, so it doesn't matter that all the arguments are typed as Object at this point, except that it does mean primitives always get boxed when going through this path.

Not only is this a lot more work than checking for the static target type, it's also not an access pattern that the HotSpot JIT knows how to optimize. So that's bad for performance. What does all of this have to do with Clojure?

Why Clojure is fast

Clojure is very fast for a dynamic language because it mostly manages to always stick to the "known target static type" case, which HotSpot then optimizes. For example, let's look at the conj function in the Clojure standard library. It can act on lists, vectors, queues, maps, and sets. One could have made five separate classes with no common ancestor (besides Object) and relied on the slow, getClass approach in the Clojure compiler. That would work but be horribly slow. For performance, the most obvious path would be to create a Conjable interface and make sure each of these five types implements that.
Then conj could just compile to a static call to that, and you'd get a runtime error if the argument you give to conj happens to not implement that interface. This is basically what Clojure does: a call to conj compiles down to a call to clojure.lang.RT#conj, which looks like this:

static public IPersistentCollection conj(IPersistentCollection coll, Object x){
    if(coll == null)
        return new PersistentList(x);
    return coll.cons(x);
}

It's not called Conjable, because there are other methods in it, but that's a plain old Java interface. If one wanted to add a new type that works with conj, one would just have to implement that interface. And the net result here is that, from a performance perspective, if your collection is implemented in Java and implements the IPersistentCollection interface, there is no performance hit for calling conj from Clojure compared to calling your conj method from Java. Even though the Clojure code that does the calling is untyped, the generated bytecode invokes the typed method conj on the static target type IPersistentCollection.

But conj is a "strictly Clojure" function; Clojure also has a lot of functions that integrate with Java. Notably, the seq abstraction that a lot of collection-processing functions are based on works just as well on Java collections:

t.core=> (let [al (doto (java.util.ArrayList.) (.add 1) (.add 2) (.add 3))]
    #_=>   (filter odd? al))
(1 3)
t.core=>

Whereas conj could rely on its argument implementing the IPersistentCollection interface, there's no way filter here can pull that off, as ArrayList is a pre-existing JVM class. So how does filter work? Like most seq-returning functions in the Clojure standard library, filter works by first calling seq on its argument, and then working on the result. The result of seq is an ISeq, which is, again, a well-known static target type, so we're back to the fast path. How does the conversion to seq work?
Let's look at first as it's a bit simpler than filter, but follows the same general principles. The implementation of first is:

(def
 ^{:arglists '([coll])
   :doc "Returns the first item in the collection. Calls seq on its argument. If coll is nil, returns nil."
   :added "1.0"
   :static true}
 first (fn ^:static first [coll] (. clojure.lang.RT (first coll))))

It looks funny because at that point in time the Clojure compiler does not know about defn yet. But basically all it's doing is deferring to clojure.lang.RT#first, which is defined as:

static public Object first(Object x){
    if(x instanceof ISeq)
        return ((ISeq) x).first();
    ISeq seq = seq(x);
    if(seq == null)
        return null;
    return seq.first();
}

Calls to instanceof in either if or switch statements are also among the things that the HotSpot JIT can recognize and optimize very well. So this is still on the fast path. What is seq doing, though? Here it is:

static public ISeq seq(Object coll){
    if(coll instanceof ASeq)
        return (ASeq) coll;
    else if(coll instanceof LazySeq)
        return ((LazySeq) coll).seq();
    else
        return seqFrom(coll);
}

// N.B. canSeq must be kept in sync with this!
static ISeq seqFrom(Object coll){
    if(coll instanceof Seqable)
        return ((Seqable) coll).seq();
    else if(coll == null)
        return null;
    else if(coll instanceof Iterable)
        return chunkIteratorSeq(((Iterable) coll).iterator());
    else if(coll.getClass().isArray())
        return ArraySeq.createFromObject(coll);
    else if(coll instanceof CharSequence)
        return StringSeq.create((CharSequence) coll);
    else if(coll instanceof Map)
        return seq(((Map) coll).entrySet());
    else {
        Class c = coll.getClass();
        Class sc = c.getSuperclass();
        throw new IllegalArgumentException("Don't know how to create ISeq from: " + c.getName());
    }
}

static public boolean canSeq(Object coll){
    return coll instanceof ISeq
        || coll instanceof Seqable
        || coll == null
        || coll instanceof Iterable
        || coll.getClass().isArray()
        || coll instanceof CharSequence
        || coll instanceof Map;
}

This is all on the "fast" path; Clojure is explicitly handling the Java standard library (through supporting Iterable) as well as many existing (and future) Java libraries (Iterable being a very standard interface to implement for Java collections), and gives an explicit hook for Clojure-specific custom collections with the Seqable interface. Basically, as long as you only call Clojure core functions, you're on the fast path and don't need to worry about JVM bytecode limitations with regards to dynamism. So why am I writing about this? Why should you care?

How to get the slow path

The Clojure compiler can generate the slow code path I described above, and if you care about performance it's important to know when it does so and how to avoid it. Clojure core functions are "preoptimized" for the fast path as explained above, but Clojure is built on interop as one of its fundamental pillars, so you have free rein to call any arbitrary Java method on any Clojure value — which can be any arbitrary Java object. The fundamental interop operator in Clojure is ., but it is rarely directly used. Instead, the .method form is used as syntactic sugar.
For example, to call the charAt method on a String object, one can write:

t.core=> (def s "hello")
#'t.core/s
t.core=> (defn char-at [arg] (.charAt arg 2))
#'t.core/char-at
t.core=> (char-at s)
\l
t.core=>

where \l is Clojure literal syntax for the Character object corresponding to the letter "l". Because there's no way for the Clojure compiler to know that arg is of type String when compiling the char-at function, that function is generated using the slow path described above. In other words, it's just hoping that the argument has a method, any method, called charAt and taking a single argument. We can get a rough measure of its performance through benchmarking it (reusing the bench function from last week):

t.core=> (bench #(char-at s))
7.408092233176674E-6
t.core=>

Without a comparison point, it's hard to know whether that's good or bad. So let's get a comparison point. We can tell the Clojure compiler that we do expect a String argument, and make it compile to the fast path described above, by defining the function this way:

t.core=> (defn fast-char-at [^String arg] (.charAt arg 2))
#'t.core/fast-char-at
t.core=> (bench #(fast-char-at s))
1.626647040930247E-8
t.core=> (/ 7.408092233176674E-6 1.626647040930247E-8)
455.42100079314895
t.core=>

Improving speed by 455x is a nice speedup for the relatively low effort of adding one type hint.
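To get a feel for what the slow path is doing under the hood, it can be sketched directly against Java's reflection API. This is a simplification of my own, not Clojure's actual Reflector, which does much more careful argument matching:

```java
import java.lang.reflect.Method;

public class SlowPath {
    // The lookup-by-name dance: runtime class -> scan Method array by
    // name and arity -> Method.invoke with boxed arguments.
    static Object invokeByName(Object target, String name, Object... args) {
        for (Method m : target.getClass().getMethods()) {
            if (m.getName().equals(name)
                    && m.getParameterCount() == args.length) {
                try {
                    return m.invoke(target, args); // arguments travel as boxed Objects
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        throw new IllegalArgumentException(
            "No method " + name + " taking " + args.length + " args");
    }

    public static void main(String[] args) {
        // Roughly what the untyped (.charAt arg 2) call is doing.
        System.out.println(invokeByName("hello", "charAt", 2)); // prints l
    }
}
```

Every call repeats the whole scan, and HotSpot cannot inline through Method.invoke the way it can through a monomorphic typed call site, which is where the 455x gap comes from.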
Benchmarking at the REPL is not always reliable, so we can write a small program to double-check those results:

(ns t.core
  (:require [criterium.core :as crit])
  (:gen-class))

(defn bench [f]
  (->> (crit/benchmark (f) {})
       :mean
       first))

(defn char-at [s idx]
  (.charAt s idx))

(defn fast-char-at [^String s ^long idx]
  (.charAt s idx))

(defn -main [& args]
  (let [r1 (char-at "hello" 2)
        r2 (fast-char-at "hello" 2)
        t1 (bench #(char-at "hello" 2))
        t2 (bench #(fast-char-at "hello" 2))]
    (println (format "%-15s: %.2e (%d)" "char-at" t1 r1))
    (println (format "%-15s: %.2e (%d)" "fast-char-at" t2 r2))
    (println (format "speedup: %6.2f" (/ t1 t2)))))

which yields:

$ java -server -jar target/uberjar/t-app-standalone.jar
char-at        : 2.34e-06 (l)
fast-char-at   : 3.56e-09 (l)
speedup: 656.51
$

*warn-on-reflection*

I have written before about Clojure compiler flags, so I won't repeat all of the context here. The point is, there is a flag that will let the Clojure compiler tell us when we need to add a type hint, so we don't accidentally end up with a slow path. Unlike *unchecked-math*, *warn-on-reflection* really should always be enabled at a project level. If you are using Leiningen, you can turn it on from the project.clj file by adding this line to your default profile (i.e. top-level map):

:global-vars {*warn-on-reflection* true}

Here's what it looks like:

t.core=> (set! *warn-on-reflection* true)
true
t.core=> (.charAt "hello" 2)
\l
t.core=> (let [s "hello"] (.charAt s 2))
\l
t.core=> (defn char-at [arg] (.charAt arg 2))
Reflection warning, /private/var/folders/wv/9lkw754x31l1m4b228b663400000gn/T/form-init12357873795856465089.clj:1:21 - call to method charAt can't be resolved (target class is unknown).
#'t.core/char-at
t.core=>

I think this illustrates nicely why the flag is useful: not all calls need to be annotated (though you may still want to for clarity), because, in some situations, the compiler can infer the type from context.
For example, here, we can see that it knows that literal strings are instances of String, and that it is able to propagate type information on let-bound locals.

The only downside I can think of to turning that flag on project-wide by default is that it could generate a spurious warning in cases where you actually do want to call a method based on its name, regardless of the providing type. In all of my career so far, I've wanted to do that exactly once. I was actually working in Java at the time, so I solved it by going through the reflection APIs (i.e. essentially the "slow path" described above). If you do end up with a similar use case, and you somehow can't fix it upstream by getting your objects to implement a common interface when they have an identical method, I would still recommend keeping `*warn-on-reflection*` set at the project level, and simply deactivating it for the one method where you actually want reflection:

```clojure
(set! *warn-on-reflection* false)

(defn wrapping-weird-apis
  [arg]
  (;;... calling some method
   arg))

(set! *warn-on-reflection* true)
```

Conclusion

Clojure's support for reflection is really nice when exploring Java APIs at the REPL, but it should rarely be used in production code. Just turn on `*warn-on-reflection*` by default, and ensure you get no warnings through CI. (Or discipline, if you're into that.)

For performance-sensitive code, reflection is very bad. Not only is it slow itself, it also prevents a lot of HotSpot optimizations on the surrounding code. Even if you do end up with a situation where the method you want to call does not have a single static target type, you may be better served by writing a case against a handful of target types instead, as Clojure does for seq.

What I'm describing here is the situation prior to the introduction of invokeDynamic. Clojure was designed before it, and the existing design does not benefit from it, so Clojure is not using it and I'm not going to talk about it further in this post.
It did change the game for JRuby, which used to work roughly as I describe the "slow" path here, and which now has better options for basically teaching HotSpot how to optimize some typical Ruby code patterns.
https://cuddly-octo-palm-tree.com/posts/2022-02-20-opt-clj-6/
Catharsis is a powerful RAD (Rapid Application Development) tool which is built around a solid architecture using ASP.NET MVC and nHibernate. Best practice Design Patterns and separation of concerns between layers were key factors in the design of the Catharsis framework. Using Guidance, the framework offers RAD functionality on a robust enterprise level architecture by automatically generating most of the code you need for your web application entities. Filling in a few simple dialog screens is all that is required in many cases. This article explains how you can quickly build an application to create, read, update, and delete entities (CRUD). The Catharsis Guidance automatically generates the multi-tier architecture and adds a skeleton infrastructure of classes and interfaces which will work without much additional coding. The article builds on the previous one in this series which examined the Catharsis example project. In this article, we will add a new entity to that example project which is available to download. This information will allow you to quickly create your own CRUD application. In addition to creating simple entities, this article will also explain how to use the framework to code references between entities, for example, where one entity is used as an entity-type in another entity. Finally, we will look at how to add business rules to your application. If you have a database and want to quickly create a robust enterprise level web application to access that database, Catharsis offers the best way to achieve this. Unlike many frameworks, Catharsis was written using public and protected methods which makes it completely extensible. The programmer can take control of their own application and override methods when they want to add new functionality when they need to. This is not necessary when creating a lot of applications on Catharsis, but for enterprise level applications, it is nice to know that the option is available if needed. 
Before reading this article, please read the Catharsis installation guide. This is Catharsis Tutorial 01 in the list available here:. A powerful example solution based on the Catharsis framework is available for download. The example solution is called Firm.SmartControls, and can be downloaded here:. The second article in this series looks into the example solution and explains how it is set up. Read this before continuing (Catharsis Tutorial 02): catharsis_tutorial_02.aspx.

A good way to learn more about Catharsis is to install the demo solution and follow the step-by-step guide in this article to add a new entity to that solution. The solution contains entities called Agent and AgentContract; we will add an additional one called Client. The database create script for the Firm.SmartControls solution actually contains two tables which are not yet mapped, so we will use one of these as an example of how to add a new entity. "Client" will be our new entity. (We will add it to the Insurance namespace.)

Before we can add the new entity using Guidance, we need to enable the Guidance package. Click on Tools -> Guidance Package Manager. Click Enable/Disable Packages on the dialog that appears, select ProjectBase.Guidance, and click OK. Close the next two dialogs that appear, as we do not need them now.

Note that if you want to add a complete web infrastructure, the best place to add it is via the Entity (or Web) project. It is also possible to add it in the Data Layer, but this would exclude the GUI elements which we will require in this instance. The folder into which the new entity is added will become part of the namespace for that new entity. If you want the entity to be in a new namespace, you should create a folder in the entity project and add it there; alternatively, you can add it to an existing folder, as we will do in this case because we want our new entity to be in the Insurance namespace.
Right-click on the folder and select "(Re) Create COMPLETE WEB infrastructure" from the menu. "(Re)" signifies that if the entity already exists in the folder, the files will be overwritten with new empty skeleton classes. This offers a way to undo code or correct mistakes. "COMPLETE WEB" means that skeleton class files will be added to every project (even unit tests). If you select "(Re) Create Project - (Entity, Business, Common, Tests)", no files in the Models, Controllers, and Web projects will be added (or changed). This is for cases where no GUI elements are required.

The namespace of the new entity should be Firm.SmartControls.Entity.Insurance.Client, so we click on the Insurance folder as shown to generate the web infrastructure via Guidance.

Here is the main dialog which we need to fill in. Giving as much information here as possible will reduce the amount of work that we need to do later. Type the name of the new entity in the dialog. You can now add up to three additional properties. In SQL Server 2005, we saw the columns of the InsuranceClient table, so we can add the first three: Code, Name, and BankCode. Guidance will automatically generate checks to ensure that Code is unique (this can be deleted if it is not required). Adding properties here reduces the amount of work that we will have to do later; however, we can only add value types, for example, a string property "name"; we cannot add entity types, for example, a foreign key which references another table such as Country.

The namespace is provided because it is determined by where you add the entity in the solution. The entity type in this case should be 'Persistent'. That will create the skeleton for a business object which has no ancestors for the business type (it is derived directly from the Persistent object). Other types allow reusing previously implemented functionality.
The second and third options are for CodeLists; you can choose "simple" or "separate entity". First, we need to be clear on what a CodeList is. CodeList entities are often used to populate comboboxes, allowing the user to select one option from a collection of predefined options. All of the countries in the EU could be represented in this way; the genders Male and Female are another good example. Another general property of CodeList entities is that the data is static: it will not be necessary to add or delete objects of this type. Gender, for example, will never need more than "Male" and "Female". The base classes give CodeList entities a code and a name, so for Gender, the name could be Male and the code could be M. These are simple entities because no additional information is required. Therefore, using the option for a simple CodeList is suitable for something like Gender or Country. If you need a simple entity like Gender, you can use the ICodeList option. In that case, you do not have to implement anything; your new ICodeList property will work immediately without any additional coding.

The framework also gives the option to create a CodeList that allows the entity to be extended with additional information. Currency, for example, could have an object with the Name "Dollar" and the Code "USD", but we might also want to add a column for subunit and give it a value of "cents" or "c". If we need to extend the basic functionality of a CodeList, a "separate entity" can be used. In this case, a column in the database table should hold a value for the subunit.

The Tracked entity type is the same as a Persistent type, but additional code is provided which allows an "Audit Trail" to be maintained for the entity. If you need to track when an entity is changed, who changed it, and what state it is in, this is the best option.
Click Finish, and after some time, all of the files will be automatically generated and a pop-up will appear to tell you what you should do next. So we follow these instructions, open the Str.Controller.cs file, and add the highlighted line. Now we open the menu.config file and add the highlighted code. It is added at the same level as Agent, so it will appear as a sibling of this node in the navigation tree.

Attempting to use this new entry in the navigation menu will cause an error, of course, because the database table has not yet been mapped via nHibernate. The table that we are mapping looks like this: Open the nHibernate file which was automatically generated for this entity: Firm.SmartControls.Data.Insurance.Client.hbm.xml. The data which you supplied during the creation of the entity is already added:

```xml
<?xml version='1.0' encoding='utf-8' ?>
<hibernate-mapping
  <class name='Client' table='Client' lazy='true' >
    <id name='ID' column='ClientId' >
      <generator class='native'></generator>
    </id>
    <property not-
    <property not-
    <property not-
  </class>
</hibernate-mapping>
```

The following sections need to be changed: the table name is InsuranceClient, not Client; the ID column is InsuranceClientId, not ClientId; CountryId and GenderId are CodeLists and will require many-to-one mappings. Here is the completed version:

```xml
<?xml version='1.0' encoding='utf-8' ?>
<hibernate-mapping
  <class name='Client' table='InsuranceClient' lazy='true' >
    <id name='ID' column='InsuranceClientId' >
      <generator class='native'></generator>
    </id>
    <property not-
    <property not-
    <property not-
    <many-to-one</many-to-one>
    <many-to-one</many-to-one>
  </class>
</hibernate-mapping>
```

We can see a reference to Firm.SmartControls.Entity.Insurance in the file above, so this will need to be changed to reflect the changes we made in the mapping file.
The DAO (Data Access Object) will also need to be changed, but before we do that, we will add the properties to the entity file which were not automatically generated by Guidance. Open the Client entity file. Three properties exist: Code, Name, and BankCode. We will now add Gender and Country. These are CodeList objects, so we need to add a using directive for Firm.SmartControls.Entity.CodeLists in order for the Gender and Country datatypes to be recognized. The lines to add are the using directive and the two CodeList properties:

```csharp
// ===================================
// Guidance generated code © Catharsis
// ===================================
using System;
using System.Collections.Generic;
using System.Linq;
using ProjectBase.Core;
using ProjectBase.Core.PersistentObjects;
using ProjectBase.Core.Collections;
using Firm.SmartControls.Entity.CodeLists;

namespace Firm.SmartControls.Entity.Insurance
{
    /// <summary>
    /// Entity Client.
    /// </summary>
    [Serializable]
    public class Client : Persistent
    {
        public virtual string Code { get; set; }
        public virtual string Name { get; set; }
        public virtual string BankCode { get; set; }

        /// <summary>
        /// codelist
        /// </summary>
        public virtual Gender Gender { get; set; }

        /// <summary>
        /// codelist
        /// </summary>
        public virtual Country Country { get; set; }
```

Now we add these additional fields to the DAO (Firm.SmartControls.Data.Insurance.ClientDao). The newly added Gender and Country should be available in IntelliSense when we add the two new entries; this is obviously because they are now properties of the Client entity. Now we have made the necessary changes to the nHibernate file, the entity, and the DAO, so the "Client" menu item will work. Of course, there are no Clients in the database yet, so we will need to add these.
The next step is to extend the functionality behind the "New" button to allow us to add new Clients. If we click the "New" button now, we will see that the properties which we specified during the Guidance setup (Code, Name, and BankCode) are automatically added. Now we will add the Gender and Country properties.

Open the ClientDetailsWC.ascx file (the abbreviation WC is for "Web Control"). This file contains the HTML markup used to create the page shown above. We will reduce the size of the two columns (Identification and Description), add a third column for CodeLists, and add CodeLists for Gender and Country. Each row in the HTML contains a number of fieldsets. There is currently one fieldset for Identification and one for Description. We will reduce the percentage width of these two to 32% so that we have enough room in the row for three fieldsets:

```html
<div class='newRow mh50'>
<fieldset class='newDetail w32p'>
```

w32p represents a CSS class for width. We can examine these CSS classes in the following file. The CSS style `.w32p { width: 32%; }` will be used in our case. Now we can add a third fieldset for the two CodeLists; the code is shown here:

```html
<fieldset class='newDetail w32.Item.Country); %>
<smart:AsyncComboBoxWC
</div>
</div>
<div class='inputWC inputWC60 w100p'>
<div class='label'><%= GetLocalized(Str.Controllers.Gender)%></div>
<div class='input'><% Gender.SetEntityAsDataSource(Model.Item.Gender); %>
<smart:AsyncComboBoxWC
</div>
</div>
</div>
</fieldset>
```

This code uses the Model to access the item (the entity) and its properties. Now we can see that two new dropdown lists have been added and populated with the data that we require. Attempting to actually add a new Client will still fail: when we click the Add button, the Controller tries to add the entity but cannot, because it cannot yet handle the entity-type properties.
We need to look at the Controller for the new entity, which has been automatically generated at the following location. There are many regions available for us to add code to; most of these are empty in a newly created Controller file. It may be useful to know that holding down the CTRL key and typing mo will expand all the regions; likewise, CTRL ml will collapse all the regions.

We will now make the required changes to allow us to save a new Client. We need to add two methods to the OnAfter region to handle the entity types:

```csharp
/// <summary>
/// Binds non value type properties for an Item
/// </summary>
/// <returns></returns>
protected override bool OnAfterBindModel()
{
    var success = base.OnAfterBindModel();
    int id = 0;

    // Country
    if (Request.Form.AllKeys.Contains(Str.Controllers.Country)
        && int.TryParse(Request.Form[Str.Controllers.Country], out id))
    {
        Model.Item.Country = CountryFacade.GetById(id);
    }

    // Gender
    if (Request.Form.AllKeys.Contains(Str.Controllers.Gender)
        && int.TryParse(Request.Form[Str.Controllers.Gender], out id))
    {
        Model.Item.Gender = GenderFacade.GetById(id);
    }
    return success;
}

/// <summary>
/// Binds non value type properties for searching
/// </summary>
/// <returns></returns>
protected override bool OnAfterBindSearch()
{
    var success = base.OnAfterBindSearch();
    int id;

    // Country
    if (Request.Form.AllKeys.Contains(Str.Controllers.Country)) // there was some selection
    {
        // clear previous to null (it could be final stage also)
        Model.SearchParam.Example.Country = null;
        if (int.TryParse(Request.Form[Str.Controllers.Country], out id))
        {
            Model.SearchParam.Example.Country = CountryFacade.GetById(id);
        }
    }

    // Gender
    if (Request.Form.AllKeys.Contains(Str.Controllers.Gender)) // there was some selection
    {
        // clear previous to null (it could be final stage also)
        Model.SearchParam.Example.Gender = null;
        if (int.TryParse(Request.Form[Str.Controllers.Gender], out id))
        {
            Model.SearchParam.Example.Gender = GenderFacade.GetById(id);
        }
    }
    return success;
}
```

As you can see from the code, some checks are performed to make sure that a value for Country was provided on the form (in the ASCX control) and also to ensure that the supplied value is an integer. Then we call CountryFacade to find the Country which has the ID sent from the form, and the returned Country object is added to the Client object.

We also need to add some properties in the Properties region:

```csharp
public override string ControllerName
{
    get { return Str.Controllers.Client; }
}

/// <summary>
/// Allows LAZILY (or via IoC) to work with Country
/// </summary>
public virtual ICountryFacade CountryFacade
{
    protected get
    {
        if (_countryFacade.IsNull())
        {
            _countryFacade = FacadeFactory.CreateFacade<ICountryFacade>(Model.Messages);
        }
        return _countryFacade;
    }
    set
    {
        Check.Require(value.Is(), " ICountryFacade cannot be null");
        _countryFacade = value;
    }
}

/// <summary>
/// Allows LAZILY (or via IoC) to work with Gender
/// </summary>
public virtual IGenderFacade GenderFacade
{
    protected get
    {
        if (_genderFacade.IsNull())
        {
            _genderFacade = FacadeFactory.CreateFacade<IGenderFacade>(Model.Messages);
        }
        return _genderFacade;
    }
    set
    {
        Check.Require(value.Is(), " IGenderFacade cannot be null");
        _genderFacade = value;
    }
}
```

This provides a façade for the two entity types which will be used in the OnAfter methods above. The methods above require two local members, and these are added in the members region as shown here:

```csharp
#region members
IGenderFacade _genderFacade;
ICountryFacade _countryFacade;
#endregion members
```

Now we have added all the required code to allow us to add a new Client. The newly added Client can be seen in the list view when we click on the Client menu item. Note that Gender and Country do not appear in the list.
The properties of the Client entity which do appear are the ones which we provided to the Guidance dialogs when we were creating the web infrastructure. As mentioned above, the OnList region in the controller should be expanded to handle this. In this section, we will add to the OnList method in the ClientController to show the Gender and Country of the listed Client entities. Here is the code which controls what appears in the list:

```csharp
protected override void OnListToDisplay()
{
    Model.ListModel.ItemsToDisplay = Facade.GetBySearch(Model.SearchParam)
        .Select(i => new ItemToDisplay()
        {
            ID = i.ID,
            Description = i.ToDisplay(),
            Items = new List<IHeaderDescription>
            {
                new HeaderDescription { HeaderName = "Code", Value = i.Code },
                new HeaderDescription { HeaderName = "Name", Value = i.Name },
                new HeaderDescription { HeaderName = "BankCode", Value = i.BankCode },
                new HeaderDescription { HeaderName = Str.Common.ID, Value = i.ID.ToDisplay(), Align = Align.right },
            }
        } as IItemToDisplay);
}
```

We will add another line to display the Country code:

```csharp
new HeaderDescription { HeaderName = Str.Controllers.Country, Value = i.Country.Code,
    SortByObject = Str.Controllers.Country, SortByProperty = Str.Common.Code },
```

Column sorting attributes are also provided in this line. You can choose to display Country.Code, such as "IR", or Country.Display, such as "IR (Ireland)". The second entity-type property, Gender, is added in a similar way.

It is important to note that when working with the Catharsis framework, it will often be necessary to rebuild the entire application in order to see changes in the web browser when the application is running in Debug mode. This is because of the separation of concerns between the layers of the Catharsis framework.
When you make some changes in the code (as in the ClientController in this case) and press F5 or click the Debug button, only the files (DLLs) which Visual Studio thinks need to be updated will be updated, because there are no references between the Controller and the web project. This will be explained in more detail later, but remember that if you expect to see changes, rebuild the entire solution before you test your changes.

No additional coding is required to make the entities editable. When looking at an entity in the Detail view, click the Edit button and the text boxes become editable; change the property that needs to be updated, and click Update to save the entity.

The search function is accessible by clicking the Search button. The default search created by Guidance handles the properties that we provided while setting up the Guidance for the new entity. The HTML and CSS can be adjusted to suit your needs. The use of ID, Code, Name, and BankCode for searching is obvious. The number of rows displayed on the search page can be defined on the search page. It is also possible to display the search results in a new window.

We will now add the code required to search for entity-type properties like Country and Gender. First, we will add the elements to the ASCX control. A fieldset containing the comboboxes for the two properties will be added:

```html
<fieldset class='newDetail w30.SearchParam.Example.Country); %>
<smart:AsyncComboBoxWC
</div>
</div>
<div class='inputWC inputWC60 w100p'>
<div class='label'><%= GetLocalized(Str.Controllers.Gender)%></div>
<div class='input'><% Gender.SetEntityAsDataSource(
        Model.SearchParam.Example.Gender).SetComboBoxName(
        Str.Controllers.Gender); %>
<smart:AsyncComboBoxWC
</div>
</div>
</div>
</fieldset>
```

This will create the GUI elements that we need, and they will be populated with the expected lists. This is enough to allow the system to search by Country and Gender.
It is also possible to expand the search functionality to search by Country name, for example; this will be described in a later section.

Most applications will need some business rules to be applied when manipulating entities. For example, if we have a Client who has "Germany" as Country, it is not a good idea to allow the system to delete the Country Germany from the available Countries. This would result in a situation whereby an entity uses an entity which no longer exists in the system. This is similar to foreign key data integrity at the database level. We do not rely on the database to take care of this; it is more efficient to handle such situations in the code, so we will see how this is done now.

Business rules are applied in the business façade, which can be found at the location shown here. To enforce a business rule disallowing a Country to be deleted if it is used by a Client, we need to get the CountryFacade to ask the ClientFacade if any Clients use the country which we wish to delete. This involves four steps. We begin by opening CountryFacade and adding the following code:

```csharp
/// <summary>
/// Must check if current entity is not used!
/// if any other entity use this instance we stop deleting and add an error message
/// </summary>
/// <param name="entity"></param>
/// <returns></returns>
protected override bool CheckDelete(Country entity)
{
    var result = base.CheckDelete(entity);
    if (ClientFacade.IsCountryInUse(entity))
    {
        Messages.AddError(this, Str.Messages.Keys.CannotDeleteItem,
            Str.Messages.Templates.CannotDelete1, entity.ToDisplay());
        result = false;
    }
    return result;
}
```

This method uses ClientFacade, so we need to add a local member _clientFacade...
```csharp
#region members
IClientFacade _clientFacade;
#endregion members
```

We also need a property for ClientFacade:

```csharp
#region properties
/// <summary>
/// Allows LAZILY (or via IoC) to work with Client
/// </summary>
public virtual IClientFacade ClientFacade
{
    protected get
    {
        if (_clientFacade.IsNull())
        {
            _clientFacade = FacadeFactory.CreateFacade<IClientFacade>(Messages);
        }
        return _clientFacade;
    }
    set
    {
        Check.Require(value.Is(), " IClientFacade cannot be null");
        _clientFacade = value;
    }
}
#endregion properties
```

The CheckDelete method above calls the method IsCountryInUse and will determine whether the Country can be deleted based on the result of that call. IsCountryInUse must be added to IClientFacade:

```csharp
/// <summary>
/// Allows to provide check before delete.
/// Is there any Client using 'entity' instance as Country
/// </summary>
/// <param name="entity"></param>
/// <returns>true if is in use</returns>
bool IsCountryInUse(Country entity);
```

Note that it is also necessary to add a using directive so the interface has access to the CodeLists namespace, because it needs access to the Country object:

```csharp
using Firm.SmartControls.Entity.CodeLists;
```

The above using directive also needs to be added to the ClientFacade. Now we implement the IsCountryInUse method in ClientFacade:

```csharp
#region IClientFacade
/// <summary>
/// Provides checking before a deletion takes place.
/// Are there any Clients using 'entity' instance as Country
/// </summary>
/// <param name="entity"></param>
/// <returns>true if is in use</returns>
public virtual bool IsCountryInUse(Country entity)
{
    var item = Dao.GetBySearch(new ClientSearch()
        {
            MaxRowsPerPage = 1,
            Example = new Client() { Country = entity }
        }).FirstOrDefault();
    if (item.Is())
    {
        Messages.AddWarning(this, Str.Messages.Keys.ItemInUse,
            Str.Messages.Templates.ItemIsUsedForEntity3,
            entity.ToDisplay(), item.ToDisplay(), Str.Controllers.Country);
        return true;
    }
    return false;
}
#endregion IClientFacade
```

Now we can test our code.
Run the application and click on Clients to see the list of current clients. We can see that Ireland is in use as a country, so now open the CodeLists branch of the navigation tree and click on Country. We can use the red X next to the country Ireland to attempt to delete it. The deletion will fail because of the business rule which we have added. Note that the error messages can be formatted differently if you wish. Business rules can also be used to control what is allowed during the addition or updating of an entity.

You should now understand how to add new entities to the example solution, link them with other entities (CodeLists), and add some basic business rules. Using this information and the guidelines in the first document in this series, you should now be able to create a new database in SQL Server and rapidly develop a CRUD web application using the Catharsis framework. In future tutorials in this series, we will look at troubleshooting some problems that users of the Catharsis framework have experienced. We will look more deeply into how to use Catharsis and will produce more example applications.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/script/Articles/View.aspx?aid=37612
wnutil - Man Page

utility functions used by the interface code

Synopsis

#include "wn.h"

int wninit(void);
int re_wninit(void);
int cntwords(char *str, char separator);
char *strtolower(char *str);
char *ToLowerCase(char *str);
char *strsubst(char *str, char from, char to);
int getptrtype(char *ptr_symbol);
int getpos(char *ss_type);
int getsstype(char *ss_type);
int StrToPos(char **pos);
SynsetPtr GetSynsetForSense(char *sense_key);
long GetDataOffset(char *sense_key);
int GetPolyCount(char *sense_key);
char *WNSnsToStr(IndexPtr idx, int sense_num);
IndexPtr GetValidIndexPointer(char *str, int pos);
int GetWNSense(char *lemma, char *lex_sense);
SnsIndexPtr GetSenseIndex(char *sense_key);
int GetTagcnt(IndexPtr idx, int sense);
int default_display_message(char *msg);

Description

The WordNet library contains many utility functions used by the interface code, other library functions, and various applications and tools. Only those of importance to the WordNet search code, or which are generally useful, are described here.

wninit() opens the files necessary for using WordNet with the WordNet library functions. The database files are opened, and morphinit() is called to open the exception list files. Returns 0 if successful, -1 otherwise. The database and exception list files must be open before the WordNet search and morphology functions are used. If the database is successfully opened, the global variable OpenDB is set to 1. Note that it is possible for the database files to be opened (OpenDB == 1), but not the exception list files.

re_wninit() is used to close the database files and reopen them, and is used exclusively for WordNet development. re_morphinit() is called to close and reopen the exception list files. Return codes are as described above.

cntwords() counts the number of underscore- or space-separated words in str. A hyphen is passed in separator if it is to be considered a word delimiter.
Otherwise, separator can be any other character, or an underscore if another character is not desired.

strtolower() converts str to lower case and removes a trailing adjective marker, if present. str is actually modified by this function, and a pointer to the modified string is returned.

ToLowerCase() converts str to lower case as above, without removing an adjective marker.

strsubst() replaces all occurrences of from with to in str and returns the resulting string.

getptrtype() returns the integer ptr_type corresponding to the pointer character passed in ptr_symbol. See wnsearch(3) for a table of pointer symbols and types.

getpos() returns the integer constant corresponding to the synset type passed. ss_type may be one of the following: n, v, a, r, s. If s is passed, ADJ is returned. Exits with -1 if ss_type is invalid.

getsstype() works like getpos(), but returns SATELLITE if ss_type is s.

StrToPos() returns the integer constant corresponding to the syntactic category passed in pos. pos must be one of the following: noun, verb, adj, adv. -1 is returned if pos is invalid.

GetSynsetForSense() returns the synset that contains the word sense sense_key, and NULL in case of error.

GetDataOffset() returns the synset offset for the synset that contains the word sense sense_key, and 0 if sense_key is not in the sense index file.

GetPolyCount() returns the polysemy count (number of senses in WordNet) for the lemma encoded in sense_key, and 0 if the word is not found.

WNSnsToStr() returns the sense key encoding for the sense_num entry in idx.

GetValidIndexPointer() returns the Index structure for word in pos. Calls morphstr(3) to find a valid base form if word is inflected.

GetWNSense() returns the WordNet sense number for the sense key encoding represented by lemma and lex_sense.

GetSenseIndex() returns the parsed sense index entry for sense_key, and NULL if sense_key is not in the sense index.

GetTagcnt() returns the number of times the sense passed has been tagged, according to the cntlist file.
default_display_message() simply returns -1. This is the default value for the global variable display_message, that points to a function to call to display an error message. In general, applications (including the WordNet interfaces) define an application specific function and set display_message to point to it. Notes include/wn.h lists all the pointer and search types and their corresponding constant values. There is no description of what each search type is or the results returned. Using the WordNet interface is the best way to see what types of searches are available, and the data returned for each. See Also wnintro(3), wnsearch(3), morph(3), wnintro(5), wnintro(7). Warnings Error checking on passed arguments is not rigorous. Passing NULL pointers or invalid values will often cause an application to die. Referenced By binsrch(3), morph(3), wnintro(3), wnsearch(3).
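The counting rule that cntwords() documents can be illustrated with a small self-contained sketch. This is not the WordNet library source; count_words is a hypothetical stand-in that counts the non-empty tokens delimited by underscore, space, or the extra separator character:

```c
/* Illustrative stand-in for cntwords(): counts tokens in str that are
 * delimited by underscore, space, or the caller-supplied separator.
 * This sketches the documented behavior; it is not the library code. */
static int count_words(const char *str, char separator)
{
    int count = 0;
    int in_word = 0;

    for (; *str != '\0'; str++) {
        int is_delim = (*str == ' ' || *str == '_' || *str == separator);
        if (!is_delim && !in_word)
            count++;            /* a new word starts here */
        in_word = !is_delim;
    }
    return count;
}
```

With this rule, count_words("ivory_tower", '_') gives 2, and passing a hyphen as the separator makes count_words("well-bred", '-') give 2 as well, matching the hyphen case described above.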
https://www.mankier.com/3/wnutil
to open new window from a servlet manisha Gupta Garg Ranch Hand Joined: Jul 03, 2009 Posts: 41 posted Sep 11, 2009 05:40:04 0 hi, is it possible to open a new window from a servlet ? Actually i want to show a message "please wait..." in new window while servlet is doing some processing ! I have the code to display the "please wait message..." on the screen it self ...but the problem is that, my servlet is used for reading a file(doc, pdf , xls ,text,html) from server. so if i display the message in servlet page itself like below - public class ContentRetriever extends HttpServlet implements Serializable { public void doPost(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res) throws javax.servlet.ServletException, java.io.IOException { doGet(req, res); } public void doGet(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res) throws javax.servlet.ServletException, java.io.IOException { /* START - SH7149 - Opening of reports via servlet to fix the blank window issue and opening of 2007 format reports */ //fetch the report location from goto2.jsp String reportLocation = req.getParameter("reportLocation"); String debug = req.getParameter("debug"); ServletOutputStream out =null; String pout.println( "<meta http-equiv=\"Refresh\" content=\"0\">" ); pout.println( "</head><body bgcolor='#CCCCCC'>" ); pout.println( "<br><br><br>" ); pout.println( "<center><h1>file is opening.<br>" ); pout.close(); } else { repSession.removeAttribute( "waitPage" ); try { //Do your lengthy process here like file reading,processing... if ( reportLocation!=null ) { BufferedInputStream bis = null; BufferedOutputStream bos = null; res.reset(); // Code to obtain the report name. 
int iReportLocn = reportLocation.lastIndexOf("/"); String reportName = reportLocation.substring(iReportLocn + 1); // System.out.println("iReportLocn " + iReportLocn); //System.out.println("reportName " + reportName); //Code to fetch the mime type of the report being accessed. String contentType = getServletContext().getMimeType(reportName); System.out.println("getServletContextContent Type " +contentType); try { // Code to read all the details of the digital certificates on the remote server and accept them TrustManager[] trustAllCerts =new TrustManager[]{new eBIPTrustManager()}; try { SSLContext sc = SSLContext.getInstance("SSL"); sc.init(null, trustAllCerts, null); HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory()); } catch (Exception ep) { System.out.println ( "SSLContext excp in ContentRetriever: "+ep.toString());} // Read the input stream URL url = new URL(reportLocation); java.net.URLConnection urlcon = url.openConnection(); // System.out.println("url" +url); // System.out.println("urlcon" +urlcon); int length = urlcon.getContentLength(); // Set the response parameters res.setContentLength(length); res.setContentType(contentType); res.setHeader( "Content-disposition", "inline; filename="+reportName); //Use Buffered Stream for reading/writing. bis = new BufferedInputStream(url.openStream()); out = res.getOutputStream(); bos = new BufferedOutputStream(out); byte[] buff = new byte[1024]; int bytesRead = 0; int readCount = 0; // Simple read/write loop.
while(-1 != (bytesRead = bis.read(buff, 0, buff.length))) { bos.write(buff, 0, bytesRead); readCount++; } } catch(final MalformedURLException e) { System.out.println ( "MalformedURLException in ContentRetriever servlet: "+e.toString()); } catch(final IOException e) { System.out.println ( "IOException in ContentRetriever Servlet: "+e.toString() ); } finally { if (bis != null){ bis.close();} if( out != null ) { out.close();} if (bos != null){ bos.close();} } } else{ // If report location is null out = res.getOutputStream(); out.println(" The Report is unavailable.."); if( out != null ) { out.close();} } } catch (Exception ex) { System.out.println ( "exception -- "+ex.getMessage()); } } } } so if i used the above code, the message gets displayed on the page. but the window should get closed automatically when the xls/ doc file is open. Please help !! Anirvan Majumdar Ranch Hand Joined: Feb 22, 2005 Posts: 261 posted Sep 11, 2009 07:43:03 0 After the response text is sent back to the client, there's nothing you can do with it. So basically, once you close the PrintWriter instance, the response body is sent back to the client while you keep working with the OutputStream associated with the response. When you close the stream, you can't do anything about the page's display content. Personally I don't think it's wise to work on both the PrintWriter and OutputStream associated with a HTTPServletResponse instance. More often than not, one gets to bump into an IllegalStateException . What you should probably try to do is - when you send the control to this servlet, display the message on the screen at that point while you submit the request to this particular servlet. For eg - if you come here from a JSP , in your JSP you can write a small JavaScript snippet which directs the user's browser to a static HTML using something like - location.href = "path/to/static/HTML/file"; Thereafter, you can simply submit the JSP's form object. 
[your form object's action attribute should map to this servlet]. Andrew Monkhouse author and jackaroo Marshal Commander Joined: Mar 28, 2003 Posts: 11776 126 I like... posted Sep 11, 2009 18:54:40 0 Interesting solution Anirvan, however I am not sure about the implementation of it - when you reset the location.href are you appending all the form values and then submitting the form from the new page? Presumably with the entire form being hidden on the new page. Personally I would submit the form as normal, and have the receiving servlet pass off the majority of the work to a pool of worker threads. The servlet is then free to return almost instantly to a screen that says "please wait". This new screen would then spawn an Ajax request to get the real response from the server once the worker thread completed. Anirvan Majumdar Ranch Hand Joined: Feb 22, 2005 Posts: 261 posted Sep 12, 2009 02:37:18 0 Yes Andrew, it works although it sounds a bit strange. location.href simply changes the display page and doesn't submit the form. A snippet like this - form.action = "/someServlet"; location.href = "/some/HTML"; form.submit(); will display the HTML page while submitting the form to the servlet. When the servlet sends back its response, the browser will refresh with the servlet's content. I think the only limiting factor about this is that the time taken by the servlet to send back the response > time taken for moving to the HTML page [which usually is the case]. Andrew Monkhouse author and jackaroo Marshal Commander Joined: Mar 28, 2003 Posts: 11776 126 I like... posted Sep 12, 2009 06:22:12 0 Thanks for explaining. manisha Gupta Garg Ranch Hand Joined: Jul 03, 2009 Posts: 41 posted Sep 14, 2009 00:33:28 0 Thanks Anirvan and Andrew, for your interest!!
but my problem is that xls/pdf reports have to be opened in their native application, and our code and a property of IE does this. So if they open in Microsoft applications, and if we have applied something like the approaches I mentioned in my post and the approaches that you folks are suggesting, then the JSP which has opened the servlet to read the file would remain open as it is, and we want to close the jsp/servlet page when the report is opened. Please help! Thanks, Manisha Andrew Monkhouse author and jackaroo Marshal Commander Joined: Mar 28, 2003 Posts: 11776 126 I like... posted Sep 14, 2009 06:09:40 0 The same basic approaches can be used. You could have a secondary servlet tracking whether the report has been sent or not. Use Anirvan's suggestion to get the temporary page displaying, and have that page generate an Ajax request every 'x' seconds to the secondary servlet to check the status - if the report has been sent then the tab can close itself.
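Andrew's suggestion above - hand the long-running work to a pool of worker threads, return a "please wait" page immediately, and let a secondary servlet answer the Ajax polls - can be sketched outside the servlet API like this. The class, method, and job names are illustrative, not code from the thread:

```java
import java.util.Map;
import java.util.concurrent.*;

// Sketch of the worker-pool pattern suggested above: submit() is what the
// form-handling servlet would call, isDone() is what the secondary status
// servlet answering the Ajax polls would call.
public class ReportJobs {
    private static final ExecutorService pool =
            Executors.newFixedThreadPool(2, r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);   // don't keep the JVM alive for idle workers
                return t;
            });
    private static final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called when the form is submitted; returns immediately.
    public static void submit(String jobId, Callable<String> lengthyWork) {
        jobs.put(jobId, pool.submit(lengthyWork));
    }

    // Called by the status servlet on each poll.
    public static boolean isDone(String jobId) {
        Future<String> f = jobs.get(jobId);
        return f != null && f.isDone();
    }

    // Fetch the finished report (blocks if the worker is still running).
    public static String result(String jobId) throws Exception {
        return jobs.get(jobId).get();
    }
}
```

The servlet that receives the form would call submit() and forward to the "please wait" page; the page's Ajax loop would hit a status servlet that calls isDone() and, once true, closes itself or redirects to the download.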
http://www.coderanch.com/t/462121/Servlets/java/open-window-servlet
I'm using the requests module with Django and trying to send a file from a form, but when I do I get an "invalid file :" error when I try to open the file. I think that it's only trying to open the filename as a string instead of opening the actual file. How can I go about opening the actual file from the form instead of just trying to open the filename, so I can send it as a payload?

class AddDocumentView(LoginRequiredMixin, SuccessMessageMixin, CreateView):
    login_url = reverse_lazy('users:login')
    form_class = FileUploadForm
    template_name = 'docman/forms/add-document.html'
    success_message = 'Document was successfully added'

    def form_valid(self, form):
        pk = self.kwargs['pk']
        user = get_object_or_404(User, pk=pk)
        file = form.save(commit=False)
        file.user = user
        if not self.post_to_server(file, user.id):
            file.delete()
        return super(AddDocumentView, self).form_valid(form)

    def post_to_server(self, file, cid):
        url = ''
        headers = {'token': '333334wsfSecretToken'}
        # I get error here when trying to open file
        payload = {'file': open(file, 'rb'), 'client_id': cid}
        r = requests.post(url, data=payload, headers=headers)
        print(r.text)
        if r.status_code == requests.codes.ok:
            return True
        else:
            return False

open(file, 'rb') is receiving a Django model object (from the file = form.save(commit=False) line), not a file path. Send the original uploaded file instead. You can do something like:

file = self.request.FILES.get('name')
self.post_to_server(file, user.id)

Edit: There is no need to call open on the file at all; open(file, 'rb') takes a file path, but the uploaded file is already open from the lines above, so just use that directly. Best practice:

files = {'file': file}
r = requests.post(url, files=files, data=payload)
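The fix the answers describe can be sketched without Django: the uploaded file is already an open file-like object, so it goes straight into the multipart `files` dict rather than through open(). In this sketch io.BytesIO stands in for Django's UploadedFile, and the actual requests.post call is left as a comment since it needs a live URL; the helper name is made up for illustration:

```python
import io

def build_request_parts(uploaded_file, client_id, token="333334wsfSecretToken"):
    """Assemble the pieces for requests.post(url, files=..., data=..., headers=...).

    `uploaded_file` is an already-open file-like object (e.g. from
    self.request.FILES), so we pass the object itself, never open(path).
    """
    headers = {"token": token}
    files = {"file": uploaded_file}   # file object, not a filename string
    data = {"client_id": client_id}   # plain form fields go in `data`
    # r = requests.post(url, files=files, data=data, headers=headers)
    return headers, files, data

# BytesIO stands in for Django's UploadedFile in this sketch.
fake_upload = io.BytesIO(b"%PDF-1.4 dummy")
headers, files, data = build_request_parts(fake_upload, 42)
```

requests then streams the object under `files` as multipart/form-data, which is what the server side of an upload endpoint normally expects.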
https://codedump.io/share/w0PwELY4TjD4/1/invalid-file-error-when-trying-to-open-file---requests-and-django
Software Development Via Functional Programming

Get a look at how to implement a functional approach to Java, complete with examples.

Introduction

When I started my career back in 2009, I had to work on an IVR (Interactive Voice Response) application developed in Java. It was a lightweight server-side application, but after some time the way it was structured was not able to evolve well. There were code and configuration duplications, and we were unable to reuse the same call flows with minor changes per client request without refactoring; in some cases, like tight deadlines, developers resorted to duplication. So after some time, a very simple application became a nightmare to manage and maintain. At that time, an initiative was started to cope with this. I thought of an application design and even created a PoC. The idea is to develop smaller, configurable, independent functions based on an interface having a single abstract method (SAM). Such functions would represent a unit of work, just like a class method in Java. To what level of detail a unit of work is defined is up to the implementor, but such functions shouldn't be so broad that they perform multiple tasks in a single implementation, nor should they perform very minute tasks. Now what to do with such functions? Well, we could configure them, not only to initialize them but also to loosely wire them to create a flow or multiple flows. By loosely wiring them, I mean that the functions will not have a direct reference to other functions. No direct dependency among them. Depending on the case, a function will provide the name of the next function as configured. But provide it to what? That is the next question.
Well, there'll a simple container that will contain all the functions mapped by their names. So it'll simply look for the name of next function, get its reference & call its process method. Example Enough of the boring theory. Let's see some example Java code so that it makes some sense. Here's my proposed interface: @FunctionalInterface // Not needed but just to show that FP was possible prior to Java 8 as well public interface Task { void process(Map<String, Object> data); } Now a simple example. The purpose of this example is to help in understanding the above concept only. Let's create a simple IVR flow where the caller is played a welcome message and then given a language selection choice. This doesn't imply that idea only applies to IVR or IVR like applications. We create a function that plays some sound files and may wait for input as configured. This is just an example code: public class IvrInputTask implements Task { private String soundFile; private int waitTime; //0 means no input allowed private String defaultNextTask; private String userInputNextTask; // assume necessary constructors or setters exist @Override public void process(Map<String, Object> dataMap) { String nextTask; char userInput = playAndWait(soundFile, waitTime);// assume such lib function exists if(userInput == '\0') { // means no input nextTask = defaultNextTask; } else { dataMap.put("userInput", userInput); nextTask = userInputNextTask; } dataMap.put("nextTask", nextTask); } } Now we create instances of above function and run the flow in a very basic way: public class DemoApp { public static void main(String args[]) { IvrInputTask welcome = new IvrInputTask(); // set soundFile="welcome.wav", waitTime=0, defaultNextTask="languageSelection" IvrInputTask langSel = new IvrInputTask(); // set soundFile="langSel.wav", waitTime=3000, defaultNextTask="bye", userInputNextTask="whatever" IvrInputTask bye = new IvrInputTask(); // set soundFile="bye.wav", waitTime=0, defaultNextTask="" Map<String, 
Task> taskMap = new HashMap<>(); Map<String, Object> dataMap = new HashMap<>(); String nextTask = "welcome"; Task task; taskMap.put("welcome", welcome); taskMap.put("languageSelection", langSel); taskMap.put("bye", bye); while((task=taskMap.get(nextTask))!=null) { task.process(dataMap); nextTask = (String)dataMap.get("nextTask"); } } } Hopefully, the example is self-explanatory. As we can see, we only created the function once but reused it by simply reconfiguring it. Also, we managed the flow-through configurations, as seen from above. I have created a sample project on GitHub that can create, initialize, and run such task instances. It's very basic for now but does work. Check out the links in the references. Conclusion So we can see that the world of software development could be transformed so simply. Just create a function once and then use it anywhere anytime. No code changes or even application restarts required. A GUI could be provided as well to build or alter flows. References }}
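Since Task is a single-abstract-method interface, the same flow can also be wired with Java 8 lambdas wherever a task needs no per-instance configuration. The following is a minimal sketch reusing the article's Task shape; the task names and data keys are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal lambda-based version of the article's task container: tasks are
// looked up by name and chained through the "nextTask" entry in the data map.
public class LambdaFlowDemo {
    interface Task { void process(Map<String, Object> data); }

    static Map<String, Object> run() {
        Map<String, Task> taskMap = new HashMap<>();
        Map<String, Object> dataMap = new HashMap<>();

        // Each lambda is a unit of work; it names its successor, if any.
        taskMap.put("welcome", d -> d.put("nextTask", "bye"));
        taskMap.put("bye", d -> { d.put("byeReached", true); d.remove("nextTask"); });

        String nextTask = "welcome";
        Task task;
        while (nextTask != null && (task = taskMap.get(nextTask)) != null) {
            task.process(dataMap);
            nextTask = (String) dataMap.get("nextTask");
        }
        return dataMap;
    }
}
```

A full IvrInputTask still earns its own class because it carries configuration state; the lambda form suits small glue steps in the same flow.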
https://dzone.com/articles/software-via-functional-programming
Proposed exercise

Create a program that says whether a number belongs to a list that was previously created. The steps to take are:

- Ask the user how many data items they will enter.
- Reserve space for that amount of numbers (floating point).
- Request the data from the user.
- Later, repeat:
  * Ask the user for a number (execution ends when they enter "end" instead of a number).
  * Say whether that number is in the list or not.

Must be done in pairs, but you must provide a single source file, containing the names of both programmers in a comment.

Solution

using System;

public class SearchArray
{
    public static void Main()
    {
        Console.Write("Amount: ");
        int amount = Convert.ToInt32(Console.ReadLine());
        float[] list = new float[amount];

        // Valid indices run from 0 to amount-1
        for (int i = 0; i < amount; i++)
        {
            Console.Write("Enter number {0}: ", i + 1);
            list[i] = Convert.ToSingle(Console.ReadLine());
        }

        Console.Write("Number to search (or \"end\"): ");
        string input = Console.ReadLine();
        while (input != "end")
        {
            float number = Convert.ToSingle(input);
            bool found = false;
            for (int i = 0; i < amount; i++)
                if (list[i] == number)
                    found = true;

            if (found)
                Console.WriteLine("The number {0} exists", number);
            else
                Console.WriteLine("The number {0} does not exist", number);

            Console.Write("Number to search (or \"end\"): ");
            input = Console.ReadLine();
        }
    }
}
https://www.exercisescsharp.com/2013/04/402-search-in-array.html
Opened 8 years ago Closed 5 years ago #13597 closed defect (fixed) tutorial: fix hash-bang in section on programming Description The section in the tutorial on standalone scripts needs some fixing. It suggests writing a script starting with #!/usr/bin/env sage -python import sys from sage.all import * ... But on sage.math.washington.edu, and probably on other linux systems, /usr/bin/env doesn't handle multiple arguments very well. I think that replacing the first line with #!/usr/bin/env sage should work. Change History (13) comment:1 Changed 8 years ago by - Cc kcrisman added - Branch set to u/jdemeyer/tutorial__fix_hash_bang_in_section_on_programming comment:7 Changed 5 years ago by - Commit set to 70f7b1c5d206e8627f3d124a28b7083e3a82313a - Status changed from new to needs_review comment:8 Changed 5 years ago by - Commit changed from 70f7b1c5d206e8627f3d124a28b7083e3a82313a to 99c96af9aeab5460ec0a75462786cfbb94f69c86 comment:9 Changed 5 years ago by This doesn't fix what is in the description, which is fixed already, I guess. What I don't understand is why you remove the -python, since otherwise there is no need for from sage.all import * is there? Maybe I'm missing something. comment:10 Changed 5 years ago by I think this is the right thing to do. First, we should certainly not advocate #!/usr/bin/env followed by multiple arguments. Second, I just tried the script from the tutorial but with just #!/usr/bin/env sage at the top. It worked fine, but it didn't work if I removed from sage.all import *: Traceback (most recent call last): File "./my_script", line 10, in <module> print factor(sage_eval(sys.argv[1])) NameError: name 'factor' is not defined comment:11 Changed 5 years ago by - Reviewers set to John Palmieri, Karl-Dieter Crisman In particular, I'm happy to give this a positive review. Karl-Dieter, any objections? 
comment:12 Changed 5 years ago by - Reviewers changed from John Palmieri, Karl-Dieter Crisman to John Palmieri - Status changed from needs_review to positive_review No objections if it works this way and doesn't otherwise! I didn't do anything useful here so I'm taking my name off, though. comment:13 Changed 5 years ago by - Branch changed from u/jdemeyer/tutorial__fix_hash_bang_in_section_on_programming to 99c96af9aeab5460ec0a75462786cfbb94f69c86 - Resolution set to fixed - Status changed from positive_review to closed Branch pushed to git repo; I updated commit sha1. New commits:
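With the single-argument hash-bang this ticket settles on, the tutorial's standalone script would look roughly like the following. This is an illustrative fragment (the usage check is added here for completeness), not the exact text committed to the tutorial:

```python
#!/usr/bin/env sage
import sys
from sage.all import *

# factor() and sage_eval() come from sage.all, which is why the
# import is still needed even when running under the sage interpreter.
if len(sys.argv) != 2:
    print("Usage: %s <n>" % sys.argv[0])
    sys.exit(1)
print(factor(sage_eval(sys.argv[1])))
```

As comment:10 notes, dropping the sage.all import makes factor() undefined, so the import stays even though the -python argument goes.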
https://trac.sagemath.org/ticket/13597?cversion=1&cnum_hist=9
I have been working on putting together a cost effective tool chain that would allow me to develop and debug native C code on Arduino UNO using the AVR Dragon debugger I had sitting around for a few years now. As a Windows user for as long as I can remember, I also decided to take up the challenge of doing this under Linux (Ubuntu to be precise). As you can imagine, it has been a painstaking journey that required a lot of patience and persistence. Now that I have finally got a working code development and debugging under Linux, I wanted to put together this tutorial for the benefits of others as well. AVR Freaks has been very generous to me over the years and this is my humble effort to be able to give a little back. I hope many people will find this useful and beneficial in their work one way or another. 1 - HARDWARE TOOLS Following is what I worked with while preparing this tutorial. You can see the full set up in the photo provided below. - Ubuntu 14.04 machine - AVR Dragon ICE device – parallel programming mod has been made to this device to rescue atmega328p devices that are bricked while working. (Arduino UNO board has atmega328p MCU installed) - Arduino UNO board (connected to a power supply) - ISP cable – built this cable to interface Dragon with the UNO board I must note that a small hardware mod on Arduino UNO is required before getting all of this to work. Because this tutorial uses debugWire, and the debugWire is connected to the reset line, the reset button on Arduino UNO needs to be detached from the reset pin of the device. All that needs to be done is to remove the 100 nF capacitor marked as C5 in the following schematics 2 - SOFTWARE TOOLS As the first step (after the Ubuntu 14.04 is installed and is running reliably of course), the AVR tools need to be installed. 
These tools are as follows: avr-gcc: The actual C/C++ compiler avr-binutils: A collection of tools including the assembler, linker and some other tools to manipulate the generated binary files. avr-libc: A subset of the standard C Library with some additional AVR specific functions. The libc-avr package also includes the AVR specific header files. avr-gdb: This is the debugger. avrdude: A Program to download/upload/manipulate the ROM and EEPROM of an AVR MCU. I used the following command to achieve this on the Ubuntu system. sudo apt-get install gcc-avr binutils-avr gdb-avr avr-libc avrdude If you need the most up-to-date tools, you may be better off building the tool chain yourself. following link tells you how you can achieve this. However, my notes here are based on using the existing tool chain as mentioned on the apt-get line just above. Install Eclipse IDE for C/C++ Developers from the Eclipse Downloads page. At the time of writing, Mars.2 Release (4.5.2) was used. Configuration of Eclipse to work on AVR projects is covered in the next section. A few more useful links: Link to AVR Dragon debugger Link to avr-libc and another one from Atmel Link to Arduino UNO interactive hardware visual reference Link to Arduino UNO (rev 3) schematics 3 - CONFIGURING ECLIPSE Start Eclipse and specify a workspace. It is recommended that you specify a new and dedicated workspace only for the AVR projects. Apparently, this protects your workspaces where you develop on other platforms except the AVR. On the main menu of Eclipse (this is the very top bar outside the main window of Eclipse usually!) Click on Help > Install New Software. In the field that is called “Work with:” type in and press Enter. After a few seconds “AVR Eclipse Plugin” should appear. Tick the button beside it and hit Next at the bottom. Install the plugin by following the instructions. Once the installation is completed you will need to restart Eclipse as you will be prompted. 
You can access more information on this process using this link. After Eclipse restarts, create a new C project. On the main menu, click File > New > C Project. Go to AVR Cross Target Application and click Empty Project. Note in that window under Toolchains: “AVR-GCC Toolchain” should already be pre-selected. Provide a name for your project. For this example, I will use the name project_test for argument’s sake. This project will be created under the default location as it is marked on the same window. After naming your project, click on Next at the bottom. On the next window, you will select the Debug or the Release configurations. For now make sure both are selected. While working on the project, depending on what you are trying to do, you can activate/deactivate each of them as needed. Click Finish. The project will be created with no source files in it. At this point you can either create new *.h and *.c files or import existing ones that are located in another directory to your projects directory. I will explain importing an existing file called toggleLed.c in this example. Right click on the project name and select Import. Select General > File System and click Next. Then browse to the *.c and *.h files you would like to import to your project. Make sure the files to be imported are located in a different directory to your project. Otherwise you will get an error message. When you import the files, a copy of those files will be created under the new project’s directory and the original files will remain untouched, which is a good thing. After selecting the files to import, click Finish and the window will be closed. On the main menu, select Project > Properties. You can make the same selection by right clicking the name of the project on Eclipse IDE. Then click on C/C++Build and select Settings. There are a number of options we need to specify here. 
Under “Additional Tools in Toolchain” make sure the following options are ticked and then click Apply at the bottom: AVRDude Print Size Generate HEX file for Flash memory Under “AVR Assembler” option click Debugging. Choose standard debugging info under “Generate Debugging Info”. Then choose stabs under “Debug Info Format”. Similarly repeat this for the same Debugging options under “AVR Compiler” Under “AVR Compiler” select Optimizations and set the optimization level to “No Optimizations”. Make sure the following 4 options are all selected on that same page and click Apply for changes to take effect. Pack structs Short enums Each function in its own section Each data item in its own section Once the C/C++ Build configuration is complete, select AVRDude under AVR option. Then click on the Programmer tab. The field under “Programmer configuration” will be empty. Click New to create a new one. Type in a configuration name for the programmer. I will call it My Dragon config. You can also type in a brief explanatory description under the name field. We will be using AVR Dragon in ISP mode so select that option from the menu. It may also be worth defining another configuration for AVR Dragon in PP mode at some point in case you need to rescue some bricked UNO MCUs. On the same page, type in usb under “Override default port” field. It is recommended to use baudrate of 19200 so change that under “Override default baudrate” option. Another important point not to miss on this screen is “Delay between avrdude invocations” which I have set to 800 milliseconds to be safe. (Note: This long delay causes the device uploads to proceed a bit slower but it ensures that the AVR Dragon and the Arduino UNO boards sync reliably. When this delay is around 150 milliseconds or so, I have observed that Dragon and UNO fail to talk to each other as Arduino UNO seems to respond quite slowly) Press OK when all the changes are made. 
Under Flash/EEPROM tab, you may wish to select “do not upload flash memory image” if you prefer not to upload the flash image to UNO automatically after each compile and link process. Similarly, it makes sense to select the “do not upload eeprom image” on the same page. Click on Apply once done. On the Fuses tab, select “do not set fuse bytes” option. You can set the fuse bytes once as and when needed but you do not need to set them every time you perform a code upload to UNO. It is worth mentioning the debugWire fuse setting at this point (since we will use the debugWire on Arduino UNO to program and debug our device). Within the Fuses tab, click on the “direct hex values” radio button. To the right of that radio button (after the low, high and ext. fields), there is an icon with a black chip icon corresponding to “Load from MCU”. Click on that to read the fuse values already programmed on the MCU. Once the fuse values are successfully read, the low, high and ext. fields will display the corresponding values. Once those values are read, click on the first icon that reads “Start editor”, which will bring up a user-friendly fuse editor window. In that window, simply make sure the “DWEN – Debug Wire enable” is selected as “Yes”. Then click OK and go back to the previous screen and close that screen as well. In order to program the fuse bytes, you will simply need to Select “AVR” tab on the main menu (while the Eclipse IDE is active of course) and click on “Upload Project to Target Device”. After programming the Fuse Bytes, you can go back to the Fuse Byte screen and disable programming the fuse bytes by selecting “do not set fuse bytes” option. Select AVRDude under AVR option again and continue. On the Lockbits tab, select “do not set lockbits”. On the Advanced tab, tick the boxes that you need. In order to avoid signature errors during upload, it will make sense to enable the Device Signature Check option (-F) here. 
On the Other tab, make sure the "Enable erase cycle counter" option is left unselected unless you have a good reason to do otherwise. If you would like to add any other option flags for the avrdude command, use the "Other options" field to do that. In my example, this field is left blank. Once done, click Apply.

At this point it will make sense to perform a quick sanity check to make sure Eclipse is able to talk to Arduino UNO via AVR Dragon. To do this, select the "Target Hardware" option under AVR. Click "Load from MCU" and wait for a few seconds. If all goes well, the MCU Type should read ATmega328. If there is a connection problem you may see an error message such as:

Programmer "dragon_isp" could not connect to the target hardware. Please check that the target hardware is connected correctly. Reason: avrdude: failed to sync with the AVR Dragon in ISP mode

If all has gone well until this point, you should be able to compile the toggleLed.c source file which is part of project_test. Just to make sure we are on the same page, the source code in my toggleLed.c file is as follows:

#include <avr/io.h>
#include <util/delay.h>

#define BLINK_DELAY_MS 250 /* in msec */

int main (void)
{
    int counter = 0;

    /* set pin 5 of PORTB for output */
    DDRB |= _BV(DDB5) | _BV(DDB1); /* Bit Value of DDB5 (UNO pin 13) and DDB1 (UNO pin 9) is used here. */
    DDRD |= _BV(DDD6); // OC0A (UNO pin 6) is configured as output
    PORTB &= ( ~_BV(PORTB5) & ~_BV(PORTB1) ); // output is at zero at start up

    while(1)
    {
        PORTB ^= _BV(PORTB5);
        _delay_ms(BLINK_DELAY_MS);
        ++counter;
    } //end while
} //end main()

All this code does is toggle the LED connected to pin 13 of Arduino UNO on and off. It's that simple! There are some redundant lines in this source file, but don't worry about them for now. If you compile this code and run it, the LED should toggle at a rate of 2 blinks per second. The local variable called counter is also incremented at each LED state change for no apparent reason.
However, the reason will become obvious later on during debugging. When you build the code, you should see two files under the Release directory of the project_test folder, called project_test.elf and project_test.hex. These confirm that the build process has been successful. Note that until now we have not flashed code to the Arduino UNO board. We can now move to the debugging phase in the following section.

4 - CONFIGURING avarice and avr-gdb

Normally, one would like to be able to debug an application on Arduino UNO using the Eclipse IDE. However, despite my best efforts and countless attempts, I have not managed to achieve this yet. Maybe in the future if and when I do, I will be able to share how that can be done. For now, I use the Eclipse environment to write and build code. Then for debugging purposes, I use the terminal to run avrdude, avarice and avr-gdb. Although having to use the terminal to debug an embedded application may not sound appealing, as someone who is used to working with professional debugging tools on Windows I did not find it too hard. As a matter of fact, given the power of gdb commands, I think one would not spend too much time debugging using an ugly black terminal screen, so this option is acceptable. In the rest of this piece, I will be describing this debugging operation. After building your code, there are 3 basic steps you need to follow to start debugging your application on the device.

Step 1 - Flash your debuggable binary to the device first.

Since you already have your debuggable binary (i.e. the project_test.elf file) ready, open a terminal and invoke the following command to upload the elf file to the device:

sudo avrdude -F -V -c dragon_isp -p atmega328p -P usb -b 115200 -U flash:w:project_test.elf

The above command will kick-start the program upload to Arduino UNO. Depending on the upload speed, this can take about 10-15 seconds.
After the code is successfully uploaded, you may need to power cycle the Arduino UNO board before executing Step 2. Otherwise, you may receive the following error:

AVaRICE version 2.13, Jun 22 2016 11:17:20
JTAG config starting.
Found a device: AVRDRAGON
Serial number: 00:a2:00:01:46:4c
set paramater command failed: DEBUGWIRE SYNC FAILED
Failed to activate debugWIRE debugging protocol
USB bulk write error: error submitting URB: No such device
USB daemon died

If the device is successfully flashed, you should see an output similar to the following:

avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.16s

avrdude: Device signature = 0x1e9514
avrdude: Expected signature for ATmega328P is 1E 95 0F
avrdude: NOTE: "flash" memory has been specified, an erase cycle will be performed
         To disable this feature, specify the -D option.
avrdude: erasing chip
avrdude: reading input file "project_test.elf"
avrdude: input file project_test.elf auto detected as ELF
avrdude: writing flash (746 bytes):

Writing | ################################################## | 100% 4.12s

avrdude: 746 bytes of flash written

avrdude: safemode: Fuses OK (H:06, E:95, L:BF)

avrdude done. Thank you.

Step 2 – Start avarice to create the bridge between the device and gdb

After a successful code upload to the UNO as shown in Step 1, invoke the following command in the same terminal to run avarice. Avarice connects to the UNO board over debugWIRE on one side and talks to avr-gdb on the other via a TCP connection. In other words, it acts as a translator between the debugger and the target hardware.

avarice --part atmega328p --debugwire --dragon :4242

If all goes well, you should see a response similar to the following on the terminal:

AVaRICE version 2.13, Jun 22 2016 11:17:20

JTAG config starting.
Found a device: AVRDRAGON
Serial number: 00:a2:00:01:46:4c
Reported debugWire device ID: 0x950F
Configured for device ID: 0x950F atmega328p -- Matched with atmega328p
JTAG config complete.
Preparing the target device for On Chip Debugging.
Waiting for connection on port 4242.

Note that after starting successfully, avarice expects a connection from avr-gdb on port 4242. This can be seen on the last line of the above terminal output.

Step 3 – Start the gdb server and connect to avarice

On a new terminal screen type

avr-gdb project_test.elf

Note that you need to provide the same elf file to avr-gdb as well, to make sure the correct list of symbols is known to the tool. After invoking the avr-gdb command as above, you will see the (gdb) prompt as below, where you can type in your commands.

GNU gdb (GDB) 7.6.50.201312...
This GDB was configured as "...--target=avr".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<>.
Find the GDB manual and other documentation resources online at:
<>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from project_test.elf...done.
(gdb)

Type in the following at the prompt to connect to the avarice instance that should be running on the other terminal screen.

target remote localhost:4242

You should see the following upon establishing a successful connection:

Remote debugging using localhost:4242
0xfffffffe in ?? ()

The last line may instead show the reset vector located at address 0x00000000 and another function name; do not worry if you see a difference. Note that once the connection is successfully established between avarice and avr-gdb, the terminal screen mentioned in Step 2 should be updated with the following line (the port indicated may be completely different from what is shown here — it will be different from 4242 as well, so don't be alarmed):

Connection opened by host 127.0.0.1, port 38164.
To make sure you are in the correct debugging session, type list at the (gdb) prompt. You should see a few source code lines (with line numbers) from toggleLed.c where the main() function is located. This confirms that you have successfully started your debug session. However, you may also see strange looking errors upon typing the list command, as follows:

(gdb) list
35      ../../../libm/fplib/fp_round.S: No such file or directory.

Knowing that the fp_round.S file is not something we have created, this indicates something is going wrong here. Let's go back to our source code and try to insert a random breakpoint somewhere. I chose line 22 in the toggleLed.c file, where the following line of code is supposed to exist:

++counter;

At the gdb prompt, type in the following to insert a breakpoint:

break toggleLed.c:22

The following is the response I receive, which clearly indicates our debugger is not in sync with the source file we think we are working on!

No line 22 in file "toggleLed.c".

Another way to confirm that there is an issue is to access a local variable such as counter in our example. Below you will see the invoked command and the self-explanatory response from gdb, where the local variable cannot be found.

print counter
No symbol "counter" in current context.

When I first encountered this issue, I was quite disappointed and had no idea how to sort it out. After digging a bit deeper and spending a few hours searching for the answer, I found the solution. Basically, before we build the code, we need to tell the compiler to put more debug information into the elf file. Therefore, we need to go back to the avr-gcc step described earlier within the Eclipse tool and add the option -ggdb, which produces debugging information specifically intended for gdb. There are other options that may be worth exploring as well:

-g produces debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF 2).
-ggdb3 produces extra debugging information, for example macro definitions (i.e., gdb debug info at level 3); -ggdb2 is the level-2 equivalent. I have not explored any of them except -ggdb, and that's why the rest of this section is based on this tried and tested option.

In the Eclipse project properties for project_test, go to the C/C++ Build option and click on Settings. Under AVR Compiler go to Miscellaneous and under “Other flags” type -ggdb. Then click OK at the bottom. After making this change, rebuild the project to generate the new elf file. Once you have the new elf file, rewind and go through Steps 1, 2 and 3 again. Then try the following at the gdb prompt. When you invoke the list command again, you should start seeing part of the source file that you are debugging, with line numbers. This is a clear indication that we are on the right track!

(gdb) list
1
2
3       #include <avr/io.h>
4       #include <util/delay.h>
5
6       #define BLINK_DELAY_MS 250 /* in msec */
7
8       int main (void)
9       {
10      int counter = 0;
(gdb)

When you press enter, the listing of the source code continues as below.

11
12      /* set pin 5 of PORTB for output*/
13      DDRB |= _BV(DDB5) | _BV(DDB1); /* Bit Value of DDB5 (UNO pin 13) and DDB1 (UNO pin 9) is used here. */
14      DDRD |= _BV(DDD6); //OC0A (UNO pin 6) is configured as output
15
16      PORTB &= ( ~_BV(PORTB5) & ~_BV(PORTB1) ); // output is at zero at start up
17
18      while(1)
19      {
20      PORTB ^= _BV(PORTB5);
(gdb)

When you try to see the local variable counter now, the following is what you should get:

(gdb) print counter
No symbol "counter" in current context.
(gdb)

Oops! That does not seem right. Did we miss something? Not really. This is expected, since we have not started running our code yet! In other words, while debugging we have not yet entered the main() function, so the current scope does not know about the local variables in main().
By the way, even if we had run our program before inserting the -ggdb option as described above, we would not have been able to access the variable then, due to the missing symbol information. (Trust me, I tried that too during my explorations and path finding. :) )

Before running the code freely, let's insert a breakpoint at line 22, like before.

(gdb) break toggleLed.c:22
Breakpoint 1 at 0x1c6: file ../toggleLed.c, line 22.
(gdb)

OK. It all looks good and the response clearly indicates a breakpoint has been added where we want it. Now let's start running the code as follows.

(gdb) continue
Continuing.

Breakpoint 1, main () at ../toggleLed.c:22
22      ++counter;
(gdb)

You can see that the code has stopped on line 22, where we have a breakpoint. Perfect! This is the line that increments counter by one. Now let's see the value of counter at that point.

(gdb) print counter
$1 = 0
(gdb)

The debugger indicates that counter = 0, which is expected. Let's run the debugger until the next breakpoint (which is where we are now). Since we are in an infinite while loop, we will stop here again.

(gdb) continue
Continuing.

Breakpoint 1, main () at ../toggleLed.c:22
22      ++counter;
(gdb)

The code stopped again. Let's have a look at the same variable again. This time we would expect counter to have been incremented by one.

(gdb) print counter
$2 = 1
(gdb)

Success! It all works as expected. No nasty surprises. Well, if you have been patient enough to come this far and obtained the results I have reported, hopefully you will have the confidence to dig deeper into debugging more complex applications besides the simple LED project provided as an example here. There are quite a few resources on the net for learning about the GDB debugger; I suggest you take a look at this one, which is simple and comprehensive enough to help you in many projects.

THANK YOU FOR READING & GOOD LUCK DEBUGGING! :) :)

avr-gdb: how to install it on Ubuntu 20.04?
sudo apt install gdb-avr
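As a follow-up tip: the connect-and-break sequence from the article above can be collected into a small gdb command file, so a session can be restarted with avr-gdb -x debug.gdb project_test.elf. The file name debug.gdb is my own choice, and the sketch assumes avarice is already listening on port 4242 as in Step 2:

```
# debug.gdb -- reconnect to avarice and restore the breakpoint from the article
target remote localhost:4242
break toggleLed.c:22
continue
```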
strsep - extract token from string

Synopsis

#include <string.h>

char *strsep(char **stringp, const char *delim);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

strsep(): _BSD_SOURCE

Description

If *stringp is NULL, the strsep() function returns NULL and does nothing else. Otherwise, this function finds the first token in the string *stringp, where tokens are delimited by symbols selected from the string delim. The strsep() function returns a pointer to the token, that is, it returns the original value of *stringp.

Conforming To

4.4BSD.

Notes

The strsep() function was introduced as a replacement for strtok(3), since the latter cannot handle empty fields. However, strtok(3) conforms to C89/C99 and hence is more portable.

Bugs

Be cautious when using this function. If you do use it, note that:

- This function modifies its first argument.
- This function cannot be used on constant strings.

See Also

index(3), memchr(3), rindex(3), strchr(3), string(3), strpbrk(3), strspn(3), strstr(3), strtok(3)

Colophon

This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
I've been attempting to teach myself Python for a few months now. I have definitely made some progress and am really enjoying the language! Recently, I have been designing a Tic-Tac-Toe game using ASCII characters. The program works, so now I am doing some cleanup to the code. After searching the internet for a bit I can't seem to find any command that clears the screen. Here is what happens right now: every time a player inputs where they wish to place their 'X' or 'O', I have a command called printBoard that prints the new board onto the screen. This is fine, but obviously, after two turns you have the old boards still on the screen. My question is: is there a command that I can place into my program that clears the screen BEFORE a new board is printed? That way the interface is not clogged up with old instances of the playing board. So far this is all I can think of:

def clear():
    for i in range(60):
        print()

Any help would be appreciated!
A few years ago, the Computer Sciences Department converted most of its classes from C++ to Java as the principal language for programming projects. CS 537 was the first course to make the switch, Fall term, 1996. At that time virtually all the students had heard of Java and none had used it. Over the next few years more and more of our courses were converted to Java until, by 1998-99, the introductory programming prerequisites for this course, CS 302 and CS 367, were taught in Java. The department now offers a "C++ for Java Programmers" course, CS 368. The remainder of these notes provide some advice on programming and style that may be helpful to 537 students. In particular, we describe threads and synchronized methods, Java features that you probably haven't seen before. Note that some of the examples assume that you are using Java version 1.5 or later.

Much of the power of Java comes from its Platform API (API stands for "Application Programming Interface"). At last count, there were over 166 packages in the Platform API, but you will probably only use classes from three of them: java.lang, java.util, and java.io.

Case is significant in identifiers in Java, so if and If are considered to be quite different. The language has a small set of reserved words such as if, while, etc. They are all sequences of lower-case letters. The Java language places no restrictions on what names you use for functions, variables, classes, etc. However, there is a standard naming convention, which all the standard Java libraries follow, and which you must follow in this class.

Simple class definitions in Java look rather like class definitions in C++ (although, as we shall see later, there are important differences):

public class Pair {
    public int x, y;
}

You can compile this class with the command

javac Pair.java

Assuming there are no errors, you will get a file named Pair.class. There are exceptions to the rule that requires a separate source file for each class, but you should ignore them. In particular, class definitions may be nested.
However, this is an advanced feature of Java, and you should not nest class definitions unless you know what you're doing.

There is a large set of predefined classes, grouped into packages. The full name of one of these predefined classes includes the name of the package as a prefix. For example, the library class java.util.Random is in package java.util, and a program may use the class with code like this:

java.util.Random r = new java.util.Random();

The import statement allows you to omit the package name from one of these classes. A Java program that includes the line

import java.util.Random;

can abbreviate the use of Random to

Random r = new Random();

You can import all the classes in a package at once with a notation like

import java.util.*;

The package java.lang is special; every program behaves as if it started with

import java.lang.*;

whether it does or not. You can define your own packages, but defining packages is an advanced topic beyond the scope of what's required for this course.

The import statement doesn't really "import" anything. It just introduces a convenient abbreviation for a fully-qualified class name. When a class needs to use another class, all it has to do is use it. The Java compiler will know that it is supposed to be a class by the way it is used, will import the appropriate .class file, and will even compile a .java file if necessary. (That's why it's important for the name of the file to match the name of the class.) For example, here is a simple program that uses two classes:

public class HelloTest {
    public static void main(String[] args) {
        Hello greeter = new Hello();
        greeter.speak();
    }
}

public class Hello {
    void speak() {
        System.out.println("Hello World!");
    }
}

Put each class in a separate file (HelloTest.java and Hello.java). Then try this:

javac HelloTest.java
java HelloTest

You should see a cheery greeting.
If you type ls you will see that you have both HelloTest.class and Hello.class, even though you only asked to compile HelloTest.java. The Java compiler figured out that class HelloTest uses class Hello and automatically compiled it. Try this to learn more about what's going on:

rm -f *.class
javac -verbose HelloTest.java
java HelloTest

There are exactly eight primitive types in Java: boolean, char, byte, short, int, long, float, and double. There are four integer types, each of which represents a signed integer with a specific number of bits. The types float and double represent 32-bit and 64-bit floating point values.

Objects are instances of classes. They are created by the new operator. Each object is an instance of a unique class, which is itself an object. Class objects are automatically created whenever you refer to the class; there is no need to use new. Each object "knows" what class it is an instance of.

Pair p = new Pair();
Class c = p.getClass();
System.out.println(c); // prints "class Pair"

Each object has a set of fields and methods, collectively called members. (Fields and methods correspond to data members and function members in C++.) Like variables, each field can hold either a primitive value (a boolean, int, etc.) or a reference, which is either null or points to another object. When a new object is created, its fields are initialized to zero, null, or false as appropriate, but a constructor (a method with the same name as the class) can supply different initial values (see below). By contrast, variables are not automatically initialized. It is a compile-time error to use a variable that has not been initialized. The compiler may complain if it's not "obvious" that a variable is initialized before use. You can always make it "obvious" by initializing the variable when it is declared:

int i = 0;

You'll probably miss reference parameters most in situations where you want a procedure to return more than one value.
As a work-around you can return an object or array, or pass in a pointer to an object or array. See Section 2.6.4 on page 62 of the Java book for more information.

New objects are created by new, but there is no delete: Java automatically reclaims objects that are no longer reachable. In C++, by contrast, deleting an object too soon can lead to dangling pointers:

// This is C++ code
p = new Pair();
// ...
q = p;
// ... much later
delete p;
q -> x = 5; // oops!

while deleting them too late (or not at all) can lead to garbage, also known as a storage leak.

Each field or method of a class has an access, which is one of public, protected, private, or package. The first three of these are specified by preceding the field or method declaration by one of the words public, protected, or private. Package access is specified by omitting these words.

public class Example {
    public int a;
    protected int b;
    private int c;
    int d; // has package access
}

It is a design flaw of Java that the default access is "package". For this course, all fields and methods must be declared with one of the words public, protected, or private. As a general rule only methods should be declared public; fields are normally protected or private. Private members can only be accessed from inside the bodies of methods (function members) of the class, not "from the outside." Thus if x is an instance of C, x.i is not legal, but i can be accessed from the body of x.f(). (protected access is discussed further below.)

The keyword static does not mean "unmoving" as it does in common English usage. Instead it means something like "class" or "unique". Ordinary members have one copy per instance, whereas a static member has only one copy, which is shared by all instances. Ordinary (non-static) fields are sometimes called "instance variables". In effect, a static member lives in the class object itself, rather than in instances.

public class C {
    public int x = 1;
    public static int y = 1;
    public void f(int n) { x += n; }
    public static int g() { return ++y; }
}
// ... elsewhere ...
C p = new C();
C q = new C();
p.f(3);
q.f(5);
System.out.println(p.x);   // prints 4
System.out.println(q.x);   // prints 6
System.out.println(C.y);   // prints 1
System.out.println(p.y);   // means the same thing as C.y; prints 1
System.out.println(C.g()); // prints 2
System.out.println(q.g()); // prints 3
C.x;  // invalid; which instance of x?
C.f(); // invalid; which instance of f?

Static members are often used instead of global variables and functions, which do not exist in Java. For example,

Math.tan(x);             // tan is a static method of class Math
Math.PI;                 // a static "field" of class Math with value 3.14159...
Integer.parseInt("123"); // converts a string of digits into a number

Starting with Java 1.5, the word static can also be used in an import statement to import all the static members of a class.

import static java.lang.Math.*;
...
double theta = tan(y / x);
double area = PI * r * r;

This feature is particularly handy for System.out.

import static java.lang.System.*;
...
out.println(p.x); // same as System.out.println(p.x), but shorter

From now on, we will assume that every Java file starts with

import static java.lang.System.*;

The keyword final means that a field or variable may only be assigned a value once. It is often used in conjunction with static to define named constants.

public class Card {
    public int suit = CLUBS; // default
    public final static int CLUBS = 1;
    public final static int DIAMONDS = 2;
    public final static int HEARTS = 3;
    public final static int SPADES = 4;
}
// ... elsewhere ...
Card c = new Card();
out.println("suit " + c.suit);
c.suit = Card.SPADES;
out.println("suit " + c.suit);
Using enums, the example becomes

enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES };

class Card {
    public Suit suit = Suit.CLUBS; // default
}
// ... elsewhere ...
Card c = new Card();
out.println("suit " + c.suit);
c.suit = Suit.SPADES;
out.println("suit " + c.suit);

One advantage of this version is that it produces the user-friendly output

suit CLUBS
suit SPADES

rather than

suit 1
suit 4

with no extra work on the part of the programmer.

In Java, arrays are objects. Like all objects in Java, you can only point to them. Unlike a C++ array, a Java array knows how big it is: an array a has elements a[0] ... a[a.length-1]. Once you create an array (using new), you can't change its size. If you need more space, you have to create a new (larger) array and copy over the elements (but see the List library classes below).

int[] arrayOne;  // a pointer to an array object; initially null
int arrayTwo[];  // allowed for compatibility with C; don't use this!
arrayOne = new int[10];  // now arrayOne points to an array object
arrayOne[3] = 17;        // accesses one of the slots in the array
arrayOne = new int[5];   // assigns a different array to arrayOne
                         // the old array is inaccessible (and so
                         // is garbage-collected)
out.println(arrayOne.length); // prints 5
int[] alias = arrayOne;  // arrayOne and alias share the same array object
                         // Careful! This could cause surprises
alias[3] = 17;           // Changes an element of the array pointed to by
                         // alias, which is the same as arrayOne
out.println(arrayOne[3]); // prints 17

Strings are objects of class String. The + operator concatenates strings, converting other values to strings as needed; this works for instances of all classes you define. This is great for debugging.

String s = "hello";
String t = "world";
out.println(s + ", " + t);      // prints "hello, world"
out.println(s + "1234");        // "hello1234"
out.println(s + (12*100 + 34)); // "hello1234"
out.println(s + 12*100 + 34);   // "hello120034" (why?)
out.println("The value of x is " + x);     // will work for any x
out.println("System.out = " + System.out); // "System.out = java.io.PrintStream@80455198"
String numbers = "";
for (int i = 0; i < 5; i++) {
    numbers += " " + i; // correct but slow
}
You can't modify a string, but you can make a string variable point to a new string (as in numbers += " " + i;). The example above is not a good way to build up a string a little at a time. Each iteration of numbers += " " + i; creates a new string and makes the old value of numbers into a garbage object, which requires time to garbage-collect. Try running this code:

public class Test {
    public static void main(String[] args) {
        String numbers = "";
        for (int i = 0; i < 10000; i++) {
            numbers += " " + i; // correct but slow
        }
        out.println(numbers.length());
    }
}

A better way to do this is with a StringBuffer, which is a sort of "updatable String".

public class Test {
    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 10000; i++) {
            sb.append(" " + i);
        }
        String numbers = sb.toString();
        out.println(numbers.length());
    }
}

A constructor is a method with the same name as the class. It does not have any return type, not even void. Unlike C++, Java does not support operator overloading.

Java supports two kinds of inheritance, which are sometimes called interface inheritance or sub-typing, and method inheritance. A class is Runnable if it has a public1 method named run; a class that declares it implements Runnable must either inherit such a method (via extends) or define it itself.
double pi = Math.PI; int three = (int) pi; // throws away the fractionA cast can also be used to convert an object reference to a super class or subclass. For example, Words w = new Words("this is a test"); Object o = w.nextElement(); String s = (String) o;. If we were wrong about the type of o we would) { err.println("Oops: " + e); , as well as a call trace. WARNING Never write an empty catch clause. If you do, you will regret it. Maybe not today, but tomorrow and for the rest of your life. king of class. You can define and throw your own exceptions. class SytaxError extends Exception { int lineNumber; Sytax) { err.println(e); } } }Each function must declare in its header (with the keyword throws) all the exceptions that may be thrown by it or any function it calls. It doesn't have to declare exceptions it catches. Some exceptions, such as IndexOutOfBoundsException, are so common that Java makes an exception for them (sorry about that pun) and doesn't require that they be declared. This rule applies to RuntimeException and its subclasses. You shouldThe constructor for the built-in class Thread takes one argument, which is any object that has a method called run. This requirement is specified by requiring that command implement the Runnable interface described earlier. (More precisely, command must be an instance of a class that implements Runnable). The way a thread "runs" a command is simply by calling its run() method. It's as simple as that! In project 1, you are supposed to run each command in a separate thread. Thus you might declare something like this: class Command implements Runnable { String commandLine; Command(String commandLine) { this.commandLine = commandLine; } public void run() { // Do what commandLine says to do } }You can parse the command string either in the constructor or at the start of the run() method. 
The main program loop reads a command line, breaks it up into commands, runs all of the commands concurrently (each in a separate thread), and waits for them to all finish before issuing the next prompt. In outline, it may look like this. for (;;) { out.print("% "); out.flush(); String line = inputStream.readLine(); int numberOfCommands = // count how many commands(); } }This main loop is in the main() method of your main class. It is not necessary for that class to implement Runnable. Although you won't need it for project 1, the next project will require to to synchronize threads with each other. There are two reasons why you need to do this: to prevent threads from interfering2 and List queue = new ArrayList(); public synchronized void put(Object o) { queue.add(o); notify(); } public synchronized Object get() { while (queue.isEmpty()) { wait(); } return queue.remove(0); } }This class solves the so-call "producer-consumer" problem. (The class ArrayList and interface List are part of the java.util package.) print it: class Buffer { private List queue = new ArrayList(); public synchronized void put(Object o) { queue.add(o); notify(); } public synchronized Object get() { while (queue.isEmpty()) { try { wait(); } catch (InterruptedException e) { e.printStackTrace(); } } return queue.remove(0); } (queue.isEmpty()) rather than if (queue. Input/Output, as described in Chapter 20 of the Java book, is not as complicated as it looks. You can get pretty far just writing to System.out (which is of type PrintStream ) with methods print , println , and printf . The method print simply writes converts its argument to a String and writes it to the output stream. The method println is similar, but adds a newline so that following output starts on a new line. The method printf is new in Java 1.5. It expects a String as its first argument and zero or more additional arguments. Each '%' in the first argument indicates a request to print one of the other arguments. 
The details are spelled out by one or more characters following the '%'. For example,

out.printf("pair(%d,%d)%n", pair.x, pair.y);

produces exactly the same thing as

out.println("pair(" + pair.x + "," + pair.y + ")");

but is much easier to read, and to write. The characters %d are replaced by the result of converting the next argument to a decimal integer. Similarly, "%f" looks for a float or double, "%x" looks for an integer and prints it in hexadecimal, and "%s" looks for a string. Fancier formatting is supported; for example, "%6.2f" prints a float or double with exactly two digits following the decimal point, padding with leading spaces as necessary to make the result at least 6 characters long.

For input, you probably want to wrap the standard input System.in in a BufferedReader, which provides the handy method readLine().

BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
for (;;) {
    String line = input.readLine();
    if (line == null) {
        break;
    }
    // do something with the next line
}

If you want to read from a file, rather than from the keyboard (standard input), you can use FileReader, probably wrapped in a BufferedReader.

BufferedReader input = new BufferedReader(new FileReader("somefile"));
for (;;) {
    String line = input.readLine();
    if (line == null) {
        break;
    }
    // do something with the next line
}

Similarly, you can use new PrintWriter(new FileOutputStream("whatever")) to write to a file.

The library of pre-defined classes has several other handy tools. See the online manual, particularly java.lang and java.util, for more details. Each primitive type has a corresponding "wrapper" class, and Java 1.5 converts between them automatically ("autoboxing"):

int i;
Integer ii;
ii = 3; // Same as ii = new Integer(3);
i = ii; // Same as i = ii.intValue();

These classes also serve as convenient places to define utility functions for manipulating values of the given types, often as static methods or defined constants.
int i = Integer.MAX_VALUE; // 2147483647, the largest possible int

A List is like an array, but it grows as necessary to allow you to add as many elements as you like. The elements can be any kind of Object, but they cannot be primitive values such as integers. When you take objects out of a List, you have to use a cast to recover the original type. Use the method add(Object o) to add an object to the end of the list and get(int i) to fetch the ith element of the list. Use iterator() to get an Iterator for running through all the elements in order.3 List is an interface, not a class, so you cannot create a new list with new. Instead, you have to decide whether you want a LinkedList or an ArrayList. The two implementations have different performance characteristics.

List list = new ArrayList(); // an empty list
for (int i = 0; i < 100; i++) {
    list.add(new Integer(i));
}
// now it contains 100 Integer objects

// print their squares
for (int i = 0; i < 100; i++) {
    Integer member = (Integer) list.get(i);
    int n = member.intValue();
    out.println(n*n);
}

// another way to do that
for (Iterator i = list.iterator(); i.hasNext(); ) {
    int n = ((Integer) i.next()).intValue();
    out.println(n*n);
}

list.set(5, "hello");   // like list[5] = "hello"
Object o = list.get(3); // like o = list[3];
list.add(6, "world");   // set list[6] = "world" after first shifting
                        // elements list[7], list[8], ... to the right
                        // to make room
list.remove(3);         // remove list[3] and shift list[4], ... to the
                        // left to fill in the gap

Elements of a List must be objects, not values. That means you can put a String or an instance of a user-defined class into a List, but if you want to put an integer, floating-point number, or character into a List, you have to wrap it:

list.add(new Integer(47)); // or list.add(47), using Java 1.5 autoboxing
sum += ((Integer) list.get(i)).intValue();

The class ArrayList is implemented using an ordinary array that is generally only partially filled.
As its name implies, LinkedList is implemented as a doubly-linked list. Don't forget to import java.util.List; import java.util.ArrayList; or import java.util.*;.

Lists and other similar classes are even easier to use with the introduction of generic types in Java 1.5. Instead of List l, which declares l to be a list of Objects of unspecified type, use List<Integer> l, which declares a list of Integer objects.

    List<Integer> list = new ArrayList<Integer>(); // an empty list
    for (int i = 0; i < 100; i++) {
        list.add(i);
    }
    // now it contains 100 Integer objects
    // print their squares
    for (Iterator<Integer> i = list.iterator(); i.hasNext(); ) {
        int n = i.next();
        out.println(n*n);
    }
    // or even simpler
    for (int n : list) {
        out.println(n*n);
    }

    List<String> strList = new ArrayList<String>();
    for (int i = 0; i < 100; i++) {
        strList.add("value " + i);
    }
    strList.set(5, "hello");   // like strList[5] = "hello"
    String s = strList.get(3); // like s = strList[3];
    strList.add(6, "world");   // set strList[6] = "world" after first shifting
                               // elements strList[7], strList[8], ... to the
                               // right to make room
    strList.remove(3);         // remove strList[3] and shift strList[4], ... to
                               // the left to fill in the gap

The interface Map4 represents a table mapping keys to values. It is sort of like an array or List, except that the "subscripts" can be any objects, rather than non-negative integers. Since Map is an interface rather than a class you cannot create instances of it, but you can create instances of the class HashMap, which implements Map using a hash table, or TreeMap, which implements it as a binary search tree (a "red/black" tree).

    Map<String,Integer> table           // a mapping from Strings to Integers
        = new HashMap<String,Integer>();
    table.put("seven", new Integer(7)); // key is the String "seven";
                                        // value is an Integer object
    table.put("four", 4);               // similar, using autoboxing
    Object o = table.put("seven", 70);  // binds "seven" to a different object
                                        // (a mistake?)
                                        // and returns the previous value
    int n = ((Integer) o).intValue();
    out.printf("n = %d\n", n);          // prints 7
    n = table.put("seven", 7);          // fixes the mistake
    out.printf("n = %d\n", n);          // prints 70
    out.println(table.containsKey("seven"));  // true
    out.println(table.containsKey("twelve")); // false
    // print out the contents of the table
    for (String key : table.keySet()) {
        out.printf("%s -> %d\n", key, table.get(key));
    }
    n = table.get("seven");     // get value bound to "seven"
    n = table.remove("seven");  // get binding and remove it
    out.println(table.containsKey("seven")); // false
    table.clear();              // remove all bindings
    out.println(table.containsKey("four"));  // false
    out.println(table.get("four"));          // null

Sometimes, you only care whether a particular key is present, not what it's mapped to. You could always use the same object as a value (or use null), but it would be more efficient (and, more importantly, clearer) to use a Set.

    out.println("What are your favorite colors?");
    BufferedReader input = new BufferedReader(new InputStreamReader(in));
    Set<String> favorites = new HashSet<String>();
    try {
        for (;;) {
            String color = input.readLine();
            if (color == null) {
                break;
            }
            if (!favorites.add(color)) {
                out.println("you already told me that");
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    int n = favorites.size();
    if (n == 1) {
        out.printf("your favorite color is");
    } else {
        out.printf("your %d favorite colors are:", n);
    }
    for (String s : favorites) {
        out.printf(" %s", s);
    }
    out.println();

The library of pre-defined classes is itself written in the Java language (which is not a surprise, considering that the Java compiler is written in Java). See Chapters 16 and 17 of the Java book for information about other handy classes.

1 All the members of an Interface are implicitly public. You can explicitly declare them to be public, but you don't have to, and you shouldn't.
2 In particular, it won't necessarily be the one that has been sleeping the longest.

3 Interface Iterator was introduced with Java 1.2. It is a somewhat more convenient version of the older interface Enumeration discussed earlier.

4 Interfaces Map and Set were introduced with Java 1.2. Earlier versions of the API contained only Hashtable, which is similar to HashMap.
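To make footnote 4 concrete, here is a small sketch (not from the original tutorial) showing that TreeMap, the binary-search-tree implementation mentioned above, keeps its keys in sorted order, while the older Hashtable supports the same basic operations as HashMap:

```java
import java.util.Hashtable;
import java.util.Map;
import java.util.TreeMap;

public class MapDemo {
    public static void main(String[] args) {
        // TreeMap implements Map as a red/black tree, so iterating
        // over keySet() yields the keys in sorted order
        Map<String, Integer> tree = new TreeMap<String, Integer>();
        tree.put("seven", 7);
        tree.put("four", 4);
        tree.put("twelve", 12);
        System.out.println(tree.keySet()); // [four, seven, twelve]

        // Hashtable predates the Map interface but offers the same
        // basic put/get/containsKey operations as HashMap
        Map<String, Integer> legacy = new Hashtable<String, Integer>();
        legacy.put("four", 4);
        System.out.println(legacy.get("four"));          // 4
        System.out.println(legacy.containsKey("seven")); // false
    }
}
```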
http://pages.cs.wisc.edu/~solomon/cs537-old/last/java-tutorial.html
Glenn Spies

Glenn Spies created a post, [BUG] property accessor is never used and aspx Eval
Hi. When using the "Analyzing errors in solution" tool, R# ver 4.5.1277 currently incorrectly marks a property as not used when a list (array, collection, or some such like) of these objects are data...

Glenn Spies created a post, ReSharper Quality [rant]
Hi. I have been using ReSharper for about 3 years now and absolutely loved it when it came out. It made development a pleasure even though I used perhaps 1/10th of its features, I liked it enough to ...

Glenn Spies created a post, RS4 feature request
Hi. There is an option to insert a blank line after a using list. Could there be options to insert blank lines after namespace and type declarations as well? eg: namespace X {__class C {____method M1 {}__...

Glenn Spies created a post, RS4 Switch/case formatting
Hi. I can't seem to find the option to stop ReSharper moving the braces ('{', '}') after a case statement to the next line. How can I fix it so that switch (a) case 1: {} remains with the braces next ...

Glenn Spies created a post, RS4 Issue copying source files
Hi. When I have a class file and use copy-paste in VS2005 and edit the new file to remove the (obviously) duplicated class name, ReSharper does not colour-code or otherwise seem to operate correctly...
https://resharper-support.jetbrains.com/hc/en-us/profiles/2132968025-Glenn-Spies
So, I was wondering if it was possible to read values off an Excel spreadsheet and use those values to create an order and add an order line. In short – but of course! 🙂

Though Smart Office provides this functionality natively through mforms::ExcelUtilities, I thought it would be interesting to use a more traditional approach to illustrate how you can use other external programs without Lawson providing direct functionality – for example, you could use Microsoft Word to record all the values for an item in MMS001 for the purposes of documentation. Though I haven't played around with it, I'm assuming that this would be a good time to use automation – but I haven't read that far through the pdf 😉

So the basic premise of what I am going to demonstrate is:

- From OIS100
- Open a spreadsheet
- Read some of the values
- Do a PressKey
- The PressKey will fire the RequestCompleted event once we move on to the next panel
- PressKey until we get to the Order Line (which will generate a new RequestCompleted event)
- Write some of the values that we extracted from the spreadsheet to the order lines

We use the RequestCompleted event because we need to wait until the first request is complete before moving on to the next one. Doing the PressKeys one after the other won't achieve the results that we want.

In the sample spreadsheet, which I saved as D:\Test.xlsx, you'll see that in the A column I have two field names. I have done this purely for the purposes of demonstrating that we can extract the field name and the values from the spreadsheet, and it made it easier to muck around using different fields without changing the scripts 🙂

The script below doesn't add a button, it just runs and creates an order for customer 165, with an order type of D01. We have a panel sequence of EFGHB1, and we have an extra confirm on panel A. We then set the Item Number and the quantity before entering.
Anyways, lots of comments in the code – hopefully it all makes sense and still works when you deploy it 😀

    import System;
    import System.Windows;
    import System.Windows.Controls;
    import MForms;
    import Excel;
    import System.Reflection;

    package MForms.JScript {
        class ExcelTest {
            // cache some of the variables
            var gcontroller;
            var gdebug;
            var gcontent;
            var giStep;
            // where we will store the Item Number
            var gITNO;
            // Quantity of the order
            var gQty;
            // has the RequestCompleted event handler been removed?
            var bRemoved;

            public function Init(element: Object, args: Object, controller: Object, debug: Object) {
                var content: Object = controller.RenderEngine.Content;
                gdebug = debug;
                gcontroller = controller;
                gcontent = content;

                // subscribe to the RequestCompleted event
                controller.add_RequestCompleted(OnRequestCompleted);
                bRemoved = false;
                try {
                    // we want to create an Excel App
                    var exExcel = new ActiveXObject("Excel.Application");
                    if (null != exExcel) {
                        // for the purposes of demonstrating, we'll show Excel -
                        // we do get better performance if it is hidden
                        exExcel.Visible = true;
                        // load the spreadsheet which we will use to submit the
                        // information (note the escaped backslash in the path)
                        var wbBook = exExcel.Workbooks.Open("D:\\Test.xlsx");
                        if (null != wbBook) {
                            // Cell B1 - this is the customer
                            var strCUNO = wbBook.Worksheets(1).Cells(1, 2).Value;
                            // Cell B2 - this is the order type
                            var strORTP = wbBook.Worksheets(1).Cells(2, 2).Value;
                            // Just for the purposes of demonstration, because we can,
                            // we are storing the field names in cells A1 and A2
                            // CUNO, Customer Number field name
                            var strFNCUNO = wbBook.Worksheets(1).Cells(1, 1).Value;
                            // ORTP, Order Type field name
                            var strFNORTP = wbBook.Worksheets(1).Cells(2, 1).Value;
                            // go out and retrieve the Customer Number TextBox control
                            var tbCUNO = ScriptUtil.FindChild(content, strFNCUNO);
                            // go out and retrieve the Order Type TextBox control
                            var tbORTP = ScriptUtil.FindChild(content, strFNORTP);
                            if (null != tbCUNO) {
                                // set the Customer Number TextBox value
                                tbCUNO.Text = strCUNO;
                            }
                            if (null != tbORTP) {
                                // set the Order Type TextBox value
                                tbORTP.Text = strORTP;
                            }
                            // we're also going to demonstrate adding an order line -
                            // only one, so we'll cache the values globally
                            // get the Item Number
                            gITNO = wbBook.Worksheets(1).Cells(4, 1).Value;
                            // get the Quantity
                            gQty = wbBook.Worksheets(1).Cells(4, 2).Value;
                            // now we are going to press enter
                            controller.PressKey("ENTER");
                            giStep = 0;
                            // close the Workbook
                            wbBook.Close();
                        }
                        // Quit Excel
                        exExcel.Quit();
                    }
                } catch (ex) {
                }
            }

            public function OnRequestCompleted(sender: Object, e: RequestEventArgs) {
                if ((e.CommandType == MNEProtocol.CommandTypeKey) &&
                    (e.CommandValue == MNEProtocol.KeyEnter)) {
                    // we already know the panel sequence, so we won't bother checking it;
                    // if it is likely to change you should look it up on the A panel
                    var strPanel = gcontroller.RenderEngine.PanelHeader;
                    if (strPanel.EndsWith("A") || strPanel.EndsWith("E") ||
                        strPanel.EndsWith("F") || strPanel.EndsWith("G") ||
                        strPanel.EndsWith("H")) {
                        // on these panels we have nothing useful to add,
                        // so we just enter through them
                        gcontroller.PressKey("ENTER");
                    } else if (strPanel.EndsWith("B1")) {
                        // panel B1 is where we enter the order lines
                        if (giStep == 0) { // make sure we only execute this once
                            // from the order line, we will enter the Item Number and
                            // Quantity, so we retrieve the TextBoxes from the panel
                            var tbITNO = ScriptUtil.FindChild(gcontent, "WBITNO");
                            var tbQty = ScriptUtil.FindChild(gcontent, "WBORQA");
                            // fill in the Item Number TextBox
                            tbITNO.Text = gITNO;
                            // fill in the Quantity TextBox
                            tbQty.Text = gQty;
                            // press Enter to add the line
                            gcontroller.PressKey("ENTER");
                            // change giStep so we don't loop through this again
                            giStep = 1;
                        }
                        // we're done with the panel sequence, so now we remove the
                        // RequestCompleted handler (removing it any earlier would
                        // break the chain of PressKeys)
                        gcontroller.remove_RequestCompleted(OnRequestCompleted);
                        bRemoved = true;
                    }
                }
            }
        }
    }

Have fun!

Scott – I'm waiting with much anticipation for our upgrade so I can start doing some much needed customisation and alterations to reflect process flow – so am following the progress of your blog. Presumably without too much effort you could perhaps add a button to extract the data rather than autorun etc. Good stuff here. Cheers, Paul

Hi Paul, yup, you can add buttons to extract data. It does get more complicated trying to extract data from different panels and different programs though – depending on the situation, WebServices may be the best bet. I must say that going from MoveX Explorer to Smart Office has opened up a whole new world of possibilities – I'm very pleased with the potential now that we are on 10.1.
Cheers, Scott

Thank you very much for this – something I was hoping to be able to do with MO creation in PMS001, and with your code I can save hours of work by automating the order creation part. Cheers.
https://potatoit.kiwi/2011/01/10/excel-populating-values-in-to-multi-panel-sequences/
JavaScript Comes of Age
By Darren Jones

JavaScript will be 20 years old next year (counting from when it first debuted in the Netscape browser). It's a language with a chequered history that carries a lot of baggage from its early years, but as it leaves its teenage years behind it, I think it's a language that has now finally grown up.

JavaScript revolutionized the web by allowing scripts to run in a browser. But after its initial popularity it soon started to get a bad reputation and was often associated with poorly written, cut-and-pasted code that was used to create annoying pop-ups and cheesy 'effects'. The phrase DHTML became a dirty word in web development. JavaScript also had some annoying shortcomings as a programming language.

But, despite all of its problems, JavaScript has something that other languages don't have – reach and ubiquity. It only requires a browser to run, which means that anybody with a computer or smartphone is capable of running a JavaScript application. JavaScript has achieved the dream that Java had of being available on all platforms by using the browser as its virtual machine. And it can now run without a browser, thanks to the development of engines such as Node.js.

JavaScript also has a low barrier to entry when it comes to development, since all you need to write a program is a simple text editor. It is the most popular language on GitHub by a number of measures. This means that there is a lot of JavaScript code out there and many problems have already been solved, often in many different ways. It also means that help is often easy to come by and libraries of code are very well tested.

After an awkward first decade, JavaScript spent its teenage years growing up. The revolution started with the advent of Ajax, when people started to sit up and take JavaScript seriously. jQuery then got people using JavaScript to build some serious applications, and Node has taken it all to a whole new level.
People have started to recognize that JavaScript is a powerful and flexible language with some cool features such as:

- Asynchronous event-driven programming
- Functions as objects
- Closures
- Prototypal inheritance
- Object literals and JSON

JavaScript has also proven to be flexible enough to allow solutions to be written that overcome its main shortcomings. A number of frameworks and libraries have been written to address these issues and make JavaScript a nicer language to program in.

Modern web browsers have also had a big effect on the language by virtually eradicating the inconsistencies in implementation that plagued it in the past (who remembers having to write multiple versions of code just to get a simple event to work, for example?). And speed is no longer an issue, as the various engines used in modern browsers are already blazingly fast, and they're only getting faster.

I strongly believe that JavaScript will be the most important language to learn over the next few years. The way websites are developed has evolved, and they are now likely to be single-page web applications that rely heavily on JavaScript to do the heavy lifting on the client side, often using modern front-end frameworks such as Backbone or Angular.js.

Isomorphic JavaScript is the process of using JavaScript to program the server side of a web application, and it is gaining in popularity because of the advantages of using the same language for the whole application. The data that is transported from databases is often stored in JSON format.

It's possible to build an application for iOS, Android and Firefox OS using a combination of HTML, CSS and JavaScript. The Internet of Things is a broad term used to describe anything from household gadgets to small robots, most of which are using JavaScript to interact with their APIs.
In short, JavaScript is becoming the language of choice, not just for the front and back end of web development, but also for interacting with a huge number of modern devices.

SitePoint has recently published my book "JavaScript: Novice to Ninja," which takes you from the very beginning and works up to the more advanced topics in JavaScript. It begins by introducing the basics of programming, covering topics such as variables, conditional logic, loops, arrays, functions and objects in the earlier chapters. It then moves on to using JavaScript to interact with a browser environment, covering events, the DOM, animation and forms. Then in the later part of the book, more advanced concepts such as testing and debugging, object-oriented programming and functional programming are covered, showing that JavaScript is capable of handling these. We also take a look at recent developments such as HTML5 APIs, Ajax, frameworks and task runners, such as Grunt. There's also a practical project that involves building an interactive quiz application that develops in each chapter.

If you've always wanted to learn how to program then now is the perfect time to get started, and JavaScript is the perfect language to learn. As it moves into its 20s, JavaScript has finally grown up and is starting to go places!

I've been checking out the new Java SDK 8 with the Nashorn JS engine and JavaFX. Very nice. Would love to see a good article on using it and bundling apps using it.

I think JS is going to become even more popular after ES6 comes in with its improvements. I read an interesting article on the tidal wave of frameworks though. Suggested that a more atomic style of using specific little libraries for specific things might be the future. Interesting stuff.

ES7 is quite a way through the development process now, so JavaScript should continue to evolve and become more and more powerful.
It's a real pity that most college classes teach how to write JavaScript for Netscape 2-4 rather than any of the more modern versions.

Wow – that's equivalent to learning how to ride a horse and cart instead of getting a driver's license.

"A number frameworks and libraries" – should be "a number of"...

Definitely. The other problem is that it will still be some time before we see this JS being used everywhere, what with slow browser upgrade adoption etc.

I like JavaScript (or ECMAScript) very much. Thanks to that language's object notation, we have a splendid lightweight data format named JSON.

There's a bit about this in one of the later chapters in the book.

I definitely think the future will be all about small modules that do one thing very well, rather than huge monolithic frameworks that try to do everything. This will definitely be easier when the module pattern from ES6 gets adopted. You could even put lots of different modules together under an umbrella namespace using something like Ender or a build process, effectively creating your own custom bespoke framework.

This is such a shame. Most modern browsers have good support for the latest JavaScript and even quite a bit of ES6. You can also use various transpilers to use most of ES6 today, so it would be good to see courses teaching the most up-to-date methods so that people could see what JS is capable of!

Thanks – will try to get this changed!

I actually think that most modern browsers are updating quite quickly, so this isn't as much of a problem as it used to be.

I love JSON! It hits the sweet spot between being a human- and machine-readable data format. And the fact that it can be encoded as a simple string makes it amazingly portable!
Talking about Object Oriented programming vs Functional programming in JS is simply not understanding JS. To be honest, I'm sick and tired of all those 'write OO in JS'-tutorials. They make for bad code, var self=this; crap and people panicking when this suddenly is not this but that, or window. As long as people write articles about JS Objects and how to write 'OO' code in JS, it will stay in puberty. Also, stop calling it asynchronous. It simply isn't. JavaScript is a blocking synchronous language. The only asynchronous thing in JavaScript are certain browser api's. for instance the XHR object, a good answer can be found here: You also state 'Closure' as being a cool feature of the language. Not sure you understand what a Closure is, nearly all languages have them, not so special imo. And your last statement about learning JS as a first programming language. Couldn't agree less. Javascript is a very hard language to grasp. Just try and explain what 'this' is: or perfectionkills(dot)com/know-thy-reference/ . Add the hard to grasp prototyping + new key word, and you have yourself a very powerfull but hard to learn language. A small snippet to end: typeof new String('foo') != typeof 'foo' && new String('foo') == 'foo'. I rest my case, hard to learn Hi pinoniq, Thanks for your feedback - it's always good to see the other side of things, and the links you sent made interesting reading. I think that JS struggles with both an OO and functional approach, but the fact is that you can do both in JS, if you want to. I'd be interested to hear why you think the OO approach is so bad? Yes I agree that the language itself isn't asynchronous, but the way callbacks (and promises) are used makes the language behave as if it was asynchronous. I still stand by my assertion that JS is a good language to learn for a first timer. I think the advantage is that people can learn JS using just their browser and it also gives them access to lots of other places to practice and see examples. 
Yes, it has more quirks than most languages, but don't all languages have some difficult ideas to get your head around? I think you can get the basics of programming without running into too many JS gotchas.

Cheers, DAZ

I love JavaScript – from server side to client side, it's gonna rock!

The fact that JavaScript is being used for programming hardware now is an indication of its maturity.

On a lighter note, emulation of older computers for games is far better now. Here are some examples: No more having to faff about with software such as DOSBox, just play in the browser. After years of hating the language, I love it for the above sites alone! I've just been reliving some childhood memories playing Chuckie Egg!

Yes, I agree that JavaScript's quirks and annoyances are outweighed by its ubiquity in the browser nowadays. And more and more applications are showing what it is capable of!

Chuckie Egg!!!! That is a blast from the past! How many hours did I spend on that game? I spent many childhood hours playing this – the shock when the giant bird started chasing you!
https://www.sitepoint.com/javascript-comes-age-2/
Reader comments:

Anonymous (September 23, 2011): I am forever indebted to you for this information.

arun (December 3, 2011): I need one simple application in core Java, please help me.

sathish (February 19, 2012): What is the use of a thread in a Java program? Explain any program using threads.

Charulatha (June 25, 2012): Basic Java method.
http://www.roseindia.net/discussion/18667-Java---Continue-statement-in-Java.html