In this article I will try to explain the basic concepts to a novice MS CRM developer who wants to customize Microsoft CRM 3.0. The article contains external references and code samples that are included with the permission of the copyright holders. If you quote from this article, please provide the original link as the reference in your material. The outline is as follows:
o What is a callout? Where are callouts used?
o Which events trigger callouts?
o A simple case example: Validator For Turkish Citizen Number (=TCKN)
Having strong C# programming knowledge is an asset for benefiting from this article. First I will explain the basic concepts of callout programming for MS CRM 3.0.
Callouts are simply class libraries that are compiled against .NET Framework (=fw) 1.1.
The first question in our minds is: can we write callouts using fw 2.0 or fw 3.0? Unfortunately, no! MS CRM 3.0 is written with fw 1.1, and callout assemblies are loaded into the same application pool (=app pool) as CRM. Only one framework version can run in an app pool; the CRM app pool uses fw 1.1, so you cannot load an assembly that requires fw 2.0 or above into it. Therefore we have to compile our callout code against fw 1.1.
There are two ways to generate fw1.1 code.
The first is the conventional way: we use Visual Studio .NET (=VSNET) 2003, write our code in the IDE, and hit F5 to generate the compiled code.
The second is a bit complicated: you may utilize VSNET 2005 and fw 1.1 to generate fw 1.1 assemblies. However, this technique requires an MSBee setup and a proper config file. Then you only need to run an msbuild command in the VSNET command prompt to generate the fw 1.1 assembly.
Since this is a bit more complicated, I will explain it in another article.
Callouts are used to extend MS CRM 3.0 features. In integration projects, callouts are mostly used to provide data integrity and consistency. For instance, you don't want to save two people with the same employee number; however, by default MS CRM allows two records with the same employee number. At exactly this point you may write a callout that checks the existence of the employee number: if that number exists, it aborts the save process; otherwise it lets the process continue the usual way.
The CrmCalloutBase class resides in the microsoft.crm.platform.callout.base.dll file, which is located in the system folder of the Microsoft CRM 3.0 installation directory. This is the main starting point for the callout developer.
All callouts must be derived from the CrmCalloutBase class; thus you have to reference the microsoft.crm.platform.callout.base.dll file in all of your callout projects. This DLL file may be found on the MS CRM 3.0 installation CD at the following path: Microsoft CRM 3.0 Professional Edition Disk\Bin\Assembly\Microsoft.Crm.Platform.Callout.Base.dll
Here we see the events that fire callouts. Custom checks and additional functionality in MS CRM are achieved using these events.
PreAssign / PostAssign
Specifies a method called before / after an assign operation.
PreDelete / PostDelete
Specifies a method called before / after a delete operation.
PreUpdate / PostUpdate
Specifies a method called before / after an update operation.
PreCreate / PostCreate
Specifies a method called before / after a create operation.
PreMerge / PostMerge
Specifies a method called before / after a merge operation.
PreSetState / PostSetState
Specifies a method called before / after a set state operation.
PreSend
Specifies a method called before a send e-mail operation.
PostDeliver
Specifies a method called after a deliver e-mail operation.
Reference:
The table describes the types of operations that can be handled using callouts. We may fully customize Create/Update/Delete/Assign operations by writing pre and post callout methods. Although the names describe the usage of the methods, some of them need explanation.
The Assign event occurs when an entity's owner is changed.
The SetState event occurs when an entity's state is changed to active or inactive. See Appendix A1 for sample code for this event.
The Merge event occurs when two records are merged into a single record.
By deriving from the CrmCalloutBase class and implementing one of the above methods, we write our callout code; then we compile it into a DLL file. Now it's time to deploy the resulting assembly to the MS CRM server.
The .NET assembly (the class library) is put in the bin\assembly subfolder of the MS CRM server's installation directory. I.e., if you installed the server to C:\Program Files\Microsoft CRM, then the callout assembly must be put in C:\Program Files\Microsoft CRM\Server\bin\assembly.
Assuming the CRM installation directory is C:\Program Files\Microsoft CRM, all the callouts must reside at C:\Program Files\Microsoft CRM\Server\bin\assembly\
After putting the callout DLL into the assembly directory, the callout must be declared to the CRM server; otherwise the server cannot know when to call the callout assembly. Notifying the server about callouts is done using the callout.config.xml file found in the CRM assembly directory.
Here we would see a callout assembly notification to CRM (the callout.config.xml format is shown later in this article). The callout class library's namespace is called DontPermitDuplicates. Just before a Lead entity is created in CRM, this callout is called.
CRM instantiates an object of the DontPermitDuplicates.NoDblNames class, then executes the PreCreate method of that class. That method may perform checks or do the required custom business operations; then the object may either abort the creation by raising an error message, or let the operation continue so the entity is created in CRM.
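The abort-or-continue decision made by a pre-event callout can be sketched in plain Python. Note that every name below is invented for illustration; a real callout is a C# class derived from CrmCalloutBase, and the duplicate lookup would query the CRM database:

```python
# Hypothetical sketch of a pre-create duplicate check, mirroring the
# PreCreate abort/continue logic described above. Class and helper names
# are invented; they are not part of the CRM SDK.

ABORT, CONTINUE = "abort", "continue"

class NoDblNamesSketch:
    def __init__(self, existing_numbers):
        # stands in for a lookup against the CRM database
        self.existing_numbers = set(existing_numbers)

    def pre_create(self, entity):
        """Abort the save if the employee number already exists."""
        if entity.get("employeenumber") in self.existing_numbers:
            return ABORT, "Duplicate employee number"
        return CONTINUE, None

callout = NoDblNamesSketch(["E-100", "E-101"])
print(callout.pre_create({"employeenumber": "E-100"}))  # duplicate -> abort
print(callout.pre_create({"employeenumber": "E-999"}))  # new -> continue
```

The real PreCreate method signals the same two outcomes through its PreCalloutReturnValue result and an error message.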
1. Create a Class Library project in VSNET 2003.
2. Add a reference to the microsoft.crm.platform.callout.base.dll file.
3. Derive a class from CrmCalloutBase.
4. Type PreCreate, then press CTRL+SPACE to see the method signatures; find the PreCreate method and press Enter.
Use the following sample code to achieve your first callout code.
public override PreCalloutReturnValue PreCreate(
The above code contains a method called IsTcknValid; this method returns a boolean value indicating whether its parameter is a valid TCKN (=Republic of Turkey Citizen Number) value.
Below we see the TCKN checker code block. This is very useful for preserving data validity. (The code presented is a port of the SQL function that is located at the following URL: )
private bool IsTcknValid(string TcknToValidate)
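The same validation logic can be sketched in Python. The checksum rules below are the commonly published ones for the TCKN; treat them as an assumption rather than an official specification:

```python
def is_tckn_valid(tckn):
    """Check a candidate Turkish citizen number (TCKN).

    Assumed rules (commonly published; verify against official sources):
    - exactly 11 digits, and the first digit is not 0
    - digit 10 == ((d1+d3+d5+d7+d9)*7 - (d2+d4+d6+d8)) mod 10
    - digit 11 == (sum of the first 10 digits) mod 10
    """
    if len(tckn) != 11 or not tckn.isdigit() or tckn[0] == "0":
        return False
    d = [int(ch) for ch in tckn]
    odd_sum = d[0] + d[2] + d[4] + d[6] + d[8]
    even_sum = d[1] + d[3] + d[5] + d[7]
    if d[9] != (odd_sum * 7 - even_sum) % 10:
        return False
    return d[10] == sum(d[:10]) % 10

print(is_tckn_valid("12345678901"))  # the article's invalid example -> False
```

Under these rules, 12345678901 fails the first check digit (it would have to be 5), which is why the test at the end of the article aborts the save.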
After writing the above code in a Class Library project, we have to compile it, and then the resulting DLL file must be copied to MS CRM's callout directory, that is: "C:\Program Files\Microsoft CRM 3.0\Server\bin\assembly". This directory has a callout.config file which lists the callout DLLs executed for every entity that needs to be checked.
We will add the following lines to the callout.config.xml file in order to test our callout.
<?xml version="1.0" encoding="utf-8" ?>
<callout.config>
  <callout entity="contact" event="PreCreate">
    <subscription assembly="DontPermitDuplicates.dll" class="DontPermitDuplicates.NoDblNames" />
  </callout>
</callout.config>
ATTENTION! Don't forget to add the XML extension to the callout.config file, i.e.: callout.config.xml
Now we have to test it using a valid and an invalid TCKN value.
First we have to create an attribute that will hold the citizen number; the attribute will be called "new_tckn". After creation, the field will be placed on the contact form.
1) Create the attribute for the contact entity
2) Place the field on the contact form, then publish the changes.
3) Edit the following file; if it does not exist, create it using Notepad.
"C:\Program Files\Microsoft CRM\Server\bin\assembly\callout.config.xml"
4) Restart IIS via Start > Run > "iisreset".
Now try to enter the value 12345678901 in the tckn field of a new contact record. The operation will be aborted by the callout, since the value is not a valid TCKN number.
In this article the basics of callout programming were shown. As an example, a simple Turkish Citizen Number validator callout was developed and tested. Since callout assemblies are loaded and run by the MS CRM application using reflection, we have to generate .NET Framework 1.1 assemblies for our callouts. That is because MS CRM 3.0 is implemented with the .NET 1.1 Framework, and as a restriction, .NET 1.1 processes cannot execute assemblies of any higher framework version in their application domain.
In our example there were no connection strings in the assembly. Since we don't want to keep our connection string hard coded, we need to store it in a config file; that config file must be the web.config of the MS CRM application.
For the curious reader, it is worth mentioning that a callout's data access layer may be implemented using the Enterprise Library's Data Access Application Block. Until the next article, keep coding!
Two well-written troubleshooting articles may be found at the following locations:
"Is your Microsoft CRM callout not working?",
"The functionality that you expect from a callout .dll file does not work as expected in Microsoft Dynamics CRM 3.0"
Microsoft's sample callout from CRM SDK version 3.0.7
Source:
This page provides an overview of the Enterprise Library Data Access Application Block. This is reusable and extensible source code-based guidance that simplifies development of common data access functionality in .NET-based applications.
Downloads: Latest Release (for .NET Framework 1.1): Enterprise Library, June 2005
See also: Enterprise Library for .NET Framework 2.0, January 2006
License: End User Licensing Agreement (EULA)
Hands-On Labs: Enterprise Library - Hands on Labs
Webcast: Enterprise Library Data Access Application Block (Level 300)
Community: Enterprise Library Community
System Requirements for Enterprise Library for .NET Framework 1.1:
* Supported OS: Windows 2000; Windows Server 2003; Windows XP Professional Edition
* Microsoft .NET Framework version 1.1
* Microsoft Visual Studio .NET 2003 development system (Enterprise Architect, Enterprise Developer, or Professional edition)
* Some blocks and samples require the use of Microsoft SQL Server or other database products.
* NUnit 2.1.4 is required if you want to execute unit tests
Source: http://www.codeproject.com/Articles/18949/Writing-Microsoft-CRM-3-0-Callouts-Sample-Code-Tur
Tiny Tkinter Calculator (Python)
The program also has a guard against the bad guys that like to abuse the underlying eval() function to wipe out files.
I wrote the code in detail, so you can hopefully figure it out. You can fancy up the calculator quite a bit, just experiment with the code.
Last edited : Mar 31st, 2007.
# a tiny/simple Tkinter calculator (improved v.1.1)
# if you enter a number with a leading zero it will be an octal number!
# eg. 012 would be a decimal 10 (0.12 will not be affected)
# used a more modern import to give Tkinter items a namespace
# tested with Python24 vegaseat 08dec2006

"""
calculator has a layout like this ...
< display >
7 8 9 * C
4 5 6 / M->
1 2 3 - ->M
0 . = + neg
"""

import Tkinter as tk

def click(key):
    global memory
    if key == '=':
        # avoid division by integer
        if '/' in entry.get() and '.' not in entry.get():
            entry.insert(tk.END, ".0")
        # guard against the bad guys abusing eval()
        str1 = "-+0123456789."
        if entry.get()[0] not in str1:
            entry.insert(tk.END, "first char not in " + str1)
        # here comes the calculation part
        try:
            result = eval(entry.get())
            entry.insert(tk.END, " = " + str(result))
        except:
            entry.insert(tk.END, "--> Error!")
    elif key == 'C':
        entry.delete(0, tk.END)  # clear entry
    elif key == '->M':
        memory = entry.get()
        # extract the result
        if '=' in memory:
            ix = memory.find('=')
            memory = memory[ix+2:]
        root.title('M=' + memory)
    elif key == 'M->':
        entry.insert(tk.END, memory)
    elif key == 'neg':
        if '=' in entry.get():
            entry.delete(0, tk.END)
        try:
            if entry.get()[0] == '-':
                entry.delete(0)
            else:
                entry.insert(0, '-')
        except IndexError:
            pass
    else:
        # previous calculation has been done, clear entry
        if '=' in entry.get():
            entry.delete(0, tk.END)
        entry.insert(tk.END, key)

root = tk.Tk()
root.title("Simple Calculator")

btn_list = [
    '7', '8', '9', '*', 'C',
    '4', '5', '6', '/', 'M->',
    '1', '2', '3', '-', '->M',
    '0', '.', '=', '+', 'neg' ]

# create all buttons with a loop
r = 1
c = 0
for b in btn_list:
    rel = 'ridge'
    cmd = lambda x=b: click(x)
    tk.Button(root, text=b, width=5, relief=rel, command=cmd).grid(row=r, column=c)
    c += 1
    if c > 4:
        c = 0
        r += 1

# use Entry widget for an editable display
entry = tk.Entry(root, width=33, bg="yellow")
entry.grid(row=0, column=0, columnspan=5)

root.mainloop()

Source: http://www.daniweb.com/code/snippet610.html
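The eval() guard in the snippet only checks the first character of the entry. A stricter variant of the same idea (a hypothetical sketch, not part of the snippet) rejects any character outside the calculator's alphabet before evaluating:

```python
# Hedged sketch of a stricter eval() guard: scan the whole expression,
# not just the first character, before handing it to eval().
ALLOWED = set("0123456789.+-*/ ")

def safe_calc(expr):
    """Evaluate a calculator expression only if every character is allowed."""
    if not expr or set(expr) - ALLOWED:
        raise ValueError("illegal character in expression")
    return eval(expr)

print(safe_calc("3*4 + 2"))   # 14
```

This closes the hole where a hostile string starting with a digit could still smuggle in names or calls.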
This series has since been revised and combined.

Regular expressions (regexes)

Think of an identifier in C++ (such as this, temp_var2, assert etc.). How would you describe what qualifies as an identifier? Let's see if we remember... an identifier consists of letters, digits and the underscore character (_), and must start with a letter (it's possible to start with an underscore, but better not to, as these identifiers are reserved by the language). A regular expression is a notation that allows us to define such things precisely:

letter (letter | digit | underscore)*

In regular expressions, the vertical bar | means "or", the parentheses are used to group sub expressions (just like in math) and the asterisk (*) means "zero or more instances of the previous". So the regular expression above defines the set of all C++ identifiers (a letter followed by zero or more letters, digits or underscores). Let's see some more examples:
- abb*a - all words starting with a, ending with a, and having at least one b in-between. For example: "aba", "abba", "abbbba" are words that fit, but "aa" or "abab" are not.
- 0|1|2|3|4|5|6|7|8|9 - all digits. For example: "1" is a digit, "fx" is not.
- x(y|z)*(a|b|c) - all words starting with x, then zero or more y or z, and end with a, b or c. For example: "xa", "xya", "xzc", "xyzzzyzyyyzb".
- xyz - only the word "xyz".

There is another symbol we'll use: eps (usually denoted with the Greek letter Epsilon). eps means "nothing". So, for example, the regular expression for "either xy or xyz" is: xy(z|eps). People familiar with regexes know that there are more complicated forms than * and |. However, anything can be built from *, | and eps. For instance, x? (zero or one instance of x) is a shorthand for (x|eps). x+ (one or more instances of x) is a shorthand for xx*. Note also the interesting fact that * can be represented with +, namely: x* is equivalent to (x+)|eps. [Perl programmers and those familiar with Perl syntax (Python programmers, that would include you) will recognize eps as a more elegant alternative to the numerical notation {m, n} where both m and n are zero. In this notation, x* is equivalent to x{0,} (unbound upper limit), x+ is x{1,} and all other cases can be built from these two base cases. - Ed.]

Recognizing strings with regexes

Usually a regex is implemented to solve some recognition problem. For example, suppose your application asks a question and the user should answer Yes or No. The legal input, expressed as a regex, is (yes)|(no). Pretty simple and not too exciting - but things can get much more interesting. Suppose we want to recognize the following regex: (a|b)*abb, namely all words consisting of a's and b's and ending with abb. For example: "ababb", "aaabbbaaabbbabb". Say you'd like to write the code that accepts such words and rejects others. The following function does the job:
bool recognize(string str)
{
    string::size_type len = str.length();

    // can't be shorter than 3 chars
    if (len < 3)
        return false;

    // last 3 chars must be "abb"
    if (str.substr(len - 3, 3) != "abb")
        return false;

    // must contain no chars other than "a" and "b"
    if (str.find_first_not_of("ab") != string::npos)
        return false;

    return true;
}

It's pretty clean and robust - it will recognize (a|b)*abb and reject anything else. However, it is clear that the techniques employed in the code are very "personal" to the regex at hand. If we slightly change the regex to (a|b)*abb(a|b)*, for instance (all sequences of a's and b's that have abb somewhere in them), it would change the algorithm completely. (We'd then probably want to go over the string, a char at a time, and record the appearance of abb. If the string ends and abb wasn't found, it's a rejection, etc.) It seems that for each regex, we should think of some algorithm to handle it, and this algorithm can be completely different from algorithms for other regexes. So what is the solution? Is there any standardized way to handle regexes? Can we even dream of a general algorithm that can produce a recognizer function given a regex? We sure can (otherwise, what would this article be about :-) )! Read on.

FSMs to the rescue

It happens so that Finite State Machines are a very useful tool for regular expressions. More specifically, a regex (any regex!) can be represented as an FSM. To show how, however, we must present two additional definitions (which actually are very logical, assuming we use an FSM for a regex).
- A start state is the state in which an FSM starts.
- An accepting state is a final state in which the FSM returns with some answer. It is best presented with an example:
The start state 0 is denoted with a "Start" arrow. 1 is the accepting state (it is denoted with the bold border). Now, try to figure out what regex this FSM represents. It actually represents xy* - x and then 0 or more of y. Do you see how? Note that x leads the FSM to state 1, which is the accepting state. Adding y keeps the FSM in the accepting state. If an x appears when in state 1, the FSM moves to state 2, which is non-accepting and "stuck", since any input keeps the FSM in state 2 (because xy* rejects strings where an x comes after the ys). But what happens with other letters? For simplicity we'll now assume that for this FSM our language consists solely of x and y. If our input set were larger (say the whole lowercase alphabet), we could define that each transition not shown (for instance, on input "z" in state 0) leads us into some "unaccepting" state.
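This xy* machine can be transcribed directly into a table of transitions and a tiny simulation loop. Here is a Python sketch (the transitions are copied from the figure and the move() examples given in the article; the function names are mine):

```python
# Transition table for the xy* machine: state 0 is the start state,
# state 1 is the only accepting state, state 2 is the "stuck" state.
MOVE = {
    (0, 'x'): 1, (0, 'y'): 0,
    (1, 'x'): 2, (1, 'y'): 1,
    (2, 'x'): 2, (2, 'y'): 2,
}
ACCEPTING = {1}

def simulate(word, start=0):
    """Run the FSM over `word`, one character at a time."""
    state = start
    for ch in word:
        state = MOVE[(state, ch)]
    return "ACCEPT" if state in ACCEPTING else "REJECT"

print(simulate("xyy"))  # ACCEPT
print(simulate("xyx"))  # REJECT: an x after a y leads to the stuck state
```

The loop body is exactly the state = move(state, input) step that the general algorithm below formalizes.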
I will now present the general algorithm for figuring out whether a given FSM recognizes a given word. It's called "FSM Simulation". But first let's define an auxiliary function: move(state, input) returns the state resulting from getting input in state state. For the sample FSM above, move(0, X) is 1, move(0, Y) is 0, etc. So, the algorithm goes as follows:

state = start_state
input = get_next_input
while not end of input do
    state = move(state, input)
    input = get_next_input
end
if state is a final state
    return "ACCEPT"
else
    return "REJECT"

The algorithm is presented in very general terms and should be well understood. Let's "run" it on the simple xy* FSM, with the input "xyy". We start from the start state - 0. Get next input: x; end of input? not yet; move(0, x) moves us to state 1; input now becomes y; not yet end of input; move(1, y) moves us to state 1; exactly the same with the second y; now it's end of input; state 1 is a final state, so "ACCEPT". Piece of cake, isn't it? Well, it really is! It's a straightforward algorithm and a very easy one for the computer to execute. Let's go back to the regex we started with - (a|b)*abb. Here is the FSM that represents (recognizes) it:
Although it is much more complicated than the previous FSM, it is still simple to comprehend. This is the nature of FSMs - looking at them you can easily characterize the states they can be in, what transitions occur, and when. Again, note that for simplicity our alphabet consists solely of "a" and "b". Paper and pencil are all you need to "run" the FSM Simulation algorithm on some simple string. I encourage you to do it, to understand better how this FSM relates to the (a|b)*abb regex. Just for example, take a look at the final state - 3. How can we reach it? From 2 with the input b. How can we reach 2? From 1 with input b. How can we reach 1? Actually from any state with input a. So, "abb" leads us to accept a string, and it indeed fits the regex.

Coding it the FSM way
#include <iostream>
#include <string>
#include <cassert>

using namespace std;

typedef int fsm_state;
typedef char fsm_input;

bool is_final_state(fsm_state state)
{
    return (state == 3) ? true : false;
}

fsm_state get_start_state(void)
{
    return 0;
}

fsm_state move(fsm_state state, fsm_input input)
{
    // our alphabet includes only 'a' and 'b'
    if (input != 'a' && input != 'b')
        assert(0);

    switch (state)
    {
        case 0:
            if (input == 'a')
                return 1;
            else if (input == 'b')
                return 0;
            break;
        case 1:
            if (input == 'a')
                return 1;
            else if (input == 'b')
                return 2;
            break;
        case 2:
            if (input == 'a')
                return 1;
            else if (input == 'b')
                return 3;
            break;
        case 3:
            if (input == 'a')
                return 1;
            else if (input == 'b')
                return 0;
            break;
        default:
            assert(0);
    }

    return state; // never reached
}

bool recognize(string str)
{
    if (str == "")
        return false;

    fsm_state state = get_start_state();

    string::const_iterator i = str.begin();
    fsm_input input = *i;

    while (i != str.end())
    {
        state = move(state, *i);
        ++i;
    }

    if (is_final_state(state))
        return true;
    else
        return false;
}

// simple driver for testing
int main(int argc, char** argv)
{
    recognize(argv[1]) ? cout << 1 : cout << 0;
    return 0;
}

Take a good look at the recognize function. You should immediately see how closely it follows the FSM Simulation algorithm. The FSM is initialized to the start state, and the first input is read. Then, in a loop, the machine moves to its next state and fetches the next input, etc., until the input string ends. Eventually, we check whether we reached a final state. Note that this recognize function will be the same for any regex. The only functions that change are the trivial is_final_state and get_start_state, and the more complicated transition function move. But move is very structural - it closely follows the graphical description of the FSM. As you'll see!

(C) Copyright by Eli Bendersky, 2003. All rights reserved.
Also see Moose::Manual::Delta for more details of, and workarounds for, noteworthy changes.

2.0205 Tue, Sep 06, 2011

  [NEW FEATURES]

  * The Array and Hash native traits now provide a "shallow_clone" method, which will return a reference to a new container with the same contents as the attribute's reference.

  [ENHANCEMENTS]

  * Specifying an invalid value in a hashref 'handles' value now throws a sensible error (Dave Rolsky)

  [BUG FIXES]

  * When specifying an attribute trait, passing options for the trait besides -alias or -excludes caused a warning. However, passing other options is totally valid when using MooseX::Role::Parameterized. (sartak)

  * Allow regexp objects in duck_type constraints (to bring this in line with the Object constraint)

2.0204 Thu, Aug 25, 2011

  [BUG FIXES]

  * Validating duck_type type constraint turned out to work only by accident, and only when not running under the debugger. This has been fixed. (Florian Ragwitz)

  [OTHER]

  * Loosen the dependency on ExtUtils::ParseXS.

2.0203 Tue, Aug 23, 2011

  [BUG FIXES]

  * is_class_loaded now properly detects packages which have a version object in their $VERSION.

  * Fix XS compilation under blead.

2.0202 Tue, Jul 26, 2011

  [BUG FIXES]

  * Be more consistent about how type constraint messages are handled.

2.0201 Fri, Jul 22, 2011

Mon, Jul 18, 2011

  [OTHER]

  * No changes from 2.0105 (other than a few minor documentation tweaks).

2.0105-TRIAL Mon, Jun 27, 2011

  [ENHANCEMENTS]

  * Moose::Util::does_role now respects overridden ->does methods. (doy)

2.0104-TRIAL Mon, Jun 20, 2011

  [OTHER]

  * Include changes from 2.0010.

2.0103-TRIAL Mon, Jun 20,

-TRIAL Sat, Jun 18, 2011

  [ENHANCEMENTS]

  * The native Array trait now has a 'first_index' method, which works just like the version in List::MoreUtils. (Karen Etheridge)

  * Clean up some internal code to help out extensions.

  [OTHER]

  * Include changes from Moose 2.0008.

2.0101-TRIAL Mon, Jun 06, 2011

  [OTHER]

  * Various packaging issues.
2.0100-TRIAL Mon, Jun 06,

Mon, Jun 20, 2011

  [BUG FIXES]

  * Fix regression in 2.0009 and 2.0103 when applying roles during init_meta in an exporter that also re-exports Moose or Moose::Role. (t0m, ilmari)

2.0009 Sun, Jun 19, 2011

  [BUG FIXES]

  *).

  [OTHER]

  * Better error message if Moose->init_meta is called with a 'metaclass' option when that metaclass hasn't been loaded. (jasonmay)

2.0008 Thu, Jun 16, 2011

  [BUG FIXES]

  * The 'accessor' native delegation for hashrefs now allows setting the value to undef. (sugoik, doy)

  [ENHANCEMENTS]

  * Various generated methods have more useful context information. (doy)

2.0007 Sun, May 15, 2011

  [BUG FIXES]

  * Make sure weak attributes remain weak when cloning. (doy, rafl)

2.0006 Mon, May 09, 2011

  [BUG FIXES]

  * Revert the List::MoreUtils version bump, as it breaks backwards compatibility. The dependency will be bumped with Moose 2.0200.

2.0005 Mon, May 09, 2011

  [BUG FIXES]

  * Only sort the alias keys when determining caching.

2.0004 Mon, May 09, 2011

  [BUG FIXES]

  * Bump the List::MoreUtils dep to avoid buggy behavior in old versions.

  * Sort the list of roles and the alias and excludes parameters when determining caching, since their order doesn't matter.

2.0003 Mon, May 09, 2011

  [BUG FIXES]

  *)

2.0002 Thu, Apr 28, 2011

  [ENHANCEMENTS]

  *.

  [BUG FIXES]

  * Stop hiding warnings produced by throwing errors in DEMOLISH methods.

  * The 'reset' native delegation for Counter attributes will now also respect builders (previously, it only respected defaults).

2.0001 Fri, Apr 22, 2011

Mon, Apr 11, 2011

  [API CHANGES]

  * The RegexpRef type constraint now accepts regular expressions blessed into other classes, such as those found in pluggable regexp engines. Additionally the 'Object' constraint no longer rejects objects implemented as a blessed regular expression. (David Leadbeater)

  [OTHER]

  * Moose::Manual::Support now explicitly states when major releases are allowed to happen (January, April, July, or October).
1.9906-TRIAL Mon, Apr 04, 2011

  [OTHER]

  * Update conflicts list.

  * Minor pod updates.

1.9905-TRIAL Mon, Mar 28, 2011 )

1.9904-TRIAL Fri, Mar 04, 2011

  [BUG FIXES]

  * Reinitializing anonymous roles used to accidentally clear out the role's stash in some circumstances. This is now fixed. (doy)

  * The Int type constraint now rejects integers with trailing newlines. (Matthew Horsfall)

1.9903-TRIAL Mon, Feb 28,)

1.9902-TRIAL Mon, Jan 03, 2011

  [OTHER]

  * Fix generation of CCFLAGS.

  * Add a bit more Dist::Zilla functionality.

1.9901-TRIAL Mon, Jan 03, 2011

  [OTHER]

  * Fix some indexing issues.

  * Fix a few issues with the conflict checking stuff.

1.9900-TRIAL Sat, Jan 01, 2011 )

1.25 Fri, Apr 1, 2011

  [BUG FIXES]

  * Reinitializing anonymous roles used to accidentally clear out the role's stash in some circumstances. This is now fixed. (doy) (backported from 1.9904)

1.24 Tue, Feb 24,) (backported from 1.9903)

1.23 Sun, Feb 13, 2011

  [PACKAGING FIX]

  * The 1.22 release had a bad MANIFEST. This has been fixed.

1.22 Sun, Feb 13, 2011

Wed, Nov 24, 2010

  [ENHANCEMENTS]

  * The Support manual has been updated to reflect our new major/minor version policy. (Chris Prather)

  * The Contributing manual has been updated to reflect workflow changes based on this new support policy. (doy)

  [BUG FIXES]

  * The role attribute metaclass did not inherit from Class::MOP::Object, which could cause errors when trying to resolve metaclass compatibility issues. Reported by Daniel Ruoso. (doy)

  * The lazy_build feature was accidentally removed from all the docs. Now it's listed in Moose.pm again. (Chris Prather)

1.20 Fri, Nov 19, 2010

  [BUG FIXES]

  * When using native delegations, if an array or hash ref member failed a type constraint check, Moose ended up erroring out with "Can't call method "get_message" on unblessed reference" instead of generating a useful error based on the failed type constraint. Reported by t0m. RT #63113.
    (Dave Rolsky)

1.19 Tue, Nov 2, 2010

  [BUG FIXES]

  * There was still one place in the code trying to load Test::Exception instead of Test::Fatal. (Karen Etheridge)

1.18 Sun, Oct 31, 2010

  [ENHANCEMENTS]

  * Type constraint objects now have an assert_coerce method which will either return a valid value or throw an error. (rjbs)

  * We now warn when an accessor for one attribute overwrites an accessor for another attribute. RT #57510. (Dave Rolsky)

  [BUG FIXES]

  *. Reported by Karen Etheridge. RT #62351. (Dave Rolsky)

  * An attribute using native delegations did not always properly coerce and type check a lazily set default value. (doy and Dave Rolsky)

  * Using a regexp to define delegations for a class which was not yet loaded did not actually work, but did not explicitly fail. However, it caused an error when the class was loaded later. Reported by Max Kanat-Alexander. RT #60596. (Dave Rolsky)

  * Attempting to delegate to a class or role which is not yet loaded will now throw an explicit error. (Dave Rolsky)

  * Attempting to set lazy_build in an inherited attribute was ignored. RT #62057. (perigrin)

  [OTHER]

  * The Moose test suite now uses Test::Fatal instead of Test::Exception. (rjbs)

1.17

  [ENHANCEMENTS]

  * Almost every native delegation method which changes the attribute value now has an explicitly documented return value. In general, this return value matches what Perl would return for the same operation. (Dave Rolsky)

  * Lots of work on native delegation documentation, including documenting what arguments each native delegation method allows or requires. (Dave Rolsky)

  * Passing an odd number of args to ->new() now gives a more useful warning than Perl's builtin warning. Suggested by Sir Robert Burbridge. (Dave Rolsky)

  * Allow disabling stack traces by setting an environment variable. See Moose::Error::Default for details. This feature is considered experimental, and may change in a future release.
    (Marcus Ramberg)

  * The deprecation warning for using alias and excludes without a leading dash now tells you the role being applied and what it was being applied to. (mst).

  [BUG FIXES]

  * A number of native trait methods which expected strings as arguments did not allow the empty string. This included Array->join, String->match, String->replace, and String->substr. Reported by Whitney Jackson. RT #61962. (Dave Rolsky)

  * 'no Moose' no longer inadvertently removes imports it didn't create itself. RT #60013. (Florian Ragwitz, doy)

  * Roles now support passing an array reference of method names to method modifier sugar functions. (doy)

  * Native traits no longer use optimized inlining routines if the instance requests it (in particular, if inline_get_slot_value doesn't return something that can be assigned to). This should fix issues with KiokuDB::Class. (doy)

  * We now ignore all Class::MOP and Moose classes when determining what package called a deprecated feature. This should make the deprecation warnings saner, and make it possible to turn them off more easily. (Dave Rolsky)

  * The deprecated "default is" warning no longer happens if the attribute has any accessor method defined (accessor, reader, writer). Also, this warning only happens when a method that was generated because of the "default is" gets called, rather than when the attribute is defined. (Dave Rolsky)

  * The "default default" code for some native delegations no longer issues a deprecation warning when the attribute is required or has a builder. (Dave Rolsky)

  * Setting a "default default" caused a fatal error if you used the builder or lazy_build options for the attribute. Reported by Kent Fredric. RT #59613. (Dave Rolsky)

1.15 Tue, Oct 5, 2010

  [API CHANGES]

  *)

  [ENHANCEMENTS]

  * Native Trait delegations are now all generated as inline code. This should be much faster than the previous method of delegation. In the best case, native trait methods will be very highly optimized.
* Reinitializing a metaclass no longer removes the existing method and attribute objects (it instead fixes them so they are correct for the reinitialized metaclass). This should make the order of loading many MooseX modules less of an issue. (doy)

* The Moose::Manual docs have been revised and updated. (Dave Rolsky)

[BUG FIXES]

* If an attribute was weak, setting it to a non-ref value after the object was constructed caused an error. Now we only call weaken when the new value is a reference.

* t/040_type_constraints/036_match_type_operator.t failed on 5.13.5+. Fixed based on a patch from Andreas Koenig.

1.12 Sat, Aug 28, 2010

[BUG FIXES]

* Fix the MANIFEST. Fixes RT #60831, reported by Alberto Simões.

1.11 Fri, Aug 27, 2010

[API CHANGES]

* An attribute in a subclass can now override the value of "is". (doy)

* The deprecation warnings for alias and excludes have been turned back off for this release, to give other module authors a chance to tweak their code. (Dave Rolsky)

[BUG FIXES]

* mro::get_linear_isa was being called as a function rather than a method, which caused problems with Perl 5.8.x. (t0m)

* Union types always created a type constraint, even if their constituent constraints did not have any coercions. This bogus coercion always returned undef, which meant that a union which included Undef as a member always coerced bad values to undef. Reported by Eric Brine. RT #58411. (Dave Rolsky)

* Union types with coercions would always fall back to coercing the value to undef (unintentionally). Now if all the coercions for a union type fail, the value returned by the coercion is the original value that we attempted to coerce. (Dave Rolsky)

1.10 Sun, Aug 22, 2010

[API CHANGES]

* The long-deprecated alias and excludes options for role applications now issue a deprecation warning. Use -alias and -excludes instead. (Dave Rolsky)

[BUG FIXES]

* Inlined code no longer stringifies numeric attribute defaults. (vg, doy)

* default => undef now works properly.
(doy)

* Enum type constraints now throw errors if their values are nonsensical. (Sartak)

[ENHANCEMENTS]

* Optimizations that should help speed up compilation time. (Dave Rolsky)

1.09 Tue, Jul 25, 2010

[API CHANGES]

* You can no longer pass "coerce => 1" for an attribute unless its type constraint has a coercion defined. Doing so will issue a deprecation warning. (Dave Rolsky)

* Previously, '+foo' only allowed a specific set of options to be overridden, which made it impossible to change attribute options related to extensions. Now we blacklist some options, and anything else is allowed. (doy, Tuomas Jormola)

* Most features which have been declared deprecated now issue a warning using Moose::Deprecated. Warnings are issued once per calling package, not repeatedly. See Moose::Deprecated for information on how you can shut these warnings up entirely. Note that deprecated features will eventually be removed, so shutting up the warnings may not be the best idea. (Dave Rolsky)

* Removed the long-deprecated Moose::Meta::Role->alias_method method. (Dave Rolsky)

[NEW FEATURES]

* We no longer unimport strict and warnings when Moose, Moose::Role, or Moose::Exporter are unimported. Doing this was broken if the user explicitly loaded strict and warnings themself, and the results could be generally surprising. We decided that it was best to err on the side of safety and leave these on. Reported by David Wheeler. RT #58310. (Dave Rolsky)

* New with_traits helper function in Moose::Util. (doy)

[BUG FIXES]

* Accessors will no longer be inlined if the instance metaclass isn't inlinable. (doy)

* Use Perl 5.10's new recursive regex features, if possible, for the type constraint parser. (doy, nothingmuch)

[ENHANCEMENTS]

* Attributes now warn if their accessors overwrite a locally defined function (not just method). (doy)

[OTHER]

* Bump our required perl version to 5.8.3, since earlier versions fail tests and aren't easily installable/testable.
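The with_traits helper added to Moose::Util in 1.09 can be sketched like this; the class and role names below are invented for the example, and this assumes Moose is installed:

```perl
package My::Class;
use Moose;
has name => ( is => 'ro', default => 'thing' );

package My::Role::Loud;
use Moose::Role;
sub shout { 'HELLO' }

package main;
use Moose::Util qw( with_traits );

# with_traits() returns the name of an anonymous subclass of
# My::Class that also does the listed roles:
my $class = with_traits( 'My::Class', 'My::Role::Loud' );
my $obj   = $class->new;
```

The returned name can be used anywhere a class name is expected, which makes it convenient for applying roles per-object without declaring a named subclass.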
1.08

* modules no longer have methods - this functionality was moved back up into Moose::Meta::Class and Moose::Meta::Role individually (through the Class::MOP::Mixin::HasMethods mixin). (doy)

* BUILDALL is now called by Moose::Meta::Class::new_object, rather than by Moose::Object::new. (doy)

[NEW FEATURES]

* strict and warnings are now unimported when Moose, Moose::Role, or Moose::Exporter are unimported. (doy, Adam Kennedy)

* Added a 'consumers' method to Moose::Meta::Role for finding all classes/roles which consume the given role. (doy)

[BUG FIXES]

* Fix has '+attr' in Roles to explode immediately, rather than when the role is applied to a class. (t0m)

* Fix type constraint validation messages to not include the string 'failed' twice in the same sentence. (Florian Ragwitz)

* New type constraints will default to being unequal, rather than equal. (rjbs)

* The tests no longer check for perl's behavior of clobbering $@, which has been fixed in perl-5.13.1. (Florian Ragwitz)

* Metaclass compatibility fixing has been completely rewritten, and should be much more robust. (doy)

1.04

[BUG FIXES]

* Stop the natatime method provided by the native Array trait from returning an exhausted iterator when being called with a callback. (Florian Ragwitz)

* Make Moose::Meta::TypeConstraint::Class correctly reject RegexpRefs. (Florian Ragwitz)

* Calling is_subtype_of on a Moose::Meta::TypeConstraint::Class with itself or the class the TC represents as an argument incorrectly returned true. This behavior is correct for is_type_of, not is_subtype_of. (Guillermo Roditi)

* Use File::Temp for temp files created during tests. Previously, files were written to the t/ dir, which could cause problems if the user running the tests did not have write access to that directory. (Chris Weyl, Ævar Arnfjörð Bjarmason)

* Pass role arguments along when applying roles to instances. (doy, lsm)

* Moose::Meta::Attribute::Native::Trait::Code no longer creates reader methods by default.
(Florian Ragwitz)

[DOCUMENTATION]

* Improve various parts of the documentation and fix many typos. (Dave Rolsky, Mateu Hunter, Graham Knop, Robin V, Jay Hannah, Jesse Luehrs)

[OTHER]

* Paid the $10 debt to doy from 0.80 Sat, Jun 6, 2009. (Sartak)

0.99 Mon, Mar 8, 2010

[NEW FEATURES]

* New method find_type_for in Moose::Meta::TypeConstraint::Union, for finding which member of the union a given value validates for. (Cory Watson)

[BUG FIXES]

* DEMOLISH methods in mutable subclasses of immutable classes are now called properly. (Chia-liang Kao, Jesse Luehrs)

[NEW DOCUMENTATION]

* Added Moose::Manual::Support that defines the support, compatibility, and release policies for Moose. (Chris Prather)

0.98

[BUG FIXES]

* Calling ->reinitialize on a cached anonymous class effectively uncached the metaclass object, causing the metaclass to go out of scope unexpectedly. This could easily happen at a distance by applying a metarole to an anonymous class. (Dave Rolsky)

0.96 Sat, Feb 6, 2010

[NEW FEATURES]

* ScalarRef is now a parameterized type. You can now specify a type constraint for whatever the reference points to. (Closes RT#50857) (Michael G. Schwern, Florian Ragwitz)

[BUG FIXES]

* ScalarRef now accepts references to other references. (Closes RT#50934) (Michael G. Schwern)

0.95 Thu, Feb 4, 2010

[NEW FEATURES]

* Moose::Meta::Attribute::Native::Trait::Code now provides execute_method as a delegation option. This allows the code reference to be called as a method on the object. (Florian Ragwitz)

[ENHANCEMENTS]

* Moose::Object::does no longer checks the entire inheritance tree, since Moose::Meta::Class::does_role already does this. (doy)

* Moose::Util::add_method_modifier (and subsequently the sugar functions Moose::before, Moose::after, and Moose::around) can now accept arrayrefs, with the same behavior as lists. Types other than arrayref and regexp result in an error.
(Dylan Hardison)

0.94 Mon, Jan 18, 2010

[API CHANGES]

* Please see the changes listed for 0.93_01 and Moose::Manual::Delta.

[ENHANCEMENTS]

* Improved support for anonymous roles by changing various APIs to take Moose::Meta::Role objects as well as role names. This included:
  - Moose::Meta::Class->does_role
  - Moose::Meta::Role->does_role
  - Moose::Util::does_role
  - Moose::Util::apply_all_roles
  - Moose::Util::ensure_all_roles
  - Moose::Util::search_class_by_role
  Requested by Shawn Moore. Addresses RT #51143 (and then some). (Dave Rolsky)

[BUG FIXES]

* Fix handling of non-alphanumeric attribute names like '@foo'. This should work as long as the accessor method names are explicitly set to valid Perl method names. Reported by Doug Treder. RT #53731. (Dave Rolsky)

0.93_03 Tue, Jan 5, 2010

[BUG FIXES]

* Portability fixes to our XS code so we compile with 5.8.8 and Visual C++. Fixes RT #53391. Reported by Taro Nishino. (rafl)

0.93_02 Tue, Jan 5, 2010

[BUG FIXES]

* Depend on Class::MOP 0.97_01 so we can get useful results from CPAN testers. (Dave Rolsky)

0.93_01 Mon, Jan 4, 2010

[API CHANGES]

See Moose::Manual::Delta for more details on backwards compatibility issues.

* Role attributes are now objects of the Moose::Meta::Role::Attribute class. (Dave Rolsky)

* There were major changes to how metaroles are applied. We now distinguish between metaroles for classes vs those for roles. See the Moose::Util::MetaRole docs for details. (Dave Rolsky)

* The old MetaRole API has been deprecated, but will continue to work. However, if you are applying an attribute metaclass role, this may break because of the fact that roles now have an attribute metaclass too. (Dave Rolsky)

* Moose::Util::MetaRole::apply_metaclass_roles is now called apply_metaroles. The old name is deprecated. (Dave Rolsky)

* The unimport subs created by Moose::Exporter now clean up re-exported functions like blessed and confess, unless the caller imported them from somewhere else too.
See Moose::Manual::Delta for backcompat details. (rafl)

[ENHANCEMENTS AND BUG FIXES]

* Changed the Str constraint to accept magic lvalue strings like one gets from substr et al, again. (sorear)

* Sped up the type constraint parsing regex. (Sam Vilain)

* The Moose::Cookbook::Extending::Recipe2 recipe was broken. Fix suggested by jrey.

* Added Moose::Util::TypeConstraints exports when using oose.pm to allow easier testing of TypeConstraints from the command line. (perigrin)

* Added a with_immutable test function to Test::Moose, to run a block of tests with and without certain classes being immutable. (doy)

* We now use Module::Install extensions explicitly to avoid confusing errors if they're not installed. We use Module::Install::AuthorRequires to stop test extraction and general failures if you don't have the author side dependencies installed.

* Fixed a grammar error in Moose::Cookbook::Basics::Recipe4. rt.cpan.org #51791. (Amir E. Aharoni)

0.93 Thu, Nov 19, 2009
    * Moose::Object
      - Calling $object->new() is no longer deprecated, and no longer warns. (doy)
    * Moose::Meta::Role
      - The get_attribute_map method is now deprecated. (Dave Rolsky)
    * Moose::Meta::Method::Delegation
      - Preserve variable aliasing in @_ for delegated methods, so that altering @_ affects the passed value. (doy)
    * Moose::Util::TypeConstraints
      - Allow array refs for non-anonymous form of enum and duck_type, not just anonymous. The non-arrayref forms may be removed in the future. (doy)
      - Changed Str constraint to not accept globs (*STDIN or *FOO). (chansen)
      - Properly document Int being a subtype of Str. (doy)
    * Moose::Exporter
      - Moose::Exporter using modules can now export their functions to the main package. This applied to Moose and Moose::Role, among others. (nothingmuch)
    * Moose::Meta::Attribute
      - Don't remove attribute accessors we never installed, during remove_accessors.
(doy)
    * Moose::Meta::Attribute::Native::Trait::Array
      - Don't bypass prototype checking when calling List::Util::first, to avoid a segfault when it is called with a non-code argument. (doy)
    * Moose::Meta::Attribute::Native::Trait::Code
      - Fix passing arguments to code execute helpers. (doy)

0.92 Tue, Sep 22, 2009
    * Moose::Util::TypeConstraints
      - added the match_on_type operator (Stevan)
      - added tests and docs for this (Stevan)
    * Moose::Meta::Class
      - Metaclass compat fixing should already happen recursively, there's no need to explicitly walk up the inheritance tree. (doy)
    * Moose::Meta::Attribute
      - Add tests for set_raw_value and get_raw_value. (nothingmuch)

0.91 Thu, Sep 17, 2009
    * Moose::Object
      - Don't import any functions, in order to avoid polluting our namespace with things that can look like methods (blessed, try, etc.). (nothingmuch)
    * Moose::Meta::Method::Constructor
      - The generated code needs to call Scalar::Util::blessed by its fully-qualified name, or else Perl can interpret the call to blessed as an indirect method call. This broke Search::GIN, which in turn broke KiokuDB. (nothingmuch)

0.90 Tue, Sep 15, 2009
    * Moose::Meta::Attribute::Native::Trait::Counter
    * Moose::Meta::Attribute::Native::Trait::String
      - For these two traits, an attribute which did not explicitly provide methods to handles magically ended up delegating *all* the helper methods. This has been removed. You must be explicit in your handles declaration for all Native Traits. (Dave Rolsky)
    * Moose::Object
      - DEMOLISHALL behavior has changed. If any DEMOLISH method dies, we make sure to rethrow its error message. However, we also localize $@ before this so that if all the DEMOLISH methods succeed, the value of $@ will be preserved. (nothingmuch and Dave Rolsky)
      - We now also localize $? during object destruction.
(nothingmuch and Dave Rolsky)
      - The handling of DEMOLISH methods was broken for immutabilized classes, which were not receiving the value of Devel::GlobalDestruction::in_global_destruction.
      - These two fixes address some of RT #48271, reported by Zefram.
      - This is all now documented in Moose::Manual::Construction.
      - Calling $object->new() is now deprecated. A warning will be issued. (perigrin)
    * Moose::Meta::Role
      - Added more hooks to customize how roles are applied. The role summation class, used to create composite roles, can now be changed and/or have meta-roles applied to it. (rafl)
      - The get_method_list method no longer explicitly excludes the "meta" method. This was a hack that has been replaced by better hacks. (Dave Rolsky)
    * Moose::Meta::Method::Delegation
      - fixed delegated methods to make sure that any modifiers attached to the accessor being delegated on will be called (Stevan)
      - added tests for this (Stevan)
    * Moose::Meta::Class
      - Moose no longer warns when a class that is being made immutable has mutable ancestors. While in theory this is a good thing to warn about, we found so many exceptions to this that doing this properly became quite problematic.

0.89_02 Thu, Sep 10, 2009
    * Moose::Meta::Attribute::Native
      - Fix Hash, which still had 'empty' instead of 'is_empty'. (hdp)
    * Moose::Meta::Attribute::Native::Trait::Array
      - Added a number of functions from List::Util and List::MoreUtils, including reduce, shuffle, uniq, and natatime. (doy)
    * Moose::Exporter
      - This module will now generate an init_meta method for your exporting class if you pass it options for Moose::Util::MetaRole::apply_metaclass_roles or apply_base_class_roles. This eliminates a lot of repetitive boilerplate for typical MooseX modules. (doy)
      - Documented the with_meta feature, which is a replacement for with_caller. This feature was added by josh a while ago.
      - The with_caller feature is now deprecated, but will not issue a warning yet.
(Dave Rolsky)
      - If you try to wrap/export a subroutine which doesn't actually exist, Moose::Exporter will warn you about this. (doy)
    * Moose::Meta::Role::Application::ToRole
      - When a role aliased a method from another role, it was only getting the new (aliased) name, not the original name. This differed from what happens when a class aliases a role's methods. If you _only_ want the aliased name, make sure to also exclude the original name. (Dave Rolsky)

0.89_01 Wed Sep 2, 2009
    * Moose::Meta::Attribute
      - Added the currying syntax for delegation from AttributeHelpers to the existing delegation API. (hdp)
    * Moose::Meta::Attribute::Native
      - We have merged the functionality of MooseX::AttributeHelpers into the Moose core with some API tweaks. You can continue to use MooseX::AttributeHelpers, but it will not be maintained except (perhaps) for critical bug fixes in the future. See Moose::Manual::Delta for details. (hdp, jhannah, rbuels, Sartak, perigrin, doy)
    * Moose::Error::Croak
    * Moose::Error::Confess
      - Clarify documentation on how to use different error-throwing modules. (Curtis Jewell)
    * Moose
      - Correct POD for builder to point to Recipe8, not 9. (gphat)
    * Moose::Exporter
      - When a nonexistent sub name is passed to as_is, with_caller, or with_meta, throw a warning and skip the exporting, rather than installing a broken sub. (doy)
    * Moose::Meta::Class
      - Moose now warns if you call C<make_immutable> for a class with mutable ancestors. (doy)

0.89 Thu Aug 13, 2009
    * Moose::Manual::Attributes
      - Clarify "is", include discussion of "bare". (Sartak)
    * Moose::Meta::Role::Method::Conflicting
    * Moose::Meta::Role::Application::ToClass
      - For the first set of roles involved in a conflict, report all unresolved method conflicts, not just the first method. Fixes #47210 reported by Ovid.
(Sartak)
    * Moose::Meta::TypeConstraint
      - Add assert_valid method to use a TypeConstraint for assertion (rjbs)
    * Moose::Exporter
      - Make "use Moose -metaclass => 'Foo'" do alias resolution, like -traits does. (doy)
      - Allow specifying role options (alias, excludes, MXRP stuff) in the arrayref passed to "use Moose -traits". (doy)
    * Moose::Util
      - Add functions meta_class_alias and meta_attribute_alias for creating aliases for class and attribute metaclasses and metatraits. (doy)
    * Moose::Meta::Attribute
    * Moose::Meta::Method::Accessor
      - A trigger now receives the old value as a second argument, if the attribute had one. (Dave Rolsky)
    * Moose::Meta::Method::Constructor
      - Fix a bug with $obj->new when $obj has stringify overloading. Reported by Andrew Suffield. [rt.cpan.org #47882] (Sartak)
      - However, we will probably deprecate $obj->new, so please don't start using it for new code!
    * Moose::Meta::Role::Application
    * Moose::Meta::Role::Application::RoleSummation
      - Rename alias and excludes to -alias and -excludes (but keep the old names for now, for backcompat). (doy)

0.88 Fri Jul 24, 2009
    * Moose::Manual::Contributing
      - Re-write the Moose::Manual::Contributing document to reflect the new layout and methods of work for the Git repository. All work now should be done in topic branches and reviewed by a core committer before being applied to master. All releases are done by a cabal member and merged from master to stable. This plan was devised by Yuval, blame him. (perigrin)
    * Moose::Meta::Role
      - Create metaclass attributes for the different role application classes. (rafl)
    * Moose::Util::MetaRole
      - Allow applying roles to a meta role's role application classes. (rafl)
    * Moose::Meta::Attribute
      - Add weak_ref to allowed options for "has '+foo'". (mst)
    * Moose::Meta::Method::Accessor
      - No longer uses inline_slot_access in accessors, to support non-lvalue-based meta instances.
(sorear)

0.81 Sun, Jun 7, 2009
    * Bumped our Class::MOP prereq to the latest version (0.85), since that's what we need.

0.80 Sat, Jun 6, 2009
    * Moose::Manual::FAQ
      - Add FAQ about the coercion change from 0.76 because it came up three times today (perigrin)
      - Win doy $10 dollars because Sartak didn't think anybody would document this fast enough (perigrin)
    * Moose::Meta::Method::Destructor
      - Inline a DESTROY method even if there are no DEMOLISH methods to prevent unnecessary introspection in Moose::Object::DEMOLISHALL
    * Moose::*
      - A role's required methods are now represented by Moose::Meta::Role::Method::Required objects. Conflicts are now represented by Moose::Meta::Role::Method::Conflicting objects. The benefit for end-users is that unresolved conflicts generate different, more instructive, errors, resolving Ovid's #44895. (Sartak)
    * Moose::Role
      - Improve the error message of "extends" as suggested by Adam Kennedy and confound (Sartak)
      - Link to Moose::Manual::Roles from Moose::Role as we now have excellent documentation (Adam Kennedy)
    * Tests
      - Update test suite for subname change in Class::MOP (nothingmuch)
      - Add TODO test for infinite recursion in Moose::Meta::Class (groditi)

0.79 Wed, May 13, 2009
    * Tests
      - More fixes for Win32 problems. Reported by Robert Krimen.
    * Moose::Object
      - The DEMOLISHALL method could still blow up in some cases during global destruction. This method has been made more resilient in the face of global destruction's random garbage collection order.
    * Moose::Exporter
      - If you "also" a module that isn't loaded, the error message now acknowledges that (Sartak)
    * Moose
      - When your ->meta method does not return a Moose::Meta::Class, the error message gave the wrong output (Sartak)
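The required-method objects introduced in 0.80 above can be seen through ordinary role introspection; this is a minimal sketch assuming Moose is installed, with an invented role name:

```perl
package Walks;
use Moose::Role;
requires 'walk';

package main;
# get_required_method_list returns Moose::Meta::Role::Method::Required
# objects rather than plain strings; they stringify to the method name,
# so existing code that treats them as names keeps working:
my @required = Walks->meta->get_required_method_list;
print "$_\n" for @required;
```

Because the objects stringify to the method name, the change is mostly invisible unless you inspect them with ref() or call methods on them.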
0.77 Sat, May 2, 2009
    * Moose::Meta::Role
      - Add explicit use of Devel::GlobalDestruction and Sub::Name (perigrin)
    * Moose::Object
      - Pass a boolean to DEMOLISHALL and DEMOLISH indicating whether or not we are currently in global destruction (doy)
      - Add explicit use of Devel::GlobalDestruction and Sub::Name (perigrin)
    * Moose::Cookbook::FAQ
      - Reworked much of the existing content to be more useful to modern Moose hackers (Sartak)
    * Makefile.PL
      - Depend on Class::MOP 0.83 instead of 0.82_01.

0.76 Mon, April 27, 2009
    * Moose::Meta::TypeConstraint
      - Do not run coercions in coerce() if the value already passes the type constraint (hdp)
    * Moose::Meta::TypeConstraint::Class
      - In validation error messages, specifically say that the value is not an instance of the class. This should alleviate some frustrating forgot-to-load-my-type bugs. rt.cpan.org #44639 (Sartak)
    * Moose::Meta::Role::Application::ToClass
      - Revert the class-overrides-role warning in favor of a solution outside of the Moose core (Sartak)
    * Tests
      - Make Test::Output optional again, since it's only used in a few files (Sartak)

0.75_01 Thu, April 23, 2009
    * Moose::Meta::Role::Application::ToClass
      - Moose now warns about each class overriding methods from roles it consumes (Sartak)
    * Tests
      - Warnings tests have standardized on Test::Output, which is now an unconditional dependency (Sartak)
    * Moose::Meta::Class
      - Changes to immutabilization to work with Class::MOP 0.82_01+.

0.75 Mon, April 20, 2009
    * Moose
    * Moose::Meta::Class
      - Move validation of not inheriting from roles from Moose::extends to Moose::Meta::Class::superclasses (doy)
    * Moose::Util
      - add ensure_all_roles() function to encapsulate the common "apply this role unless the object already does it" pattern (hdp)
    * Moose::Exporter
      - Users can now select a different metaclass with the "-metaclass" option to import, for classes and roles (Sartak)
    * Moose::Meta::Role
      - Make method_metaclass an attr so that it can accept a metarole application.
(jdv)

0.74 Tue, April 7, 2009
    * Moose::Meta::Role
    * Moose::Meta::Method::Destructor
      - Include stack traces in the deprecation warnings. (Florian Ragwitz)
    * Moose::Meta::Class
      - Removed the long-deprecated _apply_all_roles method.
    * Moose::Meta::TypeConstraint
      - Removed the long-deprecated union method.

0.73_02 Mon, April 6, 2009
    * More deprecations and renamings
      - Moose::Meta::Method::Constructor
        - initialize_body => _initialize_body (this is always called when an object is constructed)
    * Moose::Object
      - The DEMOLISHALL method could throw an exception during global destruction, meaning that your class's DEMOLISH methods would not be properly called. Reported by t0m.
    * Moose::Meta::Method::Destructor
      - Destructor inlining was totally broken by the change to the is_needed method in 0.72_01. Now there is a test for this feature, and it works again.
    * Moose::Util
      - Bold the word 'not' in the POD for find_meta (t0m)

0.73_01 Sun, April 5, 2009
    * Moose::*
      - Call user_class->meta in fewer places, with the eventual goal of allowing the user to rename or exclude ->meta altogether. Instead uses Class::MOP::class_of. (Sartak)
    * Moose::Meta::Method::Accessor
      - If an attribute had a lazy default, and that value did not pass the attribute's type constraint, it did not get the message from the type constraint, instead using a generic message. Test provided by perigrin.
    * Moose::Util::TypeConstraints
      - Add duck_type keyword. It's sugar over making sure an object can() a list of methods. This is easier than jrockway's suggestion to fork all of CPAN. (perigrin)
      - add tests and documentation (perigrin)
    * Moose
      - Document the fact that init_meta() returns the target class's metaclass object. (hdp)
    * Moose::Cookbook::Extending::Recipe1
    * Moose::Cookbook::Extending::Recipe2
    * Moose::Cookbook::Extending::Recipe3
    * Moose::Cookbook::Extending::Recipe4
      - Make init_meta() examples explicitly return the metaclass and point out this fact.
(hdp)
    * Moose::Cookbook::Basics::Recipe12
      - A new recipe, creating a custom meta-method class.
    * Moose::Cookbook::Meta::Recipe6
      - A new recipe, creating a custom meta-method class.
    * Moose::Meta::Class
    * Moose::Meta::Method::Constructor
      - Attribute triggers no longer receive the meta-attribute object as an argument in any circumstance. Previously, triggers called during instance construction were passed the meta-attribute, but triggers called by normal accessors were not. Fixes RT#44429, reported by Mark Swayne. (hdp)
    * Moose::Manual::Attributes
      - Remove references to triggers receiving the meta-attribute object as an argument. (hdp)
    * Moose::Cookbook::FAQ
      - Remove recommendation for deprecated Moose::Policy and Moose::Policy::FollowPBP; recommend MooseX::FollowPBP instead. (hdp)
    * Many methods have been renamed with a leading underscore, and a few have been deprecated entirely. The methods with a leading underscore are considered "internals only". People writing subclasses or extensions to Moose should feel free to override them, but they are not for "public" use.
      - Moose::Meta::Class
        - check_metaclass_compatibility => _check_metaclass_compatibility
      - Moose::Meta::Method::Accessor
        - initialize_body => _initialize_body (this is always called when an object is constructed)
        - /(generate_.*_method(?:_inline)?)/ => '_' . $1
      - Moose::Meta::Method::Constructor
        - initialize_body => _initialize_body (this is always called when an object is constructed)
        - /(generate_constructor_method(?:_inline)?)/ => '_' . $1
        - attributes => _attributes (now inherited from parent)
        - meta_instance => _meta_instance (now inherited from parent)
      - Moose::Meta::Role
        - alias_method is deprecated. Use add_method.

0.73 Fri, March 29, 2009
    * No changes from 0.72_01.

0.72_01 Thu, March 26, 2009
    * Everything
      - Almost every module has complete API documentation. A few methods (and even whole classes) have been intentionally excluded pending some rethinking of their APIs.
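The duck_type keyword added in 0.73_01 above can be sketched as follows; this assumes a reasonably recent Moose (the arrayref form for a named duck_type arrived later, in 0.93), and the type, class, and method names are invented for the example:

```perl
use Moose::Util::TypeConstraints;

# duck_type builds a constraint satisfied by any object that can()
# all of the listed methods, regardless of its class:
my $duck = duck_type 'Duck' => [qw( walk quack )];

package Mallard;
sub new   { bless {}, shift }
sub walk  { }
sub quack { }

package main;
my $ok     = $duck->check( Mallard->new );
my $not_ok = $duck->check( bless {}, 'Brick' );
```

Here Mallard passes because it provides both methods, while the Brick object fails the check.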
* Moose::Util::TypeConstraints
      - Calling subtype with a name as the only argument is now an exception. If you want an anonymous subtype do: my $subtype = subtype as 'Foo';
    * Moose::Cookbook::Meta::Recipe7
      - A new recipe, creating a custom meta-instance class.
    * Moose::Cookbook::Basics::Recipe5
      - Fix various typos and mistakes. Includes a patch from Radu Greab.
    * Moose::Cookbook::Basics::Recipe9
      - Link to this recipe from Moose.pm's builder blurb
    * Moose::Exporter
      - When wrapping a function with a prototype, Moose::Exporter now makes sure the wrapped function still has the same prototype. (Daisuke Maki)
    * Moose::Meta::Attribute
      - Allow a subclass to set lazy_build for an inherited attribute. (hdp)
    * Makefile.PL
      - Explicitly depend on Data::OptList. We already had this dependency via Sub::Exporter, but since we're using it directly we're better off with it listed. (Sartak)
    * Moose::Meta::Method::Constructor
      - Make it easier to subclass the inlining behaviour. (Ash Berlin)
    * Moose::Manual::Delta
      - Details significant changes in the history of Moose, along with recommended workarounds.
    * Moose::Manual::Contributing
      - Contributor's guide to Moose.
    * Moose::Meta::Method::Constructor
      - The long-deprecated intialize_body method has been removed (yes, spelled like that).
    * Moose::Meta::Method::Destructor
      - The is_needed method is now always a class method.
    * Moose::Meta::Class
      - Changes to the internals of how make_immutable works to match changes in latest Class::MOP.

0.72
    * Moose::Cookbook
      - Hopefully fixed some POD errors in a few recipes that caused them to display weird on search.cpan.org.
    * Moose::Util::TypeConstraints
      - Calling type or subtype without the sugar helpers (as, where, message) is now deprecated.
      - The subtype function tried hard to guess what you meant, but often got it wrong. For example: my $subtype = subtype as 'ArrayRef[Object]'; This caused an error in the past, but now works as you'd expect.
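The anonymous-subtype change described above can be sketched like this, assuming Moose is installed; the constraint itself is invented for the example:

```perl
use Moose::Util::TypeConstraints;

# An anonymous subtype must be built with the sugar helpers
# (as, where, ...) rather than a lone name:
my $positive = subtype as 'Int', where { $_ > 0 };

my $ok  = $positive->check(5);
my $bad = $positive->check(-3);
```

The resulting type constraint object behaves like any named type: it can be checked directly, or used as the isa of an attribute.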
* Everywhere
      - Make sure Moose.pm is loaded before calling Moose->throw_error. This wasn't normally an issue, but could bite you in weird cases.

0.71 Thu, February 19, 2009
    * Moose::Cookbook::Basics::Recipe11
      - A new recipe which demonstrates the use of BUILDARGS and BUILD. (Dave Rolsky)
    * Moose::Cookbook::Roles::Recipe3
      - A new recipe, applying a role to an object instance. (Dave Rolsky)
    * Moose::Exporter
      - Allow overriding specific keywords from "also" packages. (doy)
    * Tests
      - Replace hardcoded cookbook tests with Test::Inline to ensure the tests match the actual code in the recipes. (Dave Rolsky)
    * Moose::Cookbook
      - Working on the above turned up a number of little bugs in the recipe code. (Dave Rolsky)
    * Moose::Util::TypeConstraints::Optimized
      - Just use Class::MOP for the optimized ClassName check. (Dave Rolsky)

0.70 Sat, February 14, 2009
    * Moose::Util::TypeConstraints
      - Added the RoleName type (stevan)
      - added tests for this (stevan)
    * Moose::Cookbook::Basics::Recipe3
      - Updated the before qw[left right] sub to be a little more defensive about what it accepts (stevan)
      - added more tests to t/000_recipies/basics/003_binary_tree.t (stevan)
    * Moose::Object
      - We now always call DEMOLISHALL, even if a class does not define DEMOLISH. This makes sure that method modifiers on DEMOLISHALL work as expected. (doy)
      - added tests for this (EvanCarroll)
    * Moose::Util::MetaRole
      - Accept roles for the wrapped_method_metaclass (rafl)
      - added tests for this (rafl)
    * Moose::Meta::Attribute
      - We no longer pass the meta-attribute object as a final argument to triggers. This actually changed for inlined code a while back, but the non-inlined version and the docs were still out of date.
    * Tests
      - Some tests tried to use Test::Warn 0.10, which had bugs. Now they require 0.11. (Dave Rolsky)
    * Documentation
      - Lots of small changes to the manual, cookbook, and elsewhere. These were based on feedback from various users, too many to list here.
(Dave Rolsky)

0.68 Wed, February 4, 2009
    * POD
      - Many spelling, typo, and formatting fixes by daxim.
    * Moose::Manual::Attributes
      - The NAME section in the POD used "Attribute" so search.cpan didn't resolve links from other documents properly.
    * Moose::Meta::Method::Overriden
      - Now properly spelled as Overridden. Thanks to daxim for noticing this.

0.67
    * Moose::Manual
      - This is a brand new, extensive manual for Moose. This aims to provide a complete introduction to all of Moose's features. This work was funded as part of the Moose docs grant from TPF. (Dave Rolsky)
    * Moose::Meta::Attribute
      - Added a delegation_metaclass method to replace a hard-coded use of Moose::Meta::Method::Delegation. (Dave Rolsky)
    * Moose::Util::TypeConstraints
      - If you created a subtype and passed a parent that Moose didn't know about, it simply ignored the parent. Now it automatically creates the parent as a class type. This may not be what you want, but is less broken than before. (Dave Rolsky)
    * Moose::Util::TypeConstraints
      - This module tried to throw errors by calling Moose->throw_error, but it did not ensure that Moose was loaded first. This could cause very unhelpful errors when it tried to throw an error before Moose was loaded. (Dave Rolsky)
    * Moose::Util::TypeConstraints
      - You could declare a name with subtype such as "Foo!Bar" that would be allowed, but if you used it in a parameterized type such as "ArrayRef[Foo!Bar]" it wouldn't work. We now do some vetting on names created via the sugar functions, so that they can only contain alphanumerics, ":", and ".". (Dave Rolsky)

0.65 Thu, January 22, 2009
    * Moose and Moose::Meta::Method::Overridden
      - If an overridden method called super(), and then the superclass's method (not overridden) _also_ called super(), Moose went into an endless recursion loop. Test provided by Chris Prather. (Dave Rolsky)
    * Moose::Meta::TypeConstraint
      - All methods are now documented.
(gphat) * t/100_bugs/011_DEMOLISH_eats_exceptions.t - Fixed some bogus failures that occurred because we tried to validate filesystem paths in a very ad-hoc and not-quite-correct way. (Dave Rolsky) * Moose::Util::TypeConstraints - Added maybe_type to exports. See docs for details. (rjbs) * Moose - Added Moose::Util::TypeConstraints to the SEE ALSO section. (pjf) * Moose::Role - Methods created via an attribute can now fulfill a "requires" declaration for a role. (nothingmuch) * Moose::Meta::Method::* - Stack traces from inlined code will now report its line and file as being in your class, as opposed to in Moose guts. (nothingmuch). 0.64 Wed, December 31, 2008 * Moose::Meta::Method::Accessor - Always inline predicate and clearer methods (Sartak) * Moose::Meta::Attribute - Support for parameterized traits (Sartak) - verify_against_type_constraint method to avoid duplication and enhance extensibility (Sartak) * Moose::Meta::Class - Tests (but no support yet) for parameterized traits (Sartak) * Moose - Require Class::MOP 0.75+, which has the side effect of making sure we work on Win32. (Dave Rolsky) 0.63 Mon, December 8, 2008 * Moose::Unsweetened - Some small grammar tweaks and bug fixes in non-Moose example code. (Dave Rolsky) 0.62_02 Fri, December 5, 2008 * Moose::Meta::Role::Application::ToClass - When a class does not provide all of a role's required methods, the error thrown now mentions all of the missing methods, as opposed to just the first one found. Requested by Curtis Poe (RT #41119). (Dave Rolsky) * Moose::Meta::Method::Constructor - Moose will no longer inline a constructor for your class unless it inherits its constructor from Moose::Object, and will warn when it doesn't inline. If you want to force inlining anyway, pass "replace_constructor => 1" to make_immutable. Addresses RT #40968, reported by Jon Swartz. (Dave Rolsky) - The quoting of default values could be broken if the default contained a single quote ('). 
Now we use quotemeta to escape anything potentially dangerous in the defaults. (Dave Rolsky) 0.62_01 Wed, December 3, 2008 * Moose::Object - use the method->execute API for BUILDALL and DEMOLISHALL (Sartak) * Moose::Util::TypeConstraints - We now make all the type constraint meta classes immutable before creating the default types provided by Moose. This should make loading Moose a little faster. (Dave Rolsky) 0.62 Wed November 26, 2008 * Moose::Meta::Role::Application::ToClass Moose::Meta::Role::Application::ToRole - fixed issues where excluding and aliasing the same methods for a single role did not work right (worked just fine with multiple roles) (stevan) - added test for this (stevan) * Moose::Meta::Role::Application::RoleSummation - fixed the error message when trying to compose a role with a role it excludes (Sartak) * Moose::Exporter - Catch another case where recursion caused the value of $CALLER to be stamped on (t0m) - added test for this (t0m) * Moose - Remove the make_immutable keyword, which has been deprecated since April. It breaks metaclasses that use Moose without no Moose (Sartak) * Moose::Meta::Attribute - Removing an attribute from a class now also removes delegation (handles) methods installed for that attribute (t0m) - added test for this (t0m) * Moose::Meta::Method::Constructor - An attribute with a default that looked like a number (but was really a string) would accidentally be treated as a number when the constructor was made immutable (perigrin) - added test for this (perigrin) * Moose::Meta::Role - create method for constructing a role dynamically (Sartak) - added test for this (Sartak) - anonymous roles! 
(Sartak) - added test for this (Sartak) * Moose::Role - more consistent error messages (Sartak) * Moose::Cookbook::Roles::Recipe1 - attempt to explain why a role that just requires methods is useful (Sartak) 0.61 Fri November 7, 2008 * Moose::Meta::Attribute - When passing a role to handles, it will be loaded if necessary (perigrin) * Moose::Meta::Class - Method objects returned by get_method (and other methods) Could end up being returned without an associated_metaclass attribute. Removing get_method_map, which is provided by Class::MOP::Class, fixed this. The Moose version did nothing different from its parent except introduce a bug. (Dave Rolsky) - added tests for this (jdv79) * Various - Added a $VERSION to all .pm files which didn't have one. Fixes RT #40049, reported by Adam Kennedy. (Dave Rolsky) * Moose::Cookbook::Basics::Recipe4 * Moose::Cookbook::Basics::Recipe6 - These files had spaces on the first line of the SYNOPSIS, as opposed to a totally empty line. According to RT #40432, this confuses POD parsers. (Dave Rolsky) 0.60 Fri October 24, 2008 * Moose::Exporter - Passing "-traits" when loading Moose caused the Moose.pm exports to be broken. Reported by t0m. (Dave Rolsky) - Tests for this bug. (t0m) * Moose::Util - Change resolve_metaclass alias to use the new load_first_existing_class function. This makes it a lot simpler, and also around 5 times faster. (t0m) - Add caching to resolve_metaclass_alias, which gives an order of magnitude speedup to things which repeatedly call the Moose::Meta::Attribute->does method, notably MooseX::Storage (t0m) * Moose::Util::TypeConstraint - Put back the changes for parameterized constraints that shouldn't have been removed in 0.59. We still cannot parse them, but MooseX modules can create them in some other way. See the 0.58 changes for more details. (jnapiorkowski) - Changed the way subtypes are created so that the job is delegated to a type constraint parent. 
This clears up some hardcoded checking and should allow correct subtypes of Moose::Meta::Type::Constraint. Don't rely on this new API too much (create_child_type) because it may go away in the future. (jnapiorkowski) * Moose::Meta::TypeConstraint::Union - Type constraint names are sorted as strings, not numbers. (jnapiorkowski) * Moose::Meta::TypeConstraint::Parameterizable - New parameterize method. This can be used as a factory method to make a new type constraint with a given parameterized type. (jnapiorkowski) - added tests (jnapiorkowski) 0.59 Tue October 14, 2008 * Moose - Add abridged documentation for builder/default/initializer/ predicate, and link to more details sections in Class::MOP::Attribute. (t0m) * Moose::Util::TypeConstraints - removed prototypes from all but the &-based stuff (mst) * Moose::Util::TypeConstraints - Creating a anonymous subtype with both a constraint and a message failed with a very unhelpful error, but should just work. Reported by t0m. (Dave Rolsky) * Tests - Some tests that used Test::Warn if it was available failed with older versions of Test::Warn. Reported by Fayland. (Dave Rolsky) - Test firing behavior of triggers in relation to builder/default/ lazy_build. (t0m) - Test behavior of equals/is_a_type_of/is_a_subtype_of for all kinds of supported type. (t0m) * Moose::Meta::Class - In create(), do not pass "roles" option to the superclass - added related test that creates an anon metaclass with a required attribute * Moose::Meta::TypeConstraint::Class * Moose::Meta::TypeConstraint::Role - Unify behavior of equals/is_a_type_of/is_a_subtype_of with other types (as per change in 0.55_02). (t0m) * Moose::Meta::TypeConstraint::Registry - Fix warning when dealing with unknown type names (t0m) * Moose::Util::TypeConstraints - Reverted changes from 0.58 related to handle parameterized types. This caused random failures on BSD and Win32 systems, apparently related to the regex engine. 
This means that Moose can no longer parse structured type constraints like ArrayRef[Int,Int] or HashRef[name=>Str]. This will be supported in a slightly different way via MooseX::Types some time in the future. (Dave Rolsky) 0.58 Sat September 20, 2008 !! This release has an incompatible change regarding !! !! how roles add methods to a class !! * Roles and role application !. (Dave Rolsky) * Makefile.PL - From this release on, we'll try to maintain a list of conflicting modules, and warn you if you have one installed. For example, this release conflicts with ... - MooseX::Singleton <= 0.11 - MooseX::Params::Validate <= 0.05 - Fey::ORM <= 0.10 In general, we try to not break backwards compatibility for most Moose users, but MooseX modules and other code which extends Moose's metaclasses is often affected by very small changes in the Moose internals. * Moose::Meta::Method::Delegation * Moose::Meta::Attribute - Delegation methods now have their own method class. (Dave Rolsky) * Moose::Meta::TypeConstraint::Parameterizable - Added a new method 'parameterize' which is basically a factory for the containing constraint. This makes it easier to create new types of parameterized constraints. (jnapiorkowski) * Moose::Meta::TypeConstraint::Union - Changed the way Union types canonicalize their names to follow the normalized TC naming rules, which means we strip all whitespace. (jnapiorkowski) * Moose::Util::TypeConstraints - Parameter and Union args are now sorted, this makes Int|Str the same constraint as Str|Int. (jnapiorkowski) - Changes to the way Union types are parsed to more correctly stringify their names. (jnapiorkowski) - When creating a parameterized type, we now use the new parameterize method. (jnapiorkowski) - Incoming type constraint strings are now normalized to remove all whitespace differences. (jnapiorkowski) - Changed the way we parse type constraint strings so that we now match TC[Int,Int,...] and TC[name=>Str] as parameterized type constraints. 
This lays the foundation for more flexible type constraint implementations. * Tests and docs for all the above. (jnapiorkowski) * Moose::Exporter * Moose -. (Dave Rolsky) - added tests for this (rafl) * Moose::Meta::Class - Changes to how we fix metaclass compatibility that are much too complicated to go into. The summary is that Moose is much less likely to complain about metaclass incompatibility now. In particular, if two metaclasses differ because Moose::Util::MetaRole was used on the two corresponding classes, then the difference in roles is reconciled for the subclass's metaclass. (Dave Rolsky) - Squashed an warning in _process_attribute (thepler) * Moose::Meta::Role - throw exceptions (sooner) for invalid attribute names (thepler) - added tests for this (thepler) * Moose::Util::MetaRole - If you explicitly set a constructor or destructor class for a metaclass object, and then applied roles to the metaclass, that explicitly set class would be lost and replaced with the default. * Moose::Meta::Class * Moose::Meta::Attribute * Moose::Meta::Method * Moose * Moose::Object * Moose::Error::Default * Moose::Error::Croak * Moose::Error::Confess - All instances of confess() changed to use overridable C<throw_error> method. This method ultimately calls a class constructor, and you can change the class being called. In addition, errors now pass more information than just a string. The default C<error_class> behaves like C<Carp::confess>, so the behavior is not visibly different for end users. 0.57 Wed September 3, 2008 * Moose::Intro - A new bit of doc intended to introduce folks familiar with "standard" Perl 5 OO to Moose concepts. (Dave Rolsky) * Moose::Unsweetened - Shows examples of two classes, each done first with and then without Moose. This makes a nice parallel to Moose::Intro. 
(Dave Rolsky) * Moose::Util::TypeConstraints - Fixed a bug in find_or_parse_type_constraint so that it accepts a Moose::Meta::TypeConstraint object as the parent type, not just a name (jnapiorkowski) - added tests (jnapiorkowski) * Moose::Exporter - If Sub::Name was not present, unimporting failed to actually remove some sugar subs, causing test failures (Dave Rolsky) 0.56 Mon September 1, 2008 For those not following the series of dev releases, there are several major changes in this release of Moose. ! Moose::init_meta should now be called as a method. See the docs for details. - Major performance improvements by nothingmuch. - New modules for extension writers, Moose::Exporter and Moose::Util::MetaRole by Dave Rolsky. - Lots of doc improvements and additions, especially in the cookbook sections. - Various bug fixes. * Removed all references to the experimental-but-no-longer-needed Moose::Meta::Role::Application::ToMetaclassInstance. * Require Class::MOP 0.65. 0.55_04 Sat August 30, 2008 * Moose::Util::MetaRole * Moose::Cookbook::Extending::Recipe2 - This simplifies the application of roles to any meta class, as well as the base object class. Reimplemented metaclass traits using this module. (Dave Rolsky) * Moose::Cookbook::Extending::Recipe1 - This a new recipe, an overview of various ways to write Moose extensions (Dave Rolsky) * Moose::Cookbook::Extending::Recipe3 * Moose::Cookbook::Extending::Recipe4 - These used to be Extending::Recipe1 and Extending::Recipe2, respectively. 0.55_03 Fri August 29, 2008 * No changes from 0.55_02 except increasing the Class::MOP dependency to 0.64_07. 0.55_02 Fri August 29, 2008 * Makefile.PL and Moose.pm - explicitly require Perl 5.8.0+ (Dave Rolsky) * Moose::Util::TypeConstraints - Fix warnings from find_type_constraint if the type is not found (t0m). 
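The 0.56 notes above highlight Moose::Exporter as a new helper for writing "Moose-alike" modules. A minimal sketch of that pattern follows; it assumes Moose is installed from CPAN, and the package name MooseX::Mini is a hypothetical example, not a real distribution:

```perl
package MooseX::Mini;

use Moose ();
use Moose::Exporter;

# setup_import_methods builds import/unimport subs for this package.
# The "also" option re-exports all of Moose's sugar (has, extends, ...)
# alongside anything this module adds.
Moose::Exporter->setup_import_methods(
    also => 'Moose',
);

1;
```

A consuming class can then write `use MooseX::Mini;` and get the usual Moose sugar, which is exactly the use case the 0.55_01 entry below describes for MooseX authors.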
    * Moose::Meta::TypeConstraint
      - Predicate methods (equals/is_a_type_of/is_subtype_of) now return false if the type you specify cannot be found in the type registry, rather than throwing an unhelpful and coincidental exception. (t0m)
      - added docs & test for this (t0m)
    * Moose::Meta::TypeConstraint::Registry
      - add_type_constraint now throws an exception if a parameter is not supplied (t0m)
      - added docs & test for this (t0m)
    * Moose::Cookbook::FAQ
      - Added a FAQ entry on the difference between "role" and "trait" (t0m)
    * Moose::Meta::Role
      - Fixed a bug that caused role composition to not see a required method when that method was provided by another role being composed at the same time. (Dave Rolsky)
      - test and bug finding (tokuhirom)

0.55_01 Wed August 20, 2008

    !! Calling Moose::init_meta as a function is now           !!
    !! deprecated. Please see the Moose.pm docs for details.   !!

    * Moose::Meta::Method::Constructor
      - Fix inlined constructor so that values produced by default or builder methods are coerced as required. (t0m)
      - added test for this (t0m)
    * Moose::Meta::Attribute
      - A lazy attribute with a default or builder did not attempt to coerce the default value. The immutable code _did_ coerce. (t0m)
      - added test for this (t0m)
    * Moose::Exporter
      - This is a new helper module for writing "Moose-alike" modules. This should make the lives of MooseX module authors much easier. (Dave Rolsky)
    * Moose
    * Moose::Cookbook::Meta::Recipe5
      - Implemented metaclass traits (and wrote a recipe for it):

          use Moose -traits => 'Foo'

        This should make writing small Moose extensions a little easier (Dave Rolsky)
    * Moose::Cookbook::Basics::Recipe1
      - Removed any examples of direct hashref access, and applied an editorial axe to reduce verbosity. (Dave Rolsky)
    * Moose::Cookbook::Basics::Recipe1
      - Also applied an editorial axe here. (Dave Rolsky)
    * Moose
    * Moose::Cookbook::Extending::Recipe1
    * Moose::Cookbook::Extending::Recipe2
      - Rewrote extending and embedding moose documentation and recipes to use Moose::Exporter (Dave Rolsky)
    * Moose
    * Moose::Role
      - These two modules now warn when you load them from the "main" package, because we will not export sugar to main. Previously it just did nothing. (Dave Rolsky)
    * Moose::Role
      - Now provide an init_meta method just like Moose.pm, and you can call this to provide an alternate role metaclass. (Dave Rolsky and nothingmuch)
      - get_method_map now respects the package cache flag (nothingmuch)
    * Moose::Meta::Role
      - Two new methods - add_method and wrap_method_body (nothingmuch)
    * many modules
      - Optimizations including allowing constructors to accept hash refs, making many more classes immutable, and making constructors immutable. (nothingmuch)

0.55 Sun August 3, 2008
    * Moose::Meta::Attribute
      - breaking down the way 'handles' methods are created so that the process can be more easily overridden by subclasses (stevan)
    * Moose::Meta::TypeConstraint
      - fixing what is passed into a ->message with the type constraints (RT #37569)
      - added tests for this (Charles Alderman)
    * Moose::Util::TypeConstraints
      - fix coerce to accept anon types like subtype can (mst)
    * Moose::Cookbook
      - reorganized the recipes into sections - Basics, Roles, Meta, Extending - and wrote abstracts for each section (Dave Rolsky)
    * Moose::Cookbook::Basics::Recipe10
      - A new recipe that demonstrates operator overloading in combination with Moose. (bluefeet)
    * Moose::Cookbook::Meta::Recipe1
      - an introduction to what meta is and why you'd want to make your own metaclass extensions (Dave Rolsky)
    * Moose::Cookbook::Meta::Recipe4
      - a very simple metaclass example (Dave Rolsky)
    * Moose::Cookbook::Extending::Recipe1
      - how to write a Moose-alike module to use your own object base class (Dave Rolsky)
    * Moose::Cookbook::Extending::Recipe2
      - how to write modules with an API just like C<Moose.pm> (Dave Rolsky)
    * all documentation
      - Tons of fixes, both syntactical and grammatical (Dave Rolsky, Paul Fenwick)

0.54
    * Moose
      - added "FEATURE REQUESTS" section to the Moose docs to properly direct people (stevan) (RT #34333)
      - making 'extends' croak if it is passed a Role since this is not ever something you want to do (fixed by stevan, found by obra)
      - added tests for this (stevan)
    * Moose::Object
      - adding support for DOES (as in UNIVERSAL::DOES) (nothingmuch)
      - added test for this
    * Moose::Meta::Attribute
      - added legal_options_for_inheritance (wreis)
      - added tests for this (wreis)
    * Moose::Cookbook::Snacks::*
      - removed some of the unfinished snacks that should not have been released yet. Added some more examples to the 'Keywords' snack. (stevan)
    * Moose::Cookbook::Style
      - added general Moose "style guide" of sorts to the cookbook (nothingmuch) (RT #34335)
    * t/
      - added more BUILDARGS tests (stevan)

0.51 Thurs. Jun 26, 2008
    * Moose::Role
      - add unimport so "no Moose::Role" actually does something (sartak)
    * Moose::Meta::Role::Application::ToRole
      - when RoleA did RoleB, and RoleA aliased a method from RoleB in order to provide its own implementation, that method still got added to the list of required methods for consumers of RoleB. Now an aliased method is only added to the list of required methods if the role doing the aliasing does not provide its own implementation. See Recipe 11 for an example of all this. (Dave Rolsky)
      - added tests for this
    * Moose::Meta::Method::Constructor
      - when a single argument that wasn't a hashref was provided to an immutabilized constructor, the error message was very unhelpful, as opposed to the non-immutable error. Reported by dew. (Dave Rolsky)
      - added test for this (Dave Rolsky)
    * Moose::Meta::Attribute
      - added support for meta_attr->does("ShortAlias") (sartak)
      - added tests for this (sartak)
      - moved the bulk of the `handles` handling to the new install_delegation method (Stevan)
    * Moose::Object
      - Added BUILDARGS, a new step in new()
    * Moose::Meta::Role::Application::RoleSummation
      - fix typos no one ever sees (sartak)
    * Moose::Util::TypeConstraints
    * Moose::Meta::TypeConstraint
    * Moose::Meta::TypeCoercion
      - Attempt to work around the ??{ } vs. threads issue (not yet fixed)
      - Some null_constraint optimizations

0.50 Thurs. Jun 11, 2008
    - Fixed a version number issue by bumping all modules to 0.50.

0.49 Thurs. Jun 11, 2008

    !! This version now approx. 20-25%    !!
    !! faster with new Class::MOP 0.59    !!

    * Moose::Meta::Attribute
      - fixed how the is => (ro|rw) works with custom defined reader, writer and accessor options.
      - added docs for this (TODO).
      - added tests for this (Thanks to Penfold)
      - added the custom attribute alias for regular Moose attributes which is "Moose"
      - fix builder and default both being used (groditi)
    * Moose
      Moose::Meta::Class
      Moose::Meta::Attribute
      Moose::Meta::Role
      Moose::Meta::Role::Composite
      Moose::Util::TypeConstraints
      - switched usage of reftype to ref because it is much faster
    * Moose::Meta::Role
      - changing add_package_symbol to use the new HASH ref form
    * Moose::Object
      - fixed how DEMOLISHALL is called so that it can be overridden in subclasses (thanks to Sartak)
      - added test for this (thanks to Sartak)
    * Moose::Util::TypeConstraints
      - move the ClassName type check code to Class::MOP::is_class_loaded (thanks to Sartak)
    * Moose::Cookbook::Recipe11
      - add tests for this (thanks to tokuhirom)

0.48

    !! This version now approx. 20-25%    !!
    !! faster with new Class::MOP 0.57    !!

    * Moose::Meta::Class
      - some optimizations of the &initialize method since it is called so often by &meta
    * Moose::Meta::Class
      Moose::Meta::Role
      - now use the get_all_package_symbols from the updated Class::MOP, test suite is now 10 seconds faster
    * Moose::Meta::Method::Destructor
      - is_needed can now also be called as a class method for immutabilization to check if the destructor object even needs to be created at all
    * Moose::Meta::Method::Destructor
      Moose::Meta::Method::Constructor
      - added more descriptive error message to help keep people from wasting time tracking an error that is easily fixed by upgrading.

0.45 Saturday, May 24, 2008
    * Moose
      - Because of work in Class::MOP 0.57, ...

0.44 Sat. May 10, 2008
    * Moose
      - made make_immutable warning cluck to show where the error is (thanks mst)
    * Moose::Object
      - BUILDALL and DEMOLISHALL now call ->body when looping through the methods, to avoid the overloaded method call.
      - fixed issue where DEMOLISHALL was eating the $@ values, and so not working correctly, it still kind of eats them, but so does vanilla perl
      - added tests for this
    * Moose::Cookbook::Recipe7
      - added new recipe for immutable functionality (thanks Dave Rolsky)
    * Moose::Cookbook::Recipe9
      - added new recipe for builder and lazy_build (thanks Dave Rolsky)
    * Moose::Cookbook::Recipe11
      - added new recipe for method aliasing and exclusion with Roles (thanks Dave Rolsky)
    * t/
      - fixed Win32 test failure (thanks spicyjack)
    ~ removed Build.PL and Module::Build compat since Module::Install has done that.

0.43
    ~~ numerous documentation updates ~~
    - Changed all usage of die to Carp::croak for better error reporting (initial patch by Tod Hagan)

    ** IMPORTANT NOTE **
    - the make_immutable keyword is now deprecated, don't use it in any new code and please fix your old code as well. There will be 2 releases, and then it will be removed.
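The IMPORTANT NOTE above deprecates the `make_immutable` sugar keyword; the supported replacement is calling the method on the class's metaclass object. A minimal sketch, assuming Moose is installed (the Point class is a made-up example):

```perl
package Point;
use Moose;

has 'x' => ( is => 'rw', isa => 'Int' );
has 'y' => ( is => 'rw', isa => 'Int' );

# Instead of the deprecated make_immutable keyword, call the
# metaclass method directly; this inlines the constructor and
# accessors for a significant runtime speedup.
__PACKAGE__->meta->make_immutable;

1;
```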
    * Moose
      Moose::Role
      Moose::Meta::Class
      - refactored the way inner and super work to avoid any method/@ISA cache penalty (nothingmuch)
    * Moose::Meta::Class
      - fixing &new_object to make sure trigger gets the coerced value (spotted by Charles Alderman on the mailing list)
      - added test for this
    * Moose::Meta::Method::Constructor
      - immutable classes which had non-lazy attributes were calling the default generating sub twice in the constructor. (bug found by Jesse Luehrs, fixed by Dave Rolsky)
      - added tests for this (Dave Rolsky)
      - fix typo in initialize_body method (nothingmuch)
    * Moose::Meta::Method::Destructor
      - fix typo in initialize_body method (nothingmuch)
    * Moose::Meta::Method::Overriden
      Moose::Meta::Method::Augmented
      - moved the logic for these into their own classes (nothingmuch)
    * Moose::Meta::Attribute
      - inherited attributes may now be extended without restriction on the type ('isa', 'does') (Sartak)
      - added tests for this (Sartak)
      - when an attribute property is malformed (such as lazy without a default), give the name of the attribute in the error message (Sartak)
      - added the &applied_traits and &has_applied_traits methods to allow introspection of traits
      - added tests for this
      - moved 'trait' and 'metaclass' argument handling to here from Moose::Meta::Class
      - clone_and_inherit_options now handles 'trait' and 'metaclass' (has '+foo' syntax) (nothingmuch)
      - added tests for this (t0m)
    * Moose::Object
      - localize $@ inside DEMOLISHALL to avoid it eating $@ (found by Ernesto)
      - added test for this (thanks to Ernesto)
    * Moose::Util::TypeConstraints
      - &find_type_constraint now DWIMs when given a type constraint object or name (nothingmuch)
      - &find_or_create_type_constraint superseded with a number of more specific functions:
        - find_or_create_{isa,does}_type_constraint
        - find_or_parse_type_constraint
    * Moose::Meta::TypeConstraint
      Moose::Meta::TypeConstraint::Class
      Moose::Meta::TypeConstraint::Role
      Moose::Meta::TypeConstraint::Enum
      Moose::Meta::TypeConstraint::Union
      Moose::Meta::TypeConstraint::Parameterized
      - added the &equals method for comparing two type constraints (nothingmuch)
      - added tests for this (nothingmuch)
    * Moose::Meta::TypeConstraint
      - add the &parents method, which is just an alias to &parent. Useful for polymorphism with TC::{Class,Role,Union} (nothingmuch)
    * Moose::Meta::TypeConstraint::Class
      - added the class attribute for introspection purposes (nothingmuch)
      - added tests for this
    * Moose::Meta::TypeConstraint::Enum
      Moose::Meta::TypeConstraint::Role
      - broke these out into their own classes (nothingmuch)
    * Moose::Cookbook::Recipe*
      - fixed references to test file locations in the POD and updated some text for new Moose features (Sartak)
    * Moose::Util
      - Added &resolve_metaclass_alias, a helper function for finding an actual class for a short name (e.g. in the traits list)

0.40 Fri. March 14, 2008
    - I hate Pod::Coverage

0.39 Fri. March 14, 2008
    * Moose
      - documenting the use of '+name' with attributes that come from recently composed roles. It makes sense, people are using it, and so why not just officially support it.
      - fixing the 'extends' keyword so that it will not trigger Ovid's bug ()
    * oose
      - added the perl -Moose=+Class::Name feature to allow monkeypatching of classes in one-liners
    * Moose::Util
      - fixing the 'apply_all_roles' keyword so that it will not trigger Ovid's bug ()
    * Moose::Meta::Class
      - added ->create method which now supports roles (thanks to jrockway)
      - added tests for this
      - added ->create_anon_class which now supports roles and caching of the results (thanks to jrockway)
      - added tests for this
      - made ->does_role a little more forgiving when it is checking Class::MOP era metaclasses.
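The ->create and ->create_anon_class entries in 0.39 above accept a roles option. A sketch of what that looks like; it assumes Moose is installed, and My::Runtime::Class and My::Role are hypothetical names (My::Role must be a loaded Moose role for this to run):

```perl
use Moose::Meta::Class;

# Build a class at runtime; per the 0.39 entry above, the 'roles'
# option applies the listed roles as part of class creation.
my $meta = Moose::Meta::Class->create(
    'My::Runtime::Class',
    superclasses => ['Moose::Object'],
    roles        => ['My::Role'],
);

# The returned metaclass can then construct instances directly.
my $obj = $meta->new_object;
```

create_anon_class takes the same options minus the class name, and (per the entry) caches its results.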
    * Moose::Meta::Role::Application::ToInstance
      - it is now possible to pass extra params to be used when a role is applied to an instance (rebless_params)
      - added tests for this
    * Moose::Util::TypeConstraints
      - class_type now accepts an optional second argument for a custom message. POD annotated accordingly (groditi)
      - added tests for this
      - it is now possible to make anon-enums by passing 'enum' an ARRAY ref instead of the $name => @values. Everything else works as before.
      - added tests for this
    * t/
      - making test for using '+name' on attributes consumed from a role, it works and makes sense too.
    * Moose::Meta::Attribute
      - fix handles so that it doesn't return nothing when the method cannot be found, not sure why it ever did this originally, this means we now have slightly better support for AUTOLOADed objects
      - added more delegation tests
      - adding ->does method to this so as to better support traits and their introspection.
      - added tests for this
    * Moose::Object
      - localizing the Data::Dumper configurations so that it does not pollute others (RT #33509)
      - made ->does a little more forgiving when it is passed Class::MOP era metaclasses.

0.38 Fri. Feb. 15, 2008
    * Moose::Meta::Attribute
      - fixed initializer to correctly do type checking and coercion in the callback
      - added tests for this
    * t/
      - fixed some finicky tests (thanks to konobi)

0.37 Thurs. Feb. 14, 2008
    * Moose
      - fixed some details in Moose::init_meta and its superclass handling (thanks thepler)
      - added tests for this (thanks thepler)
      - 'has' now dies if you don't pass in name value pairs
      - added the 'make_immutable' keyword as a shortcut to make_immutable
    * Moose::Meta::Class
      Moose::Meta::Method::Constructor
      Moose::Meta::Attribute
      - making (init_arg => undef) work here too (thanks to nothingmuch)
    * Moose::Meta::Attribute
      Moose::Meta::Method::Constructor
      Moose::Meta::Method::Accessor
      - make lazy attributes respect attr initializers (rjbs)
      - added tests for this
    * Moose::Util::TypeConstraints
      Moose::Util::TypeConstraints::OptimizedConstraints
      Moose::Meta::TypeConstraints
      Moose::Meta::Attribute
      Moose::Meta::Method::Constructor
      Moose::Meta::Method::Accessor
      - making type errors use the assigned message (thanks to Sartak)
      - added tests for this
    * Moose::Meta::Method::Destructor
      - making sure DESTROY gets inlined properly with successive DEMOLISH calls (thanks to manito)
    * Moose::Meta::Attribute
      Moose::Meta::Method::Accessor
      - fixed handling of undef with type constraints (thanks to Ernesto)
      - added tests for this
    * Moose::Util
      - added &get_all_init_args and &get_all_attribute_values (thanks to Sartak and nothingmuch)

0.36 Sat. Jan. 26, 2008
    * Moose::Role
      Moose::Meta::Attribute
      - role type tests now support when roles are applied to non-Moose classes (found by ash)
      - added tests for this (thanks to ash)
      - couple extra tests to boost code coverage
    * Moose::Meta::Method::Constructor
      - improved fix for handling Class::MOP attributes
      - added test for this
    * Moose::Meta::Class
      - handled the add_attribute($attribute_meta_object) case correctly
      - added test for this

0.35 Tues. Jan. 22, 2008
    * Moose::Meta::Method::Constructor
      - fix to make sure even Class::MOP attributes are handled correctly (Thanks to Dave Rolsky)
      - added test for this (also Dave Rolsky)
    * Moose::Meta::Class
      - improved error message on _apply_all_roles, you should now use Moose::Util::apply_all_roles and you shouldn't have been using a _ prefixed method in the first place ;)

0.34 Mon. Jan. 21, 2008
    ~~~ more misc. doc. fixes ~~~
    ~~ updated copyright dates ~~

    Moose is now a postmodern object system :)
    - (see the POD for details)

    * <<Role System Refactoring>>
      - this release contains a major reworking and cleanup of the role system
        - 100% backwards compat.
        - Role application now restructured into separate classes based on type of applicants
        - Role summation (combining of more than one role) is much cleaner and anon-classes are no longer used in this process
        - new Composite role metaclass
        - runtime application of roles to instances is now more efficient and re-uses generated classes when applicable

    * <<New Role composition features>>
      - methods can now be excluded from a given role during composition
      - methods can now be aliased to another name (and still retain the original as well)

    * Moose::Util::TypeConstraints::OptimizedConstraints
      - added this module (see above)
    * Moose::Meta::Class
      - fixed the &_process_attribute method to be called by &add_attribute, so that the API is now correct
    * Moose::Meta::Method::Accessor
      - fixed a bug where passing a list of values to an accessor would get (incorrectly) ignored. Thanks to Sartak for finding this ;)
      - added tests for this (Sartak again)
    * Moose::Meta::Method::Accessor
      Moose::Meta::Method::Constructor
      Moose::Meta::Attribute
      Moose::Meta::TypeConstraint
      Moose::Meta::TypeCoercion
      - lots of cleanup of such things as:
        - generated methods
        - type constraint handling
        - error handling/messages (thanks to nothingmuch)
    * Moose::Meta::TypeConstraint::Parameterizable
      - added this module to support the refactor in Moose::Meta::TypeConstraint::Parameterized
    * Moose::Meta::TypeConstraint::Parameterized
      - refactored how these types are handled so they are more generic and not confined to ArrayRef and HashRef only
    * t/
      - shortened some file names for better VMS support (RT #32381)

0.33 Fri. Dec. 14, 2007

    !! Moose now loads 2 x faster !!
    !! with new Class::MOP 0.49   !!

    ++ new oose.pm module to make command line Moose-ness easier (see POD docs for more)

    * Moose::Meta::Class
    * Moose::Meta::Role
      - several tweaks to take advantage of the new method map caching in Class::MOP
    * Moose::Meta::TypeConstraint::Parameterized
      - allow subtypes of ArrayRef and HashRef to be used as a container (sartak)
      - added tests for this
      - basic support for coercion to ArrayRef and HashRef for containers (sartak)
      - added tests for this
    * Moose::Meta::TypeCoercion
      - coercions will now create subtypes as needed so you can now add coercions to parameterized types without having to explicitly define them
      - added tests for this
    * Moose::Meta::Method::Accessor
      - allow subclasses to decide whether we need to copy the value into a new variable (sartak)

0.32 Tues. Dec. 4, 2007
    * Moose::Util::TypeConstraints
      - fixing how subtype aliases of unions work; they should inherit the parent's coercion
      - added tests for this
      - you can now define multiple coercions on a single type at different times instead of having to do it all in one place
      - added tests for this
    * Moose::Meta::TypeConstraint
      - there is now a default constraint of sub { 1 } instead of Moose::Util::TypeConstraints setting this for us
    * Moose::Meta::TypeCoercion
    * Moose::Meta::TypeCoercion::Union
      - added the &has_coercion_for_type and &add_type_coercions methods to support the new features above (although you cannot add more type coercions for Union types)

0.31 Mon. Nov. 26, 2007
    * Moose::Meta::Attribute
      - made the +attr syntax handle extending types with parameters. So "has '+foo' => (isa => 'ArrayRef[Int]')" now works if the original foo is an ArrayRef.
      - added tests for this.
      - delegation now works even if the attribute does not have a reader method, using the get_read_method_ref method from Class::MOP::Attribute.
      - added tests for this
      - added docs for this
    * Moose::Util::TypeConstraints
      - passing no "additional attribute info" to &find_or_create_type_constraint will no longer attempt to create an __ANON__ type for you, instead it will just return undef.
      - added docs for this

0.30 Fri. Nov. 23, 2007
    * Moose::Meta::Method::Constructor
      - fixed a builder related bug in the inlined constructor. (groditi)
    * Moose::Meta::Method::Accessor
      - no longer generate unnecessary calls to predicates, and refactor code generation for runtime speed (groditi)
    * Moose::Util::TypeConstraints
      - fix ClassName constraint to introspect symbol table (mst)
      - added more tests for this (mst)
      - fixed it so that subtype 'Foo' => as 'HashRef[Int]' ... will work correctly.
      - added tests for this
    * Moose::Cookbook
      - adding the link to Recipe 11 (written by Sartak)
      - adding test for SYNOPSIS code
    * t/
      - New tests for builder bug. Upon instantiation, if an attribute had a builder, no value and was not lazy, the builder default was not getting run, oops. (groditi)

0.29 Tues. Nov. 13, 2007
    * Moose::Meta::Attribute
      - Fix error message on missing builder method (groditi)
    * Moose::Meta::Method::Accessor
      - Fix error message on missing builder method (groditi)
    * t/
      - Add test to check for the correct error message when builder method is missing (groditi)

0.28 Tues. Nov. 13, 2007
    - 0.27 packaged incorrectly (groditi)

0.27 Tues. Nov. 13, 2007
    * Moose::Meta::Attribute
      - Added support for the new builder option (groditi)
      - Added support for lazy_build option (groditi)
      - Changed slot initialization for predicate changes (groditi)
    * Moose::Meta::Method::Accessor
      - Added support for lazy_build option (groditi)
      - Fix inline methods to work with corrected predicate behavior (groditi)
    * Moose::Meta::Method::Constructor
      - Added support for lazy_build option (groditi)
    * t/
      - tests for builder and lazy_build (groditi)
    * fixing some misc. bits in the docs that got mentioned on CPAN Forum & perlmonks
    * Moose::Meta::Role
      - fixed how required methods are handled when they encounter overridden or modified methods from a class (thanks to confound).
      - added tests for this
    * Moose::Util::TypeConstraint
      - fixed the type notation parser so that the | always creates a union and so is no longer a valid type char (thanks to konobi, mugwump and #moose for working this one out.)
      - added more tests for this

0.26 Thurs. Sept.
27, 2007 == New Features == * Parameterized Types We now support parameterized collection types, such as: ArrayRef[Int] # array or integers HashRef[Object] # a hash with object values They can also be nested: ArrayRef[HashRef[RegexpRef]] # an array of hashes with regex values And work with the type unions as well: ArrayRef[Int | Str] # array of integers of strings * Better Framework Extendability Moose.pm is now "extendable" such that it is now much easier to extend the framework and add your own keywords and customizations. See the "EXTENDING AND EMBEDDING MOOSE" section of the Moose.pm docs. * Moose Snacks! In an effort to begin documenting some of the various details of Moose as well as some common idioms, we have created Moose::Cookbook::Snacks as a place to find small (easily digestable) nuggets of Moose code. ==== ~ Several doc updates/cleanup thanks to castaway ~ - converted build system to use Module::Install instead of Module::Build (thanks to jrockway) * Moose - added all the meta classes to the immutable list and set it to inline the accessors - fix import to allow Sub::Exporter like { into => } and { into_level => } (perigrin) - exposed and documented init_meta() to allow better embedding and extending of Moose (perigrin) * t/ - complete re-organization of the test suite - added some new tests as well - finally re-enabled the Moose::POOP test since the new version of DBM::Deep now works again (thanks rob) * Moose::Meta::Class - fixed very odd and very nasty recursion bug with inner/augment (mst) - added tests for this (eilara) * Moose::Meta::Attribute Moose::Meta::Method::Constructor Moose::Meta::Method::Accessor - fixed issue with overload::Overloaded getting called on non-blessed items. 
(RT #29269) - added tests for this * Moose::Meta::Method::Accessor - fixed issue with generated accessor code making assumptions about hash based classes (thanks to dexter) * Moose::Coookbook::Snacks - these are bits of documentation, not quite as big as Recipes but which have no clear place in the module docs. So they are Snacks! (horray for castaway++) * Moose::Cookbook::Recipe4 - updated it to use the new ArrayRef[MyType] construct - updated the accompanying test as well +++ Major Refactor of the Type Constraint system +++ +++ with new features added as well +++ * Moose::Util::TypeConstraint - no longer uses package variable to keep track of the type constraints, now uses the an instance of Moose::Meta::TypeConstraint::Registry to do it - added more sophisticated type notation parsing (thanks to mugwump) - added tests for this * Moose::Meta::TypeConstraint - some minor adjustments to make subclassing easier - added the package_defined_in attribute so that we can track where the type constraints are created * Moose::Meta::TypeConstraint::Union - this is now been refactored to be a subclass of Moose::Meta::TypeConstraint * Moose::Meta::TypeCoercion::Union - this has been added to service the newly refactored Moose::Meta::TypeConstraint::Union and is itself a subclass of Moose::Meta::TypeCoercion * Moose::Meta::TypeConstraint::Parameterized - added this module (taken from MooseX::AttributeHelpers) to help construct nested collection types - added tests for this * Moose::Meta::TypeConstraint::Registry - added this class to keep track of type constraints 0.25 Mon. Aug. 13, 2007 * Moose - Documentation update to reference Moose::Util::TypeConstraints under 'isa' in 'has' for how to define a new type (thanks to shlomif). 
* Moose::Meta::Attribute - required attributes now will no longer accept undef from the constructor, even if there is a default and lazy - added tests for this - default subroutines must return a value which passes the type constraint - added tests for this * Moose::Meta::Attribute * Moose::Meta::Method::Constructor * Moose::Meta::Method::Accessor - type-constraint tests now handle overloaded objects correctly in the error message - added tests for this (thanks to EvanCarroll) * Moose::Meta::TypeConstraint::Union - added (has_)hand_optimized_constraint to this class so that it behaves as the regular Moose::Meta::TypeConstraint does. * Moose::Meta::Role - large refactoring of this code - added several more tests - tests for subtle conflict resolition issues added, but not currently running (thanks to kolibre) * Moose::Cookbook::Recipe7 - added new recipe for augment/inner functionality (still in progress) - added test for this * Moose::Spec::Role - a formal definition of roles (still in progress) * Moose::Util - utilities for easier working with Moose classes - added tests for these * Test::Moose - This contains Moose specific test functions - added tests for these 0.24 Tues. July 3, 2007 ~ Some doc updates/cleanup ~ * Moose::Meta::Attribute - added support for roles to be given as parameters to the 'handles' option. - added tests and docs for this - the has '+foo' attribute form now accepts changes to the lazy option, and the addition of a handles option (but not changing the handles option) - added tests and docs for this * Moose::Meta::Role - required methods are now fetched using find_method_by_name so that required methods can come from superclasses - adjusted tests for this 0.23 Mon. 
June 18, 2007 * Moose::Meta::Method::Constructor - fix inlined constructor for hierarchy with multiple BUILD methods (mst) * Moose::Meta::Class - Modify make_immutable to work with the new Class::MOP immutable mechanism + POD + very basic test (groditi) * Moose::Meta::Attribute - Fix handles to use goto() so that caller() comes out properly on the other side (perigrin) 0.22 Thurs. May 31, 2007 * Moose::Util::TypeConstraints - fix for prototype undeclared issue when Moose::Util::TypeConstraints loaded before consumers (e.g. Moose::Meta::Attribute) by predeclaring prototypes for functions - added the ClassName type constraint, this checks for strings which will respond true to ->isa(UNIVERSAL). - added tests and docs for this - subtyping just in name now works correctly by making the default for where be { 1 } - added test for this * Moose::Meta::Method::Accessor - coerce and lazy now work together correctly, thanks to merlyn for finding this bug - tests added for this - fix reader presedence bug in Moose::Meta::Attribute + tests * Moose::Object - Foo->new(undef) now gets ignored, it is assumed you meant to pass a HASH-ref and missed. This produces better error messages then having it die cause undef is not a HASH. - added tests for this 0.21 Thursday, May 2nd, 2007 * Moose - added SUPER_SLOT and INNER_SLOT class hashes to support unimport - modified unimport to remove super and inner along with the rest - altered unimport tests to handle this * Moose::Role - altered super export to populate SUPER_SLOT * Moose::Meta::Class - altered augment and override modifier application to use *_SLOT - modified tests for these to unimport one test class each to test * Moose::Meta::Role - fixed issue where custom attribute metaclasses where not handled correctly in roles - added tests for this * Moose::Meta::Class - fixed issue where extending metaclasses with roles would blow up. Thanks to Aankhen`` for finding this insidious error, and ~~ compatibility sucks *cough* ;) 0.18 Sat. 
March 10, 2007 ~~ Many, many documentation updates ~~ * misc. - We now use Class::MOP::load_class to load all classes. - added tests to show types and subtypes working with Declare::Constraints::Simple and Test::Deep as constraint engines. :) 0.17 Tues. Nov. 14, 2006 * Moose::Meta::Method::Accessor - bugfix for read-only accessors which are have a type constraint and lazy. Thanks to chansen for finding it. 0.16 Tues. Nov. 14, 2006 ++ NOTE ++ There are some speed improvements in this release, but they are only the begining, so stay tuned. * Moose::Object - BUILDALL and DEMOLISHALL no longer get called unless they actually need to be. This gave us a signifigant speed boost for the cases when there is no BUILD or DEMOLISH method present. * Moose::Util::TypeConstraints * Moose::Meta::TypeConstraint - added an 'optimize_as' option to the type constraint, which allows for a hand optimized version of the type constraint to be used when possible. - Any internally created type constraints now provide an optimized version as well. 0.15 Sun. Nov. 5, 2006 ++ NOTE ++ This version of Moose *must* have Class::MOP 0.36 in order to work correctly. A number of small internal tweaks have been made in order to be compatible with that release. * Moose::Util::TypeConstraints - added &unimport so that you can clean out your class namespace of these exported keywords * Moose::Meta::Class - fixed minor issue which occasionally comes up during global destruction (thanks omega) - moved Moose::Meta::Method::Overriden into its own file. * Moose::Meta::Role - moved Moose::Meta::Role::Method into its own file. * Moose::Meta::Attribute - changed how we do type checks so that we reduce the overall cost, but still retain correctness. *** API CHANGE *** - moved accessor generation methods to Moose::Meta::Method::Accessor to conform to the API changes from Class::MOP 0.36 * Moose::Meta::TypeConstraint - changed how constraints are compiled so that we do less recursion and more iteration. 
This makes the type check faster :) - moved Moose::Meta::TypeConstraint::Union into its own file * Moose::Meta::Method::Accessor - created this from methods formerly found in Moose::Meta::Attribute * Moose::Meta::Role::Method - moved this from Moose::Meta::Role * Moose::Meta::Method::Overriden - moved this from Moose::Meta::Class * Moose::Meta::TypeConstraint::Union - moved this from Moose::Meta::TypeConstraint 0.14 Mon. Oct. 9, 2006 * Moose::Meta::Attribute - fixed lazy attributes which were not getting checked with the type constraint (thanks ashley) - added tests for this - removed the over-enthusiastic DWIMery of the automatic ArrayRef and HashRef defaults, it broke predicates in an ugly way. - removed tests for this 0.13 Sat. Sept. 30, 2006 ++ NOTE ++ This version of Moose *must* have Class::MOP 0.35 in order to work correctly. A number of small internal tweaks have been made in order to be compatible with that release. * Moose - Removed the use of UNIVERSAL::require to be a better symbol table citizen and remove a dependency (thanks Adam Kennedy) **~~ removed experimental & undocumented feature ~~** - commented out the 'method' and 'self' keywords, see the comments for more info. * Moose::Cookbook - added a FAQ and WTF files to document frequently asked questions and common problems * Moose::Util::TypeConstraints - added GlobRef and FileHandle type constraint - added tests for this * Moose::Meta::Attribute - if your attribute 'isa' ArrayRef of HashRef, and you have not explicitly set a default, then make the default DWIM. This will also work for subtypes of ArrayRef and HashRef as well. - you can now auto-deref subtypes of ArrayRef or HashRef too. 
- new test added for this (thanks to ashley) * Moose::Meta::Role - added basic support for runtime role composition but this is still *highly experimental*, so feedback is much appreciated :) - added tests for this * Moose::Meta::TypeConstraint - the type constraint now handles the coercion process through delegation, this is to support the coercion of unions * Moose::Meta::TypeConstraint::Union - it is now possible for coercions to be performed on a type union - added tests for this (thanks to konobi) * Moose::Meta::TypeCoercion - properly capturing error when type constraint is not found * Build.PL - Scalar::Util 1.18 is bad on Win32, so temporarily only require version 1.17 for Win32 and cygwin. (thanks Adam Kennedy) 0.12 Sat. Sept. 1, 2006 * 0.11 * ;) 0.09_03 Fri. June 23, 2006 ++ 0.09_02 ++ compatibility * 0.05 Thurs. April 27, 2006 * Moose - keywords are now exported with Sub::Exporter thanks to chansen for this commit - has keyword now takes a 'metaclass' option to support custom attribute meta-classes on a per-attribute basis - added tests for this - the 'has' keyword not accepts inherited slot specifications (has '+foo'). 
This is still an experimental feature and probably not finished see t/038_attribute_inherited_slot_specs.t for more details, or ask about it on #moose - added tests for this * Moose::Role - keywords are now exported with Sub::Exporter * Moose::Utils::TypeConstraints - reorganized the type constraint hierarchy, thanks to nothingmuch and chansen for his help and advice on this - added some tests for this - keywords are now exported with Sub::Exporter thanks to chansen for this commit * Moose::Meta::Class - due to changes in Class::MOP, we had to change construct_instance (for the better) * Moose::Meta::Attribute - due to changes in Class::MOP, we had to add the initialize_instance_slot method (it's a good thing) * Moose::Meta::TypeConstraint - added type constraint unions - added tests for this - added the is_subtype_of predicate method - added tests for this 0.04 Sun. April 16th, 2006 * Moose::Role - Roles can now consume other roles - added tests for this - Roles can specify required methods now with the requires() keyword - added tests for this * Moose::Meta::Role - ripped out much of it's guts ,.. 
much cleaner now - added required methods and correct handling of them in apply() for both classes and roles - added tests for this - no longer adds a does() method to consuming classes it relys on the one in Moose::Object - added roles attribute and some methods to support roles consuming roles * Moose::Meta::Attribute - added support for triggers on attributes - added tests for this - added support for does option on an attribute - added tests for this * Moose::Meta::Class - added support for attribute triggers in the object construction - added tests for this * Moose - Moose no longer creates a subtype for your class if a subtype of the same name already exists, this should DWIM in 99.9999% of all cases * Moose::Util::TypeConstraints - fixed bug where incorrect subtype conflicts were being reported - added test for this * Moose::Object - this class can now be extended with 'use base' if you need it, it properly loads the metaclass class now - added test for this 0.03_02 Wed. April 12, 2006 * Moose - you must now explictly use Moose::Util::TypeConstraints it no longer gets exported for you automatically * Moose::Object - new() now accepts hash-refs as well as key/value lists - added does() method to check for Roles - added tests for this * Moose::Meta::Class - added roles attribute along with the add_role() and does_role() methods - added tests for this * Moose::Meta::Role - now adds a does() method to consuming classes which tests the class's hierarchy for roles - added tests for this 0.03_01 Mon. 
April 10, 2006 * Moose::Cookbook - added new Role recipe (no content yet, only code) * Moose - added 'with' keyword for Role support - added test and docs for this - fixed subtype quoting bug - added test for this * Moose::Role - Roles for Moose - added test and docs * Moose::Util::TypeConstraints - added the message keyword to add custom error messages to type constraints * Moose::Meta::Role - the meta role to support Moose::Role - added tests and docs * Moose::Meta::Class - moved a number of things from Moose.pm to here, they should have been here in the first place * Moose::Meta::Attribute - moved the attribute option macros here instead of putting them in Moose.pm * Moose::Meta::TypeConstraint - added the message attributes and the validate method - added tests and docs for this 0.03 Thurs. March 30, 2006 * Moose::Cookbook - added the Moose::Cookbook with 5 recipes, describing all the stuff Moose can do. * Moose - fixed an issue with &extends super class loading it now captures errors and deals with inline packages correctly (bug found by mst, solution stolen from alias) - added super/override & inner/augment features - added tests and docs for these * Moose::Object - BUILDALL now takes a reference of the %params that are passed to &new, and passes that to each BUILD as well. * Moose::Util::TypeConstraints - Type constraints now survive runtime reloading - added test for this * Moose::Meta::Class - fixed the way attribute defaults are handled during instance construction (bug found by chansen) * Moose::Meta::Attribute - read-only attributes now actually enforce their read-only-ness (this corrected in Class::MOP as well) 0.02 Tues. March 21, 2006 * Moose - many more tests, fixing some bugs and edge cases - &extends now loads the base module with UNIVERSAL::require - added UNIVERSAL::require to the dependencies list ** API CHANGES ** - each new Moose class will also create and register a subtype of Object which correspond to the new Moose class. 
- the 'isa' option in &has now only accepts strings, and will DWIM in almost all cases * Moose::Util::TypeConstraints - added type coercion features - added tests for this - added support for this in attributes and instance construction ** API CHANGES ** - type construction no longer creates a function, it registers the type instead. - added several functions to get the registered types * Moose::Object - BUILDALL and DEMOLISHALL were broken because of a mis-named hash key, Whoops :) * Moose::Meta::Attribute - adding support for coercion in the autogenerated accessors * Moose::Meta::Class - adding support for coercion in the instance construction * Moose::Meta::TypeConstraint * Moose::Meta::TypeCoercion - type constraints and coercions are now full fledges meta-objects 0.01 Wed. March 15, 2006 - Moooooooooooooooooose!!! | https://metacpan.org/changes/release/DOY/Moose-2.0205 | CC-MAIN-2016-50 | refinedweb | 14,685 | 52.7 |
I think you should put them back into the System.Collections namespace and suffix each type with ‘Base’, e.g. CollectionBase<T>
Seems to me that you are guaranteeing that Collection<T> et al will end up in a backwater.
I’m not sure that the needs of Intellisense should drive any decisions about how to structure namespaces.
That said, if you were going to jigger the namespaces to push these classes into the background, may I suggest System.Collections.Generic.Fundamental
Since VB doesn’t import System.Collections.Generic by default I guess I would just leave these types in System.Collections.Generic. I cringe when I hear folks make "unusual" API design choices just to accommodate Intellisense. Ew.
OK I just tried this in the FebCTP bits. If you manually import System.Collections.ObjectModel you get this in the Intellisense list:
Collection
Collection(Of T)
That doesn’t seem confusing to me at all. Just put ’em back where they belong. 🙂
Please stop punishing the rest of us for things that you assume VB people won’t understand.
We don’t want to suffix them with "Base" because they will often show in APIs and it’s slightly worse to see
public CollectionBase<FileInfo> Files { get;}
… than
public Collection<FileInfo> Files { get;}.
I do agree with the sentiment that seeing both Collection and Collection(Of T) in Intellisense is not bad (and I will pass this feedback to the VB team). I would personally prefer the collection in the main namespace myself, but I did get strong feedback from the VB team that it is a problem for many people.
Thirdly, I have to say that I disagree that Intellisense should not influence API design. Modern development platforms are APIs + Tools. They are not separable. You need to design them together. Of course, knowing when to change APIs and when to fix tools is an important consideration. In this particular case we decided that changing the API was net better than changing the tools.
Well, herein lies the conundrum. When you put it into a namespace like ‘ObjectModel’ you imply it’s going to be used in someone’s object model. So consider a User object in an ECC design pattern style object model. You would have User, UserCollection and UserFactory. You would not have User, Collection<User>, UserFactory.
I would mark it abstract, teach people how to inherit ala
public class UserCollection : Collection<User>
{}
and drive good design. Let’s face it, in an ‘ObjectModel’ you would strong type directly (even if it’s as simple as the above).
When you consume ala Collection<User> the collection is not part of the same object model as the User (my OM). It is part of your OM and I am a consumer.
Sometimes you people really make me wonder. Are you seriously going to muck up the netFx BCL to support backwards compat with a VB6 object?
If you ask me you should either put it back and deprecate the VB object, or mark it abstract and force people to use it via inheritance in *their* object model.
This is just dumb…
In Beta2, we will be adding the default import of System.Collections.Generic to VB – for some reason it didn’t make it into Beta1. So this will be an issue.
And Krzysztof makes it sound like we think that VB users would be scared simply by the fact that they saw a generic and non-generic type together. This is incorrect and misleading. The issue is not that you’ll see "Collection" and "Collection(Of T)", but that the two types are *radically different*. The naming implies that the latter is just a generic version of the former, but nothing could be further from the truth. There are many, many behavioral differences between the two types, some of them extremely subtle.
The point is just that whether you are a VB developer, C# developer or a C++ developer (or, hell, an Eiffel developer), seeing two types with the same name overloaded on arity implies that the types have some passing relationship to each other and that one can move to a more strongly typed world simply by adding "<x>" or "(Of x)". This violates that simple principle pretty heavily.
In fact, the *CLR itself* has cited just this issue as one of the reasons why we have ArrayList and List(Of T) instead of ArrayList and ArrayList(Of T). Frankly, what’s good for the goose is good for the gander.
Well, pushing Collection(Of T) into ObjectModel isn’t really going to solve the problem, is it? VB programmers will see that someone’s API returns a Collection(Of Foo) (the programmer was lazy and didn’t derive from the type but used it directly) and, by your reckoning, they will be confused because the generic version is much richer than the VB collection. OK, I can believe that. 🙂 Seems to me the VB folks would really prefer that you rename Collection(Of T) to something else. Perhaps Paul Murphy has it right – mark them abstract. Then name them Collection*Base(Of T) and put them back in System.Collections.Generic.
How are we supposed to sort a collection that is derived from Collection(Of T), since Me.InnerList doesn’t exist in Collection(Of T)?
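For what it's worth, one workaround (just a sketch — the User/UserCollection names are illustrative, and this assumes .NET 2.0's Collection<T>): the protected Items property exposes the backing IList<T>, and when the collection is created with the default constructor that backing store is in fact a List<T>, so a derived class can cast it and sort.

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class User
{
    public string Name;
    public User(string name) { Name = name; }
}

// Hypothetical derived collection. Collection<T>'s default constructor
// backs the protected Items property with a List<T>, so the cast below
// is safe for collections constructed that way.
public class UserCollection : Collection<User>
{
    public void Sort(Comparison<User> comparison)
    {
        ((List<User>)Items).Sort(comparison);
    }
}
```

Note the caveat: if the collection was constructed around some other IList<T> (via the Collection<T>(IList<T>) constructor), the cast fails, so copying Items into a fresh List<T>, sorting, and writing back is the safer general-purpose approach.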
I think Paul is correct here. VB already has the type Collection, and it would make sense in the future if that was Collection<T>, which would indicate a type where the items are strongly typed and accessible by both index and key.
ObjectModel.Collection<T>, on the other hand, seems to be nothing more than a class that attempts to encapsulate List<T>, yet does so rather poorly as it still exposes a reference to the list via the Items property.
Seriously, I find myself asking why we even have ObjectModel.Collection<T> when we have List<T>. And to name it so that the only thing preventing a clash with a generic form of the VB Collection class is having to use namespaces seems really silly.
$0.02
scratch that last bit about not seeing the reason for ObjectModel.Collection(Of T).. I thought Items was public, not protected. So that changes things a lot 😉
Still, on the other point, the potential clash with a VB Collection class that is generic is still problematic and a rename would have been more suitable; perhaps something like SimpleList would have been more appropriate, and more self describing too when looking at the plethora of list/collection types to choose from.
I have to agree with "Paul D. Murphy". You’re about to make another poor namespace decision because of VB6, and make all of us pay for it (the VB.NET/C# consumers, a group that should be GROWING at a much more substantial pace than VB6).
Not to mention that VB6 people STILL don’t use VB.NET… Why? Because they like (*REALLY LIKE*)/understand/don’t-wanna-learn-OO-VB VB6.
Why don’t you focus more on education and documentation? For instance, push the responsibility onto the VB team. When Intellisense comes up, are ya gonna show the "OLD" Microsoft.VisualBasic.Collections (which STILL use 1-based numbering) in the list? That’s fine, just give the people who know "VB.NET" the correct option of System.Collections.Collection (hmmm, did the VB6 people get confused with the two "Collection" names in the namespace?)
I urge you to reconsider such a silly idea. I urge the VB team to step up and consider modifying intellisense to cater to BOTH crowds that you continue to coddle. (IMO, the VB.Net crowd SHOULD be who you’re focusing on, since they are obviously more forward thinking than the people that still start new projects in VB6 "because they dont wanna learn OO-VB")
This is the lamest thing I’ve ever heard. Could Microsoft design worse APIs? Why can Microsoft create such great runtimes but such crappy APIs?
Collection is NOT the sensible name for a class that implements ICollection *AND* IList. ListCollection is the right name.
Either do that or redefine ICollection and IList.
As the name ICollection clearly indicates, ICollection is a generic definition of an interface for an object that "holds" something in some way. It does not define the additional features an object implementing IList would. You think Collection is the best name for an object that implements ICollection *AND* IList? I don’t understand why you just didn’t call the class "Thing".
Why can Sun get OOP but not Microsoft?
I recommend Microsoft programmers have a look at Java’s collection classes and marvel at their beauty. The names "List" and "Collection" used for concrete classes in the beta2 BCL make me shudder.
List and Collection are *abstract* concepts. LinkedList, ArrayList and HashMap, TreeMap, etc. are concrete concepts. Any first-year algorithms student knows this…
You could just rename "ObjectModel" to "BaseClasses" since the ObjectModel just contains abstract base classes for other classes to inherit from.
The majority of ObjectModel collections are not abstract and in fact are often used directly.
I have to say that this was a truly terrible idea, for a number of reasons:
1. "Modern development platforms are APIs + Tools. They are not seperable. You need to design them together."
Yet another example of Microsoft’s astonishing arrogance. Are you implying that the only way to write .NET code is to use Visual Studio? Because, guess what, it’s not. Design tools are an essential part of a modern development environment, yes; but the framework should be built to stand the test of ANY environment, and the specific tools which are built ON TOP OF (not alongside) that framework should conform to the framework.
2. Please, please, PLEASE stop assuming that all Visual Basic programmers are stupid. If they don’t know enough about the environment they are working in to know the difference between VisualBasic’s Collection and the framework’s Collection<>, that is what the MSDN is for; they can go look it up. Relegating an otherwise useful collection to a backwater namespace where no one can find it because you are afraid that a small subset of users is going to get confused is what happens when marketing people start making programming decisions.
3. If you are afraid that people might think the two classes are related when in fact they are not, maybe that should tell you that one (or both) of the classes is misnamed. In fact, Collection<> was a horrible choice for that class. "Collection" is a general term for a structure which holds things; hence the interfaces ICollection and ICollection<>. Collection<> is actually a list; in fact, the only difference between Collection<> and List<> is that Collection<> has some overridable methods for taking specific actions when the list changes. That makes Collection<> a MORE specific version of List<>, and yet it was given a LESS specific name. How did this make it out of the first round of design? Surely even the interns at Microsoft could have figured this one out? And if you were so worried about name confusion, why doesn’t the vast difference between the similarly named ICollection<> and Collection<> bother you?
I love .NET, but I am too often confounded by the obviously terrible decisions that are made by the designers (witness the forthcoming collection initializers in C# 3.0). At the very least Microsoft needs to spend more time getting community feedback on design issues in earlier stages, so that we can point out the obvious flaws before it is too late to correct them.
David,
Thanks for your perspective.
“Are you implying that the only way to write .NET code is to use Visual Studio?”
No. First, we build roads because many people use cars and cars drive better on roads. The fact that we build roads does not imply that cars are the only means of transportation. Secondly, Intellisense is common in many code editors these days.
“Please, please, PLEASE stop assuming that all Visual Basic programmers are stupid.”
I don’t think all VB programmers are stupid. Please don’t imply that I do :-). As to the main point, I hope you are not arguing that we can make APIs arbitrarily confusing and expect that MSDN will straighten it all out. If you are just pointing out that we made a bad trade off in this case, then point taken. And, no marketing person was involved in making this decision.
Could you suggest a better name for Collection<T>? We wanted a name that is short and works well in the main scenario: the type of properties. In such scenarios, the user does not care much about anything besides that the property represents a collection of something.
I know I’m late in here but how is the new Collection(Of T) better than CollectionBase? There are no protected overridable methods like OnInsertComplete. How do you write a base collection class and not include methods to hook events on for something like ObjectAdded or ObjectRemoved?
These are the basic constructs that make working with collections usable. I have noticed that in .NET 3.0 there is an ObservableCollection that I wish was in .NET 2.0.
Generics are such a great feature to 2.0, I just wish I could use something in the framework that makes sense on collections.
Collection(Of T) doesn’t even have overridable Add() or Remove() methods that you could hook your own events to. Instead I have to use the cheesy Shadows keyword (or new in C#).
Seems CollectionBase has more flexibility, although when I extend it in a generic derived collection, it no longer works with the "Show As Collection Association" in the Class Diagram.
Dave, you should take a look at the following MSDN topic on Collection<T>:
It shows how to override protected APIs to hook up events.
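For reference, the extension points on Collection<T> work like this: all of the public mutators (Add, Insert, Remove, RemoveAt, the indexer, Clear) route through a small set of protected virtual members, so overriding those is enough to observe every change. A sketch — the event names here are illustrative, not part of the framework:

```csharp
using System;
using System.Collections.ObjectModel;

public class ObservedCollection<T> : Collection<T>
{
    public event EventHandler ItemAdded;
    public event EventHandler ItemRemoved;

    // Add() and Insert() both funnel into InsertItem.
    protected override void InsertItem(int index, T item)
    {
        base.InsertItem(index, item);           // performs the actual insert
        if (ItemAdded != null) ItemAdded(this, EventArgs.Empty);
    }

    // Remove() and RemoveAt() both funnel into RemoveItem.
    protected override void RemoveItem(int index)
    {
        base.RemoveItem(index);
        if (ItemRemoved != null) ItemRemoved(this, EventArgs.Empty);
    }

    // SetItem and ClearItems can be overridden the same way.
}
```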
That’s fine, but having a base class to do it is the whole point to me. Basically I might as well write my own implementation of IList(Of T) if I have to override all the functionality of Collection(Of T) anyway. I just need to make sure I implement them correctly, which saves no time.
I’ll look forward to ObservableCollection in 3.0. For now I’ll stick with CollectionBase.
Thanks.
Dave, I don’t understand how it is different from CollectionBase. You have to do exactly the same with CollectionBase. In addition, when you inherit from CB you need to provide strongly typed accessors. Also, CB will box value types, but Collection<T> won’t. Collection<T> is basically a superior CB.
It’s better because I am fairly certain that the .NET team implemented the OnInsertComplete and OnRemoveComplete methods correctly and I can capture these methods and raise my own events. With Collection<T> I have to make sure my implementation is working at every entry point on Add(), Remove(), AddRange(), RemoveAt(), I have to override every method with my own implementation or any other method I might add to the collection, which costs time in testing.
I can also just extend CollectionBase with my own generic abstract class which takes care of the type; I would have to override Collection<T>’s methods anyway. The biggest point is that whichever method an object gets added to or removed from the collection through, I can count on getting notified by the protected methods and not have to worry about my implementation doing it correctly. To me the most useful thing in collections is managing and notifying when objects are added or removed; all collections have the basic enumeration methods, and any custom Get() methods have to be added regardless. Updating UI list components with underlying domain objects is my main use.
Any time I get involved in discussions about why irreconcilable stuff had to be done to accommodate VB users, I wonder which VB users Microsoft talks to. In the 90’s I needed a quick-to-learn programming language to do my research (I was working then as a PhD in medical research). I learnt VB. Within 2 years of getting into the groove of this stuff, I loved it but felt ashamed of myself. First, over 10% of the really powerful stuff could not be done with Microsoft VB. Meanwhile, many 3rd party folks made a living out of providing those same capabilities (as VB plugins) for a fee.
Then, the coder community would constantly deride the VB world as not really …. bla! bla! It was an admirable language but I had to hate it, because it does not help your self esteem for your language of choice to be talked down to like that.
I don’t blame the c++ or the java communities for this. I blame Microsoft. Microsoft created VB as programming for dummies. And even now that equally smart people are doing VB, that sentiment shows up even in design choices. O! we have to do so and so because the VB people can’t wrap their head around it!! That’s a shame.
It was so hard introducing zero-based indexing to VB.NET, and so ensued the option_base sentiments of 2000/2001. But daily, head-spinning stuff gets added to C# and it’s understood that it won’t be a problem.
Since then, I have gone to school to do an advanced degree in Computer Science and majored in Programming Languages. I was hoping to find out the reason why VB could not do most of the things C++ could. I found none. And I challenge anyone who disagrees to a one-on-one forum. The only restriction I found was: how much can we add to this language before we "confuse the target audience, seeing as they are dummies"?
Now, I fluently write many languages from C to java, from c# to prolog. I am currently playing with F#. I am at the right place to take a wide angle look at all these and come to the conclusion that only VB has been so dastardly disrespected, and that is a political problem that Microsoft has fostered around the perceived professional competence of the VB developer.
I am a senior architect at a major IT firm and I hire developers. I write C# for a living because of its textual elegance. But even if VB.NET had textual elegance, I would think twice, because that name has become synonymous with "language for dummies".
It aches my heart when we have to offer a lower compensation package to VB developers compared to their C# counterparts.
The common language infrastructure initiative is predicated on the ideal that "what one language can do, another language also can". That should be the ‘independence of the VB developer from his stigma’. But Microsoft has been entrenched in this VB philosophy for so long that it’s now unstoppable: everything else, including the otherwise elegant .NET BCL, has to be polluted into alignment and dumbed down to the level of comprehension of a VB developer who can’t understand Intellisense.
As an aside, the problem with designing an API and a tool together is that you make basic assumptions about the creativity of the users of the tool. No other field does this so brazenly. Users of ReSharper will find this excuse laughable, because it automatically offers imports of type namespaces when a type is used. So the Microsoft argument fails if all namespaces were one continuum of automatically injected namespaces in the fluid world of ReSharper. That’s just one example.
What I would do is to make the API elegant, intuitive and obey the rules. Then make the tool follow closely behind.
Peter, I appreciate your high level message/point. I do agree that we sometimes forget how powerful VB became with the move to .NET and we keep treating it as a RAD-only tool, and it is not only that anymore.
Having said that, the decision to shield the VB developers from Collection<T> is unrelated to this issue. We would do the same if C# had a preexisting collection that had a very similar name but had nothing to do with the generic collection. In fact we do it all the time when we design the Framework. That is, we are very careful to avoid having two types with the same name differing only in the generic parameters, unless the types are very closely related.
Alright Krzysztof, let’s go on an exploratory journey through the BCL as seen in C#.
The following Timer classes even have exactly the same name, but a namespace separation is sufficient to indicate that they are different. And indeed, they are different. Very different!
System.Threading.Timer;
System.Timers.Timer;
System.Collections already has CollectionBase. I would have supported calling the generic version CollectionBase<T> as well, except that ClassBase seems to be a convention I’d like to reserve for abstract base classes.
What does the VB collection do that would be so difficult to deprecate, anyway? The only difference I see b/w Microsoft.VisualBasic.Collection and System.Collections.ObjectModel.Collection<T> is that the VB collection can insert an object between two other objects. If this is so important, implement it in the System.Collections namespace and make Collection<T> do it too. If you ask me, a VB Collection is nearly like ArrayList. Retaining the name made sense for automatically migrating VB code in .NET 1.0 and 1.1. Any further than that, it begins to be excess luggage like C++ inherited from C, stifling its evolution.
In practice, VB migration to VB.NET has no real turn-key solution. Most of the old code requires redesign, anyway!
If you need a collection of objects accessed by a key, and the key is one of the object’s properties, then you should use the KeyedCollection class instead of Dictionary.
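A minimal sketch of that approach — the type and property names here are illustrative. KeyedCollection<TKey, TItem> extracts the key from each item via the one abstract method you override, so callers can index by key without maintaining a separate Dictionary:

```csharp
using System.Collections.ObjectModel;

public class Order
{
    public Order(string id) { this.id = id; }
    private readonly string id;
    public string Id { get { return id; } }   // the key lives on the item itself
}

public class OrderCollection : KeyedCollection<string, Order>
{
    // The collection calls this to obtain each item's key,
    // enabling lookups like orders["A42"].
    protected override string GetKeyForItem(Order item)
    {
        return item.Id;
    }
}
```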
OK, so I’m a bit late. But I so totally agree with Thong Nguyen (April 1, 2005 7:08 AM) that I had to express my support for what he says…
"This is the lamest thing I’ve every heard. Could Microsoft design worse APIs?"
"Collection is NOT the sensible name for a class that implements ICollection *AND* IList. ListCollection is the right name.
Either do that or redefine ICollection and IList. "
"List and Collections are *abstract* concepts. LinkedList, ArrayList and HashMap, TreeMap, etc are concrete concepts."
Collection<> actually being an IList<>, and List<> actually being logically a vector/ArrayList, have both tripped me up. I’m sure I’m not the only one.
Please, naming is one of the most important things to get right in an API.
https://blogs.msdn.microsoft.com/kcwalina/2005/03/15/the-reason-why-collection-readonlycollection-and-keyedcollection-were-moved-to-system-collections-objectmodel-namespace/
I'm trying to understand the expected behaviour of the assignment of
columns from an EOD SQL @Select to a data class using JavaBeans style
property accessors. I'm looking at section "19.1.5 User Class" of JDBC 4.0.
I have a class similar to the example but a little different. The
difference is the private field name does not match the name of the
JavaBean property, is that allowed?
public class Person
{
private String myName;
public void setName(String name) {
this.myName = name;
}
public String getName() {
return myName;
}
}
If my select returns a column called 'NAME' then it does not map to the
JavaBean property called 'name'. Instead the name of the column needs to
map to the name of the private field, 'myName'. Then the field is set
correctly but the setter is never used. Is this a bug? It seems like it.
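(For contrast, a hedged sketch of the conventional JavaBean shape — the private field named to match the property — which is what name-based column mapping generally expects. The class and values below are illustrative, not from the spec:)

```java
public class Person {
    // Field name matches the bean property "name" (and a NAME column).
    private String name;

    public void setName(String name) { this.name = name; }
    public String getName() { return name; }

    public static void main(String[] args) {
        Person p = new Person();
        p.setName("Dan");
        System.out.println(p.getName());  // prints Dan
    }
}
```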
Thanks,
Dan.

http://mail-archives.apache.org/mod_mbox/db-derby-dev/200609.mbox/%3C4509FCBC.2020800@apache.org%3E
socket.connect() hangs in SYN_SENT state.
- From: bukzor <workitharder@xxxxxxxxx>
- Date: Sat, 12 Jul 2008 20:23:16 -0700 (PDT)
I'm having an issue where my program hangs while doing
socket.connect() for a couple minutes, then times out the connection
and crashes.
Doing a minimal amount of research, I found this in the netstat
manual:
The state SYN_SENT means that an application has made a request for a
TCP session, but has not yet received the return SYN+ACK packet.
This would indicate it's a server issue, but it seems very stable when
I look at it via a browser.
Here's the server. If you browse to it, it documents the exported
functions:
Here's my test client that's hanging. Turn 'verbose' to True to get
more debugging info.
[code]
#!/usr/bin/env python
from xmlrpclib import ServerProxy
s = ServerProxy("",
verbose=False)
print s.helloworld()
print s.add(1,2)
print s.subtract(1,2)
[/code]
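One way to keep connect() from blocking for minutes (a sketch, not from the original post — the address below is a deliberately unroutable placeholder): set a process-wide socket timeout before creating the proxy. Shown here in modern Python 3 syntax; in the Python 2 code above, the same socket.setdefaulttimeout() call is the usual workaround, since xmlrpclib's ServerProxy takes no timeout argument.

```python
import socket

# Cap all new sockets at 5 seconds instead of the OS default
# (roughly two minutes of SYN retransmissions on Linux).
socket.setdefaulttimeout(5.0)

failed = False
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # 192.0.2.1 (TEST-NET-1) is reserved and unroutable, so this
    # connect cannot succeed; it now fails within 5 seconds at most.
    s.connect(("192.0.2.1", 8000))
except OSError as e:
    failed = True
    print("connect failed quickly instead of hanging:", e)
finally:
    s.close()
```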
Thanks,
--Buck
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-07/msg01251.html
Suppose we have a series called f. Each term of f follows the rule f[i] = f[i - 1] - f[i - 2], and we have to find the Nth term of this sequence, given f[0] = X and f[1] = Y. If X = 2, Y = 3, and N = 3, the result will be -2.
If we look closely, the sequence repeats itself every six terms. So we will find the first six terms of the series; the Nth term is then the same as the (N mod 6)th term.
#include <iostream>
using namespace std;

int searchNthTerm(int x, int y, int n) {
    int terms[6];
    terms[0] = x;
    terms[1] = y;
    for (int i = 2; i <= 5; i++)
        terms[i] = terms[i - 1] - terms[i - 2];
    return terms[n % 6];
}

int main() {
    int x = 2, y = 3, n = 3;
    cout << "Term at index " << n << " is: " << searchNthTerm(x, y, n);
}
Term at index 3 is: -2

https://www.tutorialspoint.com/find-the-nth-term-of-the-series-where-each-term-f-f-f-in-cplusplus
SVG allows inclusion of elements from foreign namespaces anywhere within the SVG content. In general, the SVG user agent will include the unknown elements in the DOM but will otherwise ignore them. (The notable exception is described under the Embedding foreign object types section, below.) The 'foreignObject' element is an extensibility point which allows user agents to offer graphical rendering features beyond those which are defined within this specification.
The 'foreignObject' element has two ways of including foreign content. One is to reference external content by using the 'xlink:href' attribute, the other is to include child content of the 'foreignObject' element. When the 'xlink:href' attribute is specified the child content of the 'foreignObject' element must not be displayed.
All mouse events:
x = "<coordinate>"
The x-axis coordinate of one corner of the rectangular region into which the graphics associated with the contents of the 'foreignObject' will be rendered.
The lacuna value is '0'.
Animatable: yes.
y = "<coordinate>"
The y-axis coordinate of one corner of the rectangular region into which the referenced document is placed.
The lacuna value is '0'.
Animatable: yes.
width = "<length>"
The width of the rectangular region into which the referenced document is placed.
A negative value is unsupported. A value of zero disables visual rendering of the element. The lacuna value is '0'.
Animatable: yes.
height = "<length>"
The height of the rectangular region into which the referenced document is placed.
A negative value is unsupported. A value of zero disables visual rendering of the element. The lacuna value is '0'.
Animatable: yes.
xlink:href = "<IRI>"
An IRI reference. If this attribute is present, then the foreign content must be loaded from this resource and what child content the 'foreignObject' element may have must not be displayed. If this attribute is not present then the 'foreignObject' child content must be displayed if the user agent is capable of handling it.
Animatable: yes.
See attribute definition for description.
Animatable: yes.
See definition.
Here are several examples using a switch and the 'foreignObject' element.
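Before the larger examples, a minimal sketch (not taken from the specification itself; the requiredExtensions IRI shown is the one conventionally used to indicate XHTML support):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="50">
  <switch>
    <!-- Rendered only if the user agent supports embedded XHTML. -->
    <foreignObject width="200" height="50"
                   requiredExtensions="http://www.w3.org/1999/xhtml">
      <p xmlns="http://www.w3.org/1999/xhtml">Hello from XHTML</p>
    </foreignObject>
    <!-- Fallback: no test attributes, so this always evaluates to true. -->
    <text x="10" y="30">Hello from SVG</text>
  </switch>
</svg>
```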
This is an example of an arbitrary XML language, with comments to explain what happens at each step:
<svg xmlns="" width="4in" height="3in"> <desc> This example uses the 'switch' element to provide a fallback graphical representation of the text, if weirdML is not supported. </desc> <!-- The 'switch' element will process the first child element whose testing attributes evaluate to true.--> <switch> <!-- Process the embedded weirdML if the requiredExtensions attribute evaluates to true (i.e., the user agent supports weirdML embedded within SVG). --> <foreignObject x="50" y="20" width="100" height="50" requiredExtensions=""> <!-- weirdML content goes here --> <FreakyText xmlns=""> <sparklies q="42"/> <throbber seed="1234"/> <swirl twist="yeah, baby"/> <txt>This is throbbing, swirly text with sparkly bits</txt> </FreakyText> </foreignObject> <!-- Else, process the following alternate SVG. Note that there are no testing attributes on the 'textArea' element. If no testing attributes are provided, it is as if there were testing attributes and they evaluated to true.--> <textArea x="50" y="20" width="100" height="50" font- This is plain, conservative SVGT 1.2 text in a textArea. The text wraps within the confines of the element's dimensions. </textArea> </switch> </svg>
This is an example of MathML in SVG:
<svg xmlns="" width="100%" height="100%" viewBox="0 0 600 500"> <title>Quadratic Equation</title> <desc> A sample of MathML in SVG, using the 'foreignObject' element to represent a quadratic equation, with a graphical SVG representation for fallback. </desc> <switch> <foreignObject x="20" y="20" width="600" height="500" requiredExtensions=""> <math xmlns=""> <mrow> <mrow> <mi>f</mi> <mfenced> <mi>x</mi> </mfenced> </mrow> <mo>=</mo> <mrow> <msup> <mi>x</mi> <mn>2</mn> </msup> <mo>+</mo> <mrow> <mn>4</mn> <mi>x</mi> </mrow> <mo>-</mo> <mrow> <mn>3</mn> </mrow> </mrow> </mrow> </math> </foreignObject> <g fill="gray" transform="translate(300,250)"> <rect x="-300" y="-250" width="600" height="500" fill="white" stroke="gray" /> <g id="axes" font- <line id="x-axis" x1="-300" y1="0" x2="300" y2="0" stroke="gray"/> <line id="x-axis-markers" x1="-300" y1="0" x2="300" y2="0" stroke="gray" stroke- <line id="y-axis" x1="0" y1="-250" x2="0" y2="250" stroke="gray"/> <line id="y-axis-markers" x1="0" y1="-200" x2="0" y2="250" stroke="gray" stroke- <text x="-200" y="20" font--4</text> <text x="-100" y="20" font--2</text> <text x="100" y="20" font-2</text> <text x="200" y="20" font-4</text> <text x="15" y="-198" font-4</text> <text x="15" y="-98" font-2</text> <text x="15" y="102" font--2</text> <text x="15" y="202" font--4</text> </g> <path id="graph" stroke- <circle id="vertex" cx="-50" cy="200" r="2" fill="blue" /> <circle id="y-intercept-1" cx="0" cy="150" r="2" fill="red" /> <circle id="x-intercept-1" cx="-150" cy="0" r="2" fill="red" /> <circle id="x-intercept-2" cx="50" cy="0" r="2" fill="red" /> </g> </switch> </svg>
This is an example of XHTML in SVG:
<svg xmlns="" width="100%" height="100%" viewBox="0 0 300 140"> <title>Chinese-English Unicode Table</title> <desc> A sample of XHTML in SVG, using the 'foreignObject' element to include an XHTML 'table' with some Chinese-to-English correspondances, with an ad-hoc SVG representation for fallback. </desc> <switch> <foreignObject width="300" height="140" requiredExtensions=""> <table xmlns=""> <caption>Using Chinese Characters in SVG</caption> <tr> <th>English</th> <th>Chinese</th> </tr> <tr y="75"> <td>moon</td> <td>月</td> <td>6708</td> </tr> <tr y="100"> <td>tree</td> <td>木</td> <td>6728</td> </tr> <tr y="125"> <td>water</td> <td>水</td> <td>6c34</td> </tr> </table> </foreignObject> <text font- <tspan x="150" y="25" font-Using Chinese Characters in SVG</tspan> <tspan y="50"> <tspan x="50">English</tspan> <tspan x="150">Chinese</tspan> <tspan x="250">Unicode</tspan> </tspan> <tspan y="75"> <tspan x="50">moon</tspan> <tspan x="150">月</tspan> <tspan x="250">6708</tspan> </tspan> <tspan y="100"> <tspan x="50">tree</tspan> <tspan x="150">木</tspan> <tspan x="250">6728</tspan> </tspan> <tspan y="125"> <tspan x="50">water</tspan> <tspan x="150">水</tspan> <tspan x="250">6c34</tspan> </tspan> </text> </switch> </svg> | http://www.w3.org/TR/SVGMobile12/extend.html | CC-MAIN-2015-22 | refinedweb | 1,041 | 50.36 |
I have a class, and that class contains a struct, but I am having problems returning the struct to main and reading out the values the right way.
Code:
void member::setgroup (Group thegroup)
{
    TypeAcc = thegroup;
    std::cout << TypeAcc.Type << std::endl;
    std::cout << TypeAcc.CheckNum << std::endl;
} // setgroup function, everything works fine here.

Code:
Group member::getgroup ()
{
    return (TypeAcc);
} // the problem?

Code:
int main ()
{
    Group Newgroup;
    Newgroup.Type = CharGroup[0];
    Newgroup.CheckNum = ADMIN;
    std::cout << Newgroup.Type << "\n" << Newgroup.CheckNum << std::endl;
    member admin ("KaK", "sven_v45@hotmail.com", 16, Newgroup);
    Group N;
    admin.getgroup() = N; // might be here...
    std::cout << N.Type << "\n" << N.CheckNum << std::endl;
    return 0;
}
Output:
Code:
C:\Documents and Settings\Sven\cbproject\ConsoleApp1\windows\Debug_Build\ConsoleApp1.exe
ADMIN  -> GOOD
1      -> GOOD
ADMIN  -> GOOD   // these are the output functions called inside setgroup
1      -> GOOD
       -> BAD    // this is faulty, did i return TypeAcc wrong?
8      -> BAD    // same
well, it's got me wondering...

http://cboard.cprogramming.com/cplusplus-programming/88019-struct-class-problem.html
Lots of people have asked me to create a short version of the Design Guidelines. Here it is. You can also email me directly at kcwalina@microsoft.com if you would like to get an MS Word copy of the digest, which has a bit better formatting.
[UPDATE: I recently updated this document and placed it online. The details are described here.]
Also, I would be interested in knowing what you think about the selection of the guidelines in the digest. Have I omitted something important? Have I included something that could be cut?
API Design Guidelines Digest
Krzysztof Cwalina (kcwalina@microsoft.com)
Program Manager, Microsoft
The full .NET Framework Design Guidelines document consists of more than 200 pages of detailed prescriptive guidance. It can be accessed on MSDN. This document is a distilled version highlighting the most important of those guidelines.
Moving to a managed execution environment (the .NET Framework) offers an opportunity to improve the programming model. For several reasons, we strongly advise designers to treat the Design Guidelines as if they were prescriptive. We believe that developer productivity can be seriously hampered by inconsistent design.
Development tools and add-ins will turn some of these guidelines into de facto prescriptive rules, and reduce the value of non-conforming components. These non-conforming components will function, but not to their full potential.
It is very important that you follow the guidelines provided here. However, there are instances where good library design dictates that these guidelines be broken. In such cases, it is important to provide solid justification.
Scenario Driven Design: Start the design process of your public API by defining the top scenarios for each feature area. Write the code you would like end users to write when implementing these scenarios using your API. Design your API based on the sample code you wrote.
Usability Studies: Test usability of your API. Choose developers who are not familiar with your API and have them implement the main scenarios. Try to identify which parts of your API are not intuitive.
Self Documenting API: Developers using your API should be able to implement main scenarios without reading the documentation. Help users to discover what types they need to use in main scenarios and what the semantics of the main methods are by choosing intuitive names for most used types and members. Talk about naming choices during specification reviews.
Understand Your Customer: Realize that the majority of your customers are not like you. You should design the API for your customer, not for developers working in your close working group, who, unlike the majority of your customers, are experts in the technology you are trying to expose.
Casing and naming guidelines apply only to public and protected identifiers, and privately implemented interface members. Teams are free to choose their own guidelines for internal and private identifiers.
Do use PascalCasing (capitalize the first letter of each word) for all identifiers except parameter names. For example, use TextColor rather than Textcolor or Text_color.
Do use camelCasing (capitalize the first letter of each word except the first) for all parameter names.
Do use PascalCasing or camelCasing for any acronyms over two characters long. For example, use HtmlButton rather than HTMLButton, but System.IO instead of System.Io.
Do not use acronyms that are not generally accepted in the field.
Do use well-known acronyms only when absolutely necessary. For example, use UI for User Interface and Html for Hyper-Text Markup Language.
Do not use shortenings or contractions as parts of identifier names. For example, use GetWindow rather than GetWin.
Do not use underscores, hyphens, or any other non-alphanumeric characters.
Do not use Hungarian notation.
Do name types and properties with nouns or noun phrases.
Do name methods and events with verbs or verb phrases. Always give events names that have a concept of before and after, using the present participle and simple past tense. For example, an event that is raised before a Form closes should be named Closing. An event raised after a Form is closed should be named Closed.
Do not use the “Before” or “After” prefixes to indicate pre and post events.
Do use the following prefixes:
· “I” for interfaces.
· “T” for generic type parameters (except single letter parameters).
Do use the following postfixes:
· “Exception” for types inheriting from System.Exception.
· “Collection” for types implementing IEnumerable.
· “Dictionary” for types implementing IDictionary or IDictionary<K,V>.
· “EventArgs” for types inheriting from System.EventArgs.
· “EventHandler” for types inheriting from System.Delegate.
· “Attribute” for types inheriting from System.Attribute.
Do not use the postfixes listed above for any other types.
Do not postfix type names with “Flags” or “Enum”.
Do use plural noun phrases for flag enums (enums with values that support bitwise operations) and singular noun phrases for non-flag enums.
Do apply FlagsAttribute to flag enums.
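A sketch of the two enum styles (type and member names are illustrative):

```csharp
using System;

[Flags]
public enum FileAccessModes   // plural noun phrase: values combine bitwise
{
    None = 0,
    Read = 1,
    Write = 2,
    Execute = 4
}

public enum Color             // singular noun phrase: values are exclusive
{
    Red,
    Green,
    Blue
}

class Demo
{
    static void Main()
    {
        FileAccessModes modes = FileAccessModes.Read | FileAccessModes.Write;
        Console.WriteLine(modes);   // [Flags] makes this print "Read, Write"
    }
}
```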
Do use the following template for naming namespaces: <Company>.<Technology>[.<Feature>]. For example, Microsoft.Office.ClipGallery. Operating System components should use System namespaces instead for the <Company> namespaces.
Do not use organizational hierarchies as the basis for namespace hierarchies. Namespaces should correspond to scenarios regardless of what teams contribute APIs for those scenarios.
Do use the most derived type for return values and the least derived type for input parameters. For example take IEnumerable as an input parameter but return Collection<string> as the return type.
Do provide a clear API entry point for every scenario. Every feature area should have preferably one, but sometimes more, types that are the starting points for exploring a given technology. We call such types Aggregate Components. Implementation of the large majority of scenarios in a given technology area should start with one of the Aggregate Components.
Do write sample code for your top scenarios. The first type used in all these samples should be an Aggregate Component and the sample code should be straightforward. If the code gets longer than several lines, you need to redesign. Writing to an event log in Win32 API was around 100 lines of code. Writing to .NET Framework EventLog takes one line of code.
Do model higher level concepts (physical objects) rather than system level tasks with Aggregate Components. For example File, Directory, Drive are easier to understand than Stream, Formatter, Comparer.
Do not require users of your APIs to instantiate multiple objects in main scenarios. Simple tasks should be done with one new statement.
Do support so called “Create-Set-Call” programming style in all Aggregate Components. It should be possible to instantiate every component with the default constructor, set one or more properties, and call simple methods or respond to events.
EventLog applicationLog = new EventLog();
applicationLog.Source = "MySource";
applicationLog.WriteEntry(exception.Message);
Do not require extensive initialization before Aggregate Components can be used. If some initialization is necessary, the exception resulting from not having the component initialized should clearly explain what needs to be done.
Do carefully choose names for your types, methods, and parameters. Think hard about the first name people will try typing in the code editor when they explore the feature area. Reserve and use this name for the Aggregate Component. A common mistake is to use the “best” name for a base type.
Do Run FxCop on your libraries.
Do ensure your library is CLS compliant. Apply CLSCompliantAttribute to your assembly.
Do prefer classes over interfaces.
Do not seal types unless you have a strong reason to do it.
Do not create mutable value types.
Do not ship interfaces without providing at least one default implementation (a type implementing the interface). This helps to validate the interface design.
Do not ship interfaces without providing at least one API consuming the interface (a method taking the interface as a parameter). This helps to validate the interface design.
Avoid public nested types.
Do strongly prefer collections over arrays in public API.
Do provide strongly typed collections.
Do not use ArrayList, List<T>, Hashtable, or Dictionary<K,V> in public APIs. Use Collection<T>, ReadOnlyCollection<T>, KeyedCollection<K,T>, or CollectionBase subtypes instead. Note that the generic collections are only supported in the Framework version 2.0 and above.
Do not use error codes to report failures. Use Exceptions instead.
Do not throw Exception or SystemException, the base types.
Avoid catching the Exception base type.
Do prefer throwing existing common general purpose exceptions like ArgumentNullException, ArgumentOutOfRangeException, InvalidOperationException instead of defining custom exceptions.
Do throw the most specific exception possible.
Do ensure that exception messages are clear and actionable.
Do provide delegates with signatures following the pattern below for all events: <EventName>EventHandler(object sender, <EventArgs> e)
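The pattern sketched out (the type and event names are illustrative):

```csharp
using System;

public class DownloadCompletedEventArgs : EventArgs
{
    private readonly string fileName;
    public DownloadCompletedEventArgs(string fileName) { this.fileName = fileName; }
    public string FileName { get { return fileName; } }
}

// Delegate follows the <EventName>EventHandler(object sender, <EventArgs> e) shape.
public delegate void DownloadCompletedEventHandler(
    object sender, DownloadCompletedEventArgs e);

public class Downloader
{
    public event DownloadCompletedEventHandler DownloadCompleted;

    // Conventional protected raiser so subclasses can participate.
    protected virtual void OnDownloadCompleted(DownloadCompletedEventArgs e)
    {
        if (DownloadCompleted != null) DownloadCompleted(this, e);
    }
}
```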
Do prefer event based APIs over delegate based APIs.
Do prefer constructors over factory methods.
Do not expose public fields. Use properties instead.
Do prefer properties for concepts with logical backing store but use methods in the following cases:
· The operation is a conversion (such as Object.ToString())
· The operation is expensive (orders of magnitude slower than a field set would be).
· Obtaining a property value using the Get accessor has an observable side effect.
· Calling the member twice in succession results in different results.
· The member returns an array. Note: Members returning arrays should return copies of an internal master array, not a reference to the internal array.
Do allow properties to be set in any order. Properties should be stateless with respect to other properties.
Do not make members virtual unless you have a strong reason to do it.
Avoid finalizers.
Do implement IDisposable on all types acquiring native resources and those that provide finalizers.
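A sketch of the standard dispose pattern that satisfies both of the last two guidelines — the native handle here is hypothetical:

```csharp
using System;

public class NativeResourceHolder : IDisposable
{
    private IntPtr handle;      // hypothetical native resource handle
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // the finalizer is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release the native handle here
        handle = IntPtr.Zero;
        disposed = true;
    }

    // Finalizer only because this type owns a native resource.
    ~NativeResourceHolder()
    {
        Dispose(false);
    }
}
```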
Do be consistent in the ordering and naming of method parameters.
It is common to have a set of overloaded methods with an increasing number of parameters to allow the developer to specify a desired level of information. Make sure all the related overloads have a consistent parameter order (the same parameter shows up in the same place in each signature) and naming pattern.
public class Foo {
    readonly string defaultForA = "default value for a";
    readonly int defaultForB = 42;

    public void Bar() {
        Bar(defaultForA, defaultForB);
    }

    public void Bar(string a) {
        Bar(a, defaultForB);
    }

    public void Bar(string a, int b) {
        // core implementation here
    }
}
The only method in such a group that should be virtual is the one that has the most parameters and only when extensibility is needed.
Avoid out and ref parameters.
FxCop is a code analysis tool that checks .NET managed code assemblies for conformance to the Microsoft .NET Framework Design Guidelines. It uses reflection, MSIL parsing, and callgraph analysis to inspect assemblies for more than 200 defects in the following areas:
· Library design
· Localization
· Naming conventions
· Performance
· Security
FxCop includes both GUI and command line versions of the tool, as well as an SDK to create custom rules. The tool can be downloaded from
Full Design Guidelines can be accessed at. These provide some more detail and some justifications for the guidelines described above.
Design Guideline updates are posted to the following blog: http://blogs.msdn.com/b/kcwalina/archive/2004/09/28/235232.aspx
Dialog windows are not turning modal anyway
Bug Description
When I create any GtkDialog in my application, I set it up with these two calls: set_transient_for() and set_modal().
#!/usr/bin/python
from gi.repository import Gtk

def button_clicked(widget):
    fch = Gtk.FileChooserDialog("Choose a file", win,
                                Gtk.FileChooserAction.OPEN)
    fch.set_transient_for(win)
    fch.set_modal(True)
    fch.show_all()
    fch.run()

win = Gtk.Window()
btn = Gtk.Button("Click me!")
win.add(btn)
btn.connect("clicked", button_clicked)
win.set_default_size(200, 100)
win.show_all()
win.connect("destroy", Gtk.main_quit)
Gtk.main()
When I uninstall the overlay-scrollbar and liboverlay-scrollbar packages, the dialogs behave as modal again.
My workaround is to disable overlay scrollbars.
vinicius@
Description: Ubuntu 11.10
Release: 11.10
vinicius@
overlay-scrollbar:
  Installed: (none)
  Candidate: 0.2.11-0ubuntu1
  Version table:
     0.2.11-0ubuntu1 0
        500 http://
What I expected to happen: with any child dialog's modal property set to true, all parent window widgets would be blocked.
What happened: some widgets (GtkButton, GtkComboBox) are not blocked.
ProblemType: Bug
DistroRelease: Ubuntu 11.10
Package: overlay-scrollbar (not installed)
ProcVersionSign
Uname: Linux 3.0.0-14-generic x86_64
ApportVersion: 1.23-0ubuntu4
Architecture: amd64
Date: Mon Dec 12 15:18:17 2011
InstallationMedia: Ubuntu 11.10 "Oneiric Ocelot" - Release amd64 (20111012)
ProcEnviron:
LANGUAGE=
PATH=(custom, no user)
LANG=pt_BR.UTF-8
SHELL=/bin/bash
SourcePackage: overlay-scrollbar
UpgradeStatus: No upgrade log present (probably fresh install)
Status changed to 'Confirmed' because the bug affects multiple users.
The only workaround I found is to uninstall the package "overlay-scrollbar".
#!/usr/bin/python
import os
os.environ['LIBOVERLAY_SCROLLBAR'] = '0'
from gi.repository import Gtk
...
This is known to severely break at least one upstream package; please see http://
Confirm this bug:
When I create a custom dialog in GTK (both, GTK2 or GTK3) and set it to be modal, all input to other windows of my application is ignored. This works nearly always, but it fails under certain conditions.
When I add a ScrolledWindow containing a TreeView to my dialog, it still works as expected. But if I fill the TreeView with entries until the ScrolledWindow starts to display its scroll bars, the modality is suddenly lost and I can click on my other windows!
On GNU/Linux Ubuntu 13.04 (32-bit or 64-bit), using the official 3.8.1 Claws Mail or the PPA 3.9.1 Claws Mail, the window managers (GNOME 3, Unity, or Openbox/LXDE) lose some popup windows. I have to kill the application.
To reproduce, try to configure a filter; you should "lose" a popup in the process: the popup is unresponsive and you are then stuck. It happened for several different popups.
re-opened because I modified the wrong bug report!!
Tried with lxde and Claws 3.9.2 and it works fine for me.
I suspect that, if anything, it's mishandling of modal windows in unity, and that rather than losing a window it's actually a modal window that wrongly goes behind another window, thus making the presented window appear to be unresponsive.
This is not specific to Unity. As I wrote, it seems all window managers are affected. What I forgot to mention: Sylpheed (the "other one") is affected by the same bug (Evolution is not).
I saw what you wrote but, nevertheless, not all window managers are affected. Of the 3 you listed, I tried lxde and could not reproduce the problem.
evolution is gtk3. sylpheed, like claws-mail is gtk2.
Then it would be GNU/Linux Debian Ubuntu specific, with high suspicion on their gtk2 build.
This report could be relevant:
https:/
Do you have the overlay-scrollbar packages installed?
Tested: popup windows now responsive after full overlay-gtk removal.
Great work!
Moving the bug to the proper faulty component.
Confirmed in Ubuntu 13.04 (x86 and x86-64). In Claws or Sylpheed, which are GTK2 applications, some modal boxes can become unresponsive with overlay scrollbars installed (add a filter in Claws to reproduce). Everything goes back to normal when the overlay scrollbar is removed.
Some instances of this bug in action:
https:/
http://
http://
http://
http://
http://
Confirmed in ubuntu 14.04 -14.10 - 15.04 (x86 and x86-64).
I can add: this also affects GTK2 applications! In my opinion it's a very severe bug!
I'm developing an application (it's compatible with GTK2 and GTK3, and this problem persists regardless of the GTK version used) and I want to prevent users from doing some nasty stuff while the dialog is open. This is impossible! It also affects all GtkFileChoosers, since there are many scroll bars inside; you can test it with Gedit's save-as dialog.
In my opinion this exposes severe risks of application misbehaviour, since many applications won't work as the developer intended.
I’ve got a 16-bit DAC here, the DAC8550 (by TI). I’ve managed to connect it to an Arduino UNO over SPI, but I’m having problems getting predictable results from it.
This chip expects data to be sent to it in two’s complement format from –32768 to +32767, most significant bit first.
On power-up, this particular model's output is set at midscale.
This DAC requires 3 bytes sent to it. In the first byte, the first 6 bits are 'don't care' and the last two select sleep and power-down modes. The other 2 bytes carry the data.
I’ve made a for loop to generate values from –32760 to +32760 to be written to the DAC, and monitored the DAC's output with a multimeter.
The output from this DAC was only loosely correlated with the values sent to it.
In my case midrange is at about 2.45v.
This is what I’ve observed when for loop sent values to DAC:
from -32k to -20k output voltage went up and down about midrange from 2.4 to 2.7
at around -20k voltage jumped to 3.2v
at around -15k voltage dropped to 0
from -15k to about -8k voltage moved up and down between 0 and 0.2v
at around -8k voltage started steadily climbing to midpoint.
from 0 to 7k voltage jumped about midpoint
at around 8k voltage climbed to 3.1v and then dropped back to midpoint
at around 16k voltage climbed to 3.6v and then dropped back to midpoint
from around 20k to 32k voltage climbed from midpoint to 4.9v
#include "SPI.h" int dacCSpin = 10, i; byte data1 = 0, data2 = 0; void setup() { pinMode(dacCSpin, OUTPUT); SPI.begin(); } void loop() { i = 1 - 32760; for ( i = i; i < 32760; i = i + 10) { data1 = highByte(i); data2 = lowByte(i); digitalWrite(dacCSpin, LOW); SPI.transfer(0b00000000); SPI.transfer(data1); SPI.transfer(data2); digitalWrite(dacCSpin, HIGH); Serial.print("sent to DAC: "); Serial.print(i); Serial.print(" in binary: "); Serial.println(i, BIN); Serial.println(" "); } delay (11000); }
Perhaps I should format values I send to DAC as Two’s Complement somehow? Datasheet specifically mentioned DAC8551 as ‘decimal’ counterpart to DAC8550.
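If it helps to check the byte framing, here is a small sketch (in Python, not Arduino code; the function name and the 0x00 control byte for normal operation are my assumptions based on the description above) of how a signed 16-bit value maps onto the three SPI bytes in two's complement, MSB first:

```python
def dac8550_frame(value):
    """Return the 3 SPI bytes for a signed 16-bit value: a control byte
    (assumed 0x00 = normal operation), then the data MSB, then the LSB."""
    if not -32768 <= value <= 32767:
        raise ValueError("value out of 16-bit signed range")
    word = value & 0xFFFF              # two's complement bit pattern
    return [0x00, (word >> 8) & 0xFF, word & 0xFF]

print(dac8550_frame(-32768))  # [0, 128, 0]
print(dac8550_frame(-1))      # [0, 255, 255]
print(dac8550_frame(32767))   # [0, 127, 255]
```

Note that `highByte(i)`/`lowByte(i)` on a negative Arduino `int` already produce the same two's-complement bytes, so the framing itself may not be the problem.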
Will be grateful for any advice! | https://forum.arduino.cc/t/solved-problem-with-sending-two-s-complement-values-to-dac/379532 | CC-MAIN-2021-43 | refinedweb | 373 | 75.81 |
in Python
A Workbook
John D. Hunter
Fernando Pérez
Andrew Straw
Contents
Chapter 1. Introduction 5
Chapter 2. Simple non-numerical problems 7
1. Sorting quickly with QuickSort 7
2. Dictionaries for counting words 8
Chapter 7. Statistics 37
1. Descriptive statistics 37
2. Statistical distributions 40
CHAPTER 1
Introduction
This document contains a set of small problems, drawn from many different fields, meant to
illustrate commonly useful techniques for using Python in scientific computing.
All problems are presented in a similar fashion: the task is explained including any necessary
mathematical background and a ‘code skeleton’ is provided that is meant to serve as a starting
point for the solution of the exercise. In some cases, some example output of the expected solution,
figures or additional hints may be provided as well.
The accompanying source download for this workbook contains the complete solutions, which
are not part of this document for the sake of brevity.
For several examples, the provided skeleton contains pre-written tests which validate the correctness
of the expected answers. When you have completed the exercise successfully, you should
be able to run it from within IPython and see something like this (illustrated using a trapezoidal
rule problem, whose solution is in the file trapezoid.py):
In [7]: run trapezoid.py
....
----------------------------------------------------------------------
Ran 4 tests in 0.003s
OK
This message tells you that 4 automatic tests were successfully executed. The idea of including
automatic tests in your code is a common one in modern software development, and Python includes
in its standard library two modules for automatic testing, with slightly different functionality:
unittest and doctest. These tests were written using the unittest system, whose complete
documentation can be found here:
module-unittest.html.
Other exercises will illustrate the use of the doctest system, since it provides complementary
functionality.
CHAPTER 2

Simple non-numerical problems

1. Sorting quickly with QuickSort
def qsort(lst):
"""Return a sorted copy of the input list."""
raise NotImplementedError
if __name__ == ’__main__’:
from unittest import main, TestCase
import random
class qsortTestCase(TestCase):
def test_sorted(self):
seq = range(10)
sseq = qsort(seq)
self.assertEqual(seq,sseq)
def test_random(self):
tseq = range(10)
rseq = range(10)
random.shuffle(rseq)
sseq = qsort(rseq)
self.assertEqual(tseq,sseq)
main()
Hints.
• Python has no particular syntactic requirements for implementing recursion,
but it does have a maximum recursion depth. This value can be queried with
sys.getrecursionlimit() and changed with sys.setrecursionlimit().
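For reference, one possible solution sketch (a simple, non-in-place quicksort; this is not the workbook's official solution):

```python
import random

def qsort(lst):
    """Return a sorted copy of the input list."""
    if len(lst) <= 1:
        return list(lst)
    pivot, rest = lst[0], lst[1:]
    less = [x for x in rest if x < pivot]       # elements below the pivot
    greater = [x for x in rest if x >= pivot]   # elements at or above it
    return qsort(less) + [pivot] + qsort(greater)

seq = list(range(10))
random.shuffle(seq)
print(qsort(seq))  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```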
2. Dictionaries for counting words
def word_freq(text):
"""Return a dictionary of word frequencies for the given text."""
# XXX you need to write this
def print_vk(lst):
"""Print a list of value/key pairs nicely formatted in key/value order."""
# Find the longest key: remember, the list has value/key pairs, so the key
# is element [1], not [0]
longest_key = max(map(lambda x: len(x[1]),lst))
# Make a format string out of it
fmt = ’%’+str(longest_key)+’s -> %s’
# Do actual printing
for v,k in lst:
print fmt % (k,v)
def freq_summ(freqs,n=10):
"""Print a simple summary of a word frequencies dictionary.
Inputs:
- freqs: a dictionary of word frequencies.
Optional inputs:
    - n: the number of items to print"""

    items = zip(freqs.values(), freqs.keys())
    items.sort()
    print '%d least frequent words:' % n
    print_vk(items[:n])
print
print ’%d most frequent words:’ % n
print_vk(items[-n:])
if __name__ == ’__main__’:
text = # XXX
# You need to read the contents of the file HISTORY.gz. Do NOT unzip it
# manually, look at the gzip module from the standard library and the
# read() method of file objects.
freqs = word_freq(text)
freq_summ(freqs,20)
Hints.
• The print_vk function is already provided for you as a simple way to summarize your
results.
• You will need to read the compressed file HISTORY.gz. Python has facilities to do this
without having to manually uncompress it.
• Consider ‘words’ simply the result of splitting the input text into a list, using any form
of whitespace as a separator. This is obviously a very naïve definition of ‘word’, but it
shall suffice for the purposes of this exercise.
• Python strings have a .split() method that allows for very flexible splitting. You can
easily get more details on it in IPython:
In [2]: a = ’somestring’
In [3]: a.split?
Type: builtin_function_or_method
Base Class: <type ’builtin_function_or_method’>
Namespace: Interactive
Docstring:
S.split([sep [,maxsplit]]) -> list of strings
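A minimal implementation sketch of the counting step (my own, using a plain dictionary; the gzip read is shown only as a comment, since the HISTORY.gz data file comes with the exercise):

```python
def word_freq(text):
    """Return a dictionary mapping each word to its number of occurrences."""
    freqs = {}
    for word in text.split():          # split on any whitespace
        freqs[word] = freqs.get(word, 0) + 1
    return freqs

# For the exercise itself, the text would come from the compressed file:
#   import gzip
#   text = gzip.open('HISTORY.gz').read()
print(word_freq("the cat and the hat"))  # {'the': 2, 'cat': 1, 'and': 1, 'hat': 1}
```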
CHAPTER 3

Working with files, the internet, and numpy arrays

This section is a general overview to show how easy it is to load and manipulate data on the
file system and over the web using python’s built in data structures and numpy arrays. The goal
is to exercise basic programming skills like building filename or web addresses to automate certain
tasks like loading a series of data files or downloading a bunch of related files off the web, as well
as to illustrate basic numpy and pylab skills.
a = 2 # 2 volt amplitude
f = 10 # 10 Hz frequency
sigma = 0.5 # 0.5 volt standard deviation noise
# create the t and v arrays; see the scipy commands arange, sin, and randn
t = XXX # an evenly sampled time array
v = XXX # a noisy sine wave
# create a 2D array X and put t in the 1st column and v in the 2nd;
# see the numpy command zeros
X = XXX
# save the output file as ASCII; see the pylab command save
XXX
# plot the arrays t vs v and label the x-axis, y-axis and title save
# the output figure as noisy_sine.png. See the pylab commands plot,
# xlabel, ylabel, grid, show
XXX
and the graph will look something like Figure 1
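A possible way to fill in the XXX placeholders above (the time range and sample spacing are my choices; the plotting calls are omitted so the sketch runs headless):

```python
import numpy as np

a = 2.0      # 2 volt amplitude
f = 10.0     # 10 Hz frequency
sigma = 0.5  # 0.5 volt standard deviation noise

t = np.arange(0.0, 2.0, 0.01)                    # evenly sampled time array
v = a * np.sin(2 * np.pi * f * t) + sigma * np.random.randn(len(t))

X = np.zeros((len(t), 2))                        # t in column 0, v in column 1
X[:, 0] = t
X[:, 1] = v
np.savetxt('noisy_sine.dat', X)                  # save as ASCII
```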
The second part of this exercise is to write a script which loads data from the data file into
an array X, extracts the columns into arrays t and v, and computes the RMS (root-mean-square)
intensity of the signal using the load command.
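A sketch of this second part (using numpy.loadtxt as a modern stand-in for the older pylab load command; the data file is regenerated here so the snippet stands alone):

```python
import numpy as np

# regenerate data equivalent to the previous script, so this runs standalone
t = np.arange(0.0, 2.0, 0.01)
v = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
np.savetxt('noisy_sine.dat', np.column_stack([t, v]))

X = np.loadtxt('noisy_sine.dat')     # load the two-column ASCII file
t, v = X[:, 0], X[:, 1]
rms = np.sqrt(np.mean(v**2))         # root-mean-square intensity
print('RMS = %1.2f' % rms)
```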
On the Yahoo Finance page for a given ticker, there is an entry called “Historical Prices” which will take you to a page where you can download the price
history of your stock. Near the bottom of this page you should see a “Download To Spreadsheet”
link – instead of clicking on it, right click it and choose “Copy Link Location” and paste this into
a python script or ipython session as a string named url. Eg, for SPY:
url = ’?’ +\
’s=SPY&d=9&e=20&f=2007&g=d&a=0&b=29&c=1993&ignore=.csv’
I’ve broken the url into two strings so they will fit on the page. If you spend a little time looking at
this pattern, you can probably figure out what is going on. The URL is encoding the information
about the stock, the variable s for the stock ticker, d for the latest month, e for the latest day,
f for the latest year, c for the start year, and so on (similarly a, b, and c for the start month,
day and year). This is handy to know, because below we will write some code to automate some
downloads for a stock universe.
One of the great things about python is its “batteries included” standard library, which includes
support for dates, csv files and internet downloads. The example interactive session below shows
how in just a few lines of code using python’s urllib for retrieving information from the internet,
and matplotlib’s csv2rec function for loading numpy record arrays, we are ready to get to
work analyzing some web based data. Comments have been added to a copy-and-paste from the
interactive session
# import a couple of libraries we’ll be needing
In [23]: import urllib
In [24]: import matplotlib.mlab as mlab
# this will grab that web file and save it as ’SPY.csv’ on our local
# filesystem
In [27]: urllib.urlretrieve(url, ’SPY.csv’)
Out[27]: (’SPY.csv’, <httplib.HTTPMessage instance at 0x2118210>)
# here we use the UNIX command head to peak into the file, which is
# a comma separated and contains various types, dates, ints, floats
In [28]: !head SPY.csv
Date,Open,High,Low,Close,Volume,Adj Close
2007-10-19,153.09,156.48,149.66,149.67,295362200,149.67
2007-10-18,153.45,154.19,153.08,153.69,148367500,153.69
2007-10-17,154.98,155.09,152.47,154.25,216687300,154.25
2007-10-16,154.41,154.52,153.47,153.78,166525700,153.78
2007-10-15,156.27,156.36,153.94,155.01,161151900,155.01
2007-10-12,155.46,156.35,155.27,156.33,124546700,156.33
2007-10-11,156.93,157.52,154.54,155.47,233529100,155.47
2007-10-10,156.04,156.44,155.41,156.22,101711100,156.22
2007-10-09,155.60,156.50,155.03,156.48,94054300,156.48
# csv2rec will import the file into a numpy record array, inspecting
# the columns to determine the correct data type
In [29]: r = mlab.csv2rec(’SPY.csv’)
# the dtype attribute shows you the field names and data types.
# O4 is a 4 byte python object (datetime.date), f8 is an 8 byte
# float, i4 is a 4 byte integer and so on. The > and < symbols
# indicate the byte order of multi-byte data types, eg big endian or little endian
# Each of the columns is stored as a numpy array, but the types are
# preserved. Eg, the adjusted closing price column adj_close is a
# floating point type, and the date column is a python datetime.date
In [31]: print r.adj_close
[ 149.67 153.69 154.25 ..., 34.68 34.61 34.36]
In [32]: print r.date
[2007-10-19 00:00:00 2007-10-18 00:00:00 2007-10-17 00:00:00 ...,
1993-02-02 00:00:00 1993-02-01 00:00:00 1993-01-29 00:00:00]
For your exercise, you’ll elaborate on the code here to do a batch download of a number of
stock tickers in a defined stock universe. Define a function fetch_stock(ticker) which takes
a stock ticker symbol as an argument and returns a numpy record array. Select the rows of the
record array where the date is greater than 2003-01-01, and plot the returns (p − p_0)/p_0, where p
are the prices and p_0 is the initial price, by date for each stock on the same plot. Create a legend
for the plot using the matplotlib legend command, and print out a sorted list of final returns (eg
assuming you bought in 2003 and held to the present) for each stock. Here is the exercise skeleton:
Listing 3.2
"""
Download historical pricing record arrays for a universe of stocks
from Yahoo Finance using urllib. Load them into numpy record arrays
using matplotlib.mlab.csv2rec, and do some batch processing -- make
date vs price charts for each one, and compute the return since 2003
for each stock. Sort the returns and print out the tickers of the 4
biggest winners
"""
import os, datetime, urllib
import matplotlib.mlab as mlab # contains csv2rec
import numpy as npy
import pylab as p
def fetch_stock(ticker):
"""
download the CSV file for stock with ticker and return a numpy
record array. Save the CSV file as TICKER.csv where TICKER is the
stock’s ticker symbol.
Extra credit for supporting a start date and end date, and
checking to see if the file already exists on the local file
system before re-downloading it
"""
fname = ’%s.csv’%ticker
url = XXX # create the url for this ticker
# note that the CSV file is sorted most recent date first, so you
# will probably want to sort the record array so most recent date
# is last
XXX
return r
# we’ll store a list of each return and ticker for analysis later
data = [] # a list of (return, ticker) for each stock
fig = p.figure()
for ticker in tickers:
print ’fetching’, ticker
r = fetch_stock(ticker)
# plot the returns by date for each stock using pylab.plot, adding
# a label for the legend
XXX
# now sort the data by returns and print the results for each stock
XXX
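The post-download analysis step can be sketched with made-up numbers (the tickers and prices below are purely illustrative, and the Yahoo CSV service described above may no longer be available):

```python
import numpy as np

def returns(prices):
    """Cumulative return series (p - p0)/p0 for a price array."""
    p = np.asarray(prices, dtype=float)
    return (p - p[0]) / p[0]

# illustrative closing prices instead of live Yahoo data
prices = {'AAA': [10.0, 12.0, 15.0], 'BBB': [20.0, 19.0, 24.0]}

data = []                            # list of (final return, ticker)
for ticker, p in prices.items():
    data.append((returns(p)[-1], ticker))

data.sort()                          # smallest return first
for ret, ticker in data:
    print('%s: %1.2f' % (ticker, ret))
```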
3. Loading and saving binary data

In [19]: print x
[[ 0.56331918 0.519582 ]
[ 0.22685429 0.18371135]
[ 0.19384767 0.27367054]
[ 0.35935445 0.95795884]
[ 0.37646642 0.14431089]]
In [21]: x.tofile('x.dat')

In [22]: y = numpy.fromfile('x.dat')

In [23]: y.shape
Out[23]: (10,)

In [24]: y = y.reshape(5,2)

In [25]: print y
[[ 0.56331918 0.519582 ]
[ 0.22685429 0.18371135]
[ 0.19384767 0.27367054]
[ 0.35935445 0.95795884]
[ 0.37646642 0.14431089]]
The advantage of numpy tofile and fromfile over ASCII data is that the data storage is
compact and the read and write are very fast. It is a bit of a pain that metadata like the array
datatype and shape are not stored. In this format, just the raw binary numeric data is stored, so
you will have to keep track of the data type and shape by other means. This is a good solution if
you need to port binary data files between different packages, but if you know you will always be
working in python, you can use the python pickle function to preserve all metadata (pickle also
works with all standard python data types, but has the disadvantage that other programs and
applications cannot easily read it)
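A sketch of the pickle round-trip described above (the file name is arbitrary):

```python
import pickle
import numpy as np

x = np.arange(10.0).reshape(5, 2)

# pickle preserves dtype and shape, unlike tofile/fromfile
with open('x.pkl', 'wb') as fh:
    pickle.dump(x, fh)

with open('x.pkl', 'rb') as fh:
    y = pickle.load(fh)

assert y.shape == x.shape and y.dtype == x.dtype and (x == y).all()
```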
# create a 6,3 array of random integers
In [36]: x = (256*numpy.random.rand(6,3)).astype(numpy.int)
In [37]: print x
[[173 38 2]
[243 207 155]
[127 62 140]
[ 46 29 98]
[ 0 46 156]
[ 20 177 36]]
CHAPTER 4

Elementary Numerics
Listing 4.1 contains a skeleton with no implementation but with some plotting commands
already inserted, so that you can visualize the convergence rate of Wallis' product for π as more
terms are kept.
Listing 4.1
#!/usr/bin/env python
"""Simple demonstration of Python’s arbitrary-precision integers."""
def pi(n):
"""Compute pi using n terms of Wallis’ product.
pi(n) = 2 \prod_{i=1}^{n}\frac{4i^2}{4i^2-1}."""
XXX
# This part only executes when the code is run as a script, not when it is
# imported as a library
if __name__ == ’__main__’:
# Simple convergence demo.
    # Compute the difference with the true value of pi (math.pi is the closest
    # 16-digit value)
diff = XXX
# Make a new figure and build a semilog plot of the difference so we can
# see the quality of the convergence
P.figure()
# Line plot with red circles at the data points
P.semilogy(nrange,diff,’-o’,mfc=’red’)
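For reference, a sketch of one possible pi(n) implementation (mine, not the workbook's official solution), keeping the numerator and denominator as arbitrary-precision integers and dividing only at the very end:

```python
def pi(n):
    """Compute pi using n terms of Wallis' product,
    pi(n) = 2 * prod_{i=1}^{n} 4i^2 / (4i^2 - 1)."""
    num, den = 1, 1
    for i in range(1, n + 1):
        num *= 4 * i * i           # exact big-integer arithmetic
        den *= 4 * i * i - 1
    return 2 * num / den           # single float division at the end

print(pi(1000))  # ≈ 3.1408 (the product converges slowly)
```

Doing the division last avoids accumulating floating-point rounding error in the partial products, at the cost of very large intermediate integers.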
2. Trapezoidal rule
In this exercise, you are tasked with implementing the simple trapezoid rule formula for numerical
integration. If we want to compute the definite integral
(2)    \int_a^b f(x)\,dx
we can partition the integration interval [a, b] into smaller subintervals, and approximate the area
under the curve for each subinterval by the area of the trapezoid created by linearly interpolating
between the two function values at each end of the subinterval. This is graphically illustrated in
Figure 2, where the blue line represents the function f (x) and the red line represents the successive
linear segments.
The area under f (x) (the value of the definite integral) can thus be approximated as the sum
of the areas of all these trapezoids. If we denote by x_i (i = 0, \ldots, n, with x_0 = a and x_n = b) the
abscissas where the function is sampled, then
(3)    \int_a^b f(x)\,dx \approx \frac{1}{2}\sum_{i=1}^{n} (x_i - x_{i-1})\,\bigl(f(x_{i-1}) + f(x_i)\bigr).
The common case of using equally spaced abscissas with spacing h = (b − a)/n reads simply
(4)    \int_a^b f(x)\,dx \approx \frac{h}{2}\sum_{i=1}^{n} \bigl(f(x_{i-1}) + f(x_i)\bigr).
One frequently receives the function values already precomputed, y_i = f(x_i), so equation (3)
becomes
becomes
Z b n
1X
(5) f (x)dx ≈ (xi − xi−1 ) (yi + yi−1 ) .
a 2 i=1
Listing 4.2 contains a skeleton for this problem, written in the form of two incomplete functions
and a set of automatic tests (in the form of unit tests, as described in the introduction).
Listing 4.2
#!/usr/bin/env python
"""Simple trapezoid-rule integrator."""
import numpy as N
def trapz(x,y):
    """Simple trapezoid integrator for sampled data.

    Inputs:
      - x,y: arrays of the same length.

    Output:
      - The result of applying the trapezoid rule to the input, assuming that
        y[i] = f(x[i]) for some function f to be integrated."""
raise NotImplementedError
def trapzf(f,a,b,npts=100):
"""Simple trapezoid-based integrator.
Inputs:
- f: function to be integrated.
Optional inputs:
- npts(100): the number of equally spaced points to sample f at, between
a and b.
Output:
    - The value of the trapezoid-rule approximation to the integral."""
    raise NotImplementedError
if __name__ == ’__main__’:
# Simple tests for trapezoid integrator, when this module is called as a
# script from the command line.
import unittest
import numpy.testing as ntest
def square(x): return x**2

class trapzTestCase(unittest.TestCase):
def test_err(self):
self.assertRaises(ValueError,trapz,range(2),range(3))
def test_call(self):
x = N.linspace(0,1,100)
y = N.array(map(square,x))
ntest.assert_almost_equal(trapz(x,y),1./3,4)
class trapzfTestCase(unittest.TestCase):
def test_square(self):
ntest.assert_almost_equal(trapzf(square,0,1),1./3,4)
def test_square2(self):
ntest.assert_almost_equal(trapzf(square,0,3,350),9.0,4)
unittest.main()
In this exercise, you’ll need to write two functions, trapz and trapzf. trapz applies
the trapezoid formula to pre-computed values, implementing equation (5), while trapzf takes a
function f as input, as well as the total number of samples to evaluate, and computes eq. (4).
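A reference sketch of both functions (vectorized with numpy; this is my own sketch, not the official solution from the source download):

```python
import numpy as np

def trapz(x, y):
    """Trapezoid rule for sampled data, implementing eq. (5)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    # sum of (x_i - x_{i-1}) * (y_{i-1} + y_i) / 2 over all subintervals
    return 0.5 * np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]))

def trapzf(f, a, b, npts=100):
    """Trapezoid rule sampling f at npts equally spaced points, eq. (4)."""
    x = np.linspace(a, b, npts)
    return trapz(x, f(x))

print(round(trapzf(lambda t: t**2, 0, 3, 350), 4))  # close to 9.0
```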
3. Newton’s method
Consider the problem of solving for t in
(6)    \int_0^t f(s)\,ds = u
where f (s) is a monotonically increasing function of s and u > 0.
This problem can be simply solved if seen as a root finding question. Let
(7)    g(t) = \int_0^t f(s)\,ds - u,
then we just need to find the root for g(t), which is guaranteed to be unique given the conditions
above.
The SciPy library includes an optimization package that contains a Newton-Raphson solver
called scipy.optimize.newton. This solver can optionally take a known derivative for the
function whose roots are being sought, and in this case the derivative is simply
(8)    \frac{dg(t)}{dt} = f(t).
For this exercise, implement the solution for the test function
f(t) = t\,\sin^2(t),
using
u = 1/4.
The listing 4.3 contains a skeleton that includes for comparison the correct numerical value.
Listing 4.3
#!/usr/bin/env python
"""Root finding using SciPy’s Newton’s method routines.
"""
import scipy.integrate
import scipy.optimize

quad = scipy.integrate.quad
newton = scipy.optimize.newton
# Use u=0.25
def g(t): XXX
# main
tguess = 10.0
print "To six digits, the answer in this case is t==1.06601."
CHAPTER 5
Linear algebra
Like Matlab, numpy and scipy have support for fast linear algebra built upon the highly
optimized LAPACK, BLAS and ATLAS Fortran linear algebra libraries. Unlike Matlab, in which
everything is a matrix or vector and the '*' operator always means matrix multiplication, the default
object in numpy is an array, and the '*' operator on arrays means element-wise multiplication.
Instead, numpy provides a matrix class if you want to do standard matrix-matrix multiplication
with the '*' operator, or the dot function if you want to do matrix multiplies with plain arrays.
The basic linear algebra functionality is found in numpy.linalg.
# the matrix class will create matrix objects that support matrix
# multiplication with *
In [7]: Xm = npy.matrix(X)
In [8]: Ym = npy.matrix(Y)
In [9]: print Xm*Ym
[[ 0.10670678 0.68340331 0.39236388]
[ 0.27840642 1.14561885 0.62192324]
[ 0.48192134 1.32314856 0.51188578]]
1. Glass Moiré patterns
See L. Glass, 'Moiré effect from random dots', Nature 223, 578-580 (1969).
"""
from numpy import cos, sin, pi, matrix
import numpy as npy
import numpy.linalg as linalg
from pylab import figure, show
def csqrt(x):
    'sqrt func that returns sqrt(-x)j for x<0'
XXX
def myeig(M):
"""
compute eigen values and eigenvectors analytically
Solve quadratic:
lambda^2 - tau*lambda + Delta = 0
where tau = trace(M) and Delta = Determinant(M)
name = ’saddle’
#sx, sy, angle = XXX
#name = ’center’
#sx, sy, angle = XXX
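A sketch of csqrt and myeig for the 2×2 case (the names follow the skeleton above, but the implementation is mine):

```python
import numpy as np

def csqrt(x):
    """Square root that returns sqrt(-x)j for negative input."""
    return np.sqrt(x) if x >= 0 else 1j * np.sqrt(-x)

def myeig(M):
    """Eigenvalues of a 2x2 matrix via the characteristic quadratic
    lambda^2 - tau*lambda + Delta = 0, tau=trace(M), Delta=det(M)."""
    tau = M[0, 0] + M[1, 1]                       # trace
    delta = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0] # determinant
    disc = csqrt(tau**2 - 4 * delta)
    return (tau + disc) / 2.0, (tau - disc) / 2.0

M = np.array([[2.0, 1.0], [0.0, 3.0]])
print(myeig(M))   # the two eigenvalues, 3 and 2
```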
CHAPTER 6

Signal processing
numpy and scipy provide many of the essential tools for digital signal processing.
scipy.signal provides basic tools for digital filter design and filtering (eg Butterworth filters),
a linear systems toolkit, standard waveforms such as square waves and sawtooth functions, and
some basic wavelet functionality. scipy.fftpack provides a suite of tools for Fourier domain
analysis, including 1D, 2D, and ND discrete Fourier transform and inverse functions, in addition to
other tools such as analytic signal representations via the Hilbert transformation (numpy.fft also
provides basic FFT functions). pylab provides Matlab-compatible functions for computing and
plotting standard time series analyses, such as histograms (hist), auto- and cross-correlations
(acorr and xcorr), power spectra and coherence spectra (psd, csd, cohere and specgram).
1. Convolution
The output of a linear system is given by the convolution of its impulse response function with
the input. Mathematically
(9)    y(t) = \int_0^t x(\tau)\,r(t - \tau)\,d\tau
This fundamental relationship lies at the heart of linear systems analysis. It is used to model the
dynamics of calcium buffers in neuronal synapses, where incoming action potentials are represented
as Dirac δ-functions and the calcium stores are represented with a response function with multiple
exponential time constants. It is used in microscopy, in which the image distortions introduced by
the lenses are deconvolved out using a measured point spread function to provide a better picture of
the true image input. It is essential in structural engineering to determine how materials respond
to shocks.
The impulse response function r is the system response to a pulsatile input. For example, in
Figure 1 below, the response function is the sum of two exponentials with different time constants
and signs. This is a typical function used to model synaptic current following a neuronal action
potential. The figure shows three δ inputs at different times and with different amplitudes. The
corresponding impulse response for each input is shown following it, and is color coded with the
impulse input color. If the system response is linear, by definition, the response to a sum of
inputs is the sum of the responses to the individual inputs, and the lower panel shows the sum
of the responses, or equivalently, the convolution of the impulse response function with the input
function.
In Figure 1, the summing of the impulse response function over the three inputs is conceptually
and visually easy to understand. Some find the concept of a convolution of an impulse response
function with a continuous time function, such as a sinusoid or a noise process, conceptually more
difficult. It shouldn't be. By the sampling theorem, we can represent any finite-bandwidth continuous
time signal as the sum of Dirac-δ functions where the height of the δ function at each time
point is simply the amplitude of the signal at that time point. The only requirement is that the
sampling frequency be at least twice the highest spectral frequency in the signal (the Nyquist
rate). See Figure 2 for a representation of a delta function sampling
of a damped, oscillatory, exponential function.
In the exercise below, we will convolve a sample from the normal distribution (white noise) with
a double exponential impulse response function. Such a function acts as a low pass filter, so the
resultant output will look considerably smoother than the input. You can use numpy.convolve
to perform the convolution numerically.
We also explore the important relationship that a convolution in the temporal (or spatial)
domain becomes a multiplication in the spectral domain, which is mathematically much easier to
work with.
Y = R \cdot X
where Y , X, and R are the Fourier transforms of the respective variable in the temporal
convolution equation above. The Fourier transform of the impulse response function serves as an
amplitude weighting and phase shifting operator for each frequency component. Thus, we can get
deeper insight into the effects of impulse response function r by studying the amplitude and phase
spectrum of its transform R. In the example below, however, we simply use the multiplication
property to perform the same convolution in Fourier space to confirm the numerical result from
numpy.convolve.
Listing 6.1
"""
In signal processing, the output of a linear system to an arbitrary
input is given by the convolution of the impule response function (the
system response to a Dirac-delta impulse) and the input signal.
Mathematically:

    y(t) = \int_0^t x(\tau) r(t-\tau) d\tau
where x(t) is the input signal at time t, y(t) is the output, and r(t)
is the impulse response function.
Compute the output of the system in two ways:

  * using numpy.convolve
  * multiplying in the Fourier domain
def impulse_response(t):
’double exponential response function’
return XXX
# now inverse fft and extract the real part, just the part up to
# len(x)
yi = XXX
the transformation values beyond the midpoint of the frequency spectrum (the Nyquist frequency)
correspond to the values for negative frequencies and are simply the mirror image of the positive
frequencies below the Nyquist (this is true for the 1D, 2D and ND FFTs in numpy).
In this exercise we will compute the 2D spatial frequency spectra of the luminance image, zero
out the high frequency components, and inverse transform back into the time domain. We can
plot the input and output images with the pylab.imshow function, but the images must first
be scaled to be withing the 0..1 luminance range. For best results, it helps to amplify the image
by some scale factor, and then clip it to set all values greater than one to one. This serves to
enhance contrast among the darker elements of the image, so it is not completely dominated by
the brighter segments
Listing 6.2
#!/usr/bin/env python
"""Image denoising example using 2-dimensional FFT."""
import numpy as N
import pylab as P
import scipy as S
def mag_phase(F):
"""Return magnitude and phase components of spectrum F."""
# XXX Next, clip all values larger than one to one. You can set all
# elements of an array which satisfy a given condition with array indexing
# syntax: ARR[ARR<VALUE] = NEWVALUE, for example.
# Display: this one already works, if you did everything right with M
P.imshow(M, P.cm.Blues)
# ’main’ script
im = # XXX make an image array from the file ’moonlanding.png’, using the
# pylab imread() function. You will need to just extract the red
# channel from the MxNx4 RGBA matrix to represent the grayscale
# intensities
F = # Compute the 2d FFT of the input image. Look for a 2-d FFT in N.dft
# XXX Call ff a copy of the original transform. Numpy arrays have a copy
...method
# for this purpose.
# XXX Set r and c to be the number of rows and columns of the array. Look for
# the shape attribute...
# The code below already works, if you did everything above right.
P.figure()
P.subplot(221)
P.title(’Original image’)
P.imshow(im, P.cm.gray)
P.subplot(222)
P.title(’Fourier transform’)
plot_spectrum(F)
P.subplot(224)
P.title(’Filtered Spectrum’)
plot_spectrum(ff)
P.subplot(223)
P.title(’Reconstructed Image’)
P.imshow(im_new, P.cm.gray)
P.show()
2. FFT IMAGE DENOISING 35
Statistics
R, a statistical package based on S, is viewd by some as the best statistical software on the
planet, and in the open source world it is the clear choice for sophisticated statistical analysis. Like
python, R is an interpreted language written in C with an interactive shell. Unlike python, which
is a general purpose programming language, R is a specialized statistical language. Since python
is a excellent glue language, with facilities for providing a transparent interface to FORTRAN,
C, C++ and other languages, it should come as no surprise that you can harness R’s immense
statistical power from python, through the rpy third part extension library.
However, R is not without its warts. As a language, it lacks python’s elegance and advanced
programming constructs and idioms. It is also GPL, which means you cannot distribute code based
upon it unhindered: the code you distribute must be GPL as well (python, and the core scientific
extension libraries, carry a more permissive license which support distribution in closed source,
proprietary application).
Fortunately, the core tools scientific libraries for python (primarily numpy and scipy.stats)
provide a wide array of statistical tools, from basic descriptive statistics (mean, variance, skew,
kurtosis, correlation, . . . ) to hypothesis testing (t-tests, χ-Square, analysis of variance, general
linear models, . . . ) to analytical and numerical tools for working with almost every discrete and
continuous statistical distribution you can think of (normal, gamma, poisson, weibull, lognormal,
levy stable, . . . ).
1. Descriptive statistics
The first step in any statistical analysis should be to describe, charaterize and importantly,
visualize your data. The normal distribution (aka Gaussian or bell curve) lies at the heart of
much of formal statistical analysis, and normal distributions have the tidy property that they
are completely characterized by their mean and variance. As you may have observed in your
interactions with family and friends, most of the world is not normal, and many statistical analyses
are flawed by summarizing data with just the mean and standard deviation (square root of variance)
and associated signficance tests (eg the T-Test) as if it were normally distributed data.
In the exercise below, we write a class to provide descriptive statistics of a data set passed into
the constructor, with class methods to pretty print the results and to create a battery of standard
plots which may show structure missing in a casual analysis. Many new programmers, or even
experienced programmers used to a proceedural environment, are uncomfortable with the idea of
classes, having hear their geekier programmer friends talk about them but not really sure what
to do with them. There are many interesting things one can do with classes (aka object oriented
programming) but at their hear they are a way of bundling data with methods that operate on
that data. The self variable is special in python and is how the class refers to its own data and
methods. Here is a toy example
37
38 7. STATISTICS
In [117]: mydata.sumsquare()
Out[117]: 29.6851135284
Listing 7.1
import scipy.stats as stats
from matplotlib.mlab import detrend_linear, load
import numpy
import pylab
XXX = None
class Descriptives:
"""
a helper class for basic descriptive statistics and time series plots
"""
def __init__(self, samples):
self.samples = numpy.asarray(samples)
self.N = XXX # the number of samples
self.median = XXX # sample median
self.min = XXX # sample min
self.max = XXX # sample max
self.mean = XXX # sample mean
self.std = XXX # sample standard deviation
self.var = XXX # sample variance
self.skew = XXX # the sample skewness
self.kurtosis = XXX # the sample kurtosis
self.range = XXX # the sample range max-min
def __repr__(self):
"""
Create a string representation of self; pretty print all the
attributes:
descriptives = (
’N = %d’ % self.N,
XXX # the rest here
)
return ’\n’.join(descriptives)
keyword args:
return c
if __name__==’__main__’:
# load the data in filename fname into the list data, which is a
# list of floating point values, one value per line. Note you
# will have to do some extra parsing
data = []
#fname = ’data/nm560.dat’ # tree rings in New Mexico 837-1987
fname = ’data/hsales.dat’ # home sales
for line in file(fname):
line = line.strip()
40 7. STATISTICS
Figure 1.
desc = Descriptives(data)
print desc
c = desc.plots(pylab.figure, Fs=12, fmt=’-o’)
c.ax1.set_title(fname)
pylab.show()
2. Statistical distributions
We explore a handful of the statistical distributions in scipy.stats module and the connec-
tions between them. The organization of the distribution functions in scipy.stats is quite ele-
gant, with each distribution providing random variates (rvs), analytical moments (mean, variance,
skew, kurtosis), analytic density (pdf, cdf) and survival functions (sf, isf) (where available)
and tools for fitting empirical distributions to the analytic distributions (fit).
in the exercise below, we will simulate a radioactive particle emitter, and look at the empirical
distribution of waiting times compared with the expected analytical distributions. Our radioative
particle emitter has an equal likelihood of emitting a particle in any equal time interval, and emits
particles at a rate of 20 Hz. We will discretely sample time at a high frequency, and record a 1 of
a particle is emitted and a 0 otherwise, and then look at the distribution of waiting times between
emissions. The probability of a particle emission in one of our sample intervals (assumed to be
very small compared to the average interval between emissions) is proportional to the rate and the
sample interval ∆t, ie p(∆t) = α∆t where α is the emission rate in particles per second.
The waiting times between the emissions should follow an exponential distribution (see
scipy.stats.expon) with a mean of 1/α. In the exercise below, you will generate a long
array of emissions, compute the waiting times between emissions, between 2 emissions, and be-
tween 10 emissions. These should approach an 1st order gamma (aka exponential) distribution,
2nd order gamma, and 10th order gamma (see scipy.stats.gamma). Use the probability den-
sity functions for these distributions in scipy.stats to compare your simulated distributions
and moments with the analytic versions provided by scipy.stats. With 10 waiting times, we
should be approaching a normal distribution since we are summing 10 waiting times and under the
central limit theorem the sum of independent samples from a finite variance process approaches
the normal distribution (see scipy.stats.norm). In the final part of the exercise below, you
will be asked to approximate the 10th order gamma distribution with a normal distribution. The
results should look something like those in Figure 2.
Listing 7.2
"""
Illustrate the connections bettwen the uniform, exponential, gamma and
normal distributions by simulating waiting times from a radioactive
source using the random number generator. Verify the numerical
results by plotting the analytical density functions from scipy.stats
"""
import numpy
import scipy.stats
from pylab import figure, show, close
show()
2. STATISTICAL DISTRIBUTIONS 43
Figure 2. | https://www.scribd.com/document/56265218/py4science | CC-MAIN-2019-35 | refinedweb | 5,722 | 53.51 |
First impressions matter, and as a developer you know that the first impression you get of a development framework is how easy it is to write "Hello, World!" Well, in Android, it's pretty easy. Here's how it looks:
The sections below spell it all out in detail.
Let's jump in!
Creating the project is as simple as can be. An Eclipse plugin is available making Android development a snap.
You'll need to have a development computer with the Eclipse IDE installed (see System and Software Requirements), and you'll need to install the Android Eclipse Plugin (ADT). Once you have those ready, come back here.
First, here's a high-level summary of how to build "Hello, World!":
That's it! Next, let's go through each step above in detail.
From Eclipse, select the File > New > Project menu item. If the Android Plugin for Eclipse has been successfully installed, the resulting dialog should have a folder labeled "Android" which should contain a single entry: "Android Project".
Once you've selected "Android Project", click the Next button.
The next screen allows you to enter the relevant details for your project. Here's an example:
Here's what each field on this screen means:
The checkbox for toggling "Use default location" allows you to change the location on disk where the project's files will be generated and stored.
After the plugin runs, you'll have a class named HelloAndroid that looks like this:
public class HelloAndroid extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); } }
The next step is to start modifying it!
Once you've got the project set up, the obvious next step is to get some text up there on the screen. Here's the finished product — next we'll dissect it line by line:
package com.android.hello; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; public class HelloAndroid extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); TextView tv = new TextView(this); tv.setText("Hello, Android"); setContentView(tv); } }
In Android, user interfaces are composed of hierarchies of classes called Views. A View is simply a drawable object, such as a radio button, an animation, or (in our case) a text label. The specific name for the View subclass that handles text is simply TextView.
Here's how you construct a TextView:
TextView tv = new TextView(this);
The argument to TextView's constructor is an Android Context instance. The Context is simply a handle to the system; it provides services like resolving resources, obtaining access to databases and preferences, and so on. The Activity class inherits from Context. Since our HelloAndroid class is a subclass of Activity, it is also a Context, and so we can pass the 'this' reference to the TextView.
Once we've constructed the TextView, we need to tell it what to display:
tv.setText("Hello, Android");
Nothing too surprising there.
At this point, we've constructed a TextView and told it what text to display. The final step is to connect this TextView with the on-screen display, like so:
setContentView(tv);
The setContentView() method on Activity indicates to the system which View should be associated with the Activity's UI. If an Activity doesn't call this method, no UI is present at all and the system will display a blank screen. For our purposes, all we want is to display some text, so we pass it the TextView we just created.
There it is — "Hello, World" in Android! The next step, of course, is to see it running.
The Eclipse plugin makes it very easy to run your applications. Begin by selecting the Run > Open Run Dialog menu entry; you should see a dialog like this:
Next, highlight the "Android Application" entry, and then click the icon in the top left corner (the one depicting a sheet of paper with a plus sign in the corner) or simply double-click the "Android Application" entry. You should have a new launcher entry named "New_configuration".
Change the name to something expressive, like "Hello, Android", and then pick your project by clicking the Browse button. (If you have more than one Android project open in Eclipse, be sure to pick the right one.) The plugin will automatically scan your project for Activity subclasses, and add each one it finds to the drop-down list under the "Activity:" label. Since your "Hello, Android" project only has one, it will be the default, and you can simply continue.
Press the "Apply" button. Here's an example:
That's it — you're done! Press the Run button, and the Android Emulator should start. Once it's booted up your application will appear. When all is said and done, you should see something like this:
That's "Hello, World" in Android. Pretty straightforward, eh? The next sections of the tutorial offer more detailed information that you may find valuable as you learn more about Android.
The "Hello, World" example you just completed uses what we call "programmatic" UI layout. This means that you construct and build your application's UI directly in source code. If you've done much UI programming, you're probably familiar with how brittle that approach can sometimes be: small changes in layout can result in big source-code headaches. It's also very easy to forget to properly connect Views together, which can result in errors in your layout and wasted time debugging your code.
That's why Android provides an alternate UI construction model: XML-based layout files. The easiest way to explain this concept is to show an example. Here's an XML layout file that is identical in behavior to the programmatically-constructed example you just completed:
<?xml version="1.0" encoding="utf-8"?> <TextView xmlns:
The general structure of an Android XML layout file is simple. It's a tree of tags, where each tag is the name of a View class. In this example, it's a very simple tree of one element, a TextView. You can use the name of any class that extends View as a tag name in your XML layouts, including custom View classes you define in your own code. This structure makes it very easy to quickly build up UIs, using a much simpler structure and syntax than you would in source code. This model is inspired by the web development model, where you can separate the presentation of your application (its UI) from the application logic used to fetch and fill in data.
In this example, there are also four XML attributes. Here's a summary of what they mean:
So, that's what the XML layout looks like, but where do you put it? Under the res/ directory in your project. The "res" is short for "resources" and that directory contains all the non-code assets that your application requires. This includes things like images, localized strings, and XML layout files.
The Eclipse plugin creates one of these XML files for you. In our example above, we simply never used it. In the Package Explorer, expand the folder res/layout, and edit the file main.xml. Replace its contents with the text above and save your changes.
Now open the file named R.java in your source code folder in the Package Explorer. You'll see that it now looks something like this:; }; };.
The important thing to notice for now is the inner class named "layout", and its member field "main". The Eclipse plugin noticed that you added a new XML layout file and then regenerated this R.java file. As you add other resources to your projects you'll see R.java change to keep up.
The last thing you need to do is modify your HelloAndroid source code to use the new XML version of your UI, instead of the hard-coded version. Here's what your new class will look like. As you can see, the source code becomes much simpler:
package com.android.hello; import android.app.Activity; import android.os.Bundle; public class HelloAndroid extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); } }
When you make this change, don't just copy-and-paste it in. Try out the code-completion feature on that R class. You'll probably find that it helps a lot.
Now that you've made this change, go ahead and re-run your application — all you need to do is click the green Run arrow icon, or select Run > Run History > Hello, Android from the menu. You should see.... well, exactly the same thing you saw before! After all, the point was to show that the two different layout approaches produce identical results.
There's a lot more to creating these XML layouts, but that's as far as we'll go here. Read the Implementing a User Interface documentation for more information on the power of this approach.
The Android Plugin for Eclipse also has excellent integration with the Eclipse debugger. To demonstrate this, let's introduce a bug into our code. Change your HelloAndroid source code to look like this:
package com.android.hello; import android.app.Activity; import android.os.Bundle; public class HelloAndroid extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); Object o = null; o.toString(); setContentView(R.layout.main); } }
This change simply introduces a NullPointerException into your code. If you run your application again, you'll eventually see this:
Press "Force Quit" to terminate the application and close the emulator window..
If Python script named "activityCreator.py" that can be used to create all the source code and directory stubs for your project, as well as an ant-compatible build.xml file. This allows you to build your project from the command line, or integrate it with the IDE of your choice.
For example, to create a HelloAndroid project similar to the one we just created via Eclipse, you'd use this command:
activityCreator.py --out HelloAndroid com.android.hello.HelloAndroid
To build the project, you'd then run the command 'ant'. When that command successfully completes, you'll be left with a file named HelloAndroid.apk under the 'bin' directory. That .apk file is an Android Package, and can be installed and run in your emulator using the 'adb' tool.
For more information on how to use these tools, please read the documentation cited above. | http://code.google.com/android/intro/hello-android.html | crawl-001 | refinedweb | 1,775 | 66.03 |
I'm having some trouble with my program for the final project.
My assignment is to take a user input of how many digits of the fibonacci sequence they wish to display, and then....display them.
However when I first ran the code, it would throw an ArrayOutofBounds exception.
So say the user input 5, the system would print out the first three correctly but then give me the exception. Which I understand to be that it's calling reserved array spots that don't exist. I fixed the code by adding 2 to the user input which defines the array size, but I feel like there has to be a better way.
import javax.swing.*; public class Fibonacci { private static String strFibN; protected static int intFibN; //public static void UserInput() public static void main(String[] args) { strFibN = JOptionPane.showInputDialog(null, "Please Input the Amount of Numbers you Wish to Display of the Fibonacci Sequence"); intFibN = Integer.parseInt(strFibN); int[] Fib = new int[intFibN + 2]; Fib[0] = 1; Fib[1] = 1; for(int inc=0; (inc)<intFibN; inc++) { Fib[(inc + 2 )] = Fib[inc] + Fib[(inc + 1)] ; System.out.println(Fib[inc]); } } }
The reason for the formula being + 2 is because I've already got the first two digits so the next array slot to be filled should be Fib[2] but I'm sure you all gathered that.
Also, I'd like the output to be in JOptionPane format, but I can't seem to get it to work without having to click ok, for every single array slot, which is just a pain. Any thoughts? | http://www.javaprogrammingforums.com/whats-wrong-my-code/19747-fibonacci-sequence-final-project.html | CC-MAIN-2015-22 | refinedweb | 266 | 58.01 |
Creating unit test method stubs with “Create Unit Tests”
March 6, 2015
In Visual Studio 2015 CTP 6 we are introducing the “Create Unit Tests” feature that provides the ability to create unit test method stubs. The feature allows easy configuration of a test project, and the test class and the test method stub therein. It is conveniently available as a context menu item, and can be invoked on product code at the scope of a method, a type, or a namespace. It launches a fairly self-explanatory dialog that surfaces the options that can be configured (does the look and feel seem familiar to you?). Support is presently for C# and the MSTest framework but enhancements are in the pipeline.
This addresses a popular “ask” on user voice from the community.
If we take a step back and look at the purpose that a unit test serves, then this “ask” underscores the diversity of unit testing approaches. A unit test serves as an executable specification representing the intended behaviour of the code, for some chosen test values. When such a test fails it means that the code does not do what it’s supposed to do. Some testing practices grow the code by building such a specification ground up, starting from a failing test case. A practitioner might approach this situation by choosing to start from the unit test template available through the Visual Studio project system.
But there are situations that may not readily afford this practice. Consider existing code with very little to no test coverage, and no documentation either – i.e. where there is limited or non-existent specifications. Here, fully baked code serves as the starting point, and the tests need to be built ground up. For this situation, an approach might be to bring to bear the intelligence of Smart Unit Tests and have it generate a suite of tests characterizing the observed behaviour of the code.
Then there is the situation where the developer starts by writing some code, and uses that to boot strap the unit testing discipline. Within the flow of coding, the developer might want to quickly create a unit test method stub (with a suitable test class, and a suitable test project) for a particular piece of code. The use of the “Create Unit Tests” feature might be the approach in this situation.
Much as with coding, there are different approaches to unit testing as well – varying from that of the purist to that of the pragmatist – from “test first” to “code first” to those who might want the flexibility to start in either of the ways. We see “Create Unit Tests” as an important addition to the complement of features we have for supporting these various approaches. We hope you find this feature, and indeed the other features you are seeing us introduce in the unit testing space, to your liking.
As ever, please submit bugs through Connect, suggestions on Uservoice, and quick thoughts via Send-a-Smile in the Visual Studio IDE. We look forward to your feedback.
It's pretty sad that this has taken this long to bring back this functionality which was in previous versions of VS before 2013 & 2015. I find it also the fact that it's extremely limited in the CTP when code existed (and CodePlex contains the removed functionality) long ago.
YES!!! FINALLY!!!
I would also echo what @Inari says, as well. 😛 I guess I should say… "YES… FINALLY!!! (again!)"
There have been so many times that I have wanted to create a test from the code that I was working with… good to see this is being done… as @Inari says… way overdue.
If you don't ship with support out of the box for nunit…don't ship this feature at all.
Our team is using Unit Tests more and more, and trying to get better coverage of older code.
Good stuff Visual Studio team. You made the best IDE in the world even better!
I agree with @Roger, please ship with nUnit support out of the box at a bare minimum.
@Roger I think it is clear from the screenshot that you can choose which framework you want to use. xUnit is also the default unit test framework from VS2015 so, nUnit support has a big change to be supported to.
About time it is back! Hated when support for this dropped.
Just to clarify, it's not "an important addition". It's a feature that you stripped out a few years ago being added back in.
As mentioned in this post, support is presently for C# and the MSTest framework. Regarding support for other test frameworks, please see here visualstudio.uservoice.com/…/6792167-enable-smart-unit-tests-to-generate-test-code-in-x, and vote!
Thank you all for your feedback. Please keep it coming.
Wow, this is the only place I was able to find that said this only works in C#. I was trying to show a junior programmer using VB how easy and valuable it was to create Unit Tests only to look like I didn't know what I was talking about. What happened to language parity? I switch back and forth between C# and VB with ease depending on my clients preference but to not have all the same tools hurts productivity. Not releasing this feature in both languages when it had previously been available in both languages is especially frustrating.
Thank you for the post and thank you for the suggestion to "Send-a-Smile", I never think to use that when I run into something frustrating like this. | https://blogs.msdn.microsoft.com/devops/2015/03/06/creating-unit-test-method-stubs-with-create-unit-tests/ | CC-MAIN-2017-26 | refinedweb | 939 | 70.33 |
ip = a list of ips
ipf = list(filter(lambda x: x if not x.startswith(str(range(257,311))) else None, ip))
No,
str.startswith() doesn't take a range.
You'd have to parse out the first part and test it as an integer; filtering is also easier done with a list comprehension:
[ip for ip in ip_addresses if 257 <= int(ip.partition('.')[0]) <= 310]
The alternative would be to use the
ipaddress library; it'll reject any invalid address with a
ipaddress.AddressValueError exception, and since addresses that start with anything over 255 are invalid, you can easily co-opt that to filter out your invalid addresses:
import ipaddress def valid_ip(ip): try: ipaddress.IPv4Address(ip) except ipaddress.AddressValueError: return False else: return True [ip for ip in ip_addresses if valid_ip(ip)] | https://codedump.io/share/pZNl6uQ8WGRm/1/startswith-with-a-range-of-number-instead-an-unit | CC-MAIN-2017-04 | refinedweb | 134 | 57.16 |
A SAN is a complex animal
From the first minute you try to understand it, you might be overwhelmed by the cascade of new things: fibre switches, multi-path software, redundant controllers, disk enclosures etc. But the idea is pretty simple. In a few words, the primary role of a SAN is to expose virtual disk units (called LUNs) to various machines. A very high-level picture of a simple SAN configuration is presented below:
Here we see two machines (A and B) connected to a single storage array (X). This array implements three virtual disk units (LUNs) that are exposed to these machines: the first one sees the first LUN as the disk labelled “Disk 1”, while the second machine sees the other two LUNs, as disks “Disk 2” and “Disk 3”.
The need for a standard
But here comes a basic problem. If I can expose the same LUN to one or more machines, then how could I address it? In other words, how can I safely distinguish between one LUN and another? This seems to be a really trivial problem. Just stick a unique GUID to each LUN and you are done! Or, stick a unique number. Or… a string… but hold on, things are not that easy. What if storage array maker ABC assigns GUIDs to each LUN and another vendor assigns 32-bit numbers? We have a complete mess.
To add to the confusion, we have this other concept – the serial number attached to a SCSI disk. But this doesn’t work all the time. For example, some vendors assign a serial number for each LUN, but this serial number is not guaranteed to be unique. Why, some SCSI controllers are even returning the same serial number for all exposed LUNs!
And this was exactly the state of things until recently. Every hardware vendor had a more-or-less proprietary method to identify LUNs exposed to a system. But if you wanted to write an application that tried to discover all the LUNs, you had a hard time, since your code was tied to the specific model of each array. Not to mention that you did not have the guarantee that these LUN IDs were unique! What if two vendors had conflicting ways to assign IDs to LUNs? You could end up with two LUNs having the same ID!
Fortunately, hardware vendors are already converging on a standard. The standard is described in detail in the latest SPC-3 draft, which covers the latest incarnation of the SCSI protocol. For more details, go to section 7.6.4, Device Identification VPD page.
The principle is simple – we have this SCSI Inquiry command that can be used to retrieve some "metadata" about a target LUN. This metadata includes various components, like the product ID, serial number, etc. A SCSI initiator (a regular machine in our case) can use this command to discover more details about various LUNs exposed by the storage array. Now, this SCSI command was enhanced to return optional data through Page #83. If this page is requested, you will get a list of related identifiers for this LUN.
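To make the mechanics concrete, here is a minimal sketch (Python is used just for illustration; real code would send the command block through a pass-through interface such as Windows' SCSI_PASS_THROUGH or Linux's SG_IO) of how the 6-byte INQUIRY command descriptor block requesting the Device Identification VPD page is laid out. The opcode, EVPD bit and page code come from the SPC-3 draft; the allocation length here is an arbitrary choice:

```python
def build_inquiry_cdb(page_code=0x83, alloc_len=0xFC):
    """Build a 6-byte SCSI INQUIRY CDB requesting a Vital Product Data page.

    Per SPC-3: byte 0 is the INQUIRY opcode (0x12), bit 0 of byte 1 is the
    EVPD flag (set it to request a VPD page instead of standard inquiry
    data), byte 2 selects the page code (0x83 = Device Identification),
    and bytes 3-4 hold the allocation length, big-endian.
    """
    cdb = bytearray(6)
    cdb[0] = 0x12                      # INQUIRY opcode
    cdb[1] = 0x01                      # EVPD bit set -> return a VPD page
    cdb[2] = page_code                 # 0x83 = Device Identification VPD page
    cdb[3] = (alloc_len >> 8) & 0xFF   # allocation length, high byte
    cdb[4] = alloc_len & 0xFF          # allocation length, low byte
    cdb[5] = 0x00                      # control byte
    return bytes(cdb)
```

The buffer the target returns in response to this CDB is the Page #83 data containing the identification descriptors discussed below.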
According to this standard, a LUN might have one or more structure identification descriptor structures. This structure is well documented in the above standard. The structure has a number of relevant fields, described below:
1) Association field, which describes what the identifier is attached to: 0 (the identifier is related to the device itself), 1 (related to the path between the device and the port), 2 (related to the SCSI target).
2) Identifier type field. The value of this field describes the particular format/semantics of the actual identifier. Zero (0) means a proprietary vendor format, which means the identifier is not guaranteed to be globally unique. A value of 3 means that the identifier is an FC-PH name, which is guaranteed to be unique. And so on…
3) Identifier data field. This is the actual identifier, and it consists of an array of bytes.
4) Identifier data size field – the size in bytes of the actual identifier.
Here is how it works: every LUN can have one or more identifiers as already mentioned above. Each identifier will have a different type. So, in the first picture in this post, LUN 1 might have three identifiers: one of type 0 (a vendor-specific, proprietary identifier), another one of type 1 (T10 identifier), and a third one of type 3 (FC-PH name). This is a clever way that allows storage vendors to assign multiple identifiers to a LUN going forward, while maintaining backward compatibility with the old, proprietary way of identifying LUNs. So if a storage array already has a proprietary ID for each LUN, the vendor can simply add a new globally-unique identifier in the next version of its firmware. And, in the end, all hardware vendors will converge to a common addressing scheme.
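As a sketch of how an application might walk this structure, the following Python snippet parses the descriptor list out of a raw Page #83 buffer, following the SPC-3 layout: a 4-byte page header (with the page length in bytes 2-3), then a sequence of descriptors, each consisting of a 4-byte header followed by the identifier bytes. This is illustrative only, not production code – it does no validation beyond trusting the declared lengths:

```python
def parse_designation_descriptors(page83):
    """Parse the identification descriptors out of a raw Device
    Identification VPD page (the data returned by INQUIRY with EVPD=1
    and page code 0x83).

    Returns a list of dicts holding the code set, association,
    identifier type, and raw identifier bytes of each descriptor.
    """
    page_len = (page83[2] << 8) | page83[3]    # length of the descriptor list
    descriptors = []
    offset = 4                                  # descriptors start after the 4-byte page header
    while offset < 4 + page_len:
        code_set    = page83[offset] & 0x0F          # low nibble of byte 0
        association = (page83[offset + 1] >> 4) & 0x03
        id_type     = page83[offset + 1] & 0x0F
        id_len      = page83[offset + 3]
        identifier  = bytes(page83[offset + 4 : offset + 4 + id_len])
        descriptors.append({
            "code_set": code_set,
            "association": association,   # 0 = the LUN itself, 1 = port, 2 = target
            "type": id_type,              # 0 = vendor-specific, 1 = T10, 3 = FC-PH name, ...
            "identifier": identifier,
        })
        offset += 4 + id_len              # each descriptor: 4-byte header + identifier
    return descriptors
```

Feeding it a synthetic page with two descriptors (one type-3 name, one vendor-specific ID) yields two parsed entries, which is exactly the multi-identifier-per-LUN situation described above.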
Note however that not all identifier types above are considered globally unique. An identifier of type 0 is obviously not, as mentioned above. But an identifier of type 1,2,3 or 8 is guaranteed to be unique, as long as the vendor follows the standard completely.
I haven’t mentioned yet another important field – the association field. This field describes the purpose of the identifier. For example, when the association field is “0”, this indicates that the specified identifier is tied to the actual LUN device. When the association is “1”, the identifiers is tied to the specific path between the LUN/controller port, not to the LUN itself. Finally, the association “2” describes an identifier associated to the SCSI target. But in the context of our discussion we are interested only in LUN identifiers with association being always “0”.
One word of caution, though: not every Fibre Channel/iSCSI storage array will implement Page #83 identifiers for a LUN, and even if it does, older models might not return unique identifiers! As far as I can tell, it appears that most of the latest SAN storage arrays, from various vendors, are implementing correctly this standard. Anyway, I would recommend you to double-check this with with your storage vendor – the availability of SCSI unique identifiers might depend on your array model, or even firmware version.
Finally, I would like to point one thing: the identification issue mentioned above appears usually when you have several LUNs exposed simultaneously to multiple HBAs or multiple machines. So the whole problem won’t really be an issue when you don’t need a SAN, and you have only direct attach storage. This is why you won’t see the Page #83 feature implemented in regular SCSI adapters.
What about the world of Windows?
The importance of the globally-unique identifiers is growing, and it already affected our Windows world. In the upcoming Windows Server SP1 thre are a number of improvements, and some one of them are specifically dealing with unique identifiers. A less-known FAQ (presented here) details various improvements of Microsoft Clusters in SP1. A nice snippet is this one (I underlined the relevant part):
So not only that SP1 will recommend compliant hardware that supports unique identifiers, but it also uses this feature to offer an advanced facility – individual LUN resets.
But you might be wondering – do I need to understand all this whacky business of manually issuing SCSI commands directly to the controller? Do I have to compute bits here and there as the SPC-3 standard mentions? Fortunately no – the storage stack already implements most of this heavy stuff for you.
In our Windows world, a LUN usually appears as a single disk in a given system (there are exceptions there with multi-paths, but I won’t go there right now). So how can you programatically take a look to the SCSI unique ID of your disk device? The answer is the IOCTL_STORAGE_QUERY_PROPERTY command, available both from kernel-mode and user-mode. You can easily get the list of unique identifiers in this way, if the SCSI controller that manages your disk supports Page 83 and unique SCSI identifiers. (note, however, that storport.sys is recommended when you want to deal with the Page #83 identifiers)
Here is the process needed to obtain the list of storage identifiers for a disk:
1) Open a handle to the disk device (\\.\PHYSICALDRIVEnnn) with read/write access (probably GENERIC_READ will be enough)
2) Fill in the STORAGE_PROPERTY_QUERY structure.
– Set the QueryType field to PropertyStandardQuery
– Set the PropertyId field to StorageDeviceIdProperty (don’t confuse it with StorageDeviceProperty which will cause the IOCTL to return other things, for example the Page #80 data, etc.)
3) Issue the IOCTL_STORAGE_QUERY_PROPERTY with the structure above as its input parameters. After a succesful completion, cast the output buffer to a STORAGE_DEVICE_ID_DESCRIPTOR structure – you will have a BYTE array containing the identifiers, obviously, in the Identifiers field. If this fails, then maybe this is not a SCSI disk, or maybe the SCSI controller doesn’t support Page 83. Make sure to supply enough data for the output buffer, otehrwise you will get ERROR_MORE_DATA/ERROR_INSUFFICIENT_BUFFER.
4) Parse the identifiers. Note that not all of these identifiers are associated with the LUN. Some identifiers might be actually related to the path between your HBA and the LUN, others might be related to the SCSI target. Here is the returned structure (see ntddstor.h from the Windows DDK for more details):
typedef struct _STORAGE_DEVICE_ID_DESCRPTOR {
ULONG Version;
ULONG Size;
ULONG NumberOfIdentifiers;
UCHAR Identifiers[1];
} STORAGE_DEVICE_ID_DESCRIPTOR, *PSTORAGE_DEVICE_ID_DESCRIPTOR;
The returned byte array contain a list of STORAGE_IDENTIFIER C++ structures used by Windows:
typedef struct _STORAGE_IDENTIFIER {
STORAGE_IDENTIFIER_CODE_SET CodeSet;
STORAGE_IDENTIFIER_TYPE Type;
USHORT IdentifierSize;
USHORT NextOffset;
STORAGE_ASSOCIATION_TYPE Association;
UCHAR Identifier[1];
} STORAGE_IDENTIFIER, *PSTORAGE_IDENTIFIER;
We notice that the layout of the structure is a little bit different with the structure mentioned above, but the most important fields (Type, Association, CodeSet, Identifier) are exactly the same.
Parsing the identifiers can be done in this way:
4.1) Initialize a BYTE pointer to the address of this the Identifiers field above.
4.2) Cast this pointer to a STORAGE_IDENTIFIER structure.
4.2) Get the association of this identifier. If not zero, go to the next identifier (step 4.5)
4.3) Get the type of this identifier. If not 1,2,3 or 8, go to the next identifier (step 4.5)
4.4) You obtained a unique identifier! Print it, etc.
4.5) Add NextOffset to the current BYTE pointer and go to step 4.2)
Here is some associated sample code that I wrote:
#include <windows.h>
#include <stdio.h>
#include <ntddstor.h>
// Query the Page 80 information
void QueryPage80(LPWSTR pwszDiskDevice, HANDLE hDiskDevice)
{
// The input parameter
STORAGE_PROPERTY_QUERY query;
query.PropertyId = StorageDev Page #80 information for device '%s'.\n "
L"[DeviceIoControl() error: %d]\n",
pwszDiskDevice, GetLastError());
return;
}
//
// Get some basic data about our disk device
//
STORAGE_DEVICE_DESCRIPTOR *pDesc = (PSTORAGE_DEVICE_DESCRIPTOR) bOutputBuffer;
// Get the Page 80 information
// This code assumes zero-terminated strings, according to the spec
if (pDesc->VendorIdOffset != 0)
wprintf(L"- Page80.VendorId: %hs\n", (PCHAR)((PBYTE)pDesc + pDesc->VendorIdOffset));
if (pDesc->ProductIdOffset != 0)
wprintf(L"- Page80.ProductId: %hs\n", (PCHAR)((PBYTE)pDesc + pDesc->ProductIdOffset));
if (pDesc->ProductRevisionOffset != 0)
wprintf(L"- Page80.ProductRevision: %hs\n", (PCHAR)((PBYTE)pDesc + pDesc->ProductRevisionOffset));
if (pDesc->SerialNumberOffset != 0)
wprintf(L"- Page80.SerialNumber: %hs\n", (PCHAR)((PBYTE)pDesc + pDesc->SerialNumberOffset));
}
// Query the Page 83 information
void QueryPage83(LPWSTR pwszDiskDevice, HANDLE hDiskDevice)
{
// The input parameter
STORAGE_PROPERTY_QUERY query;
query.PropertyId = StorageDevice SCSI Inquiry VPD Page #83 information for device '%s'.\n"
L"Maybe it doesn't support Page 83?\n"
L"[DeviceIoControl() error: %d]\n",
pwszDiskDevice, GetLastError());
return;
}
STORAGE_DEVICE_ID_DESCRIPTOR *pDesc = (PSTORAGE_DEVICE_ID_DESCRIPTOR) bOutputBuffer;
// Listing all identifiers...
wprintf(L"\n- Page83.NumberOfIdentifiers: %d\n", pDesc->NumberOfIdentifiers);
STORAGE_IDENTIFIER *pId = (PSTORAGE_IDENTIFIER) pDesc->Identifiers;
for(UINT i = 0; i < pDesc->NumberOfIdentifiers; i++)
{
// Checks if this Identifier is unique
bool isUnique = false;
if (pId->Association == StorageIdAssocDevice)
{
if (((INT)pId->Type == StorageIdTypeVendorId)
|| ((INT)pId->Type == StorageIdTypeEUI64)
|| ((INT)pId->Type == StorageIdTypeFCPHName)
|| ((INT)pId->Type == StorageIdTypeScsiNameString))
{
isUnique = true;
}
}
wprintf(L"\n- Page83.Identifier\n", i);
wprintf(L" - Type: %d\n", pId->Type);
wprintf(L" - Association: %d\n", pId->Association);
wprintf(L" - Size: %d\n", pId->IdentifierSize);
wprintf(L" - IsGloballyUnique? %s\n", isUnique? L"TRUE": L"FALSE");
wprintf(L" - Data: ");
for(int j = 0; j < pId->IdentifierSize; j++)
wprintf(L"%02hx ", pId->Identifier[j]);
wprintf(L"\n");
// move to next identifier
pId = (PSTORAGE_IDENTIFIER) ((BYTE *) pId + pId->NextOffset);
}
}
// Entry point
extern "C" __cdecl wmain(int argc, WCHAR ** argv)
{
if (argc != 2)
{
wprintf(L"- You must specify a disk device! as argument!\n");
return;
}
WCHAR * pwszDiskDevice = argv[1];
wprintf(L"Querying information for disk '%s' ... \n", pwszDiskDevice);
// Opening a handle to the disk
HANDLE hDiskDevice = CreateFile(
pwszDiskDevice,
GENERIC_READ | GENERIC_WRITE, // dwDesiredAccess
FILE_SHARE_READ | FILE_SHARE_WRITE, // dwShareMode
NULL, // lpSecurityAttributes
OPEN_EXISTING, // dwCreationDistribution
0, // dwFlagsAndAttributes
NULL // hTemplateFile
);
if (hDiskDevice == INVALID_HANDLE_VALUE)
{
wprintf(L"\nERROR: Cannot open device '%s'. "
L"Did you specified a correct SCSI device? "
L"[CreateFile() error: %d]\n",
pwszDiskDevice, GetLastError());
return;
}
// Query the Page 80 information
QueryPage80(pwszDiskDevice, hDiskDevice);
// Query the Page 83 information
QueryPage83(pwszDiskDevice, hDiskDevice);
}
The program above is a simple console application, which issues the IOCTL_QUERY_STORAGE_PROPERTIES twice to the same disk. First time, to get the Page #80 information, and the second time for the Page #83 information, to retrieve the list of identifiers. The program takes only one parameter, which is the disk device. You need, however, the Microsoft DDK to ge the latest version of the ntddstor.h header.
Let’s take it for a test drive
I compiled the program utility drive above and ran it to some disk drives. First thing to note – the program accepts a disk device as its parameter. You can obtain the disk ID of a certain volume by looking into the Disk Management (diskmgmt.msc). Programatically, you can get the same information through the high-level WMI interfaces (see this link for some sample code in VBScript).
Looking in diskmgmt.msc we can see the association between disks and volumes. For example, the disk with the ID “0” corresponds to the C:\ volume, while the disk with the ID 1 is associated not associated with any volume. These disk IDs – these are unique numbers assigned at each disk rescan operation (and implicitly at each boot time). Therefore, these IDs might change at the next reboot.
There is one important thing to note about these disk IDs. Any disk with ID “n” will have an associated device (in the form of a DOS symbolic link) in the format “\\.\PHYSICALDRIVEn”. For example, the disk “0” above will have the device \\.\PHYSICALDRIVE0, and so on.
Let’s run our program on the C:\ volume. We get something like this:
C:\>identifier.exe \\.\PHYSICALDRIVE0
Querying information for disk ‘\\.\PHYSICALDRIVE0’ …
– Page80.VendorId: FUJITSU
– Page80.ProductId: MAM3184MC
– Page80.ProductRevision: 5A01
– Page80.SerialNumber: UKL3P2904PCH
ERROR: Cannot request SCSI Inquiry VPD Page #83 information for device ‘\\.\PHYS
ICALDRIVE0′.
Maybe it doesn’t support Page 83?
[DeviceIoControl() error: 50]
This actually makes sense, since the Disk 0 was tied to the local C:\ volume, which is a local SCSI drive. Its SCSI adapter does not implement Page 83. Now, how about Disk 1?
C:\>identifier.exe \\.\PHYSICALDRIVE1
Querying information for disk ‘\\.\PHYSICALDRIVE1’ …
– Page80.VendorId: COMPAQ
– Page80.ProductId: HSV110 (C)COMPAQ
– Page80.ProductRevision: 2001
– Page80.SerialNumber: P4889B59IM604V
– Page83.NumberOfIdentifiers: 1
– Page83.Identifier
– Type: 3
– Association: 0
– Size: 16
– IsGloballyUnique? TRUE
– Data: 60 05 08 b4 00 01 4a 11 00 01 90 00 87 a1 00 00
This time we have more luck. The Disk 1 (implemented in a LUN in a HSV 110 HP EVA array) has one unique identifier of type 3 (FP-CH). In conclusion, our LUN has a globally unique identifier.
If you have a SAN around, give it a try the same code on your machines – you might get slightly different results, depending on the hardware model or even the firmware version. Make sure that you have Windows Server 2003 installed, with the latest storport.sys QFE installed (KB 883646 at the time of the writing). And please let me know (in the comments section) if you found an old hardware that doesn’t support yet globally unique identifiers… | https://blogs.msdn.microsoft.com/adioltean/2004/12/30/how-you-can-uniquely-identify-a-lun-in-a-storage-area-network/ | CC-MAIN-2016-44 | refinedweb | 2,695 | 56.15 |
A friend asked me about creating a small web interface that accepts some inputs, sends them to MATLAB for number crunching and outputs the results. I'm a Python/Django developer by trade, so I can handle the web interface, but I am clueless when it comes to MATLAB. Specifically:
ctypesto interact with it?
Any suggestions, tips, or tricks on how to pull this off?
There is a python-matlab bridge which is unique in the sense that Matlab runs in the background as a server so you don't have the startup cost each time you call a Matlab function.
it's as easy as downloading and the following code:
from pymatbridge import Matlab mlab = Matlab(matlab='/Applications/MATLAB_R2011a.app/bin/matlab') mlab.start() res = mlab.run('path/to/yourfunc.m', {'arg1': 3, 'arg2': 5}) print res['result']
where the contents of yourfunc.m would be something like this:
%% MATLAB function lol = yourfunc(args) arg1 = args.arg1; arg2 = args.arg2; lol = arg1 + arg2; end | https://pythonpedia.com/en/knowledge-base/2255942/how-do-i-interact-with-matlab-from-python- | CC-MAIN-2020-16 | refinedweb | 165 | 66.74 |
Using commands.wrap to parse your command’s arguments¶
Contents
Introduction¶
To plugin developers for older (pre-0.80) versions of Supybot, one of the more annoying aspects of writing commands was handling the arguments that were passed in. In fact, many commands often had to duplicate parsing and verification code, resulting in lots of duplicated code for not a whole lot of action. So, instead of forcing plugin writers to come up with their own ways of cleaning it up, we wrote up the wrap function to handle all of it.
It allows a much simpler and more flexible way of checking things than before and it doesn’t require that you know the bot internals to do things like check and see if a user exists, or check if a command name exists and whatnot.
If you are a plugin author this document is absolutely required reading, as it will massively ease the task of writing commands.
Using Wrap¶
First off, to get the wrap function, it is recommended (strongly) that you use the following import line:
from supybot.commands import *
This will allow you to access the wrap command (and it allows you to do it without the commands prefix). Note that this line is added to the imports of plugin templates generated by the supybot-plugin-create script.
Let’s write a quickie command that uses wrap to get a feel for how it makes our lives better. Let’s write a command that repeats a string of text a given number of times. So you could say “repeat 3 foo” and it would say “foofoofoo”. Not a very useful command, but it will serve our purpose just fine. Here’s how it would be done without wrap:
def repeat(self, irc, msg, args): """<num> <text> Repeats <text> <num> times. """ (num, text) = privmsg.getArgs(args, required=2) try: num = int(num) except ValueError: raise callbacks.ArgumentError irc.reply(num * text)
Note that all of the argument validation and parsing takes up 5 of the 6 lines (and you should have seen it before we had privmsg.getArgs!). Now, here’s what our command will look like with wrap applied:
def repeat(self, irc, msg, args, num, text): """<num> <text> Repeats <text> <num> times. """ irc.reply(text * num) repeat = wrap(repeat, ['int', 'text'])
Pretty short, eh? With wrap all of the argument parsing and validation is handled for us and we get the arguments we want, formatted how we want them, and converted into whatever types we want them to be - all in one simple function call that is used to wrap the function! So now the code inside each command really deals with how to execute the command and not how to deal with the input.
So, now that you see the benefits of wrap, let’s figure out what stuff we have to do to use it.
Syntax Changes¶
There are two syntax changes to the old style that are implemented. First, the definition of the command function must be changed. The basic syntax for the new definition is:
def commandname(self, irc, msg, args, <arg1>, <arg2>, ...):
Where arg1 and arg2 (up through as many as you want) are the variables that will store the parsed arguments. “Now where do these parsed arguments come from?” you ask. Well, that’s where the second syntax change comes in. The second syntax change is the actual use of the wrap function itself to decorate our command names. The basic decoration syntax is:
commandname = wrap(commandname, [converter1, converter2, ...])
Note
This should go on the line immediately following the body of the command’s definition, so it can easily be located (and it obviously must go after the command’s definition so that commandname is defined).
Each of the converters in the above listing should be one of the converters in commands.py (I will describe each of them in detail later.) The converters are applied in order to the arguments given to the command, generally taking arguments off of the front of the argument list as they go. Note that each of the arguments is actually a string containing the NAME of the converter to use and not a reference to the actual converter itself. This way we can have converters with names like int and not have to worry about polluting the builtin namespace by overriding the builtin int.
As you will find out when you look through the list of converters below, some of the converters actually take arguments. The syntax for supplying them (since we aren’t actually calling the converters, but simply specifying them), is to wrap the converter name and args list into a tuple. For example:
commandname = wrap(commandname, [(converterWithArgs, arg1, arg2), converterWithoutArgs1, converterWithoutArgs2])
For the most part you won’t need to use an argument with the converters you use either because the defaults are satisfactory or because it doesn’t even take any.
Customizing Wrap¶
Converters alone are a pretty powerful tool, but for even more advanced (yet simpler!) argument handling you may want to use contexts. Contexts describe how the converters are applied to the arguments, while the converters themselves do the actual parsing and validation.
For example, one of the contexts is “optional”. By using this context, you’re saying that a given argument is not required, and if the supplied converter doesn’t find anything it likes, we should use some default. Yet another example is the “reverse” context. This context tells the supplied converter to look at the last argument and work backwards instead of the normal first-to-last way of looking at arguments.
So, that should give you a feel for the role that contexts play. They are not by any means necessary to use wrap. All of the stuff we’ve done to this point will work as-is. However, contexts let you do some very powerful things in very easy ways, and are a good thing to know how to use.
Now, how do you use them? Well, they are in the global namespace of src/commands.py, so your previous import line will import them all; you can call them just as you call wrap. In fact, the way you use them is you simply call the context function you want to use, with the converter (and its arguments) as arguments. It’s quite simple. Here’s an example:
commandname = wrap(commandname, [optional('int'), many('something')])
In this example, our command is looking for an optional integer argument first. Then, after that, any number of arguments which can be anything (as long as they are something, of course).
Do note, however, that the type of the arguments that are returned can be changed if you apply a context to it. So, optional(“int”) may very well return None as well as something that passes the “int” converter, because after all it’s an optional argument and if it is None, that signifies that nothing was there. Also, for another example, many(“something”) doesn’t return the same thing that just “something” would return, but rather a list of “something”s.
Converter List¶
Below is a list of all the available converters to use with wrap. If the converter accepts any arguments, they are listed after it and if they are optional, the default value is shown.
Numbers and time¶
expiry
Takes a number of seconds and adds it to the current time to create an expiration timestamp.
id, kind=”integer”
Returns something that looks like an integer ID number. Takes an optional “kind” argument for you to state what kind of ID you are looking for, though this doesn’t affect the integrity-checking. Basically requires that the argument be an integer, does no other integrity-checking, and provides a nice error message with the kind in it.
index
Basically (“int”, “index”), but with a twist. This will take a 1-based index and turn it into a 0-based index (which is more useful in code). It doesn’t transform 0, and it maintains negative indices as is (note that it does allow them!).
int, type=”integer”, p=None
Gets an integer. The “type” text can be used to customize the error message received when the argument is not an integer. “p” is an optional predicate to test the integer with. If p(i) fails (where i is the integer arg parsed out of the argument string), the arg will not be accepted.
now
Simply returns the current timestamp as an arg, does not reference or modify the argument list.
long, type=”long”
Basically the same as int minus the predicate, except that it converts the argument to a long integer regardless of the size of the int.
float, type=”floating point number”
Basically the same as int minus the predicate, except that it converts the argument to a float.
nonInt, type=”non-integer value”
Accepts everything but integers, and returns them unchanged. The “type” value, as always, can be used to customize the error message that is displayed upon failure.
positiveInt
Accepts only positive integers.
nonNegativeInt
Accepts only non-negative integers.
Channel¶
channelDb
Sets the channel appropriately in order to get to the databases for that channel (handles whether or not a given channel uses channel-specific databases and whatnot).
channel
Gets a channel to use the command in. If the channel isn’t supplied, uses the channel the message was sent in. If using a different channel, does sanity-checking to make sure the channel exists on the current IRC network.
inChannel
Requires that the command be called from within any channel that the bot is currently in or with one of those channels used as an argument to the command.
onlyInChannel
Requires that the command be called from within any channel that the bot is currently in.
callerInGivenChannel
Takes the given argument as a channel and makes sure that the caller is in that channel.
public
Requires that the command be sent in a channel instead of a private message.
private
Requires that the command be sent in a private message instead of a channel.
validChannel
Gets a channel argument once it makes sure it’s a valid channel.
Words¶
color
Accepts arguments that describe a text color code (e.g., “black”, “light blue”) and returns the mIRC color code for that color. (Note that many other IRC clients support the mIRC color code scheme, not just mIRC)
letter
Looks for a single letter. (Technically, it looks for any one-element sequence).
literal, literals, errmsg=None
Takes a required sequence or string (literals) and any argument that uniquely matches the starting substring of one of the literals is transformed into the full literal. For example, with
("literal", ("bar", "baz", "qux")), you’d get “bar” for “bar”, “baz” for “baz”, and “qux” for any of “q”, “qu”, or “qux”. “b” and “ba” would raise errors because they don’t uniquely identify one of the literals in the list. You can override errmsg to provide a specific (full) error message, otherwise the default argument error message is displayed.
lowered
Returns the argument lowered (NOTE: it is lowered according to IRC conventions, which does strange mapping with some punctuation characters).
to
Returns the string “to” if the arg is any form of “to” (case-insensitive).
Network¶
ip
Checks and makes sure the argument looks like a valid IP and then returns it.
url
Checks for a valid URL.
httpUrl
Checks for a valid HTTP URL.
Users, nicks, and permissions¶
haveOp, action=”do that”
Simply requires that the bot have ops in the channel that the command is called in. The action parameter completes the error message: “I need to be opped to …”.
nick
Checks that the arg is a valid nick on the current IRC server.
seenNick
Checks that the arg is a nick that the bot has seen (NOTE: this is limited by the size of the history buffer that the bot has).
nickInChannel
Requires that the argument be a nick that is in the current channel, and returns that nick.
capability
Used to retrieve an argument that describes a capability.
hostmask
Returns the hostmask of any provided nick or hostmask argument.
banmask
Returns a generic banmask of the provided nick or hostmask argument.
user
Requires that the caller be a registered user.
otherUser
Returns the user specified by the username or hostmask in the argument.
owner
Requires that the command caller has the “owner” capability.
admin
Requires that the command caller has the “admin” capability.
checkCapability, capability
Checks to make sure that the caller has the specified capability.
- checkChannelCapability, capability
- Checks to make sure that the caller has the specified capability on the channel the command is called in.
Matching¶
anything
Returns anything as is.
something, errorMsg=None, p=None
Takes anything but the empty string. errorMsg can be used to customize the error message. p is any predicate function that can be used to test the validity of the input.
somethingWithoutSpaces
Same as something, only with the exception of disallowing spaces of course.
matches, regexp, errmsg
Searches the args with the given regexp and returns the matches. If no match is found, errmsg is given.
regexpMatcher
Gets a matching regexp argument (m// or //).
glob
Gets a glob string. Basically, if there are no wildcards (
*,
?) in the argument, returns
*string*, making a glob string that matches anything containing the given argument.
regexpReplacer
Gets a replacing regexp argument (s//).
Other¶
networkIrc, errorIfNoMatch=False
Returns the IRC object of the specified IRC network. If one isn’t specified, the IRC object of the IRC network the command was called on is returned.
plugin, require=True
Returns the plugin specified by the arg or None. If require is True, an error is raised if the plugin cannot be retrieved.
boolean
Converts the text string to a boolean value. Acceptable true values are: “1”, “true”, “on”, “enable”, or “enabled” (case-insensitive). Acceptable false values are: “0”, false”, “off”, “disable”, or “disabled” (case-insensitive).
filename
Used to get a filename argument.
commandName
Returns the canonical command name version of the given string (ie, the string is lowercased and dashes and underscores are removed).
text
Takes the rest of the arguments as one big string. Note that this differs from the “anything” context in that it clobbers the arg string when it’s done. Using any converters after this is most likely incorrect.
Contexts List¶
What contexts are available for me to use?
The list of available contexts is below. Unless specified otherwise, it can be assumed that the type returned by the context itself matches the type of the converter it is applied to.
Options¶
- optional
- Look for an argument that satisfies the supplied converter, but if it’s not the type I’m expecting or there are no arguments for us to check, then use the default value. Will return the converted argument as is or None.
- additional
- Look for an argument that satisfies the supplied converter, making sure that it’s the right type. If there aren’t any arguments to check, then use the default value. Will return the converted argument as is or None.
- first
- Tries each of the supplied converters in order and returns the result of the first successfully applied converter.
Multiplicity¶
- any
- Looks for any number of arguments matching the supplied converter. Will return a sequence of converted arguments or None.
- many
- Looks for multiple arguments matching the supplied converter. Expects at least one to work, otherwise it will fail. Will return the sequence of converted arguments.
- commalist
- Looks for a comma separated list of arguments that match the supplied converter. Returns a list of the successfully converted arguments. If any of the arguments fail, this whole context fails.
Other¶
- rest
- Treat the rest of the arguments as one big string, and then convert. If the conversion is unsuccessful, restores the arguments.
- getopts
- Handles –option style arguments. Each option should be a key in a dictionary that maps to the name of the converter that is to be used on that argument. To make the option take no argument, use “” as the converter name in the dictionary. For no conversion, use None as the converter name in the dictionary.
- reverse
- Reverse the argument list, apply the converters, and then reverse the argument list back.
Final Word¶
Now that you know how to use wrap, and you have a list of converters and contexts you can use, your task of writing clean, simple, and safe plugin code should become much easier. Enjoy! | http://doc.supybot.aperio.fr/en/latest/develop/using_wrap.html | CC-MAIN-2020-10 | refinedweb | 2,755 | 63.7 |
Haskell Quiz/Fuzzy Time/Solution Dolio
From HaskellWiki
This doesn't strictly do what the Ruby Quiz specification demands (yet, at least). It defines time and fuzzy-time datatypes, plus a fuzzy clock monad in which you can tick off minutes, and it reports a fuzzy time at each tick. I didn't feel like messing with the system timer as the quiz suggests. However, by building around IO (and getting rid of WriterT for the reporting), one could probably get that effect if desired.
module Main where

import Control.Arrow
import Control.Monad.Writer
import Control.Monad.State
import System
import System.Random
import MonadRandom

newtype Time  = T (Int, Int) deriving (Eq)
newtype Fuzzy = F (Int, Int) deriving (Eq)

type FuzzyClock a = WriterT String (StateT (Time, Fuzzy) (Rand StdGen)) a

instance Show Time where
    show (T (h, m))
        | m < 10    = show h ++ ":0" ++ show m
        | otherwise = show h ++ ":"  ++ show m

instance Read Time where
    readsPrec n s = do (h, c:s') <- readsPrec n s
                       guard (c == ':')
                       (m, s'') <- readsPrec n s'
                       return (time h m, s'')

instance Show Fuzzy where
    show (F (h, m)) = show h ++ ":" ++ show m ++ "-"

-- A safe constructor for time values
time :: Int -> Int -> Time
time hour min
    | hour < 1 || hour > 12 = error "Invalid hour."
    | min < 0  || min > 59  = error "Invalid minute."
    | otherwise             = T (hour, min)

-- Modifies a time value by adding n minutes (negative n ticks backwards)
tickT :: Int -> Time -> Time
tickT n (T (hour, min)) = time h m
    where (d, m) = divMod (min + n) 60
          h      = (hour + d - 1) `mod` 12 + 1

-- Modifies a fuzzy time value by adding 10n minutes (negative n ticks backwards)
tickF :: Int -> Fuzzy -> Fuzzy
tickF n (F (hour, min)) = F (h, m)
    where (d, m) = divMod (min + n) 6
          h      = (hour + d - 1) `mod` 12 + 1

-- Constructs a fuzzy time value from a regular time value
toFuzzy :: Time -> Fuzzy
toFuzzy (T (h, m)) = F (h, m `div` 10)

-- Given a time t, computes a random fuzzy time within a 10-minute range of t
fuzz :: MonadRandom m => Time -> m Fuzzy
fuzz t = do off <- getRandomR (-5, 5)
            return . toFuzzy $ tickT off t

-- Ticks off a minute on the fuzzy clock, reporting the current fuzzy time
tick :: FuzzyClock ()
tick = do (t, f) <- fmap (first $ tickT 1) get
          g <- fuzz t
          let h = if g == tickF (-1) f then f else g
          tell $ show t ++ "\t" ++ show h ++ "\n"
          put (t, h)

main = do [time, mins] <- getArgs
          let t = read time
              m = read mins
          f <- evalRandIO $ fuzz t
          l <- evalRandIO . flip evalStateT (t, f) . execWriterT $ replicateM_ m tick
          putStr l

Source: http://www.haskell.org/haskellwiki/index.php?title=Haskell_Quiz/Fuzzy_Time/Solution_Dolio&oldid=7536
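The only subtle part of the solution is the wraparound arithmetic in tickT: carry whole hours out of the minutes with divMod, then map the hour into the 1..12 clock range. A small Python sketch of the same arithmetic (not part of the original solution) makes it easy to check by hand:

```python
def tick_t(hour, minute, n):
    """Advance a 12-hour clock time by n minutes (n may be negative),
    mirroring the divMod / mod arithmetic of tickT above."""
    d, m = divmod(minute + n, 60)      # carry whole hours out of the minutes
    h = (hour + d - 1) % 12 + 1        # wrap the hour into the 1..12 range
    return h, m

print(tick_t(12, 55, 10))   # 12:55 plus 10 minutes wraps to (1, 5)
print(tick_t(1, 5, -10))    # negative ticks work too: back to (12, 55)
```

The `(hour + d - 1) % 12 + 1` trick maps 0-based modular arithmetic onto the 1..12 clock face without special-casing 12.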
A babel plugin adding the ability to rewire module dependencies. This makes it possible to mock modules for testing purposes.
e.g.
MyComponent.jsx has
import someCoolFunctionThatShouldBeMocked
PhantomJS 2.1.1 (Mac OS X 0.0.0) ERROR LOG: 'named exports are not supported in *.vue files.' However, I do not see any __RewireAPI__ being added to exports in my test files :(
ispartais involved
How can I spy on a function that is referenced from an object? Example:
a.js
const map = { a, b, c };

function run(options) {
  map[options.val]();
}

function a() {...}
function b() {...}
function c() {...}
a.spec.js
import { a, __RewireAPI__ as ARewireAPI } from './a';

describe('a', () => {
  describe('run', () => {
    const spy = jasmine.createSpy('spy').and.callFake(() => { ... });

    beforeEach(() => {
      ARewireAPI.__Rewire__('b', spy);
    });

    afterEach(() => {
      TooltipRewireAPI.__ResetDependency__('b');
    });

    it('calls b()', () => {
      a.run({val: 'b'}); // doesn't call the spy because what is actually being called is the reference from the map object
    });
  });
});
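The underlying problem in the question is language-agnostic: `map` captured references to the original function objects when the module was first evaluated, so rewiring the module-level binding `b` afterwards never changes what `map['b']` points to. A Python sketch of the same trap (all names hypothetical, not from the plugin):

```python
def b():
    return "real b"

# The dispatch table captures the function object itself at definition
# time, just like `const map = { a, b, c }` in the question above.
dispatch = {"b": b}

def run(val):
    return dispatch[val]()

def fake_b():
    return "mocked b"

# Rebinding the module-level name (which is what __Rewire__ effectively
# does) leaves the reference already stored in the table untouched.
b = fake_b
print(run("b"))              # still "real b" — the table holds the old object

# Looking the name up at call time instead makes the mock visible:
dispatch_late = {"b": lambda: b()}
print(dispatch_late["b"]())  # "mocked b"
```

The usual fix is the second pattern: resolve the name at call time rather than storing the function reference in the table, so a rewired binding takes effect.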
TypeError: 'undefined' is not a function (evaluating '_get__('AGGrid').__ResetDependency__('AgGridReact')') at :322

Source: https://gitter.im/speedskater/babel-plugin-rewire
Unofficial guide to AstroArt's script commands

Prologue

Since I began using a CCD camera for my astronomical observations, my reference software for the acquisition and processing of images has always been AstroArt. Naturally, over the years I have had occasion to use other software of this type, some of it certainly very good; and although my limited experience does not allow me to say much by way of benchmarking, I believe AstroArt's unbeatable immediacy lets its users become productive almost at once. A very friendly user interface, combined with the scientific rigor of the algorithms used, has made it one of the leading software packages in amateur astronomy. The advent of version 3.0 and later, with script commands, has further expanded its potential, so that it is possible (given the necessary hardware) to perform automated pointing and acquisition, which greatly facilitates observing sessions.

Anyone with a minimum of programming experience knows how difficult and tedious it is to write, and above all to keep updated, a user manual. It therefore happens that the available manuals are not always up to date with the evolution of the software, and that some features are not explained in enough detail for the average user. This guide seeks to illustrate some commands of AstroArt's scripting language that are poorly documented in the user manual, or not documented at all. As the title suggests, this is an unofficial guide: although parts of it are modeled on the original user manual, it does not claim to replace it, but rather to serve as an informal complement with regard to the scripting commands of this magnificent software. The commands described are present in AstroArt version 4.0, GUI 3.8 and later. Any inaccuracies and errors in this guide are due solely to myself, and I ask pardon for them in advance.

The Scripts

A script is a list of commands executed in sequence.
Through scripts you can automate several observational procedures, such as the automated search for asteroids and supernovae, photometric imaging, and more. This is possible because AstroArt can control not only the CCD camera, but also the filter wheel and the telescope (if it supports computerized pointing).

The AstroArt scripting language, also known as "ABasic", is a dialect of BASIC with a syntax very similar to that of many flavors of BASIC (GWBasic(TM), QuickBasic(TM), Visual Basic(TM), VBScript(TM), etc.).

Example of a script (taken from the AstroArt manual and testable with the CCD simulator in AstroArt):

Camera.Start(10)
Camera.Wait
Image.Save("C:\sample.fit")

The first command line starts an exposure of 10 seconds; the second waits for the end of the exposure before continuing with the script. Finally, the third line saves the image just captured to drive C:\ under the name "sample.fit".

Another example of a script, again taken from the AstroArt manual: the acquisition of 50 images for supernova detection.

for i = 1 to 50
  ra = Telescope.List.Ra(i)
  de = Telescope.List.Dec(i)
  name$ = Telescope.List.Name$(i)
  Telescope.Goto(ra,de)
  Telescope.Wait
  Camera.Start(60)
  Camera.Wait
  Image.Rename(name$ + ".fit")
  Image.save(name$ + ".fit")
next i

In this simple script the coordinates and names of the galaxies are taken directly from the list pre-loaded in the Telescope window. For each galaxy the script points the telescope, starts the exposure, and saves the acquired image.

Variables and functions

ABasic supports two types of variables: numeric variables and string variables. The former contain a numeric value, while a string variable contains an alphanumeric string.

Numeric variables
They contain a number, internally represented as a double-precision floating-point value (64 bits, 15 digits).

String variables
A string variable contains alphanumeric text.
This text may consist of one or more lines. The maximum size of a string variable is 64 MB. Example:

a$ = "Hello"
b$ = a$ + "World"

The string variable b$ now contains the string "HelloWorld".

A single character of a string can be read using square brackets: continuing the previous example, a$[1] returns "H", a$[2] returns "e", and so on. If the index in brackets exceeds the length of the string, counting restarts from the beginning; therefore a$[6] still returns "H".

A single row of a multi-line string can be read using curly brackets. Example: if the variable a$ contains the following text distributed over three lines

This text
is distributed
on three lines

then a${2} returns the second line, "is distributed". The function count(a$) returns the number of lines in a multi-line string.

Reserved words
These words are part of the ABasic language, so they must not be used as variable names. They are:

IF THEN ELSE ENDIF OR AND NOT MOD REM FOR NEXT STEP BREAK CONTINUE WHILE ENDWHILE GOTO GOSUB PRINT INPUT END CLS

Let us now describe them in more detail.

Loop commands: FOR, NEXT, STEP, BREAK, CONTINUE, WHILE, ENDWHILE

ABasic supports two types of loop: the For-Next loop and the While-Endwhile loop. The complete syntax of the FOR-NEXT loop is as follows:

FOR <variable> = <expression> TO <expression> [STEP <numeric constant>]
...
...
NEXT <variable>

The For-Next construct repeatedly executes the instructions between the FOR (start of loop) and NEXT (end of loop) commands. A simple example prints the numbers from 1 to 10 to the screen; "a" is the control variable.

for a = 1 to 10
  print a
next a

The BREAK command exits from a loop; in the following example the cycle is interrupted when the value of the control variable 'a' becomes larger than 5.
for a = 1 to 10
  print a
  if a>5 then break
next a

The CONTINUE command is used inside a For-Next loop and skips immediately to the next iteration, as if NEXT had been reached. For example:

for a = 1 to 10
  print a
  if a > 5 then continue
  print "Test"
next a

Finally, the STEP clause is used at the start of a For-Next loop to set the increment of the control variable. For example:

for a = 1 to 10 step 2
  print a
next a

Here the control variable 'a' takes only the values 1, 3, 5, 7, 9, skipping all the even numbers from 1 to 10. STEP may also take negative values, for example:

for a = 10 to 1 step -1
  print a
next a

The WHILE-ENDWHILE construct, instead, evaluates its condition at the beginning of the cycle. If the condition is false, the cycle ends and execution continues after the ENDWHILE command. For example:

a = 1
while a <= 10
  print a
  a = a+1
endwhile
print "cycle finished"

Because WHILE evaluates the condition at the beginning of the cycle, the instructions inside the loop may also never be executed at all. The BREAK and CONTINUE commands can be used in a WHILE-ENDWHILE loop exactly as described for the FOR-NEXT loop.

Conditional commands: IF, THEN, ELSE, ENDIF, OR, AND, NOT

The IF-THEN-ENDIF command evaluates a logical expression and directs the program flow according to the result of that expression. Some examples of logical expressions:

a > 5 and b$ = "astro"
a >= 3 or not (b = 5)

The logical and mathematical operators used in logical expressions have a scale of priorities; the following lists them from highest to lowest:

( ), < , > , <= , >= , <> , = , NOT, AND, OR

Extended syntax of the IF-THEN-ENDIF command:

IF <logical expression> THEN
...
[ELSE]
...
ENDIF

Example of the IF-THEN-ENDIF command:

for i = 10 to -10 step -1
  if i>0 then
    print "positive value"
  endif
  if i=0 then print "zero"
  if i<0 then
    print "negative value"
  endif
next i

Compact syntax of the IF-THEN command:

IF <logical expression> THEN <instruction> [ELSE <instruction>]

Example of the compact syntax:

for a = 1 to 10
  if a <= 5 then print "-" else print "+"
next a

Other commands:

REM — Any text preceded by REM is ignored during program execution; this allows comments in the command list. An apostrophe (') works in the same way as REM.
  REM notes
  'notes

GOTO n — Jumps execution to line n, ignoring everything between the GOTO n command and line n.
  for i = 1 to 10
    if i = 5 then goto 10
    print i
  next i
  10 print "I jumped on line 10"
  Output:
  1 2 3 4
  I jumped on line 10

GOSUB n / RETURN — Jumps execution to line n like GOTO, but when the RETURN command is met, execution resumes at the line immediately below the GOSUB n command.
  for i = 1 to 10
    if i = 5 then gosub 10
    print i
  next i
  END
  10 print "I jumped on line 10"
  print "but now back where I started"
  RETURN
  Output:
  1 2 3 4
  I jumped on line 10
  but now back where I started
  5 6 7 8 9 10

PRINT — Prints to the screen a text "s", a numeric variable n, or a string variable s$. Text and variables may also be combined on the same command line.
  a=4
  b$="Version"
  c$="execute with"
  print "script AstroArt "
  print c$+" Astroart "+b$;a
  Output:
  script AstroArt
  execute with Astroart Version 4

INPUT — Prompts the user for a numeric variable or a string variable.
  input "insert a number: ",n
  input "insert a string: ",s$
  print n
  print s$
  Output: the program shows two successive input windows; the entered data then appear in the output window.

END — Ends execution of a script.

CLS — Clears the output window.
Example (END):
  print "row 1"
  print "row 2"
  print "row 3"
  END
  print "row 4"
  Output:
  row 1
  row 2
  row 3
  (the script terminates at the END command, so "row 4" is never printed)

Example (CLS):
  for i = 1 to 10
    print "abcdefghilmo"
  next i
  message ("press 'OK' for clearing the output window")
  CLS

Numeric functions

pi() — Returns the value of pi.
  print pi()            Output: 3.141592654

sin(n), cos(n), tan(n) — Sine, cosine and tangent of the angle n in radians. For an angle in degrees use, for example, sin(n*pi()/180) or sin(degtorad(n)).
  print sin(90)         Output: 0.8939966636
  print cos(90)         Output: -0.4480736161
  print tan(50)         Output: -0.271900612

exp(n) — Napier's number e raised to the power n.
  print exp(10)         Output: 22026.46579

ln(n) — Natural (base e) logarithm of n.
  print ln(10)          Output: 2.302585093

log10(n) — Base-10 logarithm of n.
  print log10(100)      Output: 2

log2(n) — Base-2 logarithm of n.
  print log2(50)        Output: 5.64385619

sqr(n) — Square root of n.
  print sqr(16)         Output: 4

abs(n) — Absolute value of n.
  print abs(15)         Output: 15
  print abs(-15)        Output: 15

rnd(n) — Returns a random number between 0 and n.
  for i = 1 to 5
    print rnd(10)
  next i
  Output (sample values): 0.9364372841  6.289201556  2.800253921  8.77184656  1.612342733

sgn(n) — Sign of n: -1 if n < 0, 0 if n = 0, 1 if n > 0.
  print sgn(-12.345)    Output: -1
  print sgn(0)          Output: 0
  print sgn(12.345)     Output: 1

fix(n), int(n) — Integer part of n.
  print fix(8.771845)   Output: 8
  print int(8.771845)   Output: 8

round(n[,n1]) — Rounds n to the n1-th decimal place. If n1 is omitted, n is rounded to zero decimal places.
  print round(8.771845,3)   Output: 8.772
  print round(8.771845)     Output: 9

frac(n) — Fractional part of n.
  print frac(10.45678)  Output: 0.45678

asin(n), acos(n) — Arcsine and arccosine of n, in radians. Valid in the range (-1, 1); outside this range the null value "NAN" is returned. To convert to degrees: radtodeg(asin(n)) or asin(n)*180/pi().
  print asin(1)               Output: 1.570796327 (radians; 90 degrees)
  print acos(1)               Output: 0

atan(n) — Arctangent of n, in radians. To convert to degrees: radtodeg(atan(n)).
  print atan(1)               Output: 0.7853981634 (radians; 45 degrees)

atan2(nx,ny) — Arctangent2, in radians, of a point with coordinates (nx, ny).
  print atan2(40,50)          Output: 0.6747409422

sinh(n), cosh(n), tanh(n) — Hyperbolic sine, cosine and tangent of n (radians).
  print sinh(10)        Output: 11013.23287
  print cosh(10)        Output: 11013.23292
  print tanh(10)        Output: 0.9999999959

asinh(n), acosh(n) — Hyperbolic arcsine and arccosine of n, in radians.
  print asinh(100)      Output: 5.298342366
  print acosh(10)       Output: 2.993222846

atanh(n) — Hyperbolic arctangent of n, in radians. Valid in the range (-1, 1); outside this range the infinite value "INF" is returned.
  print atanh(0.5)      Output: 0.5493061443

degtorad(n), radtodeg(n) — Convert n from degrees to radians and vice versa.
  print degtorad(57.29577951)   Output: 0.9999999999
  print radtodeg(1)             Output: 57.29577951

modulo(n1,n2) — Despite its name, computes the expression sqr((n1^2)+(n2^2)), i.e. the magnitude of the vector (n1, n2); the remainder of a division is obtained with n1 MOD n2 instead.
  print modulo(1,5)     Output: 5.099019514

len(s) — Number of characters in a string s (spaces included).
  a=len("viva AstroArt!")
  print "the string contains ";a;" characters"
  Output: the string contains 14 characters

val(s) — Converts a string of numeric characters into the corresponding number.
  a$="1"
  print val(a$)+2       Output: 3

asc(s) — ANSI code of the leftmost character of the string.
  print asc("AstroArt") Output: 65

pause(n) — Pauses script execution for n seconds.
  pause(30)

n1 MOD n2 — Remainder of the division n1/n2.
  print 14 mod 4        Output: 2

count(s) — Number of rows in a multi-line string s.
  a$="bye"+crlf$()+"bye"
  print a$
  print "the number of rows in the variable string is: ";count(a$)
  Output:
  bye
  bye
  the number of rows in the variable string is: 2

counter(n) — This function is mentioned in the AstroArt manual but does not exist in ABasic.

String functions

ucase$(s) — Converts all characters of a string s to uppercase.
  print ucase$("astroart")
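The surprising semantics of modulo(n1,n2) described above — a vector magnitude rather than a remainder — are easy to confirm against the manual's example. A Python sketch (the function name is hypothetical):

```python
import math

# ABasic's modulo(n1, n2) returns the magnitude sqrt(n1^2 + n2^2);
# the remainder of a division is spelled n1 MOD n2 in ABasic instead.
def abasic_modulo(n1, n2):
    return math.sqrt(n1 ** 2 + n2 ** 2)

print(round(abasic_modulo(1, 5), 9))   # 5.099019514 — matches the manual's example
print(14 % 4)                          # the true remainder, as in 14 mod 4: 2
```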
  Output: ASTROART

lcase$(s) — Converts all characters of a string s to lowercase.
  print lcase$("ASTROART")      Output: astroart

ltrim$(s), rtrim$(s) — Remove the empty spaces to the left (ltrim$) or to the right (rtrim$) of a string.
  print "without ltrim$: "+"   astroart"
  print "with ltrim$: "+ltrim$("   astroart")
  Output:
  without ltrim$:    astroart
  with ltrim$: astroart

chr$(n) — Character corresponding to ANSI code n.
  print chr$(64)        Output: @

str$(n) — Converts a number n into its string representation.
  a=234
  a$=str$(a)
  print "a = ";a;" is a number"
  print "a$ = "+a$+" is a string"

mid$(s,n1,n2) — Substring of s starting at character n1 (counting from the left), n2 characters long.
  print mid$("abcdefghilmnopqrstuvz",2,5)   Output: bcdef

hex$(n) — Converts a decimal number n into the string representing its value in hexadecimal format.
  print hex$(1000)      Output: 3E8

left$(s,n), right$(s,n) — The first (left$) or the last (right$) n characters of the string s.
  print left$("AstroArt",5)     Output: Astro
  print right$("AstroArt",3)    Output: Art

ltab$(s,n) — Pads the string s with spaces on the right up to a total width of n characters (left-aligned field). It has an effect only when n-len(s) > 0.
  a$="Astro"
  for n=0 to 10
    print ltab$(a$,n)+str$(n)
  next n
  (for n <= 5 the string is unchanged; for larger n, spaces appear between "Astro" and the number)

rtab$(s,n) — Pads the string s with spaces on the left up to a total width of n characters (right-aligned field). It has an effect only when n-len(s) > 0.

format$(n,s) — Replaces the '0' characters in the string s with the digits of the numeric value n, working from right to left. If s contains fewer 0's than n has digits, the remaining digits are shown to the left of the last 0; if it contains more 0's than n has digits, the extra 0's on the left remain as zeros.
  aaaammgg$=left$(date$(),4)+mid$(date$(),6,2)+right$(date$(),2)
  print format$(val(aaaammgg$),"Or: year 0000 month 00 day 00")
  Output: Or: year 2010 month 10 day 06

time$() — Returns the current time as a string in "hh mm ss" format.
  print time$()         Output: 09 06 36

date$() — Returns the current date as a string in "aaaa mm gg" (yyyy mm dd) format.
  print date$()         Output: 2010 10 06

crlf$() — Equivalent to a carriage return; inserts a line break.
  print "astroart "+crlf$()+"astronomical "+crlf$()+"software"
  Output:
  astroart
  astronomical
  software

opentext$(s) — Opens a text file named s. The string s must include the file extension and may include the path; if the path is omitted, the file is searched for only in the current directory.
  file$=opentext$("C:\WINDOWS\system32\rsvpcnts.h")
  print file$
  Output: the contents of the file (/*++ Copyright (c) 1996 Microsoft Corporation ... etc.)

savetext$(s1,s2) — Writes the string s1 to a text file named s2 (extension and path rules as for opentext$). Warning: this command overwrites any existing file with the same name in the same folder.

copytext$(s) — Copies the string s to the clipboard.

pastetext$() — Pastes the contents of the clipboard into the output window or into a variable.
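The right-to-left digit filling performed by format$ can be sketched in Python (the function name is hypothetical; the "more digits than placeholders" edge case described above is left out for brevity):

```python
def abasic_format(n, template):
    """Fill the '0' placeholders in template with the digits of n,
    working from right to left, like ABasic's format$(n, s)."""
    digits = list(str(n))
    out = []
    for ch in reversed(template):
        if ch == "0":
            # consume digits right-to-left; surplus placeholders stay '0'
            out.append(digits.pop() if digits else "0")
        else:
            out.append(ch)
    return "".join(reversed(out))

print(abasic_format(20101006, "year 0000 month 00 day 00"))
# year 2010 month 10 day 06
```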
Example (savetext$):
  print savetext$("AstroArt, the best software for astronomical imaging","c:\AA.txt")
  (a text file named 'AA.txt' containing the sentence appears in c:\)

Example (copytext$ / pastetext$):
  print copytext$("AstroArt")
  a$=pastetext$()
  print a$
  Output: AstroArt

finddir$(s1,s2) — Searches for a directory named s2 in the path s1.
  input "Path directory ",path$
  input "directory name to find",dir$
  a$=finddir$(path$,dir$)
  print "Search: "+a$
  al$=lcase$(a$)
  dirl$=lcase$(dir$)
  if al$=dirl$ then
    print "directory found"
  else
    print "directory NOT found"
  endif

findfile$(s1,s2) — Searches for a file named s2 in the path s1.
  pathltp$="c:\"
  b$="AA.txt"
  c$= pathltp$+b$
  print savetext$("AstroArt, the best software for astronomical imaging",c$)
  input "file name? ",obj_name$
  findf$=findfile$(pathltp$,obj_name$+".txt")
  if findf$=(obj_name$+".txt") then
    print "File FOUND!"
  endif
  if findf$<>(obj_name$+".txt") then
    print "File NOT FOUND!"
  endif
  Output: "File FOUND!" if you typed 'AA' at the prompt, otherwise "File NOT FOUND!"

message(s) — Shows a message box containing the string s.
  Message("Hello Milky way")

ra$(n) — Converts a Right Ascension value expressed as a decimal number n into a string in "hh mm ss.s" format.
  alpha=05.345678
  print "Right Ascension: "+ra$(alpha)
  Output: Right Ascension: 05 20 44.4

dec$(n) — Converts a Declination value expressed as a decimal number n into a string in "+/-gg pp ss" format.
  delta=-12.345678
  print "Declination: "+dec$(delta)
  Output: Declination: -12 20 44

createdir(s) — Creates a directory with the name and path specified by s. If no drive and/or path is specified, the directory is created in the current directory.
  createdir("c:\images")

Functions for the CCD camera, filter wheel and telescope

Camera.Start(n[,n1]) — Takes an exposure of n seconds. Set n1 to zero to take a dark frame.
  Camera.Start(60,0)
Camera.Wait — Waits until the end of the exposure.
Camera.Exposing — Returns 1 if an exposure is in progress, otherwise 0.
Camera.Binning(n) — Sets the binning mode; n is an index into the binning list in the "Settings" page of the CCD panel.
  Camera.Binning(2)
Camera.SelectDarkFrame — Selects the current image as dark frame and automatically enables the correction for the following images.
  Camera.SelectDarkFrame()
Camera.EnableDarkFrame(n) — Enables (n = 1) or disables (n = 0) the dark frame correction.
  Camera.EnableDarkFrame(0)
Camera.Stop — Stops the current exposure.
Guider.Stop — Stops the current guiding session.
Guider.Close — Closes the guiding window.
Guider.Select(n) — Selects which CCD should be used for autoguiding: 1 = main CCD, 2 = guide CCD, 3 = secondary camera.
  Guider.Select(2)
Guider.MoveReference([dx, dy]) — Changes the x and y coordinates of the reference star, to perform a "dithered" guide. If dx and dy are not specified, the shift is pseudo-random.
  Guider.MoveReference()
  Guider.MoveReference(-0.3, 0.7)
Camera.Connect([driver]) — Connects the CCD driver to Astroart.
  Camera.Connect("Simulator")
Camera.Disconnect — Disconnects the CCD driver from Astroart.
Camera.StartAutoguide([x, y]) — Starts an autoguide session. If the x and y parameters (the coordinates of the guide star) are not given, this command takes a sample image and automatically selects the best star.
  x = Image.GetPointX()
  y = Image.GetPointY()
  Camera.StartAutoguide(x,y)
Camera.StopAutoguide() — Stops the autoguide session.
Camera.Autofocus([x,y]) — Starts an autofocus session (requires the Ascom autofocus plugin). If the x and y parameters (the coordinates of the focus star) are not given, this command automatically selects the best star from the current image.
  Camera.Autofocus()
Focuser.GotoRelative(n) — Moves the focuser up or down by the specified amount n.
  Focuser.GotoRelative(-50)
Focuser.GotoAbsolute(n) — Moves the focuser to a given coordinate.
  Focuser.GotoAbsolute(1000)
Telescope.Goto(ra,dec) — Moves the telescope to the equatorial coordinates ra (0..24), dec (-90..+90).
  Telescope.Goto(23.45, 44.12)
Telescope.Wait — Waits until the telescope has completed a Goto.
Telescope.Stop — Stops the telescope.
Telescope.Ra, Telescope.Dec — Return the current position of the telescope.
  x = Telescope.Ra
  y = Telescope.Dec
Telescope.Pulse(dir$ [,time]) — Moves the telescope for time seconds in the dir$ direction ("N","S","E","W"). If time is negative, the direction is inverted; if time is omitted, the telescope moves until a Telescope.Stop command.
  Telescope.Pulse("N", 0.5)
Telescope.Speed(n) — Sets the speed for Pulse motion: 1=guide, 2=center, 3=find, 4=slew.
  Telescope.Speed(4)
Telescope.List.Open(file$) — Opens a text file which contains objects and coordinates (see chapter 6.1 of the manual).
  Telescope.List.Open("c:\data\galaxies.txt")
Telescope.List.Count — Returns how many objects are listed in the Telescope window.
  n = Telescope.List.Count
Telescope.List.Clear — Clears the list of objects in the Telescope window.
Telescope.List.Ra(n), Telescope.List.Dec(n) — Return the coordinates of the n-th object of the list.
  x = Telescope.List.Ra(25)
Telescope.List.Name$(n) — Returns the name of the n-th object of the list.
  a$ = Telescope.List.Name$(42)
Telescope.Send(s) — Sends a string s to the telescope via the serial port.
  Telescope.Send("#Hc#")
Wheel.Filters — Returns the number of filters of the filter wheel.
  n = Wheel.Filters
Wheel.Goto(n), Wheel.Goto(s) — Moves the filter wheel to the given filter, expressed by its number n or by its name s.
  Wheel.Goto(4)
  Wheel.Goto("R")
Image.Save(filename$) — Saves the current image with the path and file name specified by filename$. If the path is omitted, the image is saved in the active directory.
  Image.Save("C:\images\saturn.fit")
Image.Rename(name$) — Renames the current image.
  Image.Rename("jupiter.fit")
Image.Open(filename$) — Opens an image from disk (path rules as for Image.Save).
  Image.Open("C:\moon.tif")
Image.GetKey$(key$) — Reads a string value from the FITS header entry named key$.
  a$ = Image.GetKey$("OBJECT")
Image.GetKey(key$) — Reads a numeric value from the FITS header entry named key$.
  a = Image.GetKey("EXPOSURE")
Image.SetKey(key$,value) — Writes a parameter named key$ with the given value into the header.
  Image.SetKey("COMMENT","Bad seeing")
  Image.SetKey("JD",34234)
Image.FlipH, Image.FlipV — Flip the current image horizontally or vertically. These features require AstroArt 4.0 + Service Pack 1.
Image.Resize(x,y) — Resizes the image to horizontal size x and vertical size y. Requires Service Pack 1.
  Image.Resize(320,240)
Image.BlinkAlign — Aligns the current image with the next one inside the Astroart desktop and blinks them. Requires Service Pack 1.
Image.Close — Closes the current image.
Image.GetPointX(), Image.GetPointY() — Return the coordinates of the selected point (or star, or rectangle) on the current image.
  x = Image.GetPointX()
Image.DSS(ra,dec,name$) — Creates a new image from the 'Digitized Sky Survey' atlas, centered on the coordinates ra and dec and named name$. Needs the DSS plugin.
  Image.DSS(12.034,45.213,"asteroid.fit")
Output.Save(filename$) — Saves the output panel to disk (path rules as above).
  Output.Save("C:\Log.txt")
Output.Copy — Copies the output panel to the clipboard.
System.Execute(filename$) — Executes an external program (path rules as above).
  System.Execute("C:\Windows\Notepad.exe myfile.txt")
System.Broadcast(message$, wparam, lparam) — Sends a Windows message to all windows; this can be used to control other programs. The function is equivalent to:
  h = RegisterWindowMessage(message$)
  SendNotifyMessage(HWND_BROADCAST,h,wparam,lparam)

Undocumented functions

system.shutdown — Closes AstroArt and powers off the computer. Irreversible; use carefully.
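The sexagesimal conversions performed by ra$ and dec$ can be sketched in Python (function names hypothetical), reproducing the manual's example values:

```python
def ra_str(hours):
    """Decimal hours -> 'hh mm ss.s', like ABasic's ra$(n)."""
    h = int(hours)
    m = int((hours - h) * 60)          # whole minutes
    s = ((hours - h) * 60 - m) * 60    # seconds, with one decimal kept
    return "%02d %02d %04.1f" % (h, m, s)

def dec_str(deg):
    """Decimal degrees -> '+/-gg pp ss', like ABasic's dec$(n)."""
    sign = "-" if deg < 0 else "+"
    d = abs(deg)
    g = int(d)
    m = int((d - g) * 60)
    s = int(round(((d - g) * 60 - m) * 60))
    return "%s%02d %02d %02d" % (sign, g, m, s)

print(ra_str(5.345678))     # 05 20 44.4
print(dec_str(-12.345678))  # -12 20 44
```

Both outputs match the manual's examples (05 20 44.4 and -12 20 44); whether AstroArt truncates or rounds the last digit in other cases is not documented, so this is only an approximation.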
Ask the PRO
LANGUAGES: C#
ASP.NET VERSIONS: 1.0 | 1.1
Get a Rich UI With WinForms
Use a DataReader to enumerate multiple sets of query results.
By Jeff Prosise
Q: I'm interested in using Windows Forms controls to build rich user interfaces in ASP.NET Web pages. I know how to deploy the controls and access their properties and methods from client-side script. Is it also possible to process a Windows Forms control's events in the browser?
A: It's no secret that one way to overcome the limitations of HTML and endow browser-based clients with rich user interfaces is to host Windows Forms controls in Web pages. As an example, here's a derived class named WebSlider that encapsulates a Windows Forms TrackBar control:
namespace Wintellect
{
public class WebSlider :
System.Windows.Forms.TrackBar {}
}
If this source-code file is compiled into a DLL named WebSlider.dll and deployed on a Web server, a tag of the following form declares an instance of it, causing a vertical TrackBar control to appear on the page (the original listing's tag was lost in conversion; this reconstruction follows the standard Internet Explorer syntax for hosting .NET controls, where classid combines the DLL path and the fully qualified type name):

<object id="Slider" classid="WebSlider.dll#Wintellect.WebSlider"></object>
The first time the page is accessed, Internet Explorer downloads the DLL and, with the .NET Framework's help, caches it on the client. The two chief requirements are that the client must be running Internet Explorer 5.01 or higher and must have the .NET Framework installed.
Accessing the control's properties and methods using client-side script is simplicity itself. If the form containing the control is named MyForm, this JavaScript statement moves the TrackBar thumb to position 5 by setting the control's Value property:
MyForm.Slider.Value = 5;
Writing client-side script that processes events fired by the control, however, is less straightforward. First, you must define an interface that encapsulates the events you wish to expose to the browser, and you must instruct the .NET Framework to expose the interface's members through a COM IDispatch interface. Then, you must associate this interface with the control.
Figure 1 contains the source code for a class derived from System.Windows.Forms.TrackBar that exposes to unmanaged code the Scroll events fired in response to thumb movements. The IWebSliderEvents interface defines the event as a method and assigns it a dispatch ID. (In COM, all methods exposed through an IDispatch interface require unique integer dispatch IDs.) Note that the method signature exactly matches that of the Scroll event defined in the base class. The [InterfaceType] attribute tells the .NET Framework to expose the IWebSliderEvents interface to unmanaged code as an IDispatch interface. The [ComSourceInterfaces] attribute adorning the class definition lets the framework know that WebSlider should support IWebSliderEvents as a source interface, which is COM speak for an interface used to source (fire) events.
using System;
using System.Runtime.InteropServices;
namespace Wintellect
{
[ComSourceInterfaces (typeof (IWebSliderEvents))]
public class WebSlider : System.Windows.Forms.TrackBar {}
[InterfaceType (ComInterfaceType.InterfaceIsIDispatch)]
public interface IWebSliderEvents
{
[DispId (1)] void Scroll (Object sender, EventArgs e);
}
}
Figure 1. This Windows Forms TrackBar control derivative exposes Scroll events to unmanaged code.
Figure 2 lists an .aspx file you can use to test the WebSlider control.
Figure 2. This .aspx file creates a WebSlider control and responds to Scroll events using client-side script.
Figure 3. Here's the WebSlider control in action. A client-side event handler continually updates the number shown below the control as the thumb moves.
Note that in order for unmanaged code hosted by a browser to "see" events fired from managed code, the assembly containing the control - in this case, WebSlider.dll - must be granted full trust on the client computer. (For security reasons, managed code can call unmanaged code only if it is granted permission to do so.) You can use the Microsoft .NET Framework wizards found in Control Panel\Administrative Tools to grant the assembly full trust. You must grant this permission for this example to work.
Q: Can you use a DataReader to enumerate the results of a query that produces multiple result sets?
A: You bet. The secret is the DataReader's NextResult method, which moves the virtual cursor maintained by the DataReader to the next result set. This code uses a compound query to create a SqlDataReader that encapsulates two result sets, then it outputs the first column in each result set to a console window:
SqlConnection connection = new SqlConnection
("server=localhost;database=pubs;uid=sa");
try {
connection.Open ();
SqlCommand command = new SqlCommand
("select title from titles; " +
"select au_lname from authors", connection);
SqlDataReader reader = command.ExecuteReader ();
do {
while (reader.Read ())
Console.WriteLine (reader[0]);
Console.WriteLine ();
} while (reader.NextResult ());
}
finally {
connection.Close ();
}
The ASPX file in Figure 4 demonstrates how you might use this knowledge in a Web page. Figure 4 uses the same compound query to initialize two DataGrids with one DataReader. Note the call to NextResult between calls to DataBind. This call points the cursor to the second result set prior to binding to the second DataGrid. This feature of the DataReader classes is especially handy when using stored procedures that return two or more result sets.
<%@ Import Namespace="System.Data.SqlClient" %>
void Page_Load (Object sender, EventArgs e)
{
if (!IsPostBack) {
SqlConnection connection = new SqlConnection
("server=localhost;database=pubs;uid=sa");
try {
connection.Open ();
SqlCommand command = new SqlCommand
("select title from titles; " +
"select au_lname from authors", connection);
SqlDataReader reader = command.ExecuteReader ();
// Initialize the first DataGrid
Titles.DataSource = reader;
Titles.DataBind ();
// Advance to the next result set
reader.NextResult ();
// Initialize the second DataGrid
Authors.DataSource = reader;
Authors.DataBind ();
}
finally {
connection.Close ();
}
}
}
Figure 4. This ASP.NET page uses a single SqlDataReader to initialize two DataGrids with two sets of query results.
Q: How do I assign a client-side OnClick handler to an ASP.NET button control? If I include an OnClick attribute in the control tag, ASP.NET looks for a server-side event handler with the specified name.
A: Because OnClick is a legal client- and server-side attribute, you must add OnClick attributes that reference client-side handlers programmatically to tags that declare runat="server" button controls. Suppose, for example, that the button is declared this way: | https://www.itprotoday.com/web-development/get-rich-ui-winforms | CC-MAIN-2019-04 | refinedweb | 1,009 | 59.19 |
- Training Library
- Amazon Web Services
- Courses
- .Net Microservices - Refactor and Design - Course One
Refactoring - Restful API Microservices
- [Instructor] Welcome back. In this lecture, we're gonna begin to build out each of the service components. Recalling that in the previous lecture, each of these projects was created as a ASP.NET web API project. So let's begin. We'll start with the AccountService. So expanding it out, the first thing we'll do is add in a Models directory. The Models directory will contain our models that we'll use and they will be a direct copy as they are already in the monolithic application.
So here, we'll add a Consumer model. So in this case, under General, we'll go with Empty Class and we'll name it Consumer.cs. Click on New. And then we'll paste in the code from the monolithic application equivalent model. Okay, so the consumer represents the shopper who logs on into the system and they have an ID property, Firstname property, Surname property, and Age property. And with equivalent getters and setters. So save that and that's our model done for the AccountService.
The next thing we'll do is under Controllers, we'll delete the default controller. And we'll add a new one. Under ASP.NET, if we scroll down to the bottom and we select Web API Controller Class. And we'll call it ConsumersController. Click New. Okay, so this is our controller class. And now, we'll need to update each of the methods, so that we have the right functions and the right behavior exposed through our RESTful API. So let's work on the Get method. So we'll change the signature of this. It will be public.
It will return an ActionResult of type IEnumerable of type Consumer. And it will be a Get taking no parameters. Okay, so you notice the Consumer model type is not available, so we'll need to add it in using and... The namespace was AccountService.Models. So that's what we need to import. Okay. We now need to update the return. So here, we're simply gonna do var consumers is equal to a new list of Consumer. And we'll return consumers. Okay, so within a list, we need to instantiate a number of consumer objects.
So new Consumer. And then we specify the properties for the Consumer. So in this case, it has an ID property, we'll set it to 111. There's a Firstname property, we'll set it to Jeremy. It has a surname. Gonna do Cook. And an age. 40. Okay, we'll repeat that. So set up another one. And another one. And one more. Okay, we'll change the IDs. And... Bob Smith. He can be 48.
Now, remember, we're starting out the backend, so in a production system, this will probably be a database call. So we're just doing... We're hardcoding the data value just to get through the demonstration quicker. Okay, so that completes the Get method. And I'll just update the documentation, so this will be consumers. Okay, work on the next one. And we'll update the signature of the method. So again, it's public. It will be an ActionResult of type Consumer and it is Get and it takes a parameter of ID, of type int. And then that's simply going to return a consumer whose ID is the ID that we passed in.
So updating this. It will be consumers. So we now do the Post. And I'm just gonna copy some code that I've already got, just to be a bit quicker. So we're posting to the api/consumers endpoints. And we post under a consumer. And from that, we'll save the consumer and return back the saved_consumer. For the Put, we, again, got some pre-formatted code that we'll just use. And we'll, again, update the comment. So for a Put, we send the Put view to this path and we pass in two parameters, an ID and a consumer. And then we're doing an update on that consumer. And for Delete, we'll just leave as empty.
Okay, we'll save that and we'll actually compile. Check for any errors. Okay, so that is good. So at this stage, we should be able to fire up this web API. So let's now change this project and we'll set it as the startup project. And then we'll also take a look at the run options. So we're going to run on this server off and this port. We'll change the open URL parameter. We'll update this to be consumers. Oop, nope, that's incorrect. It should be api/consumers. And we'll click OK.
So if all has gone well, when we start this up, we should get some JSON that comes back because we've specified that we are gonna call this path on launch. So it's good to go. And... Okay, what happened? Okay, so we've already got some inbound port 5001, which are my docker containers. So let me jump into my terminal. And we're gonna get to Docker later on in the course, but for now, I need to shut down some existing containers that I've got running. So let's stop 9f, d5, and 26.
Okay, that's good, so that should've released the port. And we can stop and try again. Excellent, so it's worked. So I've made a call to this path and we've got back this data. So the other thing we can do is we can take a copy of this and jump to our terminal. And we can use curl. So we do curl -i, and then the path. Okay, so we're getting a warning and this is our warning, so we need to exclude that. So we do -k. And there we go. So we can see that we've got a HTTP 200 response. And here is our JSON from our call to our web API. So the next thing we can do is we can see the structure of that, of that JSON that's being returned by... We'll remove the -i. And then we'll pipe out to jq. And there we go.
So that is indeed the same data that we set up in the AccountService web API. So we now need to do the same thing for the remaining two services. So we've completed the AccountService and we need to do the same setup for InventoryService and ShoppingService. So I'll do that in the background and skip ahead to keep the demo moving along. In the meantime, go ahead and close this lecture and I'll see you shortly in the next one, where we'll work on the Store2018 project.
Quickstart: Creating HTTP endpoints
Put an HTTP API on your app
Anvil lets you set up HTTP endpoints with very little code.
Follow this quickstart to create an HTTP endpoint that extracts two variables from the path and creates a response dynamically based on their values.
Create an app
Log in to Anvil and click ‘New Blank App’. Choose the Material Design theme.
Add a Server Module
In the App Browser, click the + next to Server Code to add a new Server Module.
You will see a code editor with a yellow background.
Create an HTTP endpoint
Write this function:
def add_numbers(a, b):
    a = int(a)
    b = int(b)
    return {
        'originals': [a, b],
        'sum': a + b,
    }
It takes two numbers and returns a data structure containing their sum.
Above this function, write
@anvil.server.http_endpoint('/add/:a/:b'):
@anvil.server.http_endpoint('/add/:a/:b')
def add_numbers(a, b):
    a = int(a)
    b = int(b)
    return {
        'originals': [a, b],
        'sum': a + b,
    }
Access your HTTP endpoint
At the bottom of your Server Module, there is a message like this:
Copy the URL without the
{path} part, and put
add/32/10 at the end. For my app that would be:
Open a new browser tab and access that URL. You should see this response:
{"sum":42,"originals":[32,10]}
Pass in different numbers to get a different answer:
This time the response is:
{"sum":4,"originals":[1,3]}
(To set up a nicer URL for your API, see deployment.)
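You can also call the endpoint from code instead of a browser; the response body is plain JSON. Here is a minimal sketch using only the standard library. The base URL below is a placeholder, so substitute your own app's endpoint root:

```python
import json
from urllib.request import urlopen

# Placeholder: replace with your own app's endpoint root
BASE = "https://my-app.anvil.app/_/api"

def add_url(a, b):
    # Build the URL for the /add/:a/:b endpoint
    return f"{BASE}/add/{a}/{b}"

def add_via_api(a, b):
    # Fetch the endpoint and decode the JSON body,
    # e.g. {"sum": 42, "originals": [32, 10]}
    with urlopen(add_url(a, b)) as resp:
        return json.load(resp)
```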
Copy the example app
Click on the button below to clone a finished version of this app into your account.
Next up
Want more depth on this subject?
Read more about HTTP APIs and requests in Anvil.
Want another quickstart?
Every quickstart is on the Quickstarts page. | https://anvil.works/docs/http-apis/creating-http-endpoints/quickstart.html | CC-MAIN-2020-29 | refinedweb | 296 | 71.24 |
CodeSynthesis XSD 4.0.0 Released
CodeSynthesis XSD 4.0.0 was released today.
In case you are not familiar with XSD, it is an open-source, cross-platform W3C XML Schema to C++ data binding compiler: given a schema, it generates C++ classes that represent the vocabulary as well as XML parsing and serialization code, so you can work with the data through domain-specific types rather than dealing with the intricacies of raw XML.
Ok, that was a major new release. So what are the major changes and new features? Well, firstly, we removed quite a bit of “outdated backwards-compatibility baggage”, such as support for Xerces-C++ 2-series (2.8.0) or Visual Studio 2003 (7.1). At the same time, the good news is there aren’t any changes that will break existing code. What has changed a lot are the compiler internals, and, especially, dependencies which will make building XSD from source much easier.
While removing old stuff we also added support for new C++ compilers that popped up since the last release. XSD now supports Clang as well as Visual Studio 2012 (11.0) and 2013 (12.0).
Ok, let’s now examine the major new features. The biggest is support for C++11 (the
--std c++11 option). While there are many little changes in the generated code when this mode is enabled, the major two are the reliance on move semantics and the use of
std::unique_ptr instead of deprecated
std::auto_ptr.
Another big feature in this release is support for ordered types. XSD flattens nested XML Schema compositors to give us a nice and simple C++ API. This works very well in most cases, especially for more complex schemas. Sometimes, however, this can lead to the loss of relative element ordering that can be semantically important to the application (the “unordered choice” XML Schema idiom). Now you can mark such types as ordered which makes XSD generate an additional order tracking API. So now you can have the best of both worlds: nice and simple API in most cases and additional order information in a few places where the simple API is not enough.
Once we had this implemented, another stubbornly annoying feature, mixed content, got sorted out. The problem with mixed content is that the text fragments can appear interleaved with elements in pretty much any order. Extracting the text is easy, it is preserving the order information relative to the elements, that’s the tricky part. But now we have the perfect mechanism for that. One user who was beta-testing this feature said: “I read the new documentation and I’m impressed.”
You can read more on ordered types in Section 2.8.4, “Element Order” and on mixed content in Section 2.13, “Mapping for Mixed Content Models” in the C++/Tree Mapping User Manual.
Another problem that is somewhat similar to mixed content is access to data represented by
xs:anyType and
xs:anySimpleType XML Schema types.
anyType allows any content in any order. You can think of its definition as a complex type with mixed content that has an element wildcard that allows any elements and an attribute wildcard that allows any attributes. In other words, anything goes. XSD already can represent wildcard content as raw DOM fragments so it was only natural to extend this support to
anyType content. Similar to
anyType,
anySimpleType allows any simple content, that is, any text (pretty similar to
xs:string in that sense). Now it is possible to get
anySimpleType content as a text string.
For more information on this new feature see Section 2.5.2, “Mapping for anyType” and Section 2.5.3, “Mapping for anySimpleType” in the C++/Tree Mapping User Manual.
Another cool feature in XSD is the stream-oriented, partially in-memory XML processing that allows parsing and serialization of XML documents in chunks. This allows us to process parts of the document as they become available as well as handle documents that are too large to fit into memory. XSD comes with an example, called
streaming, that shows how to set all this up. In this release this example has been significantly improved. It now has much better XML namespace handling and allows streaming at multiple document levels. This turned out to be really useful for handling large and complex documents such as GML/CityGML.
Last but not least, those of us who still prefer to write our own makefiles will be happy to know XSD now supports automatic make-style dependency generation, similar to the GCC’s
-M* functionality but just with sane option names. See the XSD Compiler Command Line Manual (man pages) for details. | https://codesynthesis.com/~boris/blog/2014/07/22/codesynthesis-xsd-4-0-0-released/ | CC-MAIN-2019-18 | refinedweb | 775 | 65.01 |
Too much stuff happening in a single plot? No problem—use multiple subplots!
This in-depth tutorial shows you everything you need to know to get started with Matplotlib’s
subplot() function.
If you want, just hit “play” and watch the explainer video. I’ll then guide you through the tutorial:
To create a matplotlib subplot with any number of rows and columns, use the
plt.subplot() function.
It takes 3 arguments, all of which are integers and positional only i.e. you cannot use keywords to specify them.
plt.subplot(nrows, ncols, index)
- nrows – the number of rows
- ncols – the number of columns
- index – the Subplot you want to select (starting from 1 in the top left)
So,
plt.subplot(3, 1, 1) has 3 rows, 1 column (a 3 x 1 grid) and selects
Subplot with index 1.
After
plt.subplot(), code your plot as normal using the
plt. functions you know and love. Then, select the next subplot by increasing the index by 1 –
plt.subplot(3, 1, 2) selects the second
Subplot in a 3 x 1 grid.

# Sample data (reconstructed: the original definitions were cut off;
# any three sequences work here)
linear = [x for x in range(10)]
square = [x**2 for x in range(10)]
cube = [x**3 for x in range(10)]

# 3x1 grid, first subplot
plt.subplot(3, 1, 1)
plt.plot(linear)

# 3x1 grid, second subplot
plt.subplot(3, 1, 2)
plt.plot(square)

# 3x1 grid, third subplot
plt.subplot(3, 1, 3)
plt.plot(cube)

plt.tight_layout()
plt.show()
Matplotlib Subplot Example
The arguments for
plt.subplot() are intuitive:
plt.subplot(nrows, ncols, index)
The first two – nrows and ncols – stand for the number of rows and number of columns respectively.
If you want a 2×2 grid, set nrows=2 and ncols=2. For a 3×1 grid, it’s nrows=3 and ncols=1.
The index is the subplot you want to select. The code you write immediately after it is drawn on that subplot. Unlike everything else in the Python universe, indexing starts from 1, not 0. It continues from left-to-right in the same way you read.
So, for a 2 x 2 grid, the indexes are 1 (top-left), 2 (top-right), 3 (bottom-left) and 4 (bottom-right). For a 3 x 1 grid, they are simply 1, 2 and 3 from top to bottom.
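A quick way to see this ordering for any grid is to label each cell with its own index:

```python
import matplotlib.pyplot as plt

# Put each cell's index in its title (here for a 2x2 grid)
for i in range(1, 5):
    plt.subplot(2, 2, i)
    plt.title(f'index = {i}')

plt.tight_layout()
plt.show()
```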
The arguments for
plt.subplot() are positional only. You cannot pass them as keyword arguments.
>>> plt.subplot(nrows=3, ncols=1, index=1)
AttributeError: 'AxesSubplot' object has no property 'nrows'
However, the comma between the values is optional, if each value is an integer less than 10.
Thus, the following are equivalent – they both select index 1 from a 3×1 grid.
plt.subplot(3, 1, 1)
plt.subplot(311)
I will alternate between including and excluding commas to aid your learning.
Let’s look at the default subplot layout and the general outline for your code.
plt.subplot(3, 1, 1)
# First subplot here

plt.subplot(3, 1, 2)
# Second subplot here

plt.subplot(3, 1, 3)
# Third subplot here

plt.show()
This looks ok but the x-axis labels are hard to read on the top 2 subplots.
You have a few ways to solve this problem.
First, you can manually adjust the xticks with the matplotlib xticks function –
plt.xticks() – and either:
- make them transparent by setting alpha=0, or
- move them and decrease their font size with the position and size arguments
# Make xticks of top 2 subplots transparent
plt.subplot(3, 1, 1)
plt.xticks(alpha=0)

plt.subplot(3, 1, 2)
plt.xticks(alpha=0)

# Plot nothing on final subplot
plt.subplot(3, 1, 3)

plt.suptitle('Transparent Xticks - plt.xticks(alpha=0)')
plt.show()

# Move and decrease size of xticks on all subplots
plt.subplot(3, 1, 1)
plt.xticks(position=(0, 0.1), size=10)

plt.subplot(3, 1, 2)
plt.xticks(position=(0, 0.1), size=10)

plt.subplot(3, 1, 3)
plt.xticks(position=(0, 0.1), size=10)

plt.suptitle('Smaller Xticks In A Better Position')
plt.show()
Both these methods work but are fiddly. Plus, you cannot automate them which is annoying for us programmers.
You have this ticks problem whenever you create subplots. Thankfully, the matplotlib tight_layout function was created to solve this.
Matplotlib Tight_Layout
By calling
plt.tight_layout(), matplotlib automatically adjusts the following parts of the plot to make sure they don’t overlap:
- axis labels set with plt.xlabel() and plt.ylabel(),
- tick labels set with plt.xticks() and plt.yticks(),
- titles set with plt.title() and plt.suptitle()
Note that this feature is experimental. It’s not perfect but often does a really good job. Also, note that it does not work too well with legends or colorbars – you’ll see how to work with them later.
Let’s see the most basic example without any labels or titles.
plt.subplot(311)
plt.subplot(312)
plt.subplot(313)

plt.tight_layout()
plt.show()
Now there is plenty of space between the plots. You can adjust this with the
pad keyword. It accepts a float in the range [0.0, 1.0] and is a fraction of the font size.
plt.subplot(311)
plt.subplot(312)
plt.subplot(313)

plt.tight_layout(pad=0.1)
plt.show()
Now there is less space between the plots but everything is still readable. I use
plt.tight_layout() in every single plot (without colobars or legends) and I recommend you do as well. It’s an easy way to make your plots look great.
Check out the docs for more information and the arguments that tight_layout in matplotlib accepts.
Now, let’s look at how to add more info to our subplots in matplotib.
Matplotlib Subplot Title
You can add a title to each subplot with the
plt.title() function.
plt.subplot(2, 2, 1)
plt.title('First Title')

plt.subplot(2, 2, 2)
plt.title('Second Title')

plt.subplot(2, 2, 3)
plt.title('Third Title')

plt.subplot(2, 2, 4)
plt.title('Fourth Title')

plt.tight_layout()
plt.show()
Matplotlib Subplot Overall Title
Add an overall title to a subplot in matplotlib with the
plt.suptitle() function (it stands for ‘super title’).
# Same plot as above
plt.subplot(2, 2, 1)
plt.title('First Title')

plt.subplot(2, 2, 2)
plt.title('Second Title')

plt.subplot(2, 2, 3)
plt.title('Third Title')

plt.subplot(2, 2, 4)
plt.title('Fourth Title')

# Add overall title to the plot
plt.suptitle('My Lovely Plot')

plt.tight_layout()
plt.show()
Matplotlib Subplot Height
To change the height of a subplot in matplotlib, see the next section.
Matplotlib Subplot Size
You have total control over the size of subplots in matplotlib.
You can either change the size of the entire
Figure or the size of the
Subplots themselves.
Let’s look at changing the
Figure.
Matplotlib Figure Size
First off, what is the
Figure? To quote the AnatomyOfMatplotlib:
It is the overall window/page that everything is drawn on. You can have multiple independent figures, and Figures can contain multiple Subplots
In other words, the
Figure is the blank canvas you ‘paint’ all your plots on.
If you are happy with the size of your subplots but you want the final image to be larger/smaller, change the
Figure. Do this at the top of your code with the matplotlib figure function –
plt.figure().
# Make Figure 3 inches wide and 6 inches long
plt.figure(figsize=(3, 6))

# Create 2x1 grid of subplots
plt.subplot(211)
plt.subplot(212)
plt.show()
Before coding any subplots, call
plt.figure() and specify the
Figure size with the
figsize argument. It accepts a tuple of 2 numbers –
(width, height) of the image in inches.
Above, I created a plot 3 inches wide and 6 inches long –
plt.figure(figsize=(3, 6)).
You can also let matplotlib work out the size from an aspect ratio with plt.figaspect():

# Make a Figure twice as long as it is wide
plt.figure(figsize=plt.figaspect(2))

# Create 2x1 grid of subplots
plt.subplot(211)
plt.subplot(212)
plt.show()
Now let’s look at putting different sized
Subplots on one
Figure.
Matplotlib Subplots Different Sizes
The hardest part of creating a
Figure with different sized
Subplots in matplotlib is figuring out what fraction each
Subplot takes up.
So, you should
plt.subplot() calls.
I’ll create the biggest subplot first and the others in descending order.
The right-hand side is half of the plot. It is one of two plots on a
Figure with 1 row and 2 columns. To select it with
plt.subplot(), you need to set
index=2.
Note that in the image, the blue numbers are the index values each
Subplot has.
In code, this is
plt.subplot(122)
Now, select the bottom left
Subplot in a a 2×2 grid i.e.
index=3
plt.subplot(223)
Lastly, select the top two
Subplots on the left-hand side of a 4×2 grid i.e.
index=1 and
index=3.
plt.subplot(421)
plt.subplot(423)
When you put this altogether you get
# Subplots you have just figured out
plt.subplot(122)
plt.subplot(223)
plt.subplot(421)
plt.subplot(423)

plt.tight_layout(pad=0.1)
plt.show()
Perfect! Breaking the
Subplots down into their individual parts and knowing the shape you want makes everything easier.
Matplotlib Subplot Size Different
You may have noticed that each of the
Subplots in the previous example took up
1/x fraction of space –
1/2,
1/4 and
1/8.
With the matplotlib subplot function, you can only create
Subplots that are
1/x.
It is not possible to create the above plot in matplotlib using the
plt.subplot() function. However, if you use the matplotlib subplots function or GridSpec, then it can be done.
Matplotlib Subplots_Adjust
If you aren’t happy with the spacing between plots that
plt.tight_layout() provides, manually adjust it with
plt.subplots_adjust().
It takes 6 optional, self-explanatory keyword arguments. Each accepts a float in the range [0.0, 1.0] and they are a fraction of the font size:
- left, right, bottom and top – the spacing on each side of the Subplot
- wspace – the width between Subplots
- hspace – the height between Subplots
# Same grid as above
plt.subplot(122)
plt.subplot(223)
plt.subplot(421)
plt.subplot(423)

# Set horizontal and vertical space to 0.05
plt.subplots_adjust(hspace=0.05, wspace=0.05)
plt.show()
In this example, I decreased both the height and width to just
0.05. Now there is hardly any space between the plots.
To avoid loads of similar examples, play around with the arguments yourself to get a feel for how this function works.
Matplotlib Suplot DPI
The Dots Per Inch (DPI) is a measure of printer resolution. It is the number of colored dots placed on each square inch of paper when it’s printed. The more dots you have, the higher the quality of the image. If you are going to print your plot on a large poster, it’s a good idea to use a large DPI.
The DPI for each
Figure is controlled by the
plt.rcParams dictionary (want to master dictionaries? Feel free to read the ultimate Finxter dictionary tutorial). It contains all the runtime configuration settings. If you print
plt.rcParams to the screen, you will see all the variables you can modify. We want
figure.dpi.
Let’s make a simple line plot first with the default DPI (72.0) and then a much smaller value.
# Print default DPI
print(f"The default DPI in matplotlib is {plt.rcParams['figure.dpi']}")

# Default DPI
plt.plot([1, 2, 3, 4])
plt.title('DPI - 72.0')
plt.show()

# Smaller DPI
plt.rcParams['figure.dpi'] = 30.0
plt.plot([1, 2, 3, 4])
plt.title('DPI - 30.0')
plt.show()

# Change DPI back to 72.0
plt.rcParams['figure.dpi'] = 72.0
The default DPI in matplotlib is 72.0
The
Figure with a smaller DPI is smaller and has a lower resolution.
If you want to permanently change the DPI of all matplotlib
Figures – or any of the runtime configuration settings – find the
matplotlibrc file on your system and modify it.
You can find it by entering
import matplotlib as mpl
mpl.matplotlib_fname()
Once you have found it, there are notes inside telling you what everything does.
Matplotlib Subplot Spacing
The function
plt.tight_layout() solves most of your spacing issues. If that is not enough, call it with the optional
pad and pass a float in the range [0.0, 1.0]. If that still is not enough, use the
plt.subplots_adjust() function.
I’ve explained both of these functions in detail further up the article.
Matplotlib Subplot Colorbar
Adding a colorbar to each plot is the same as adding a title – code it underneath the
plt.subplot() call you are currently working on. Let’s plot a 1×2 grid where each
Subplot is a heatmap of randomly generated numbers.
For more info on the Python random module, check out my article. I use the Numpy random module below but the same ideas apply.
# Set seed so you can reproduce results
np.random.seed(1)

# Create a 10x10 array of random floats in the range [0.0, 1.0]
data1 = np.random.random((10, 10))
data2 = np.random.random((10, 10))

# Make figure twice as wide as it is long
plt.figure(figsize=plt.figaspect(1/2))

# First subplot
plt.subplot(121)
pcm1 = plt.pcolormesh(data1, cmap='Blues')
plt.colorbar(pcm1)

# Second subplot
plt.subplot(122)
pcm2 = plt.pcolormesh(data2, cmap='Greens')
plt.colorbar(pcm2)

plt.tight_layout()
plt.show()
First, I created some
(10, 10) arrays containing random numbers between 0 and 1 using the
np.random.random() function. Then I plotted them as heatmaps using
plt.pcolormesh(). I stored the result and passed it to
plt.colorbar(), then finished the plot.
As this is an article on
Subplots, I won’t discuss the matplotlib pcolormesh function in detail.
Since these plots are different samples of the same data, you can plot them with the same color and just draw one colorbar.
To draw this plot, use the same code as above and set the same
colormap in both matplotlib pcolormesh calls –
cmap='Blues'. Then draw the colorbar on the second subplot.
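Putting that recipe into code gives something like the sketch below. One detail worth adding: pinning vmin and vmax keeps both heatmaps on the same scale, so the single colorbar is accurate for both (that choice is mine, not part of the original recipe):

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)
data1 = np.random.random((10, 10))
data2 = np.random.random((10, 10))

plt.figure(figsize=plt.figaspect(1/2))

# First subplot - same colormap, no colorbar
plt.subplot(121)
plt.pcolormesh(data1, cmap='Blues', vmin=0, vmax=1)

# Second subplot - draw the single colorbar here
plt.subplot(122)
pcm = plt.pcolormesh(data2, cmap='Blues', vmin=0, vmax=1)
plt.colorbar(pcm)

plt.tight_layout()
plt.show()
```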
This doesn’t look as good as the above
Figure since the colorbar takes up space from the second
Subplot. Unfortunately, you cannot change this behavior – the colorbar takes up space from the
Subplot it is drawn next to.
It is possible to draw colorbars over multiple
Subplots but you need to use the
plt.subplots() function. I’ve written a whole tutorial on this—so feel free to check out this more powerful function!
Matplotlib Subplot Grid
A
Grid is the number of rows and columns you specify when calling
plt.subplot(). Each section of the
Grid is called a cell. You can create any sized grid you want. But
plt.subplot() only creates
Subplots that span one cell. To create
Subplots that span multiple cells, use the
GridSpec class, the
plt.subplots() function or the
subplot2grid method.
I discuss these in detail in my article on matplotlib subplots.
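As a taste of what plt.subplot() cannot do, here is a minimal GridSpec sketch in which Subplots span multiple cells; the 3x3 grid shape and the particular spans are arbitrary choices for illustration:

```python
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure(figsize=(6, 4))
gs = GridSpec(3, 3, figure=fig)  # a 3x3 grid of cells

# Each Subplot can span any rectangular block of cells
ax_top = fig.add_subplot(gs[0, :])     # the whole top row
ax_left = fig.add_subplot(gs[1:, :2])  # bottom-left 2x2 block
ax_right = fig.add_subplot(gs[1:, 2])  # right column, bottom two rows

ax_top.set_title('gs[0, :]')
ax_left.set_title('gs[1:, :2]')
ax_right.set_title('gs[1:, 2]')

plt.tight_layout()
plt.show()
```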
Summary
Now you know everything there is to know about the subplot function in matplotlib.
You can create grids of any size you want and draw subplots of any size – as long as it takes up
1/xth of the plot. If you want a larger or smaller
Figure you can change it with the
plt.figure() function. Plus you can control the DPI, spacing and set the title.
Armed with this knowlege, you can now make impressive plots of unlimited complexity.
But you have also discovered some of the limits of the subplot function. And you may feel that it is a bit clunky to type
plt.subplot() whenever you want to draw a new one.
To learn how to create more detailed plots with less lines of code, read my article on the
plt.subplots() (with an ‘s’) function.
Where To Go From Here?
Do you wish you could be a programmer full-time but don’t know how to start?
Check out my pure value-packed webinar where I teach you to become a Python freelancer in 60 days or your money back!
It doesn’t matter if you’re a Python novice or Python pro. If you are not making six figures/year with Python right now, you will learn something from this webinar.
These are proven, no-BS methods that get you results fast.
This webinar won’t be online forever. Click the link below before the seats fill up and learn how to become a Python freelancer, guaranteed.
WordPress conversion from plt.subplot.ipynb by nb2wp v0.3.1
Shared Memory in EJB Container (treespace, Mar 31, 2006 12:41 AM)
I have a service interface created with EJBs. I have a web application that uses those services but it's optional. What is the business tier equivalent of the servlet session context? JNDI?
I am building up a data structure from the database so it has no direct persistence mapping. I want to keep it in memory where all stateless session beans have access to it.
1. Re: Shared Memory in EJB Container (treespace, Apr 2, 2006 11:58 PM, in response to treespace)
It appears there is no such beast. You have to use the session context of a web application to host distributed applications. There is no shared context in Java EE that would allow a Swing front end to share code with a Web front end or a Web Services interface or an MDB driven interface.
The various EJB technologies provide 98% of shared services but where does the state of a shared app go? That rather glaring need was identified early on, hence the ServletContext. Where's the web-free counterpart?
The answer, my friend, is blowin' in the wind. It's called a JBoss singleton service. That's a place to keep the state of the application that's akin to using servlet context.
I am going to use the servlet context for now but it seems strange that a web-less counterpart to that is not a standard part of Java EE. It means you have to persist application state for non-web front ends.
2. Re: Shared Memory in EJB Container (Emmanuel Bernard, Apr 3, 2006 5:41 AM, in response to treespace)
if you want a per user state, use a Stateful Session Bean
if you want a per app state, you can probably use a Stateless Session keeping the state
3. Re: Shared Memory in EJB Container (Martin Ganserer, Apr 3, 2006 5:55 AM, in response to treespace)
Hi,
@epbernanrd:
Please give us an example, as I don't believe that this will work!
How should a stateless session bean keep the state?
Regards
4. Re: Shared Memory in EJB Container (Emmanuel Bernard, Apr 3, 2006 12:40 PM, in response to treespace)
@Stateless
public class SomeStuffImpl implements SomeStuff {
    // shared state; initialize() (not shown in the post) would load the map
    private Map<String, Country> countriesSharedByEverybody;
    public Country getCountry(String name) {
        if (countriesSharedByEverybody == null) initialize();
        return countriesSharedByEverybody.get(name);
    }
}
5. Re: Shared Memory in EJB Container (Emmanuel Bernard, Apr 3, 2006 12:40 PM, in response to treespace)
it does not keep the conversation state, this is what Stateless Session Bean means
6. Re: Shared Memory in EJB Container (treespace, Apr 4, 2006 12:28 AM, in response to treespace)
I was aware that state in SLSBs was fine provided it was not conversational state between the client and the server. I dismissed that option because the SLSBs are pooled and in my case clustered. Does an EJB container maintain coherence in SLSBs? If so that really is a viable option.
7. Re: Shared Memory in EJB Container (Emmanuel Bernard, Apr 4, 2006 4:39 AM, in response to treespace)
I don't know what you mean by coherence, but I would say no
8. Re: Shared Memory in EJB Container (treespace, Apr 4, 2006 10:53 AM, in response to treespace)
Coherence is short for "ensuring consistency across all of the constituent parts". SLSBs have to behave exactly the same regardless of which instance you get or where that instance lives.
This is from the trailblazer on JMX which is why I think the Service bean is the way to go:
Different from EJB services, which are provided by a pool of managed objects, a JMX MBean service is provided by a singleton object in the server JVM. This service object is stateful and has an application-wide scope.
I read this as saying EJB services (stateless "session" beans) do not behave as singletons internally. I think that makes sense because there's nothing in the SLSB contract that says an SLSB's internal state is kept in sync. | https://developer.jboss.org/thread/107432 | CC-MAIN-2018-17 | refinedweb | 670 | 67.99 |
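Emmanuel's pattern above works, but the lazy null-check is not thread-safe, and a non-static field gives each pooled bean instance its own copy. A minimal sketch of a safer per-JVM variant (class and method names are invented here; this is still not coherent across a cluster, which is treespace's point):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical read-mostly cache a stateless session bean could delegate to.
class CountryCache {
    // static + concurrent: one map per JVM, safe under pooled access
    private static final Map<String, String> COUNTRIES = new ConcurrentHashMap<>();

    String getCountry(String code) {
        // computeIfAbsent is atomic per key, unlike "if (map == null) initialize()"
        return COUNTRIES.computeIfAbsent(code, CountryCache::loadFromDatabase);
    }

    private static String loadFromDatabase(String code) {
        return "country-" + code;  // stand-in for a real database lookup
    }
}
```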
07 February 2012 07:50 [Source: ICIS news]
SINGAPORE (ICIS)--BP reported on Tuesday a 41.5% year-on-year fall in its replacement cost profit before interest and tax at its refining and marketing division to $564m (€429m) in the fourth quarter of last year, partly because of lower margins.
The segment’s profit before interest and tax for the December quarter 2011 fell by 71.2% year on year to $657m, the UK-based energy major said in a statement.
“The fourth quarter saw continued strong operations with our refinery utilization remaining well above the industry average,” it said.
“Compared with the same period last year, our result benefited from an improved contribution from supply and trading relative to the fourth-quarter loss in 2010 and our ability to access WTI-priced crude grades.”
But these positive factors were offset by reduced refining margins, lower petrochemicals margins and foreign exchange impacts, the company added.
The segment’s fourth-quarter results also included net non-operating charges of $140m, compared with non-operating gains of $86m in the same period a year earlier, the company said.
For the full year of 2011, BP’s refining and marketing division’s replacement cost profit before interest and tax slipped by 1.5% to $5.47bn, while its profit before interest and tax grew by 10% to $7.96bn.
Meanwhile, BP's overall fourth-quarter replacement cost profit was $7.61bn, up by 65% year on year.
For the full year of 2011, BP posted an overall replacement cost profit of $23.9bn, compared with a loss of $4.91bn a year earlier.
Thorsten Rühl wrote in message ...
>Hi there,
>
>at first i have to say i am very new in python so i have a few problems to
>get along with some things.
>
>My first problem i can't handle is to write a recursive lambda function:
>the formal definition is
>
>letrec map(f) = lambda l. if l == NIL then NIL
>                else cons(f(car(l)), map(f)(cdr(l)))
>
>after i realised that i can't use if clauses in lambda definition i tried
>to convert the if line into a corresponding boolean operator line:
>
>def mapp(f):
>    lambda l: l and cons(f(car(l)), (mapp(f)(cdr(l))))
>
>but it doesn't work.

You need to return the lambda. You've been coding too much lisp. ;)

>if i try to use the result in the following way i get an error: object not
>callable but i don't understand why it is not callable.

Because mapp always returns None (the implicit return value), and None isn't a callable.

>So i hope this question is not too nooby and i didn't waste your time.
>
>But i really need a little help with this.

I'm not sure why you want to code Python as if it were lisp. This will only lead to tears. It's also quite a bit slower, because calling is expensive in Python. (I understand if this is simply an academic exercise, but still, try to read beyond the formal definitions.)

Typical thing you'd do in Python is one line:

    [x*x for x in L]

This makes crystal clear what you want to do. I had to puzzle over that pseudolisp to figure out the big picture. Replace x*x with whatever you want. The recursive '(cons ((car L) (func (cdr L))))' pattern in lisp is 'for item in sequence' in Python, and there's no need to pass a lambda into a function factory. Also, we already have a map (as you noticed, because of your namespace clash). So map(lambda x: x*x, [1,2,3,4]) works.
If you want to curry it, you can do so:

    def curry(f, *args):
        def inner(*args2):
            return f(*(args + args2))
        return inner

    mapQuad = curry(map, lambda x: x*x)
    mapQuad([1,2,3,4])

-- Francis Avila
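Francis's snippet runs as-is; the only wrinkle under modern Python 3 is that map() returns a lazy iterator, so you wrap the result in list() to see it. A sketch, alongside the list-comprehension spelling he recommends:

```python
def curry(f, *args):
    def inner(*args2):
        return f(*(args + args2))
    return inner

# Curried map, as in the post; list() forces the lazy iterator (Python 3).
map_quad = curry(map, lambda x: x * x)
result = list(map_quad([1, 2, 3, 4]))

# The recommended Pythonic spelling of the same thing:
comprehension = [x * x for x in [1, 2, 3, 4]]
```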
Ok Scott, I’ll send you back your decoder ring… but not before I make a copy and send you the original one back! Or did I send you the copy?
Darrell,
I used to think you were my friend. We hung out on line. We drank at Tech-Ed. We made plans to visit each other. Now I read that you’re dabbling in the mantic arts. I don’t think I can be your friend anymore. You’ve upset the apple cart with your heretic ideas and you’re dragging skeletons out of the closet that should have remained buried. I don’t know why you can’t just let sleeping dogs lie. Software development is much more than just challenging people’s ideas, you know. Many of us are trying to live nice, peaceful lives without having to think all the time. Just lay off man. And I want my Spiderman decoder ring back. I would only have given it to a true pal.
“My whole point… was to show how you could use a dynamic language to prototype…”
To paraphrase, what you actually show is: ‘developing with a REPL (or at least a well-constructed REPL) works closer to how you think’
You’re conflating IDE capability (REPL) with dynamic type-checking.
Believe me, working with Smalltalk in Notepad sucks big!
Isaac – You can use NUnit and TDD. TDD is a way of codifying your mental model about how something should work. The first list was a general “how developers design” list.
My whole point with Benjamin Pollack’s list was to show how you could use a dynamic language to prototype your understanding of a problem domain without getting too far into the implementation details.
And back to Benjamin Pollack, the next day we find that he simply didn’t know how to use the IDE…
“Firstly, thanks to the numerous people (including Joel) who pointed out that VisualStudio.NET does support fix-and-continue for C/C++ and has since Visual C++ 6. Oops.”
“map steps 3 and 4 in the second list”
Why wouldn’t we be using SUnit and TDD?
Seems like you’ve slipped back to the bad-ole days of code-and-fix.
Here’s REPL for Java:
DrJava: Using the Interactions Pane
Here’s REPL for OCaml
The Objective Caml interactive toplevel
IDE features can be incorporated even in dynamically typed languages using a process known as type inference. Anders H. has been talking about the possibility of including it in C# 3.0 (the next next version), and I for one think it would be a great step.
Darren O: “Intellisense… > sliced bread …” etc…
Darren, I think you’re confusing IDE features with language features. I have “intellisense” in Xcode when I’m writing Objective C code for OS X and Objective C is a dynamic dispatch/typing language. The Ruby interpreter has a “check syntax” swtich, basically a compiler, that will catch your syntax errors. I can’t speak about Python as I haven’t used it. Ruby also has an interactive shell, which lets you play with the objects, reflect over them, in a similar manner to the Smalltalk environment. That’s much better than intellisense.
“if it compiles, it will never fail” – HOO BOY, if you can make that statement, you should probably be asking for more money than you currently are. There are a lot of errors that only show up at runtime, even in statically typed languages. If you are using a compiler to catch your syntax errors (besides spelling errors, our people are renowned for spelling errors), you need to know the language better.
“And it will never have to be changed” – So are you saying dynamic code changes spontaneously? 😉 Method signatures magically gain more parameters? Strings no longer accept chars in their arrays? The bottom line, for me at least, is that dynamic, message based languages make it much easier for me to write programs. Why do I care if the actual type of the object I’m passing in is an “Employee” or a “User” if all I’m doing is calling “toString()”
Darrell – sorry – I tend to be an absoluteist with my language, and come out sounding more extreme than I actually mean..
I guess all I was trying to get at was that I didn’t agree with that picture of design – I do believe in – as you say, component orientated development.
Instead of doing
“what do you want”
“build a crappy version as quickly as possible”
while true
“tweak it”
I’d rather –
“have a vague idea of what you want”
while true
pick the tiniest and best defined piece of it
build it
bulletproof it
deliver it
Mainly because I’ve never in my commercial life worked on a project where there was only one developer – in fact most of the time I’m working on projects where I haven’t met most of the people writing other parts of the system – I don’t know how good or bad they are, and the only way I’ve found of making sure my stuff doesn’t get blamed for something is to put iron walls around it.
With the confusion between dynamic/compiled or static/strong – because it wasn’t the main part of what I was saying, I sort of glossed over the whole dynamic/compiled issues –
there are a few – starting with the ability to hide the code – I think people underrate the benefits of a “black box”. I think there is a lot of value to having to code things with access only to the interface, not to the internal workings. If people know the internal workings of something – without fail, they will rely on something in there – and that’s not guaranteed not to change.
Also, compiler protection – yes, people can get around it – but they have to explicitly do it. There are times for that – but people have been forced to think about what they are doing, and if it screws up, it’s their fault – at least the compiler warns them.
most importantly, and this is going to sound silly – but I think people underrate the value of intellisense. I believe VS or Eclipse intellisense improves my speed of development by something like a factor of 5. In Ruby, when I’m using an unfamiliar class library, I’m spending ages wandering through poorly written documentation. In C#, if I have to use a class which was just written by someone else, it’s easy – I hit “.” – up comes a list of methods – I pick the one that looks right, hit “(” and see a list of parameters – “oh – I need one of those” – I create one of those in a similar way, move on to the next parameter, and I’m done. It’s not a little thing – an “aid” – it’s truly a fundamentally different level in programming – and it’s very difficult to do in the traditional dynamic languages.
but as for actually enforcing stuff at compile time – I’m not saying that you get no bugs, but more and more often we make components that work 100% the first time they compile, and go for years without ever having a bug – it’s becoming the norm – not the exception. Let me give you an example –
we built a plugin system that worked on reflection and types to find the appropriate display for various objects. Suppose someone else has made a new employee class – I have a list, and want to display the object.
I throw in a line
ShowObjectToUser( _list.selectedItem );
yay – it works, I’m showing up a generic screen which dumps the contents of the object. But what if I want a custom display of it? I make a new user control, and go
public class EmployeeDisplay : UserControl, IPluginDisplay
{
… generated stuff…
I got most of the above by hitting control space and picking from a list – I click on the interface and get the stub generated:
void IPluginDisplay.Show( Employee value )
{
}
I add two ValueDisplays and a ShortAddressDisplay to the designer, title them all and then add the lines
if (value == null)
{
    _firstName.Clear();
    _lastName.Clear();
    _addressDisplay.Clear();
}
else
{
    _firstName.Text = Safe( value.FirstName );
    _lastName.Text = Safe( value.LastName );
    if (value.AddressList.Contains( AddressType.Primary ))
        _addressDisplay.Value = value.AddressList[ AddressType.Primary ];
    else
        _addressDisplay.Clear();
}
All of the above I got from intellisense menus – I have probably never seen this object before.
but once that compiles – I’ve added a new object to the system. Every list, every display where that object is used now uses my new component.
And I guarantee you – even though I just wrote it while I was typing this – if it compiles, it will never fail. For any reason. And it will never have to be changed, unless the actual business requirements of what need to be shown change.
This is what the compiler guarantees. It guarantees to me that if the person who created the employee class decides to change “LastName” to “Surname” – they also have to fix the component. This is what dynamic languages can’t give you…
and it all comes down to the design. Compiled languages give you the ability to _safely_ design a tiny piece of the puzzle, and be completely confident that you don’t even have to think about any other part of the puzzle – it’s all just going to “work” when we put it all together.
Darren – you’re confusing strong and weak typing with dynamic and compile-time typing. Python IS strongly typed, but it’s dynamic typing.
In any compiled language I can force dynamic typing. I can cast objects to other types and I can even create objects at run-time using reflection. So I should never use any of that according to your design philosophy? In that case NUnit should not even exist!
And according to your comment, a compiler solves all problems? So you’ve never written a bug in a compiled program, only in Ruby? That’s hard to believe. In fact, I think it’s garbage (to quote your blog post about this). But that’s picking on just a small part of your argument.
I’m all for contracts, but I don’t think trying to enforce everything at compile-time is the way to go. Plus what you are talking about sounds more like component-oriented programming, which was not the point of this blog post. This was more focused at understanding a small problem domain as quickly as possible. I think that prototyping to understand a given problem domain is a valuable exercise, and one which is better done in an interpreted language than in a compiled one.
And finally I did say in a comment that I would probably then write the resulting solution in .NET. Python already offers some plugins that generate Java or C code, so .NET might not be far off. But the reason all those programs are written in compiled languages is performance, which I was also not addressing in this post. In fact most of those apps aren’t even written in managed languages (such as .NET or Java) for performance reasons. As managed languages become faster, more programs will be written in them. And the same for interpreted languages.
Darren certainly hit the nail on the head with regards to strong typing:
“Failing at runtime isn’t good enough.”
I would go a step further and say that failing at runtime is not acceptable, especially if you could have easily caught the bug at compile time.
Remember, the compiler is your friend
I would disagree with this post.
I think a lot of people when they are talking about software “design” simplify things too much, by taking out the only two elements that really matter, which are:
* when you start any real software project, you only have knowledge of less than 1% of the eventual requirements
* Most real software projects are too big for a single person to have a handle on the whole project
So – what does that mean in practice? It means that you never get to design a “System”, you are always designing a tiny piece of the system.
now, for an experienced programmer, and with the power of tools these days, programming a component is easy – you see the problem, you build your tests, your objects – and it works. Software has matured to that point – we aren’t “debugging” any more – working first time is the norm.
ALL of the difficulty comes with the “others” – ie who do I have to communicate with, the “past” – I have to build on this legacy code, or whatever… and most importantly the “future”
ie – how do I guarantee that when I leave, or it’s handed to the maintenance group, or when someone else in the development team who I’ve never met picks it up and uses it, without reading a single word of the code documentation (which I haven’t [and never will] had time to write yet) or the source code (which they don’t have access to)…
How do I guarantee it will never fail?
There is only one way I know of – CONTRACTS. You put a big line around what you are doing. You say you can give me one of these, that fulfils this this and this… and I will give you back one of these, and this this and this will have been done.
And I promise that it will never fail.
And there’s only one good way of doing that. Compiler upheld strong typing. Failing at runtime isn’t good enough.
In summary:
* Good “traditional” system design up front is pointless and impossible.
* Building lots of little single-purpose unbreakable pieces always results in an almost optimal system design, and removes most of the risk associated with the system.
* The only reliable way of building a solid component is to have an agreed contract, and if you have a contract, it should be ENFORCED – (by a compiler?)
dynamic languages are FUN – hey I truly LOVE Ruby – it’s a beautiful language. But there is a very good reason why 99.9999% of the good programs of the world (Photoshop, Word etc) are written in strongly typed, compiled languages.
If dynamic languages such as IronPython help to facilitate design, then I’m all for it. That certainly seems to be very worthwhile, especially from a POC standpoint.
Dave – I personally would write in IronPython because it’s closer to the problem domain. I have to spend less time fiddling around with stuff.
If I need to iterate over something, I just put it in a list and I have a strongly typed collection already! I don’t need to setup something using generics (which isn’t too bad), or if I’m using .NET 1.1, I’ll be putzing around with collections and all sorts of non-domain-related problems. Even with Resharper or CodeRush there’s no way you could implement it as easily as:
li = [myDomainObject1, myDomainObject2]
Once the business logic is ironed out, maybe I write it in C# or VB. But at least I can move as fast as I need to when I’m working on the problem.
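To make the “strongly typed collection for free” point above concrete, here is a tiny sketch (the classes are invented for illustration): any mix of objects goes into one list, and duck typing handles the iteration that generics or 1.1-era collection plumbing would otherwise mediate.

```python
class Employee:
    def __init__(self, name):
        self.name = name

    def to_string(self):
        return "Employee: " + self.name


class User:
    def __init__(self, name):
        self.name = name

    def to_string(self):
        return "User: " + self.name


# No type parameter, no casts: the list accepts both kinds of object,
# and the loop only cares that each item answers to_string().
items = [Employee("Ada"), User("Bob")]
labels = [item.to_string() for item in items]
```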
Trying to convince someone that hasn’t seen how easy a dynamic language is that’s also a master at a compiled language is very difficult. The objection is always “I can do that faster in C# (VB)!” Well, until you learned C#/VB well enough, you could probably do things in VB6 or ASP faster too! It’s important to separate an individual’s proficiency of a language from the productivity of the language itself FOR A GIVEN DOMAIN. Again, this is why I would probably not do UI work with Python – I think that Windows Forms and ASP.NET are the fastest way to develop UIs.
Now you’re talking like a Mort!
I would agree, it does help with the iterative process of problem solving to remove the obstacles that a declarative language provides. I think that’s where Visual Basic shined, and the way a lot of us non-CS developers have managed to enter the ranks.
The problem I see is that it becomes harder to maintain the code, because when you develop like that, it has a tendency to devolve into hacking away until it works, which is the enemy of maintainability. Or, maybe (ok, definitely) you have more discipline than me, and you can code like that and still have maintainability?
But for prototyping, it has a lot of merit. Maybe I’ll have to switch gears and instead of putting off learning Ruby, put off learning IronPython instead, since that would allow me to have a prototype closer toward the C# code I would eventually write…
>>I don’t think language should ever be a factor in software design as it’s part of the implementation details.<<
I think his point is that, some languages lend themselves to helping with the design and not just the implementation. When the language constructs (and environment to some degree) match your intention more closely, then they become more a part of the design process than just coding the solution.
Think of CASE tools for example. There’s still a (declarative) language at the root of that somewhere, yet those are definitely a big part of the design process in some companies.
Maybe once I put my fingers on the bits I’ll get it because the way I look at it, when I write C# or VB.NET code, I AM focusing on the domain problem and not on curly braces or End If statements. I already know those languages to the point where I rarely need to go look up syntax, thus they now allow me to focus on the business problem at hand.
As for software design, good designs are language-agnostic. I don’t think language should ever be a factor in software design as it’s part of the implementation details. I can’t imagine being on a project where a design was put forth and then when it came time to decide on a language, the design needs to be modified because of issues with the chosen language.
And going back to our conversation at TechEd, if a developer is a master in C#, why would he/she write the business logic in IronPython? | http://codebetter.com/darrellnorton/2005/06/23/dynamic-languages-work-closer-to-how-you-design-software/ | CC-MAIN-2015-48 | refinedweb | 3,094 | 69.01 |
03 December 2007 08:00 [Source: ICIS news]
Under Hambrecht’s leadership, BASF has continued its position as one of the leading chemical companies benefiting from canny investments and strong regional growth.
Last year he bought US catalyst maker Engelhard for $5.6bn (€3.8bn) and Degussa’s construction chemical’s business with the aim of bringing BASF's products closer to end-use markets.
Hambrecht joined BASF in 1976 after studying chemistry in
Saudi Basic Industries Corp (SABIC) vice-chairman and CEO Mohamed al-Mady heads the ICIS Top 40 Power Players this year, with Access Industries owner and chairman, Len Blavatnik at number two.
INEOS chairman Jim Ratcliff, who topped the power players list according to ICIS in 2006, was placed number 7. Hans Wijers, CEO of Akzo Nobel, also made the top | http://www.icis.com/Articles/2007/12/03/9082777/basfs-hambrecht-is-top-chemicals-exec-in-europe.html | CC-MAIN-2014-35 | refinedweb | 139 | 53.1 |
Details
- AboutI like fixing segfaults
- Skillsvim, c, python
- LocationSacramento
- Website
- Github
Joined devRant on 11/6/2016
- uBlock Origin hides the ++ buttons because they have a class of `plusone` and plus one is a tracking company. sigh...
- Finally I have a nice round number of points!
(please don't upvote unless you have 1000000000 friends who will upvote as well)4
- There's no way to subscribe to a user on the desktop app, and I don't have a phone. Anyone know how to do this by manually sending an http request?2
- The index to Sedgewick's Algorithms book says:
recursion, 35
see also base case,
see also recursion
double RandDouble(double min, double max) {
    double rand = (double) RandomGen::GetRand32() / UINT32_MAX;
    return (rand * (max - min)) + min;
}
I wrote this function 2 months ago and until now assumed it was working.
Turns out GetRand32 has a range up to INT32_MAX, not UINT32_MAX. So this function would only ever select the first half of the range.
You gotta be fucking kidding me...
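A sketch of the fix, using C's own rand()/RAND_MAX as a stand-in for the rant's RandomGen class: the divisor must be the generator's true maximum, or (as above) the function only ever reaches half the range.

```c
#include <assert.h>
#include <stdlib.h>

/* Scale by the generator's actual maximum (RAND_MAX here), so the
 * result covers all of [min, max] instead of only the lower half. */
double rand_double(double min, double max) {
    double r = (double) rand() / RAND_MAX;  /* in [0.0, 1.0] */
    return r * (max - min) + min;
}
```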
- Found in kernel/sched.c in Linux 1.2:
The "confuse_gcc" goto is used only to get better assembly code..
Dijkstra probably hates me.
- Why does there have to be both dup(fd) and fcntl(fd, F_DUPFD)? As far as I can tell they do exactly the same thing, one just looks more complicated than the other.
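For what it's worth, they are not quite identical: dup(fd) always returns the lowest free descriptor, while fcntl(fd, F_DUPFD, minfd) returns the lowest free descriptor greater than or equal to minfd. A small POSIX sketch:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* dup(fd) behaves like fcntl(fd, F_DUPFD, 0); the third argument is
 * the only difference: a floor on the returned descriptor number. */
int dup_at_least(int fd, int minfd) {
    return fcntl(fd, F_DUPFD, minfd);
}
```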
- Was watching a dev stream on twitch and noticed the following code on screen:
if (blah blah blah) {
int fuck = 0;
mysql_blah_blah(blah, blah, &fuck);
}
- I'm on the devRant web app, and all the ++ buttons have
style="display: none !important"
WHAT
(of course, -- works fine)
- The guy Intel hired to come up with instruction mnemonics must be just letting his cat walk over the keyboard every time he needs a new idea.
cvttps2dq? sounds good!
Question
The U.S. stock market hit an all-time high in October 1929 before crashing dramatically. Following the market crash, the U.S. entered a prolonged economic downturn dubbed The Great Depression. Using Figure, estimate how long it took for the stock market to fully rebound from its fall which began in October 1929. How did bond investors fare over this same period?
Use Visual Studio Code to develop and debug modules for Azure IoT Edge
You can turn your business logic into modules for Azure IoT Edge. This article shows you how to use Visual Studio Code as the main tool to develop and debug modules.
There are two ways to debug modules written in C#, Node.js, or Java in Visual Studio Code: You can either attach a process in a module container or launch the module code in debug mode. To debug modules written in Python or C, you can only attach to a process in Linux amd64 containers.
If you aren't familiar with the debugging capabilities of Visual Studio Code, read about Debugging.
This article provides instructions for developing and debugging modules in multiple languages for multiple architectures. Currently, Visual Studio Code provides support for modules written in C#, C, Python, Node.js, and Java. The supported device architectures are X64 and ARM32. For more information about supported operating systems, languages, and architectures, see Language and architecture support.
Note
Develop and debugging support for Linux ARM64 devices is in public preview. For more information, see Develop and debug ARM64 IoT Edge modules in Visual Studio Code (preview).
Prerequisites
You can use a computer or a virtual machine running Windows, macOS, or Linux as your development machine. On Windows computers you can develop either Windows or Linux modules. To develop Windows modules, use a Windows computer running version 1809/build 17763 or newer. To develop Linux modules, use a Windows computer that meets the requirements for Docker Desktop.
Install Visual Studio Code first and then add the following extensions:
- Azure IoT Tools
- Docker extension
- Visual Studio extension(s) specific to the language you're developing in:
- C#, including Azure Functions: C# extension
- Python: Python extension
- Java: Java Extension Pack for Visual Studio Code
- C: C/C++ extension
You'll also need to install some additional, language-specific tools to develop your module:
C#, including Azure Functions: .NET Core 2.1 SDK
Python: Python and Pip for installing Python packages (typically included with your Python installation).
Node.js: Node.js. You'll also want to install Yeoman and the Azure IoT Edge Node.js Module Generator.
Java: Java SE Development Kit 10 and Maven. You'll need to set the JAVA_HOME environment variable to point to your JDK installation.
To build and deploy your module image, you need Docker to build the module image and a container registry to hold the module image:
Docker Community Edition on your development machine.
Azure Container Registry or Docker Hub
Tip
You can use a local Docker registry for prototype and testing purposes instead of a cloud registry.
Unless you're developing your module in C, you also need the Python-based Azure IoT EdgeHub Dev Tool in order to set up your local development environment to debug, run, and test your IoT Edge solution. If you haven't already done so, install Python (2.7/3.6) and Pip and then install iotedgehubdev by running this command in your terminal.
pip install --upgrade iotedgehubdev
Note
If you have multiple Python versions installed (including a pre-installed Python 2.7, for example on Ubuntu or macOS), make sure you are using the correct pip or pip3 to install iotedgehubdev.
To test your module on a device, you'll need an active IoT hub with at least one IoT Edge device. To use your computer as an IoT Edge device, follow the steps in the quickstart for Linux or Windows. If you are running the IoT Edge daemon on your development machine, you might need to stop EdgeHub and EdgeAgent before you move to the next step.
Create a new solution template
The following steps show you how to create an IoT Edge module in your preferred development language (including Azure Functions, written in C#) using Visual Studio Code and the Azure IoT Tools. You start by creating a solution, and then generating the first module in that solution. Each solution can contain multiple modules.
Select View > Command Palette.
In the command palette, enter and run the command Azure IoT Edge: New IoT Edge Solution.
Browse to the folder where you want to create the new solution and then select Select folder.
Enter a name for your solution.
Select a module template for your preferred development language to be the first module in the solution.
Enter a name for your module. Choose a name that's unique within your container registry.
Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with localhost:5000/<your module name>. Replace it with your own registry information. If you use a local Docker registry for testing, then localhost is fine. If you use Azure Container Registry, then use the login server from your registry's settings. The login server looks like <registry name>.azurecr.io. Only replace the localhost:5000 part of the string so that the final result looks like <registry name>.azurecr.io/<your module name>.
Visual Studio Code takes the information you provided, creates an IoT Edge solution, and then loads it in a new window.
There are four items within the solution:
A .vscode folder contains debug configurations.
A modules folder has subfolders for each module. Within the folder for each module there is a file, module.json, that controls how modules are built and deployed. This file would need to be modified to change the module deployment container registry from localhost to a remote registry. At this point, you only have one module; the default deployment template also includes a sample SimulatedTemperatureSensor module that simulates data you can use for testing. For more information about how deployment manifests work, see Learn how to use deployment manifests to deploy modules and establish routes.
Add additional modules
To add additional modules to your solution, run the command Azure IoT Edge: Add IoT Edge Module from the command palette. You can also right-click the modules folder or the deployment.template.json file in the Visual Studio Code Explorer view and then select Add IoT Edge Module.
Develop your module
The default module code that comes with the solution is located at the following location:
- Azure Function (C#): modules > <your module name> > <your module name>.cs
- C#: modules > <your module name> > Program.cs
- Python: modules > <your module name> > main.py
- Node.js: modules > <your module name> > app.js
- Java: modules > <your module name> > src > main > java > com > edgemodulemodules > App.java
- C: modules > <your module name> > main.c
When you're ready to customize the template with your own code, use the Azure IoT Hub SDKs to build modules that address the key needs for IoT solutions such as security, device management, and reliability.
Debug a module without a container (C#, Node.js, Java)
If you're developing in C#, Node.js, or Java, your module requires use of a ModuleClient object in the default module code so that it can start, run, and route messages. You'll also use the default input channel input1 to take action when the module receives messages.
Set up IoT Edge simulator for IoT Edge solution.
Set up IoT Edge simulator for single module app
To set up and start the simulator, run the command Azure IoT Edge: Start IoT Edge Hub Simulator for Single Module from the Visual Studio Code command palette. When prompted, use the value input1 from the default module code (or the equivalent value from your code) as the input name for your application. The command triggers the iotedgehubdev CLI and then starts the IoT Edge simulator and a testing utility module container. You can see the outputs below in the integrated terminal if the simulator has started successfully in single module mode. You can also see a curl command to help send messages through; you will use it later.
You can use the Docker Explorer view in Visual Studio Code to see the module's running status.
The edgeHubDev container is the core of the local IoT Edge simulator. It can run on your development machine without the IoT Edge security daemon and provides environment settings for your native module app or module containers. The input container exposes REST APIs to help bridge messages to the target input channel on your module.
Debug module in launch mode
Prepare your environment for debugging according to the requirements of your development language, set a breakpoint in your module, and select the debug configuration to use:
C#
In the Visual Studio Code integrated terminal, change the directory to the <your module name> folder, and then run the following command to build the .NET Core application.
dotnet build
Open the file Program.cs and add a breakpoint.
Navigate to the Visual Studio Code Debug view by selecting View > Debug. Select the debug configuration <your module name> Local Debug (.NET Core) from the dropdown.
Note
If your .NET Core TargetFramework is not consistent with your program path in launch.json, you'll need to manually update the program path in launch.json to match the TargetFramework in your .csproj file so that Visual Studio Code can successfully launch this program.
Node.js
In the Visual Studio Code integrated terminal, change the directory to the <your module name> folder, and then run the following command to install the Node packages.
npm install
Open the file app.js and add a breakpoint.
Navigate to the Visual Studio Code Debug view by selecting View > Debug. Select the debug configuration <your module name> Local Debug (Node.js) from the dropdown.
Java
Open the file App.java and add a breakpoint.
Navigate to the Visual Studio Code Debug view by selecting View > Debug. Select the debug configuration <your module name> Local Debug (Java) from the dropdown.
Click Start Debugging or press F5 to start the debug session.
In the Visual Studio Code integrated terminal, run the following command to send a Hello World message to your module. This is the command shown in previous steps when you set up IoT Edge simulator.
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1","data":"hello world"}'
Note
If you are using Windows, make sure the shell of your Visual Studio Code integrated terminal is Git Bash or WSL Bash. You cannot run the curl command from a PowerShell or command prompt.
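If neither Git Bash nor WSL is available, the same request can be issued from Python's standard library instead of curl. The endpoint below is only a placeholder; substitute the URL that the simulator printed when it started.

```python
import json
import urllib.request

# Placeholder endpoint -- use the URL the IoT Edge simulator printed
# in your terminal when it started.
endpoint = "http://localhost:53000/api/v1/messages"

payload = {"inputName": "input1", "data": "hello world"}
request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(request) would actually send the message;
# here we only show that the request mirrors the curl invocation.
print(request.get_method(), request.get_full_url())
```

This builds exactly the POST that the curl command issues, so it works from any shell, including PowerShell.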
In the Visual Studio Code Debug view, you'll see the variables in the left panel.
To stop your debugging session, select the Stop button or press Shift + F5, and then run Azure IoT Edge: Stop IoT Edge Simulator in the command palette to stop the simulator and clean up.
Debug in attach mode with IoT Edge Simulator (C#, Node.js, Java, Azure Functions)
Your default solution contains two modules, one is a simulated temperature sensor module and the other is the pipe module. The simulated temperature sensor sends messages to the pipe module and then the messages are piped to the IoT Hub. In the module folder you created, there are several Docker files for different container types. Use any of the files that end with the extension .debug to build your module for testing.
Currently, debugging in attach mode is supported only as follows:
- C# modules, including those for Azure Functions, support debugging in Linux amd64 containers
- Node.js modules support debugging in Linux amd64 and arm32v7 containers, and Windows amd64 containers
- Java modules support debugging in Linux amd64 and arm32v7 containers
Tip
You can switch among options for the default platform for your IoT Edge solution by clicking the item in the Visual Studio Code status bar.
Set up IoT Edge simulator for IoT Edge solution.
Build and run container for debugging and debug in attach mode
Open your module file (
Program.cs,
app.js,
App.java, or
<your module name>.cs) and add a breakpoint.
In the Visual Studio Code Explorer view, right-click the
deployment.debug.template.jsonfile for your solution and then select Build and Run IoT Edge solution in Simulator. You can watch all the module container logs in the same window. You can also navigate to the Docker view to watch container status.
Navigate to the Visual Studio Code Debug view and select the debug configuration file for your module. The debug option name should be similar to <your module name> Remote Debug.
Select Start Debugging or press F5. Select the process to attach to.
In Visual Studio Code Debug view, you'll see the variables in the left panel.
To stop the debugging session, first select the Stop button or press Shift + F5, and then select Azure IoT Edge: Stop IoT Edge Simulator from the command palette.
Note
The preceding example shows how to debug IoT Edge modules on containers. It added exposed ports to your module's container createOptions settings. After you finish debugging your modules, we recommend you remove these exposed ports for production-ready IoT Edge modules.
For modules written in C#, including Azure Functions, this example is based on the debug version of Dockerfile.amd64.debug, which includes the .NET Core command-line debugger (VSDBG) in your container image while building it. After you debug your C# modules, we recommend that you directly use the Dockerfile without VSDBG for production-ready IoT Edge modules.
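For illustration, the exposed-port entries mentioned above usually look roughly like the fragment below inside the module's createOptions in the debug deployment template. The port 9229 here is the Node.js inspector port used elsewhere in this article; the exact ports depend on your language and template, so treat these values as an assumption.

```json
{
  "ExposedPorts": { "9229/tcp": {} },
  "HostConfig": {
    "PortBindings": { "9229/tcp": [ { "HostPort": "9229" } ] }
  }
}
```

Removing this fragment from the production deployment manifest closes the debugger port again.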
Debug a module with the IoT Edge runtime
In each module folder, there are several Docker files for different container types. Use any of the files that end with the extension .debug to build your module for testing.
When debugging modules using this method, your modules are running on top of the IoT Edge runtime. The IoT Edge device and your Visual Studio Code can be on the same machine, or more typically, Visual Studio Code is on the development machine and the IoT Edge runtime and modules are running on another physical machine. In order to debug from Visual Studio Code, you must:
- Set up your IoT Edge device, build your IoT Edge module(s) with the .debug Dockerfile, and then deploy to the IoT Edge device.
- Expose the IP and port of the module so that the debugger can be attached.
- Update the launch.json file so that Visual Studio Code can attach to the process in the container on the remote machine. This file is located in the .vscode folder in your workspace and updates each time you add a new module that supports debugging.
Build and deploy your module to the IoT Edge device
In Visual Studio Code, open the deployment.debug.template.json file, which contains the debug version of your module images with the proper createOptions values set.
If you're developing your module in Python, follow these steps before proceeding:
Open the file main.py and add this code after the import section:

import ptvsd
ptvsd.enable_attach(('0.0.0.0', 5678))
Add the following single line of code to the callback you want to debug:
ptvsd.break_into_debugger()
For example, if you want to debug the receive_message_callback method, you would insert that line of code as shown below:

def receive_message_callback(message, hubManager):
    ptvsd.break_into_debugger()
    global RECEIVE_CALLBACKS
    message_buffer = message.get_bytearray()
    size = len(message_buffer)
    print ( "    Data: <<<%s>>> & Size=%d" % (message_buffer[:size].decode('utf-8'), size) )
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print ( "    Properties: %s" % key_value_pair )
    RECEIVE_CALLBACKS += 1
    print ( "    Total calls received: %d" % RECEIVE_CALLBACKS )
    hubManager.forward_event_to_output("output1", message, 0)
    return IoTHubMessageDispositionResult.ACCEPTED
In the Visual Studio Code command palette:
Run the command Azure IoT Edge: Build and Push IoT Edge solution.
Select the deployment.debug.template.json file for your solution.
In the Azure IoT Hub Devices section of the Visual Studio Code Explorer view:
Right-click an IoT Edge device ID and then select Create Deployment for Single Device.
Tip
To confirm that the device you've chosen is an IoT Edge device, select it to expand the list of modules and verify the presence of $edgeHub and $edgeAgent. Every IoT Edge device includes these two modules.
Navigate to your solution's config folder, select the deployment.debug.amd64.json file, and then select Select Edge Deployment Manifest.
You'll see the deployment successfully created with a deployment ID in the integrated terminal.
You can check your container status by running the docker ps command in the terminal. If your Visual Studio Code and IoT Edge runtime are running on the same machine, you can also check the status in the Visual Studio Code Docker view.
Expose the IP and port of the module for the debugger
You can skip this section if your modules are running on the same machine as Visual Studio Code, as you are using localhost to attach to the container and already have the correct port settings in the .debug Dockerfile, the module's container createOptions settings, and the launch.json file. If your modules and Visual Studio Code are running on separate machines, follow the steps for your development language.
C#, including Azure Functions
Configure the SSH channel on your development machine and IoT Edge device and then edit the launch.json file to attach.
Node.js
Make sure the module on the machine to be debugged is running and ready for debuggers to attach, and that port 9229 is accessible externally. You can verify this by opening http://<target-machine-IP>:9229/json on the debugger machine. This URL should show information about the Node.js module to be debugged.
On your development machine, open Visual Studio Code and then edit launch.json so that the address value of the <your module name> Remote Debug (Node.js) profile (or <your module name> Remote Debug (Node.js in Windows Container) profile if the module is running as a Windows container) is the IP of the machine being debugged.
Java
Build an SSH tunnel to the machine to be debugged by running ssh -f <username>@<target-machine> -L 5005:127.0.0.1:5005 -N.
On your development machine, open Visual Studio Code and edit the <your module name> Remote Debug (Java) profile in launch.json so that you can attach to the target machine. To learn more about editing launch.json and debugging Java with Visual Studio Code, see the section on configuring the debugger.
Python
Make sure that port 5678 on the machine to be debugged is open and accessible.
In the code ptvsd.enable_attach(('0.0.0.0', 5678)) that you earlier inserted into main.py, change 0.0.0.0 to the IP address of the machine to be debugged. Build, push, and deploy your IoT Edge module again.
On your development machine, open Visual Studio Code and then edit launch.json so that the host value of the <your module name> Remote Debug (Python) profile uses the IP address of the target machine instead of localhost.
Debug your module
In the Visual Studio Code Debug view, select the debug configuration file for your module. The debug option name should be similar to <your module name> Remote Debug.
Open the module file for your development language and add a breakpoint:
- Azure Function (C#): Add your breakpoint to the file <your module name>.cs.
- C#: Add your breakpoint to the file Program.cs.
- Node.js: Add your breakpoint to the file app.js.
- Java: Add your breakpoint to the file App.java.
- Python: Add your breakpoint to the file main.py, in the callback method where you added the ptvsd.break_into_debugger() line.
- C: Add your breakpoint to the file main.c.
Select Start Debugging or press F5. Select the process to attach to.
In the Visual Studio Code Debug view, you'll see the variables in the left panel.
Note
The preceding example shows how to debug IoT Edge modules on containers. It added exposed ports to your module's container createOptions settings. After you finish debugging your modules, we recommend you remove these exposed ports for production-ready IoT Edge modules.
Build and debug a module remotely
With recent changes in both the Docker and Moby engines to support SSH connections, and a new setting in Azure IoT Tools that enables injection of environment settings into the Visual Studio Code command palette and Azure IoT Edge terminals, you can now build and debug modules on remote devices.
See this IoT Developer blog entry for more information and step-by-step instructions.
Next steps
After you've built your module, learn how to deploy Azure IoT Edge modules from Visual Studio Code.
To develop modules for your IoT Edge devices, see Understand and use Azure IoT Hub SDKs.
On Tue, Jun 23, 2009 at 8:29 PM, Mike Christie <micha...@cs.wisc.edu> wrote:
>
> Erez Zilber wrote:
>> Mike,
>>
>> I'm trying to debug a problem that we have with iscsiadm: I'm running
>> open-iscsi against multiple targets. At some point, I'm closing the
>> connection from one of the targets (i.e. on the target side). Then, I
>> try to logout from the initiator side, but something goes wrong. The
>> last thing that iscsiadm does is call recv from iscsid_response, and it
>> doesn't return (at least not after 10 minutes). I also see that in the
>> kernel, __iscsi_unbind_session calls scsi_remove_target and doesn't
>> return. I guess that this causes iscsiadm to wait on the recv call.
>
> Yeah, iscsiadm will wait for the iscsid operations like the unbind to
> complete, and that can take a while.
>
> If you stop the target and then we start the session shutdown process
> while we still think the session is up (we have not got a tcp connection
> error or rst or any other indication that it is bad, like a nop timing
> out), then we are going to end up firing the iscsi or scsi eh.
>
> If you have IO running, or if your LU requires a cache sync to be sent
> when shutting it down, then the worst case is that you have nops turned
> off, and for some reason the network layer does not return an error (just
> returns something we think is retryable, like EAGAIN) when we try to do
> sendpage/sendmsg. This will result in the scsi commands timing out. Then
> the aborts and other tmfs will time out, and then we will wait for
> replacement_timeout seconds to try and reconnect.
>
> If you have nops on, or the net layer returns an error, it would be a
> little faster because you do not have to wait for scsi commands to time
> out. The nop will time out after noop_timeout seconds, then we will wait
> for replacement_timeout seconds to reconnect. After that time we will
> fail everything.
> If you do not have IO running and your device does not require cache
> syncs, then it should be a lot shorter, but still may be a minute. The
> __iscsi_unbind_session/scsi_remove_target should complete quickly since
> they do not have to wait on IO and cache syncs to complete. We would
> just wait for the logout iscsi pdu to time out.
>
> There is also a bug where we retry the sending of data even though we
> know the connection is bad. This patch helps:
> ;a=commit;h=b138adb2df49967bf0a035143f734d33c4263963
> but what we want is to be able to break from the sendpage/sendmsg wait. I
> am working on a patch, but have hit some problems (for some reason, if I
> send a signal it does not break from the wait). This problem only adds
> maybe 30 seconds extra for the logout of a session, so I am not sure
> that is what you are hitting.
>
> So first check if your device needs a cache sync. You can check that by
> looking at /var/log/messages when the device is discovered. You will see
> something like:
>
> kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled,
> doesn't support DPO or FUA
>
> If write cache is enabled, then the scsi layer will send cache syncs.
>
> Then check your replacement_timeout. If that is really long, then we
> might be hitting that.
>
>> BTW - I'm not running with the latest code. My HEAD is commit
>> ef0357c4728ebba1a4b91a7f6d69c729a5f9e6e3. I don't know if any relevant
>> bug fixes were made lately.
>
> Just so you know, I normally work on linux-2.6-iscsi, which tracks
> upstream, then port to open-iscsi/kernel, so the newest kernel patches
> will be in there.
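The timers discussed in the reply above (the NOP timeout and replacement_timeout) are per-node settings in open-iscsi's iscsid.conf. A typical stanza looks roughly like the fragment below; the values shown are illustrative defaults, not recommendations for this problem.

```
# Interval and timeout (seconds) for iSCSI NOP-Out pings,
# used to detect a dead session quickly
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5

# Seconds to wait for a reconnect before failing outstanding commands
node.session.timeo.replacement_timeout = 120
```

Lowering replacement_timeout shortens how long a logout can hang when the target disappears, at the cost of failing I/O sooner during transient network problems.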
Eventually, it was caused by an internal bug that we had. After fixing it,
things look OK. Thanks for your help.

Erez
Hi,
I am using CouchDB 1.2.0. and couchdb-python 0.9dev.
I will shortly describe my scenario:
1. I am putting one attachment to a new document (no _rev)
2. If there is a resource conflict, it means the document was already in
the database, so I get it, to get the _rev.
3. I put then the attachment
4. At some point, you will get a "socket.error: [Errno 104] Connection reset by peer". This can either happen in the put_attachment or in the get operation.
This is the output of my test:
Round 0
Round 1
Round 2
Round 3
Round 4
Round 5
Round 6
Round 7
Round 8
Round 9
Traceback (most recent call last):
File "simple_test.py", line 30, in <module>
test1()
File "simple_test.py", line 10, in test1
db.put_attachment(doc, content, 'test-attachment', 'text/plain')
File "build/bdist.linux-i686/egg/couchdb/client.py", line 673, in
put_attachment
File "build/bdist.linux-i686/egg/couchdb/http.py", line 460, in put_json
File "build/bdist.linux-i686/egg/couchdb/http.py", line 439, in put
File "build/bdist.linux-i686/egg/couchdb/http.py", line 474, in _request
File "build/bdist.linux-i686/egg/couchdb/http.py", line 334, in request
File "/usr/local/python/2.7.2/lib/python2.7/httplib.py", line 548, in read
s = self._safe_read(self.length)
File "/usr/local/python/2.7.2/lib/python2.7/httplib.py", line 647, in
_safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/usr/local/python/2.7.2/lib/python2.7/socket.py", line 380, in read
data = self._sock.recv(left)
socket.error: [Errno 104] Connection reset by peer
For this test I have slightly modified the put_attachment method of couchdb-python, because the stock version does not allow putting an attachment to a document without a revision, but that operation is valid for CouchDB proper. This is the modified version of put_attachment:
def put_attachment(self, doc, content, filename=None, content_type=None):
    """Create or replace an attachment.

    Note that the provided `doc` is required to have a ``_rev`` field. Thus,
    if the `doc` is based on a view row, the view row would need to include
    the ``_rev`` field.

    If the `doc` has no ``_rev``, a new document will be created (if
    possible).

    :param doc: the dictionary or `Document` object representing the
                document that the attachment should be added to
    :param content: the content to upload, either a file-like object or
                    a string
    :param filename: the name of the attachment file; if omitted, this
                     function tries to get the filename from the file-like
                     object passed as the `content` argument value
    :param content_type: content type of the attachment; if omitted, the
                         MIME type is guessed based on the file name
                         extension
    :since: 0.4.1
    """
    if filename is None:
        if hasattr(content, 'name'):
            filename = os.path.basename(content.name)
        else:
            raise ValueError('no filename specified for attachment')
    if content_type is None:
        content_type = ';'.join(
            filter(None, mimetypes.guess_type(filename))
        )
    resource = _doc_resource(self.resource, doc['_id'])
    kwargs = {
        'body': content,
        'headers': {'Content-Type': content_type},
    }
    if '_rev' in doc:
        kwargs['rev'] = doc['_rev']
    status, headers, data = resource.put_json(filename, **kwargs)
    doc['_rev'] = data['rev']
And this is my test script:
import couchdb

db = couchdb.Server('')['unittestdb_deleteme']
doc_id = 'delete-me'

def test1():
    content = "this is the attachment"
    doc = { '_id' : doc_id }
    try:
        db.put_attachment(doc, content, 'test-attachment', 'text/plain')
    except couchdb.http.ResourceConflict:
        try:
            doc = db[doc_id]
        except Exception, e:
            print type(Exception)
            print e
        try:
            db.put_attachment(doc, content, 'test-attachment', 'text/plain')
        except Exception, e:
            print type(Exception)
            print e
    except Exception, e:
        print type(Exception)
        print e

for cnt in xrange(100):
    print "Round %d" % (cnt)
    test1()
PDF Library for C# and VB.NET applications
The fastest way to get started with the GemBox.Pdf library is by exploring our collection of C# and VB.NET examples. These are live examples that show supported features and APIs for achieving various PDF-related tasks with the GemBox.Pdf component.
System Requirements
GemBox.Pdf requires only .NET; it doesn't have any other dependency.
You can use it on:
- .NET Framework 3.5 - 4.8
- .NET Core 3.0
- Platforms that implement .NET Standard 2.0 or higher.
Hello World
The first step in using the GemBox.Pdf library is to add a reference to GemBox.Pdf.dll within your C# or VB.NET project. There are three ways to do that.
a) Add from NuGet.
You can add GemBox.Pdf as a package by using the following command from the NuGet Package Manager Console:
Install-Package GemBox.Pdf
Or you can search and add GemBox.Pdf from the NuGet Package Manager.
b) Add from Setup.
You can download the GemBox.Pdf Setup from this page. After installing the setup, you can add a reference to GemBox.Pdf.dll from the Global Assembly Cache (GAC).
c) Add from a DLL file.
You can download a GemBox.Pdf.dll file from this page and add a reference by browsing to it.
The second step is to add a directive for the GemBox.Pdf namespace.
For a C# project, use:
using GemBox.Pdf;
For a VB.NET project, use:
Imports GemBox.Pdf
The third step is to set the license key to use GemBox.Pdf in one of its working modes.
To use a Free mode in a C# project, use:
ComponentInfo.SetLicense("FREE-LIMITED-KEY");
To use a Free mode in a VB.NET project, use:
ComponentInfo.SetLicense("FREE-LIMITED-KEY")
You can read more about GemBox.Pdf's working modes on the Evaluation and Licensing help page.
The last step is to write your application-specific PDF code, like the following example code, which shows how to create a simple PDF document with two blank pages.
using GemBox.Pdf;

class Program
{
    static void Main()
    {
        // If using Professional version, put your serial key below.
        ComponentInfo.SetLicense("FREE-LIMITED-KEY");

        using (var document = new PdfDocument())
        {
            // Add a first empty page.
            document.Pages.Add();

            // Add a second empty page.
            document.Pages.Add();

            document.Save("Hello World.pdf");
        }
    }
}
Imports GemBox.Pdf

Module Program
    Sub Main()
        ' If using Professional version, put your serial key below.
        ComponentInfo.SetLicense("FREE-LIMITED-KEY")

        Using document As New PdfDocument()
            ' Add a first empty page.
            document.Pages.Add()

            ' Add a second empty page.
            document.Pages.Add()

            document.Save("Hello World.pdf")
        End Using
    End Sub
End Module
GemBox.Pdf library simplifies PDF page content operations by compiling them into elements of text, paths, and external objects (images and forms). These elements are currently read-only since the main goal was to extract the Unicode representation of text (see Reading example and Content Streams and Resources help page).
Adding new content to a PDF document is currently supported via low-level Objects, such as in this Content Stream example.
If you want to create complex PDF documents, use GemBox.Document, GemBox.Spreadsheet, and GemBox.Presentation, which all have PDF exporting capability. | https://www.gemboxsoftware.com/pdf/examples/c-sharp-vb-net-pdf-library/101 | CC-MAIN-2020-40 | refinedweb | 541 | 54.49 |
Scope Rules in C:
Scope rules in C define where a variable may be directly accessed after its declaration; the scope of a variable is the region of the program in which it is visible.
Variables in the C programming language can be declared in three places:
Local variable: Local variables are the variables that are declared inside a block or a function.
These variables can only be used inside that block or function.
Local variables can’t be accessed from outside of that block or function.
Example :
#include <stdio.h>

int main()
{
    /* Declaration of local variable */
    int a;

    /* initialization */
    a = 7;

    printf("value of a = %d\n", a);

    return 0;
}
Output :
value of a = 7
Global variable: Global variables are variables declared outside of any block or function. These variables can be accessed from anywhere in the program.
Once a global variable is declared you can use it throughout your entire program.
Example :
#include <stdio.h>

/* Declaration of global variable */
int a;

int main()
{
    /* initialization */
    a = 7;

    printf("value of a = %d\n", a);

    return 0;
}
Output :
value of a = 7
Formal parameter: Formal parameters are the parameters written in the function definition.
A formal parameter takes precedence over a global variable of the same name.
gtkmm: Gtk::RecentFilter Class Reference
RecentFilter can be used to restrict the files being shown in a RecentChooser. More...
#include <gtkmm/recentfilter.h>
Detailed Description
RecentFilter can be used to restrict the files being shown in a RecentChooser.
Files can be filtered based on their name (with add_pattern()), on their mime type (with add_mime_type()), on the application that has registered them (with add_application()), or by a custom filter function (with add_custom()).
Filtering by mime type handles aliasing and subclassing of mime types; e.g. a filter for text/plain also matches a file with mime type application/rtf, since application/rtf is a subclass of text/plain. Note that RecentFilter allows wildcards for the subtype of a mime type, so you can e.g. filter for image/*.
Normally, filters are used by adding them to a RecentChooser, see RecentChooser::add_filter(), but it is also possible to manually use a filter on a file with filter().
Member Typedef Documentation
For instance, bool on_custom(const Gtk::RecentFilter::Info& filter_info);.
Constructor & Destructor Documentation
Member Function Documentation
Adds a rule that allows resources based on their age - that is, the number of days elapsed since they were last modified.
Adds a rule that allows resources based on the name of the application that has registered them.
Adds a rule that allows resources based on the name of the group to which they belong.
Adds a rule that allows resources based on their registered MIME type.
Adds a rule that allows resources based on a pattern matching their display name.
Adds a rule allowing image files in the formats supported by GdkPixbuf.
Gets the human-readable name for the filter.
See set_name().
- Returns
- The name of the filter, or nullptr. The returned string is owned by the filter object and should not be freed.
Gets the fields that need to be filled in for the Gtk::RecentFilterInfo passed to filter()
This function will not typically be used by applications; it is intended principally for use in the implementation of Gtk::RecentChooser.
- Returns
- Bitfield of flags indicating needed fields when calling filter().
Get the GType for this class, for use with the underlying GObject type system.
Provides access to the underlying C GObject.
Provides access to the underlying C GObject.
Provides access to the underlying C instance. The caller is responsible for unrefing it. Use when directly setting fields in structs.
Sets the human-readable name of the filter; this is the string that will be displayed in the recently used resources selector user interface if there is a selectable list of filters.
Friends And Related Function Documentation
A Glib::wrap() method for this object.
- Returns
- A C++ instance that wraps this C instance. | https://developer.gnome.org/gtkmm/stable/classGtk_1_1RecentFilter.html | CC-MAIN-2017-30 | refinedweb | 455 | 56.15 |
Date Parsing¶
Different feed types and versions use wildly different date formats. Universal Feed Parser will attempt to auto-detect the date format used in any date element, and parse it into a standard Python 9-tuple, as documented in the Python time module.
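For orientation, here is what such a 9-tuple looks like when built with the standard library (an illustration only, not feedparser's internal code):

```python
import time

# Parse an RFC 822-style date in GMT into a standard 9-tuple
# (struct_time), the same shape stored in the *_parsed fields.
def rfc822_to_tuple(date_string):
    return time.strptime(date_string, "%a, %d %b %Y %H:%M:%S GMT")
```

For example, `rfc822_to_tuple("Thu, 01 Jan 2004 19:48:21 GMT")` yields a 9-tuple whose first six fields are `(2004, 1, 1, 19, 48, 21)`.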
The following elements are parsed as dates:
- feed.updated is parsed into feed.updated_parsed.
- entries[i].published is parsed into entries[i].published_parsed.
- entries[i].updated is parsed into entries[i].updated_parsed.
- entries[i].created is parsed into entries[i].created_parsed.
- entries[i].expired is parsed into entries[i].expired_parsed.
History of Date Formats¶
Here is a brief history of feed date formats:
- CDF states that all date values must conform to ISO 8601:1988. ISO 8601:1988 is not a freely available specification, but a brief (non-normative) description of the date formats it describes is available here: ISO 8601:1988 Date/Time Representations.
- RSS 0.90 has no date elements.
- Netscape RSS 0.91 does not specify a date format, but examples within the specification show RFC 822-style dates with 4-digit years.
- Userland RSS 0.91 states, “All date-times in RSS conform to the Date and Time Specification of RFC 822.” RFC 822 mandates 2-digit years; it does not allow 4-digit years.
- RSS 1.0 states that all date elements must conform to W3CDTF, which is a profile of ISO 8601:1988.
- RSS 2.0 states, “All date-times in RSS conform to the Date and Time Specification of RFC 822, with the exception that the year may be expressed with two characters or four characters (four preferred).”
- Atom 0.3 states that all date elements must conform to W3CDTF.
- Atom 1.0 states that all date elements “MUST conform to the date-time production in RFC 3339. In addition, an uppercase T character MUST be used to separate date and time, and an uppercase Z character MUST be present in the absence of a numeric time zone offset.”
Recognized Date Formats¶
Here is a representative list of the formats that Universal Feed Parser can recognize in any date element:
Recognized Date Formats
Universal Feed Parser recognizes all character-based timezone abbreviations defined in RFC 822. In addition, Universal Feed Parser recognizes the following invalid timezones:
- AT is treated as AST
- ET is treated as EST
- CT is treated as CST
- MT is treated as MST
- PT is treated as PST
Supporting Additional Date Formats¶
Universal Feed Parser supports many different date formats, but there are probably many more in the wild that are still unsupported. If you find other date formats, you can support them by registering them with registerDateHandler. It takes a single argument, a callback function. The callback function should take a single argument, a string, and return a single value, a 9-tuple Python date in UTC.
Registering a third-party date handler¶
```python
import feedparser
import re

_my_date_pattern = re.compile(
    r'(\d{,2})/(\d{,2})/(\d{4}) (\d{,2}):(\d{2}):(\d{2})')

def myDateHandler(aDateString):
    """parse a UTC date in MM/DD/YYYY HH:MM:SS format"""
    month, day, year, hour, minute, second = \
        _my_date_pattern.search(aDateString).groups()
    return (int(year), int(month), int(day),
            int(hour), int(minute), int(second), 0, 0, 0)

feedparser.registerDateHandler(myDateHandler)
d = feedparser.parse(...)
```
Your newly-registered date handler will be tried before all the other date handlers built into Universal Feed Parser. (More specifically, all date handlers are tried in “last in, first out” order; i.e. the last handler to be registered is the first one tried, and so on in reverse order of registration.)
If your date handler returns None, or anything other than a Python 9-tuple date, or raises an exception of any kind, the error will be silently ignored and the other registered date handlers will be tried in order. If no date handlers succeed, then the date is not parsed, and the *_parsed value will not be present in the results dictionary. The original date string will still be available in the appropriate element in the results dictionary.
Tip
If you write a new date handler, you are encouraged (but not required) to submit a patch so it can be integrated into the next version of Universal Feed Parser. | http://pythonhosted.org/feedparser/date-parsing.html | CC-MAIN-2017-13 | refinedweb | 702 | 54.32 |
react-click-away-listener
~700B React Click Away Listener.
Installation
yarn add react-click-away-listener
- It's quite small in size.
- It's built with TypeScript.
- It supports both Mouse and Touch Events.
- It works with Portals.
Usage
```jsx
import ClickAwayListener from 'react-click-away-listener';

const App = () => {
  const handleClickAway = () => {
    console.log('Hey, you can close the Popup now');
  };

  return (
    <div id="app">
      <ClickAwayListener onClickAway={handleClickAway}>
        <div>
          Some Popup, Nav or anything
        </div>
      </ClickAwayListener>
      <div id="something-else">Hola, mi amigos</div>
    </div>
  );
};
```
Caveats:
- Ensure the ClickAwayListener component has just one child, else React.Children.only will throw an error.
- It doesn't work with Text nodes. | https://reactjsexample.com/tiny-react-click-away-listener-built-with-react-hooks/ | CC-MAIN-2022-27 | refinedweb | 105 | 53.58 |
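To satisfy the single-child requirement, wrap sibling elements in one container element (a sketch):

```jsx
// Wrap siblings in a single element so the listener has exactly one child.
<ClickAwayListener onClickAway={handleClickAway}>
  <div>
    <button>Open menu</button>
    <ul>{/* menu items */}</ul>
  </div>
</ClickAwayListener>
```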
Opened 6 years ago
Closed 2 years ago
#3925 closed defect (fixed)
Cannot use '<' and '>' characters in comment.
Description
When I add a comment that includes a '<' or '>' character, the comment doesn't appear correctly.
Maybe this plugin doesn't escape HTML tags.
If I change '<' to '&lt;' and '>' to '&gt;', the comment appears correctly.
I think that we often want to use such characters in comments.
For example
#include <somefile> template <class T>
I think this plugin should automatically convert HTML-tag characters to escape sequences.
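The suggested conversion is a one-liner with the standard library in modern Python (a sketch of the idea only, not the plugin's actual code; the original plugin predates html.escape):

```python
import html

# Replace the HTML-significant characters (&, <, >) before rendering
# the comment, so raw tags display literally instead of being parsed.
def escape_comment(text):
    return html.escape(text, quote=False)
```

For example, escape_comment("#include <somefile>") returns "#include &lt;somefile&gt;".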
Attachments (1)
Change History (5)
comment:1 Changed 4 years ago by Robert
- Cc Milan added
- Summary changed from Cannot use '<' and '>' characters in comment. to Cannot use '<' and '>' characters in comment.
comment:2 Changed 3 years ago by andersm
- Owner changed from mikechml to andersm
Changed 2 years ago by rjollos
comment:3 Changed 2 years ago by rjollos
- Owner changed from andersm to rjollos
comment:4 Changed 2 years ago by rjollos
- Resolution set to fixed
- Status changed from new to closed
This seems to be working fine on the trunk now. I'll be checking in numerous fixes over the next day, but I think this issue is already resolved. Please reopen if you don't have success with the latest trunk version.
Note: See TracTickets for help on using tickets.
The sample source code below demonstrates how to create a simple QR code in C#. ByteScout QR Code can create simple QR codes and can be used from C#.
This code snippet works best when you need to quickly create a simple QR code in your C# application. Follow the instructions from scratch and copy the C# code. Use of ByteScout QR Code in C# is also explained in the documentation included with the product.
A free trial version of ByteScout QR Code is available on our website. Documentation and source code samples are included.
Try ByteScout QR Code today: Get 60 Day Free Trial or sign up for Web API
```csharp
using System.Diagnostics;
using Bytescout.BarCode;

namespace GeneralExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create and activate QRCode instance
            using (QRCode barcode = new QRCode())
            {
                barcode.RegistrationName = "demo";
                barcode.RegistrationKey = "demo";

                // Set
```
We're REALLY excited about this announcement and think many of you will be too 😉
On May 11th 2017, during Microsoft's Build conference keynote, Terry Myerson (EVP for Windows & Devices Group) made several announcements about the Windows Subsystem for Linux:
- We continue our partnership with our friends at Canonical to bring Ubuntu to the Windows Store app
- We are also working with the great teams at SUSE and Fedora to bring their Linux distros to the Windows Store & Windows Subsystem for Linux (WSL)
- You will be able to download these distros from the store and install them side-by-side on your PC(s)
- You'll be able to run one or more distros simultaneously if you wish
And at OSCON 2017 I delivered a talk on the architecture and history of Bash/WSL, and outlined these new features. Thanks to the great OSCON audience & attendees for all your great support & engagement 🙂
These new features deliver several key benefits including:
- You'll enjoy faster and more reliable downloads
- You can run multiple distros side-by-side, simultaneously if you wish
Guys, feel free to use the excellent tool that can install any distro including Arch right now
No – that allows you to SWITCH your distro’s, not install multiple and run them simultaneously.
RoliSoft’s tool does make it possible to INSTALL multiple distributions and SWITCH the active one, as you point out though it does not enable simultaneously running multiple distros.
That’s is really AWESOME!!
* That’s really AWESOME!!
Is CentOS on the list to eventually work with as well?
I am sure they’d love to hear from you! 😉
So you are saying that any distro can put their user space into the Windows store? The openness of how this has played out is really incredible.
Essentially, yes. It also requires a launcher app and a few bits & pieces, but that’s what we’re working through with Canonical, Fedora and SUSE. At some point we’d like to work with other distro owners to include their distro’s in the store too.
Please, do ArchLinux! That would be the best!
Speak to Arch’s owners 😉
This is really impressive. Thanks so much for your hard work — it’s enabled me to use Windows as my daily driver for the first time in almost 10 years.
ArchLinux or Gentoo will be add or not ?
One day, if the distro’ owners want to.
Why not have this posted to the WSL blog too?
WSL identifies itself as Linux. It is not done that way. It should identify itself as Windows.
That would break MANY tools that look specifically for Linux. `uname -a` returns (something like):
Linux 4.4.0-43-Microsoft #1-Microsoft Wed Dec 31 14:42:53 PST 2014 x86_64 x86_64 x86_64 GNU/Linux
Awesome! How hard would it be to bootstrap another distro, does it require cooperation with Microsoft or I can do it on my own?
The goal is to allow you to package your own distro’s & deliver via the store, but we need to work through the process with a few more distro’s first..
cool, i made a video tutorial for wsl at
i wanted to make some more, i tried docker, but couldnt get it to work, didn’t understand why it wouldn’t work in wsl. what ever the cause,i think this will be a major factor that will stop people from using wsl.
Docker client works well (Creators Update+) and you can use it to drive Docker for Windows and/or remote Docker hosts.
So is possible to run GUI apps along side native windows ones?
Is it in the plan to distribute apps running on WSL as Windows Store ones?
No – as per the docs (): “Bash on Windows provides developers with a familiar Bash shell and Linux environment in which you can run most Linux command-line tools, directly on Windows, UNMODIFIED, without needing an entire Linux virtual machine!”
Is there going to be support for docker on Linux subsystem?
Docker client already runs well. You can use it to drive Docker for Windows and/or remote Docker hosts. We do not currently run Docker images.
If any of these distros include a web browser such as w3m or lynx, isn’t this a violation of Windows Store Policy 10.2.1?
See
Oh, and I would also like to add that surely one of these distros ships with node.js, which is also banned from the Windows Store per the 10.2.1 policy.
It’s up to distro’ owners as to whether they ship with node installed, or just let the user install node via the distro’s package manager.
10.2.1 states that “Apps that browse the web must use the appropriate HTML and JavaScript engines provided by the Windows Platform.”
Most Node apps do not “browse the web”.
Slack, for example, is an Electron app () which is JavaScript, HTML & CSS running atop Node atop the V8 engine. And it's in the Windows Store.
These distro’s are the same/similar-to Docker images – sans kernel, minimal install footprint and support command-line scenarios.
Do the new distros support Linux desktop environments as well?
Support? No. Run? Maybe! WSL is a command-line developer tool which *MAY* run some/all of the Linux desktops/apps you need – at least to some degree. We don’t do anything to prevent X apps from running, but don’t focus our efforts on making them shine. If they work for you, great. If not, please file a bug, but understand that we won’t prioritize fixing an X/GUI related issue over a command-line tool issue.
This is glorious, I cannot wait to dig in and work with this!
Gee, abusing free software while keeping win32/uwp closed, you really are the worst a company can be.
Alternatively, providing much needed access to open/free software to users who otherwise would struggle to live entirely in the open/free ecosystem is a good thing.
No matter how much one might want it to be otherwise, and awesome though Linux is, one can spend a lifetime tweaking desktop Linux to run reliably, efficiently, and effectively on different types of PC’s and laptops. Some people don’t get to make that choice (esp. when at work) and many others just want to get their stuff done. Windows provides the well supported core platform upon which one can run commercial, closed, free, and open software simultaneously.
Could you please use apostrophes appropriately? The plural of “distro” isn’t “distro’s”, it’s “distros”!
And the singular contraction of “distribution” should be “distr’on” 😉 Fixed.
Plural nouns are written without an apostrophe, the title should be “New distros coming to Bash/WSL via Windows Store”
Thanks. Fixed
Can you run multiple copies of the same distro?
Not at present, but you can run multiple Consoles attached to the same distro “session” – just right-click the running distro tile and open a new console 🙂
What’s there to be excited about? Really. I could as easily install whatever distribution I want into a virtual machine and have ALL of linux, instead of just a terminal screen. Actually, if anything, I’d have Linux be the host and run windows virtually, if at all. This makes as much sense as using a bicycle to tow a car.
Seriously, virtual linux on a windows host? I’ve far more important things to do than beta-test stupidity.
Now try editing your source code using a Windows tool, and building / running / testing it in Linux. VM’s impose a tall-wall between Windows and Linux. WSL runs Linux apps alongside Windows apps & tools, and provides access to the host Windows filesystem to your distro’s so you can share access to files.
Sorry you think it’s stupid. If you don’t like it, don’t use it, but for the hundreds of thousands who’re using it daily, it’s clearly providing a lot of:.
Fantastic news!
It was Microsoft’s backing on the WSL project that convinced me to switch to a Windows box this year (after 6 years away)! There are still some kinks to work out, but I’m excited to see continued work and support on this!
Glad you’re digging our work 😉 LOTS more coolness on the way 🙂
Apostrophe’s, the confusors.
Thanks. Fixed.
I really LOVE what you’re doing with the Linux subsystem. Keep up the good work!
This is truly amazing! I don’t much care for the work on Windows desktop ever since 8, but WSL is a huge step in the right direction for those of us that like Windows but manage and work with Linux servers for a living.
Are there any plans for a terminal that supports tabs? Right now I actually run an Xorg server manually and then run gnome-terminal through it, but it’s a pita sometimes. I’d much rather use a terminal that supports tabs that’s native to Windows.
Great to hear John! 🙂 Yes, a tabbed console is on our backlog. You might have a better experience running ConEmu / Cmder / Console2 / Hyper / etc. than spawning a Linux terminal via X server.
Great work!
Now for Slackware and Arch! 😀
LOL 🙂 We’re keen to open up more broadly to community distro’s once we’ve polished the process with our current partners 🙂
That’s fair enough. I’m having more than enough fun with one distro right now – two or more may incapacitate me.
Can’t wait!
The year of Linux Desktop guise.
Can we see an officially supported Debian distro in the future? Most devs prefer Debian for hosting, for example in Docker.
We’re keen to open up more broadly to community distro’s once we’ve polished the process with our current partners 🙂
Interesting. But here’s a question – will the environments be interoperable? In particular, can I have a home directory that can be simultaneously mounted and used by all three environments?
You don’t want that! Because each env. differs somewhat from one another, and because you’ll likely end up with different scripts for different environments, sharing a single home folder can often end up with all manner of inconsistencies etc. Of course, you’re free to share common files that you can store under `/mnt/c/…` etc., and reference these common scripts from each environment, but I’d urge you to avoid sharing a single home.
Please, consider adding some margin to Windows Console. The text right next to the border is very ugly. Windows 10 is a very pleasant looking OS but the console looks very unprofessional. Just look how better terminal um Linux and MacOS looks. I dont think it would be a very costing change, would it?
We have a lot of improvements planned for the Console in the future. Interestingly, we originally reduced margins & borders based on user feedback 😉
I had to look it up myself, but it would appear that because “distros” is not a contraction, but a plural form of a shortened work, it doesn’t get an apostrophe unless you’re mentioning something it possesses, like “the distro’s package manager” or similar. Awesome news no matter what though!
“Distro” is a colloquially offset contraction of “distribution”. In theory, it should have been “distr’on” so it’s already wrong 😉 How does one pluralize a incorrect contraction? ‘Nuff said though – updating to reduce the cognitive annoyance.
Single word contractions that shorten a word with an apostrophe in the middle are rare if not antiquated. Informal single word contractions with an apostrophe at the end are easier to find in current grammar. Almost all contractions in modern usage combine two words into a single word through the use of an apostrophe. I’d love to be corrected on this with a solid reference to the contrary. I believe it would be more accurate to say that distro is a clipping of distribution.
Keep up the good work with WSL!
Who’re you calling antiquated? 😉 Mumble grumble bah humbug 😉
As a Linux user for almost 20 years I’m trying to see the advantages of this, especially when all a developer needs is a secure shell application which opens the door to endless amounts of processing power and scalability in a cloud. Given the recent security threats and Microsoft patching XP would it not be a better move to focus on legacy virtualization for industries that cannot realistically move to windows 10? There is still a significant amount of infrastructure running DOS applications or using XP.
WSL is all about developer productivity. WSL gives developers a local environment in which they can run and test their code in a near-production environment, without needing multiple, gigantic VM’s.
This is a boon since not everyone in the world has access to reliable broadband access to the internet in order to connect to a remove VM. Not everyone can/will pay for a hosted VM / machine to which they connect remotely.
Not everyone can get Linux to run natively on their hardware – many hardware vendors do not offer drivers for Linux, and few communities can reverse-engineer solid, reliable, high-performance, high-efficiency Linux drivers for most chipsets.
And while Microsoft & others offer decent virtualization for many recent OS', third parties offer support for even older OS'. But at some point, every product becomes unsupported and one has to move on, especially in rapidly moving industries like computing.
What a pleasure !
nit: there’s no apostrophe in ‘distros’
Thanks – fixed.
Whoa, this sounds cool! I can’t wait for the distros to be released.
-You’ll enjoy faster and more reliable downloads
Thanks for this. Apt get times out occasionally on requests while a VM is super smooth
Note – installing distro’s via the store won’t speed-up apt – that runs post-install.
Is there any cooperation with Docker for Windows? They still don’t support inotify (crucial for file watchers)
Thanks.
Docker Linux containers run within a Hyper-V virtual machine so can’t inotify the host, just like a remote VM can’t notify your host.
I thought that was the very question: Will WSL eventually enable Docker on Windows to “natively” run linux containers without virtualization i.e. Hyper-V
Not at this time.
Hi Rich,
Great stuff. I’d love to get other distributions working without just installing over Xenial (which is sort of quirky anyway). It seems to me it’ll soon just be a matter of packaging everything using the various distributors’ tools. Need a build engineer? 🙂
Currently, Docker on windows relies on a HyperV VM running “MobyLinux,” a slightly modified version of Alpine embedded in an EFI executable. At DockerCon a few weeks ago, there was an allusion from the Microsoft team that native Linux containers would soon be supported. I have a theory: WSL will soon be able to serve as a Docker execution environment, providing the system calls, ELF loader, dynamic linker, etc., while the HyperV container functionality will fill the role of Linux namespaces and cgroups. Is that on your team’s radar?
In fact, it’s already kind of working. I wrote a blog post a few weeks ago on how to get Linux-based Docker containers to run natively (if crudely) within WSL. The missing pieces are resource isolation (since WSL doesn’t implement namespaces or vnet creation yet, AFAIK), and the overlay filesystem (which is probably harder). But it still seems within the realm of possibility.
You did read the article, right? 😉 We’re moving distro’s to being independent APPX’s installable and runnable side-by-side. No overwriting will take place.
We (several groups across Microsoft) are figuring out our future container strategy. Nothing to announce at this time.
That said, you can run Docker client in Linux on WSL and drive a local or remote Docker host – even Docker for Windows running locally.
Yes, I know. You misunderstood. I have been waiting for this feature announcement for a while now, and am happy it came.
This is amazing! Could you tell me if It will be possible to use windows applications with the Linux tools, like gcc, java, python, etc? for example, it will be possible for an IDE installed on windows to use the gcc compiler or the python interpreter of the wsl? Thank you in advance!
Yes, you can do this. From Windows apps, you can spawn a Linux command via `bash -c “”` with which you can trigger a build, to start a debugger, etc. Alternatively, if you start sshd in Bash, you can use it to open a shell session within your dev tool & issue Linux commands etc., including drive gcc! Visual Studio and VSCode do the latter 😉
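Such interop calls look like this from the Windows side (the commands themselves are plain illustrative Linux commands, nothing WSL-specific about them):

```shell
# bash.exe hands a single command line to the default WSL distro,
# runs it in the Linux environment, and returns its output/exit code.
bash -c "uname -a"
bash -c "echo build-or-test step ran in the Linux environment"
```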
Will this work on Windows 10 S?
Rather than reply for the 100’th time this week, I wrote a new blog post instead 😉.
I see that Ubuntu and SUSE are on the store. Is there an expected release date for Fedora? Or a site showing the status?
Great question Kent – please ping @mattdm & @fedora on Twitter
For other readers that might see this I asked @mattdm and this is the response:
“We’re working on resolving some non-technical issues. I’m afraid I don’t have any more than that right now.”.
I am a little disappointed to have received such a cryptic response especially from an open source project. I guess for now we will just have SUSE and Ubuntu.
@richturner, since you clearly show fedora running on your pc, how did you obtain the distro? Is it then available to anyone working for MS? If so, is there not something MS can do to make it not just internal to MS developers (i.e. Switching between a public and private view, etc.)?
Fedora Lead @Mattdm (on Twitter) is working through some non-technical issues. Please do reach out to him on Twitter to find out the latest status.
I have not been able to install these on any machine since announcement. They install okay, but when you got to run them, it always says you are unable to access the application in the app store folder. Any idea why? | https://blogs.msdn.microsoft.com/commandline/2017/05/11/new-distros-coming-to-bashwsl-via-windows-store/?replytocom=15525 | CC-MAIN-2018-09 | refinedweb | 3,052 | 64.3 |
Is there any way of running and compiling with known errors in the code?
Here is my reason. I am using a reference to Word 2008 and Word 2010, so that the program will work with both versions. The trouble is that the computer I am using to test the code only has one installed (naturally), so the program won't compile or run for me to test other parts of the program. There must be a way of ignoring errors which won't make any difference to the compiled program at run time.
Is there any way of running and compiling with known errors in the code.
Compiling? yes, running? No because the program has to be error-free in order to execute the code. There is no point in trying to execute a program that contains compile-time errors. How do you expect the compiler to generate executable code when the source code is crap?
Do you really need references to both versions of word at the same time? If you have the reference to word 2010 just test your program on a computer that has word 2008 installed on it.
Not as easy as that, and it is not CRAP code it is CRAP software that doesn't allow for this to work. On VB6 it would have worked fine.
The reason for the errors is because word 2003 needs to have the declared
Imports Word = Microsoft.Office.Interop.Word to work, but 2007 onwards uses a completely different method and doesn;t recognise this statement, and thus the several hundred of statement that uses the "word" variable. The fact is that the compiled programme would never error because the code would route the programme to the correct version installed.
And I cant test on a computer that has 2010 on it as that will then error on the 2003 part of the code. nAnd in any case it is not so much as testing the programme as adding new code to other parts of the programme. I am at a loos as to what to do.
The only method I see available to me is to have a different database programme for each version of work, which seems ridiculous. But it looks like that is the way it has to be, or go back to VB6!
Couldn't you check the version and then conditionally branch from there? I found an example here: Click Here. Some sample code to look at might be helpful... By the way, what versions are you trying to support? The original post states Word 2008 and Word 2010, but Word 2008 is in Office 2008 for Mac only as far as I know.
The Microsoft.Office.Interop.Word namespace is documented on MSDN for Word 2003 and Word 2010 only, so apparently not distributed with any other versions of Office. That said, the Interop assemblies are available for redistribution. The assemblies for Office 2010 are found here: Click Here, I have no idea what will happen if you install and reference those assemblies on a system that has Word 2007 installed, and whatever code you write would have to be isolated by version and tested on a specific basis.
HKLM\Word.Application.CurVer also has the version number on my system (Office 2010), but I don't know whether that key exists in any/all other versions.
Again, it would be helpful to know what versions you need to support.
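One common workaround for exactly this situation is late binding: creating the Word COM object with CreateObject avoids referencing any version-specific interop assembly at compile time, so the same build can drive whichever Word version is installed. A sketch only (Option Strict must be Off for this file, and member calls are unchecked until run time):

```vbnet
' Late binding: no Imports of a specific interop assembly, so there
' are no compile-time errors on machines with a different Word version.
Dim wordApp As Object = CreateObject("Word.Application")
Dim version As String = CStr(wordApp.Version)  ' e.g. "11.0" = 2003, "14.0" = 2010
wordApp.Visible = False
' ... drive Word through late-bound calls, branching on 'version' ...
wordApp.Quit()
```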
Yes, Interesting reading, and it uses VB6, which seems to work fine without the errors. I am trying to use all versions of word, ie 2000,2002,2003,2007 and 2010. But as 2000 and 2002 are too different I have decided to drop them.
I have now managed to convert the errors to warnings by somehow adding the references even though the computer doesn't have the relevant versions installed, and it seems to work. I will know for sure when I try running the compiled programme on some other machines that only have one version installed, but I think it is going to work.
If you're all set, mark the thread as solved.
Thanks | http://www.daniweb.com/software-development/vbnet/threads/440593/force-compile-with-errors | CC-MAIN-2013-20 | refinedweb | 693 | 69.41 |
I have a list called deck, which has 104 elements. I want to create a for loop that displays images on a canvas in a simple GUI (which can only be run on CodeSkulptor; the link to my program is here: )
The loop only prints the first row; I think the way I update the coordinates of the centre of the image is what's wrong with my code.
```python
if center_d[0] >= WIDTH:
    center_s[1] += height
    center_d[1] += height
```
```python
def draw(canvas):
    global deck, cards, WIDTH, HEIGHT
    width = 70
    height = 106
    center_s = [41, 59]
    center_d = [41, 59]
    for card in deck:
        canvas.draw_image(deck_img, center_s, (width, height),
                          center_d, (width, height))
        center_s[0] += 70
        center_d[0] += 70
        if center_d[0] >= WIDTH:
            center_s[1] += height
            center_d[1] += height
```
You forgot about your center_s[0] and center_d[0] coordinates. They are growing constantly.
You need to reset them back to the start of the row, e.g. like this:

```python
if center_d[0] >= WIDTH:
    center_s[0] = 41
    center_d[0] = 41
    center_s[1] += height
    center_d[1] += height
```
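An alternative sketch that avoids mutable running coordinates entirely: compute each card's centre directly from its index with divmod. (WIDTH here is an assumed value giving eight 70-px cards per row; the 41/59 offsets match the question's code.)

```python
CARD_W, CARD_H = 70, 106
WIDTH = 560                   # assumed canvas width
PER_ROW = WIDTH // CARD_W     # cards per row

def card_center(index):
    # row/column of this card in the grid, then pixel centre
    row, col = divmod(index, PER_ROW)
    return [41 + col * CARD_W, 59 + row * CARD_H]
```

With this, the draw loop can call card_center(i) for each card and nothing ever needs resetting.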
05 November 2012 09:20 [Source: ICIS news]
SINGAPORE (ICIS)--
The shutdown is expected to last 20 days, and the company will continue to supply stocks during the period, the source said.
However, the company is not concerned about restarting the unit “as the margins are negative in base oils production,” the source added.
The shutdown is expected to have a limited impact on the local market in November, as downstream demand from consumers in
The plant has gone on and off stream several times this year, mostly because of a lack of feedstock supply, the sources said.
Scalaz used to have a `scalaz.Functor` for `scala.collection.Set`, but it was eventually removed because it relied on `Any`'s `==` method. You can read more about why `Functor[Set]` is a bad idea at Fake Theorems for Free. If `Set` had been truly parametric, we wouldn't have been able to define a `Functor` in the first place. Luckily, a truly parametric set has recently been added to Scalaz as `scalaz.ISet`, with preliminary benchmarks also showing some nice performance improvements. I highly recommend using `ISet` whenever you can!
Now we can see the problem more clearly; the type of `map` on `ISet` is too restrictive to be used inside of a `Functor` because of the `scalaz.Order` constraint:

```scala
def map[B: Order](f: A => B): ISet[B]
```
And it might seem like we've lost something useful by not having a `Functor` available. For example, we can't write the following:

```scala
val nes = OneAnd("2014-05-01",
  ISet.fromList("2014-06-01" :: "2014-06-22" :: Nil)) // a non-empty Set
val OneAnd(h, t) = nes.map(parseDate)
```

This is because the `map` function on `scalaz.OneAnd` requires a `scalaz.Functor` for the `F[_]` type parameter, which is `ISet` in the above example.
But we have a solution! It's called `Coyoneda` (also known as the Free Functor), and it'll hopefully be able to demonstrate why not having `Functor[ISet]` available has no fundamental, practical consequences.

`Coyoneda` can be defined in Scala like so:

```scala
trait Coyoneda[F[_], A] {
  type I
  def k: I => A
  def fi: F[I]
}
```
There are just three parts to it:

- `I`: an existential type
- `k`: a mapping from `I` to `A`
- `fi`: a value of `F[I]`
We can create a couple of functions to help with constructing a Coyoneda value:

```scala
def apply[F[_], A, B](fa: F[A])(_k: A => B): Coyoneda[F, B] { type I = A } =
  new Coyoneda[F, B] {
    type I = A
    val k = _k
    val fi = fa
  }

def lift[F[_], A](fa: F[A]): Coyoneda[F, A] =
  Coyoneda(fa)(identity[A])
```
The constructors allow any type constructor to become a Coyoneda value:

```scala
val s: Coyoneda[ISet, Int] = Coyoneda.lift(ISet.fromList(1 :: 2 :: 3 :: Nil))
```
Now here's the special part; we can define a `Functor` for all Coyoneda values:

```scala
implicit def coyonedaFunctor[F[_]]: Functor[({type λ[α] = Coyoneda[F, α]})#λ] =
  new Functor[({type λ[α] = Coyoneda[F, α]})#λ] {
    def map[A, B](ya: Coyoneda[F, A])(f: A => B) =
      Coyoneda(ya.fi)(f compose ya.k)
  }
```
What's interesting is that the `F[_]` type does not have to have a `Functor` defined for the Coyoneda to be mapped!
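To see why, it helps to trace what `map` actually does here: it never touches the underlying `F[I]` value, it only composes the new function onto `k`. A hedged sketch (assuming the Coyoneda `Functor` instance and its syntax are in scope; the values are invented for illustration):

```scala
// Start from an ISet wrapped in Coyoneda; no Order is needed from here on.
val s: Coyoneda[ISet, Int] = Coyoneda.lift(ISet.fromList(1 :: 2 :: 3 :: Nil))

// Each map just composes functions; the underlying ISet[Int] is untouched.
val t: Coyoneda[ISet, Int] = s.map(_ + 1).map(_ * 2)

// Only when we "run" the Coyoneda does ISet#map (and its Order constraint)
// get used, and only once, with the fused function.
val run: ISet[Int] = t.fi.map(t.k) // ISet(4, 6, 8)
```

This deferral is the whole trick: the `Order` constraint is paid exactly once, at the end, no matter how many times we mapped.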
Let's use this to try out our original example. We'll define a type alias to make things a bit cleaner:

```scala
type ISetF[A] = Coyoneda[ISet, A]
```
And we can use this new type instead of a plain `ISet`:

```scala
// Scala has a really hard time with inference here, so we have to help it out.
val functor = OneAnd.oneAndFunctor[ISetF](Coyoneda.coyonedaFunctor[ISet])
import functor.functorSyntax._

val nes = OneAnd[ISetF, String]("2014-05-01",
  Coyoneda.lift(ISet.fromList("2014-06-01" :: "2014-06-22" :: Nil)))
val OneAnd(h, t) = nes.map(parseDate)
```
So we've been able to map the Coyoneda! But how do we do something useful with it? We couldn't define a `Functor` because `ISet` needs `scalaz.Order` on the output type, but we can use the `map` method directly on `ISet`. We can use that function by running the Coyoneda like so:

```scala
// Converts ISetF back to an ISet, using ISet#map with the Order constraint
val s = t.fi.map(t.k).insert(h)
```
And we're done! We've been able to use Coyoneda to treat an `ISet` as a `Functor`, even though its `map` function is too constrained to have one defined directly. This same technique applies to `scala.collection.Set` and any other type constructor which would otherwise require a restricted `Functor`. I hope this has demonstrated that `Functor[Set]` not existing has no practical consequences, other than scalac not being as good at type inference.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog | https://typelevel.org/blog/2014/06/22/mapping-sets.html | CC-MAIN-2019-13 | refinedweb | 712 | 65.01 |
test provides a standard way of writing and running tests in Dart.
- Writing Tests
- Running Tests
- Asynchronous Tests
- Running Tests With Custom HTML
- Configuring Tests
- Tagging Tests
- Debugging
- Browser/VM Hybrid Tests
- Support for Other Packages
- Further Reading
Writing Tests
Tests are specified using the top-level `test()` function, and test assertions are made using `expect()`:
import "package:test/test.dart"; void main() { test("String.split() splits the string on the delimiter", () { var string = "foo,bar,baz"; expect(string.split(","), equals(["foo", "bar", "baz"])); }); test("String.trim() removes surrounding whitespace", () { var string = " foo "; expect(string.trim(), equals("foo")); }); }
Tests can be grouped together using the `group()` function. Each group's description is added to the beginning of its tests' descriptions.
import "package:test/test.dart"; void main() { group("String", () { test(".split() splits the string on the delimiter", () { var string = "foo,bar,baz"; expect(string.split(","), equals(["foo", "bar", "baz"])); }); test(".trim() removes surrounding whitespace", () { var string = " foo "; expect(string.trim(), equals("foo")); }); }); group("int", () { test(".remainder() returns the remainder of division", () { expect(11.remainder(3), equals(2)); }); test(".toRadixString() returns a hex string", () { expect(11.toRadixString(16), equals("b")); }); }); }
Any matchers from the `matcher` package can be used with `expect()` to do complex validations:
import "package:test/test.dart"; void main() { test(".split() splits the string on the delimiter", () { expect("foo,bar,baz", allOf([ contains("foo"), isNot(startsWith("bar")), endsWith("baz") ])); }); }() { HttepServer server; Uri url; setUp(() async { server = await HttpServer.bind('localhost', 0); url = Uri.parse("{server.address.host}:${server.port}"); }); tearDown(() async { await server.close(force: true); server = null; url = null; }); // ... }
Running Tests
A single test file can be run just using `pub run test path/to/test.dart`. Many tests can be run at a time using `pub run test path/to/dir`.

It's also possible to run a test on the Dart VM only by invoking it using `dart path/to/test.dart`, but this doesn't load the full test runner and will be missing some features.
The test runner considers any file that ends with `_test.dart` to be a test file. If you don't pass any paths, it will run all the test files in your `test/` directory, making it easy to test your entire application at once.

Tests can also be run in the browser by passing a platform flag: `pub run test -p chrome path/to/test.dart`.
`test` will take care of starting the browser and loading the tests, and all the results will be reported on the command line just like for VM tests. In fact, you can even run tests on both platforms with a single command: `pub run test -p "chrome,vm" path/to/test.dart`.
Restricting Tests to Certain Platforms
Some test files only make sense to run on particular platforms. They may use `dart:html` or `dart:io`, they might test Windows' particular filesystem behavior, or they might use a feature that's only available in Chrome. The `@TestOn` annotation makes it easy to declare exactly which platforms a test file should run on. Just put it at the top of your file, before any `library` or `import` declarations:

```dart
@TestOn("vm")

import "dart:io";

import "package:test/test.dart";

void main() {
  // ...
}
```
The string you pass to `@TestOn` is what's called a "platform selector", and it specifies exactly which platforms a test can run on. It can be as simple as the name of a platform, or a more complex Dart-like boolean expression involving these platform names.

You can also declare that your entire package only works on certain platforms by adding a `test_on` field to your package config file.
Platform Selectors
Platform selectors use the boolean selector syntax defined in the `boolean_selector` package, which is a subset of Dart's expression syntax that only supports boolean operations. The following identifiers are defined:
- `vm`: Whether the test is running on the command-line Dart VM.
- `chrome`: Whether the test is running on Google Chrome.
- `phantomjs`: Whether the test is running on PhantomJS.
- `firefox`: Whether the test is running on Mozilla Firefox.
- `safari`: Whether the test is running on Apple Safari.
- `ie`: Whether the test is running on Microsoft Internet Explorer.
- `node`: Whether the test is running on Node.js.
- `dart-vm`: Whether the test is running on the Dart VM in any context. It's identical to `!js`.
- `browser`: Whether the test is running in any browser.
- `js`: Whether the test has been compiled to JS. This is identical to `!dart-vm`.
- `blink`: Whether the test is running in a browser that uses the Blink rendering engine.
- `windows`: Whether the test is running on Windows.
- `android`: Whether the test is running on Android. If `vm` is false, this will be false as well, which means that this won't be true if the test is running on an Android browser.
- `ios`: Whether the test is running on iOS. If `vm` is false, this will be false as well, which means that this won't be true if the test is running on an iOS browser.
Asynchronous Tests
Tests written with `async`/`await` will work automatically. The test runner won't consider the test finished until the returned `Future` completes.

```dart
import "dart:async";

import "package:test/test.dart";

void main() {
  test("new Future.value() returns the value", () async {
    var value = await new Future.value(10);
    expect(value, equals(10));
  });
}
```
There are also a number of useful functions and matchers for more advanced asynchrony. The `completion()` matcher can be used to test `Future`s; it ensures that the test doesn't finish until the `Future` completes, and runs a matcher against that `Future`'s value.

```dart
import "dart:async";

import "package:test/test.dart";

void main() {
  test("new Future.value() returns the value", () {
    expect(new Future.value(10), completion(equals(10)));
  });

  test("new Future.error() throws the error", () {
    expect(new Future.error(new StateError("bad state")), throwsStateError);
  });
}
```
The `expectAsync()` function wraps another function and has two jobs. First, it asserts that the wrapped function is called a certain number of times, and will cause the test to fail if it's called too often; second, it keeps the test from finishing until the function is called the requisite number of times.

```dart
import "dart:async";

import "package:test/test.dart";

void main() {
  test("Stream.fromIterable() emits the values in the iterable", () {
    var stream = new Stream.fromIterable([1, 2, 3]);

    stream.listen(expectAsync1((number) {
      expect(number, inInclusiveRange(1, 3));
    }, count: 3));
  });
}
```
Stream Matchers

The test package also provides matchers for asynchronous streams. For example:

```dart
var stdout = new StreamQueue(new Stream.fromIterable([
  "WebSocket URL:",
  "ws://localhost:1234/",
  "Waiting for connection..."
]));

// Ignore lines from the process until it's about to emit the URL.
await expect(stdout, emitsThrough("WebSocket URL:"));

// Parse the next line as a URL.
var url = Uri.parse(await stdout.next);
expect(url.host, equals('localhost'));

// You can match against the same StreamQueue multiple times.
await expect(stdout, emits("Waiting for connection..."));
```

You can also define your own custom stream matchers by calling `new StreamMatcher()`.
Running Tests With Custom HTML
By default, the test runner will generate its own empty HTML file for browser tests. However, tests that need custom HTML can create their own files. These files have three requirements:
- They must have the same name as the test, with `.dart` replaced by `.html`.
- They must contain a `link` tag with `rel="x-dart-test"` and an `href` attribute pointing to the test script.
- They must contain `<script src="packages/test/dart.js"></script>`.
For example, if you had a test called `custom_html_test.dart`, you might write the following HTML file:

```html
<!doctype html>
<!-- custom_html_test.html -->
<html>
  <head>
    <title>Custom HTML Test</title>
    <link rel="x-dart-test" href="custom_html_test.dart">
    <script src="packages/test/dart.js"></script>
  </head>
  <body>
    // ...
  </body>
</html>
```
Configuring Tests
Skipping Tests
If a test, group, or entire suite isn't working yet and you just want it to stop complaining, you can mark it as "skipped". The test or tests won't be run, and, if you supply a reason why, that reason will be printed. In general, skipping tests indicates that they should run but are temporarily not working. If they're fundamentally incompatible with a platform, `@TestOn`/`testOn` should be used instead.

To skip a test suite, put a `@Skip` annotation at the top of the file:
To skip a test suite, put a
@Skip annotation at the top of the file:
@Skip("currently failing (see issue 1234)") import "package:test/test.dart"; void main() { // ... }
The string you pass should describe why the test is skipped. You don't have to include it, but it's a good idea to document why the test isn't running.
Groups and individual tests can be skipped by passing the `skip` parameter. This can be either `true` or a String describing why the test is skipped. For example:

```dart
import "package:test/test.dart";

void main() {
  group("complicated algorithm tests", () {
    // ...
  }, skip: "the algorithm isn't quite right");

  test("error-checking test", () {
    // ...
  }, skip: "TODO: add error-checking.");
}
```
Timeouts
By default, tests will time out after 30 seconds of inactivity. However, this can be configured on a per-test, -group, or -suite basis. To change the timeout for a test suite, put a `@Timeout` annotation at the top of the file:

```dart
@Timeout(const Duration(seconds: 45))

import "package:test/test.dart";

void main() {
  // ...
}
```
In addition to setting an absolute timeout, you can set the timeout relative to the default using `@Timeout.factor`. For example, `@Timeout.factor(1.5)` will set the timeout to one and a half times as long as the default: 45 seconds.
Timeouts can be set for tests and groups using the `timeout` parameter. This parameter takes a `Timeout` object just like the annotation. For example:

```dart
import "package:test/test.dart";

void main() {
  group("slow tests", () {
    // ...

    test("even slower test", () {
      // ...
    }, timeout: new Timeout.factor(2));
  }, timeout: new Timeout(new Duration(minutes: 1)));
}
```
Nested timeouts apply in order from outermost to innermost. That means that "even slower test" will take two minutes to time out, since it multiplies the group's timeout by 2.
Platform-Specific Configuration
Sometimes a test may need to be configured differently for different platforms. Windows might run your code slower than other platforms, or your DOM manipulation might not work right on Safari yet. For these cases, you can use the `@OnPlatform` annotation and the `onPlatform` named parameter to `test()` and `group()`. For example:

```dart
@OnPlatform(const {
  // Give Windows some extra wiggle-room before timing out.
  "windows": const Timeout.factor(2)
})

import "package:test/test.dart";

void main() {
  test("do a thing", () {
    // ...
  }, onPlatform: {
    "safari": new Skip("Safari is currently broken (see #1234)")
  });
}
```
Both the annotation and the parameter take a map. The map's keys are platform selectors which describe the platforms for which the specialized configuration applies. Its values are instances of some of the same annotation classes that can be used for a suite: `Skip` and `Timeout`. A value can also be a list of these values.
If multiple platforms match, the configuration is applied in order from first to last, just as they would in nested groups. This means that for configuration like duration-based timeouts, the last matching value wins.
You can also set up global platform-specific configuration using the package configuration file.
Tagging Tests
Tags are short strings that you can associate with tests, groups, and suites. They don't have any built-in meaning, but they're useful for selecting which tests to run and for applying configuration to groups of tests.
Whole-Package Configuration

Configuration that applies to an entire package can be set in the package configuration file mentioned above.
Browser/VM Hybrid Tests
Code that's running in the browser often needs to communicate with code running on the Dart VM, for example to test against a WebSocket server. The browser side of such a hybrid test might look like this:

```dart
var socket = new WebSocket('ws://localhost:$port');
var message = await socket.onMessage.first;
expect(message.data, equals("hello!"));
```
Note: If you write hybrid tests, be sure to add a dependency on the
stream_channel package, since you're using its API!
Support for Other Packages

By default, it's configured to produce ASCII when the user is running on Windows, where Unicode isn't supported. This ensures that testing libraries can use Unicode on POSIX operating systems without breaking Windows users.
Further Reading
Building a Weather Application Using IPC and Parallelism
Building a weather application using interprocess communication (IPC) mechanisms, namely named pipes and shared memory, in addition to parallelism through POSIX threads, to get weather data from a web API.

Interprocess communication is when two or more processes communicate with each other to share data or information so that they can work concurrently. Parallelism, on the other hand, is when the same process gets more resources, like more CPUs, to better perform its task.
Our application will consist of 3 processes, as shown in the blog image. The processes will communicate with each other in different ways to get and view weather data obtained from

- reader.c: reads the input file and puts the city names into the named pipe for the worker process
- worker.c: reads data from the named pipe, parallelizes the API requests using pthreads, and writes the results into the shared memory object for the viewer process
- viewer.c: reads the results from the shared memory object, writes them into a file, filters them, and then views them
The aim is to demonstrate some of C's facilities for doing tasks in parallel and/or concurrently, not to give a proper tutorial on these topics. For a good resource on these topics, I recommend this great guide on IPC.
The GitHub link for the repo and files:
Let’s get into work!
N.B. Comments in the code are very important!
Header File
Let's first look at the header file, "weather.h", which has the variables the processes will use.
```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// number of cities in data
#define NUM_CITIES 15
// length of strings
#define STRLEN 256
// how many bytes the shared memory object is
#define SIZE 2048
// name of shared object
#define SHM_OBJ "weather_data"
// named pipe (FIFO)
#define CITY_FIFO "cityfifo"
// construct first command with the city
#define CMD1 "curl -s '"
// append api key and extract only needed info and convert to csv
#define CMD2 "&appid=<your-api-key>' | jq -r '{name: .name, temperature: .main.temp, desc: .weather[].description} | [.name, .temperature, .desc] | @csv'"
// filter only cities which have clear sky or some clouds & the degree is below or equal to 25 celsius, add a header, then pretty print them
#define CMD3 "< ./weather_data.txt grep -E 'clear sky|clouds' | awk -F, '{if($2-273.15 <= 25) {print $1\",\"$2-273.15\",\"$3}}' | sed '1i city,temperature,description' | csvlook"

// error handling
void error(char * msg) {
    printf("%s: %s\n", msg, strerror(errno));
    exit(1);
}
```
The code is straightforward, but let's look at the commands, CMD1-3. In the code, we will use some command-line tools to process data instead of writing pure C code. For a full example of using the command line to process data, you can have a look at this blog.

In the CMD1 macro, we simply use the curl command to submit the request to the API. CMD2, the second part of the first command, appends the API key you get when you sign up on the website, then uses the jq command to extract only the needed info from the response and convert the result into CSV format.

On the other hand, CMD3, which is a command on its own, reads from the file where results will be stored, keeps only lines with "clear sky" or "clouds" using grep, then uses awk to compute the Celsius degrees from the second column and keep rows with values <= 25. Then it uses sed to add a header. Finally, csvlook comes to prettify the output.
Now, let’s go to the processes!
Reader Process
The "reader.c" file looks like this:
```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include "weather.h"

int main(int argc, char * argv[]) {
    if (argc != 2) {
        printf("Error: You need at least 1 argument which is the cities file ...\n");
        return 1;
    }

    // temporary variables
    char city[STRLEN / 4], cities[NUM_CITIES][STRLEN / 4];
    int fd, i = 0;

    // reading the cities from file and storing in cities array
    FILE * input = fopen(argv[1], "r");
    while (fgets(city, sizeof(city), input)) {
        // removing the new line character that may come from fgets
        city[strcspn(city, "\n")] = 0;
        sprintf(cities[i], "%s", city);
        // tracking how many cities we have
        ++i;
    }
    fclose(input); // closing the file

    // opening the pipe to send the cities to the worker process
    // mkfifo(fifo_name, permission)
    mkfifo(CITY_FIFO, 0666);
    // FIFO with write only
    fd = open(CITY_FIFO, O_WRONLY);
    // writing them backwards
    while (i-- > 0) {
        write(fd, cities[i], strlen(cities[i]) + 1);
    }
    // closing FIFO
    close(fd);

    return 0;
}
```
This process does a pretty simple task. It needs an input file to start; it reads it, opens a named pipe (FIFO) called CITY_FIFO, and lastly writes the input city names into it. The comments explain everything, so there's no need to repeat it here.

Here is how our input file looks with some example cities:
```
Cairo
Dubai
Rome
Paris
Madrid
Rio De Janeiro
Tokyo
Bangkok
New York City
Sydney
Bali
Cape Town
Havana
Berlin
Amsterdam
```
Worker Process
The process which performs the main task is "worker.c". The code looks as follows:
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/shm.h>
#include <sys/mman.h>
#include "weather.h"

void * request(void * inp) {
    // the final command to be called
    char cmd[STRLEN];
    // void * into char *
    char * city = inp;
    char * res = (char *) malloc(STRLEN * sizeof(char));

    // construct the command
    sprintf(cmd, CMD1, city);
    strcat(cmd, CMD2);

    // save the command output into a variable
    FILE * fp;
    fp = popen(cmd, "r");
    // error handling
    if (fp == NULL) {
        printf("Failed to run command\n");
        exit(1);
    }
    // read command output including spaces and commas
    // until new line
    fscanf(fp, "%[^\n]s", res);
    pclose(fp);

    // return the pointer to res as void *
    return (void *) res;
}

int main() {
    // temporary variables
    char cities[NUM_CITIES][STRLEN / 4];
    char city[STRLEN / 4];
    int fd, i = 0, ln;
    // to collect results from threads
    void * result;

    // open the pipe with read only
    mkfifo(CITY_FIFO, 0666);
    fd = open(CITY_FIFO, O_RDONLY);

    // read data from the pipe
    while (read(fd, city, sizeof(city))) {
        if (i == 0) ln = 0;
        else ln = 1;
        /* remove some unknown character at the end of the city names,
           except for the first city; it is coming from using fgets
           and the write to and/or read from the named pipe.
           I need to debug it more, but it works with this solution */
        strncpy(cities[i], city, strlen(city) - ln);
        i++;
    }
    // close pipe
    close(fd);

    // create threads (backwards) to go in parallel hitting the api
    pthread_t threads[NUM_CITIES];
    while (i-- > 0) {
        // create the threads and apply the request function
        // (pthread functions return an error number on failure, not -1)
        if (pthread_create(&threads[i], NULL, request, (void *) cities[i]) != 0) // passing arg pointer as void*
            error("Can't create thread\n");
    }

    // shared memory object stuff to share results
    // shared memory file descriptor
    int shm_fd;
    // pointer to shared memory object
    char * shm_ptr, * msg;

    // creating shared memory object
    shm_fd = shm_open(SHM_OBJ, O_CREAT | O_RDWR, 0666);
    // adjusting size of shared memory object
    ftruncate(shm_fd, SIZE);
    // memory mapping the shared memory object
    shm_ptr = mmap(0, SIZE, PROT_WRITE, MAP_SHARED, shm_fd, 0);
    // error handling
    if (shm_ptr == MAP_FAILED)
        error("Can't map\n");

    // collecting results from threads and writing into shared mem obj
    while (++i < NUM_CITIES) {
        if (pthread_join(threads[i], &result) != 0)
            error("Can't join thread\n");
        // void * to char *
        msg = result;
        // print the message into the shared mem obj ptr
        sprintf(shm_ptr, "%s\n", msg);
        /* advance the ptr by the city result length
           plus the new line character */
        shm_ptr += strlen(msg) + 1;
    }

    return 0;
}
```
It is a big one, isn't it? That's because it does almost everything. Let's have a look at the important parts.

Again, the comments are important because I tried to explain every line as much as possible!

First, the request function takes the city name as input, as a void * pointer because this is the argument type for threads, and appends it to CMD1 and CMD2 to construct the proper command string. It then runs the command using the popen function to collect the output and returns it as a void * pointer again, as this is the return type threads expect.
N.B. Threads functions can accept and return any pointers but they will give warnings while compiled.
Then comes the main function, which consists of several chunks. After declaring some variables comes the mkfifo() chunk: it opens the named pipe with read-only permissions and fills the cities array with its contents. Afterwards comes the parallelism part, pthreads: a thread is created for each city in the array and calls the request function. Then the process creates the shared memory object. Finally, another loop collects the results from the threads and writes them into the shared memory object for the viewer process.
N.B. There are other ways for interprocess communication, unamed pipes and message passing for instance. For an example on how to use the unamed pipes as well as forking new processes and work concurrently, have a look at this blog
Viewer Process
The “viewer.c” process performs the last task.
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/shm.h>
#include <sys/mman.h>
#include "weather.h"

int main() {
    // shared memory file descriptor
    int shm_fd;
    // pointer to shared memory object
    char * shm_ptr;

    // opening shared memory object with read only
    shm_fd = shm_open(SHM_OBJ, O_RDONLY, 0666);
    // memory mapping shared memory object
    shm_ptr = mmap(0, SIZE, PROT_READ, MAP_SHARED, shm_fd, 0);
    // unlink the shared memory object
    shm_unlink(SHM_OBJ);

    // writing shared memory object into a file
    FILE * out = fopen("./weather_data.txt", "w");
    fprintf(out, "%s", shm_ptr);
    fclose(out);

    // performing the third command to show filtered data
    char cmd[STRLEN];
    strcpy(cmd, CMD3);
    puts("\nCities with clear sky or clouds and degree <= 25:\n");
    system(cmd);

    return 0;
}
```
It opens the shared memory object and reads the information in it. Then it saves this data to disk in a file, "weather_data.txt". Finally, it executes the last command, which filters the data and pretty-prints it onto the console.
Compile & Run
Now, time to test everything. In three terminals, we compile and run the processes.
- Terminal A

```shell
gcc reader.c -o read
./read cities.txt
```
- Terminal B

```shell
gcc worker.c -o work -lpthread
./work
```
- Terminal C

```shell
gcc viewer.c -o view
./view

Cities with clear sky or clouds and degree <= 25:

| city      | temperature | description      |
| --------- | ----------- | ---------------- |
| Amsterdam | 4.85        | few clouds       |
| Berlin    | -1.45       | broken clouds    |
| Havana    | 23.00       | scattered clouds |
| Cape Town | 23.17       | clear sky        |
| Sydney    | 18.25       | clear sky        |
| New York  | 6.03        | broken clouds    |
| Tokyo     | 3.10        | few clouds       |
| Madrid    | 4.43        | clear sky        |
| Paris     | 4.25        | few clouds       |
| Rome      | 8.05        | overcast clouds  |
| Dubai     | 21.23       | clear sky        |
| Cairo     | 16.56       | broken clouds    |
```
Great! Everything is working fine. Now let's look at the full weather data file, "weather_data.txt".

```
"Amsterdam",278,"few clouds"
"Berlin",271.7,"broken clouds"
"Havana",296.15,"scattered clouds"
"Cape Town",296.32,"clear sky"
"Bali",299.15,"light rain"
"Sydney",291.4,"clear sky"
"New York",279.18,"broken clouds"
"Bangkok",300.51,"broken clouds"
"Tokyo",276.25,"few clouds"
"Rio de Janeiro",308.6,"few clouds"
"Madrid",277.58,"clear sky"
"Paris",277.4,"few clouds"
"Rome",281.2,"overcast clouds"
"Dubai",294.38,"clear sky"
"Cairo",289.71,"broken clouds"
```
Virtual Function Performance
One difference between C# and languages like AS3 and Java is that functions are non-virtual by default. But how much slower is a virtual function call than a non-virtual one? Today's article puts them head to head to find out.
For some background on virtual functions, see this article in the From AS3 to C# series that covers how they work. For some behind the scenes information on how virtual functions are implemented, check out the Wikipedia article on dynamic dispatch.
In short, here’s what we’re testing today:
```csharp
public class Parent
{
    // Virtual function in a parent class
    public virtual void VirtualFunction() { }

    // Non-virtual function in a parent class
    public void NonVirtualFunction() { }
}

public class Child : Parent
{
    // Virtual function in a child class overriding a
    // virtual function in the parent class
    public override void VirtualFunction() { }

    // Non-virtual function in a child class with the
    // same name as the non-virtual function in the
    // parent class
    public new void NonVirtualFunction() { }
}
```
And here’s the test script that compares the performance of these four function types.
```csharp
using UnityEngine;

public class Parent
{
    public virtual void VirtualFunction() { }
    public void NonVirtualFunction() { }
}

public class Child : Parent
{
    public override void VirtualFunction() { }
    public new void NonVirtualFunction() { }
}

public class TestScript : MonoBehaviour
{
    private const int NumIterations = 100000000;
    private string report;

    void Start()
    {
        var stopwatch = new System.Diagnostics.Stopwatch();
        var parent = new Parent();
        var child = new Child();

        stopwatch.Reset();
        stopwatch.Start();
        for (var i = 0; i < NumIterations; ++i)
        {
            parent.VirtualFunction();
        }
        var virtualParentTime = stopwatch.ElapsedMilliseconds;

        stopwatch.Reset();
        stopwatch.Start();
        for (var i = 0; i < NumIterations; ++i)
        {
            child.VirtualFunction();
        }
        var virtualChildTime = stopwatch.ElapsedMilliseconds;

        stopwatch.Reset();
        stopwatch.Start();
        for (var i = 0; i < NumIterations; ++i)
        {
            parent.NonVirtualFunction();
        }
        var nonVirtualParentTime = stopwatch.ElapsedMilliseconds;

        stopwatch.Reset();
        stopwatch.Start();
        for (var i = 0; i < NumIterations; ++i)
        {
            child.NonVirtualFunction();
        }
        var nonVirtualChildTime = stopwatch.ElapsedMilliseconds;

        report = "Test,Virtual Time,Non-Virtual Time\n"
            + "Parent," + virtualParentTime + "," + nonVirtualParentTime + "\n"
            + "Child," + virtualChildTime + "," + nonVirtualChildTime;
    }
}
```

I ran the test in this environment:

- Mac OS X 10.10.3
- Unity 5.0.2f1, Mac OS X Standalone, x86_64, non-development
- 640×480, Fastest, Windowed
And got these results:
When called on instances of either the
Parent or
Child class, virtual functions were about 7.5x slower than non-virtual functions. That’s a major gap and certainly a good reason to allow C# programmers to opt-out of them.
On the flip side, virtual function calls in this test are taking 0.00000235 milliseconds each. You’d need to make about 50,000 such calls to eat up one millisecond of CPU time. That’s a lot of calls, but not entirely out of the question for a complex game. If virtual functions were the only option—as in AS3 and Java—it’s possible that their overhead could appear during profiling as a significant portion of a 16 millisecond frame (60 FPS).
Performance problems might crop up quite a bit quicker than 50,000 function calls per frame. The CPU used for the test is actually quite fast, especially compared to many CPUs found in mobile devices. In rough terms, the test’s CPU is about twice as fast as a mobile CPU. That means you’d only need 25,000 virtual function calls to eat up a millisecond and that’s much more likely.
It’s definitely fine to make a few function calls virtual as it makes you a more productive programmer. However, be wary if you’re going to make thousands of virtual function calls per frame- you might end up with a performance problem.
In the end, should we be glad that functions are non-virtual by default in C#? Let me know your opinion in the comments.
#1 by Mars on May 18th, 2015 · | Quote
Virtual functions are useful for overriding, and are friendly to OOP and our programs.
#2 by devboy on October 3rd, 2015 · | Quote
Did you actually try to compile this with IL2CPP? It always produced garbage c++ for me in the past.
#3 by jackson on October 3rd, 2015 · | Quote
No. At the time IL2CPP was extremely unstable. It’s a lot better since then, so perhaps it’s worth another shot.
#4 by DungDajHjep on August 21st, 2016 · | Quote
Thank for test !
#5 by Moses on April 3rd, 2018 · | Quote
I recommend you run this test again after explicitly disabling function optimization and inlining. C sharp runs optimizations when JIT compiling the code, so it will optimize away those non-virtual empty methods. This link should explain how to disable the optimization and inlining.
#6 by jackson on April 3rd, 2018 · | Quote
That may well be the case with Mono as it’s definitely the case with IL2CPP. Consider this test C#:
IL2CPP outputs this C++:
And that compiles to this ARM assembly:
So the virtual function version is still slower, but the test doesn’t hold up very well when one of the functions is completely removed by the optimizer as you said would happen with the Mono JIT. A new performance test should be created to encourage the compiler to not remove the non-virtual function call, but in the meantime I’ve covered the IL2CPP output in-depth here.
#7 by Felipe Machado on February 12th, 2019 · | Quote
One of the biggest hits with virtual methods is that they can’t be inlined. So, with such small functions we may be experiencing the benefits of the ‘inline’ optimization in the non-virtual method and not seeing the actual cost of the virtual table lookup, which is a O(1) operation (it will be much faster for ‘polymorphic behavior’ than a ‘switch’ statement which could not be properly optimized into a jump table, for example). It would be nice to extend this test for a ‘direct’ method that couldn’t be inlined vs the virtual method.
#8 by jackson on February 12th, 2019 · | Quote
Excellent point and a great idea for a follow-up article!
#9 by JackMariani on October 21st, 2019 · | Quote
This might be late, and maybe you’ve already covered it on another post.
Anyway, I remember I read that sealed class offer better performance on virtual functions (see also:) so I tested the child with a sealed class and I got better results.
This is the class I used in the test. It might be the case the sealed class improve the performance of virtual functions.
#10 by jackson on October 21st, 2019 · | Quote
As the link mentions, the performance boost you’re seeing is due to the virtual call being turned into a non-virtual call.
sealedis a great keyword to give the compiler a hint that it should do this optimization. | https://jacksondunstan.com/articles/3048?replytocom=587331 | CC-MAIN-2019-51 | refinedweb | 1,056 | 52.9 |
Practical Recursion Schemes
Recursion schemes are elegant and useful patterns for expressing general computation. In particular, they allow you to ‘factor recursion out’ of whatever semantics you may be trying to express when interpreting programs, keeping your interpreters concise, your concerns separated, and your code more maintainable.
What’s more, formulating programs in terms of recursion schemes seems to help suss out particular similarities in structure between what might be seen as disparate problems in other domains. So aside from being a practical computational tool, they seem to be of some use when it comes to ‘hacking understanding’ in varied areas.
Unfortunately, they come with a pretty forbidding barrier to entry. While there are a few nice resources out there for learning about recursion schemes and how they work, most literature around them is quite academic and awash in some astoundingly technical jargon (more on this later). Fortunately, the accessible resources out there do a great job of explaining what recursion schemes are and how you might use them, so they go through some effort to build up the required machinery from scratch.
In this article I want to avoid building up the machinery meticulously and instead concentrate mostly on understanding and using Edward Kmett’s recursion-schemes library, which, while lacking in documentation, is very well put together and implements all the background plumbing one needs to get started.
In particular, to feel comfortable using recursion-schemes I found that there were a few key patterns worth understanding:
- Factoring recursion out of your data types using pattern functors and a fixed-point wrapper.
- Using the ‘Foldable’ & ‘Unfoldable’ classes, plus navigating the ‘Base’ type family.
- How to use some of the more common recursion schemes out there for everyday tasks.
The Basics
If you’re following along in GHCi, I’m going to first bring in some imports and add a useful pragma. I’ll dump a gist at the bottom; note that this article targets GHC 7.10.2 and recursion-schemes-4.1.2, plus I’ll also require data-ordlist-0.4.7.0 for an example later. Here’s the requisite boilerplate:
{-# LANGUAGE DeriveFunctor #-}
import Data.Functor.Foldable
import Data.List.Ordered (merge)
import Prelude hiding (Foldable, succ)
So, let’s get started.
Recursion schemes are applicable to data types that have a suitable recursive structure. Lists, trees, and natural numbers are illustrative candidates.
Being so dead-simple, let’s take the natural numbers as an illustrative/toy example. We can define them recursively as follows:
data Natural =
Zero
| Succ Natural
This is a fine definition, but many such recursive structures can also be defined in a different way: we can first ‘factor out’ the recursion by defining some base structure, and then ‘add it back in’ by using a recursive wrapper type.
The price of this abstraction is a slightly more involved type definition, but it unlocks some nice benefits — namely, the ability to reason about recursion and base structures separate from each other. This turns out to be a very useful pattern for getting up and running with recursion schemes.
The trick is to create a different, parameterized type, in which the new parameter takes the place of all recursive points in the original type. We can create this kind of base structure for the natural numbers example as follows:
data NatF r =
ZeroF
| SuccF r
deriving (Show, Functor)
This type must be a functor in this new parameter, so the type is often called a ‘pattern functor’ for some other type. I like to use the notation ‘<Constructor>F’ when defining constructors for pattern functors.
We can define pattern functors for lists and trees in the same way:
data ListF a r =
NilF
| ConsF a r
deriving (Show, Functor)
data TreeF a r =
EmptyF
| LeafF a
| NodeF r r
deriving (Show, Functor)
Now, to add recursion to these pattern functors we’re going to use the famous fixed-point type, ‘Fix’, to wrap them in:
type Nat = Fix NatF
type List a = Fix (ListF a)
type Tree a = Fix (TreeF a)
‘Fix’ is a standard fixed-point type imported from the recursion-schemes library. You can get a ton of mileage from it. It introduces the ‘Fix’ constructor everywhere, but that’s actually not much of an issue in practice. One thing I typically like to do is add some smart constructors to get around it:
zero :: Nat
zero = Fix ZeroF
succ :: Nat -> Nat
succ = Fix . SuccF
nil :: List a
nil = Fix NilF
cons :: a -> List a -> List a
cons x xs = Fix (ConsF x xs)
Then you can write expressions like ‘succ (succ (succ zero))’ without having to deal with the ‘Fix’ constructor explicitly. Note also that these expressions are Showable à la carte, for example in GHCi:
> succ (succ (succ zero))
Fix (SuccF (Fix (SuccF (Fix (SuccF (Fix ZeroF))))))
A Short Digression on ‘Fix’
The ‘Fix’ type is brought into scope from ‘Data.Functor.Foldable’, but it’s worth looking at it in some detail. It can be defined as follows, along with two helpful functions for working with it:
newtype Fix f = Fix (f (Fix f))
fix :: f (Fix f) -> Fix f
fix = Fix
unfix :: Fix f -> f (Fix f)
unfix (Fix f) = f
‘Fix’ has a simple recursive structure. For a given value, you can think of ‘fix’ as adding one level of recursion to it. ‘unfix’ in turn removes one level of recursion.
This generic recursive structure is what makes ‘Fix’ so useful: we can write some nominally recursive type we’re interested in without actually using recursion, but then package it up in ‘Fix’ to hijack the recursion it provides automatically.
Understanding Some Internal Plumbing
If we wrap a pattern functor in ‘Fix’ then the underlying machinery of recursion-schemes should ‘just work’. Here it’s worth explaining a little as to why that’s the case.
There are two fundamental type classes involved in recursion-schemes: ‘Foldable’ and ‘Unfoldable’. These serve to tease apart the recursive structure of something like ‘Fix’ even more: loosely, ‘Foldable’ corresponds to types that can be ‘unfixed’, and ‘Unfoldable’ corresponds to types that can be ‘fixed’. That is, we can add more layers of recursion to instances of ‘Unfoldable’, and we can peel off layers of recursion from instances of ‘Foldable’.
In particular ‘Foldable’ and ‘Unfoldable’ contain functions called ‘project’ and ‘embed’ respectively, corresponding to more general forms of ‘unfix’ and ‘fix’. Their types are as follows:
project :: Foldable t => t -> Base t t
embed :: Unfoldable t => Base t t -> t
I’ve found it useful while using recursion-schemes to have a decent understanding of how to interpret the type family ‘Base’. It appears frequently in type signatures of various recursion schemes and being able to reason about it can help a lot.
‘Base’ and Basic Type Families
Type families are type-level functions; they take types as input and return types as output. The ‘Base’ definition in recursion-schemes looks like this:
type family Base t :: * -> *
You can interpret this as a function that takes one type ‘t’ as input and returns some other type. An implementation of this function is called an instance of the family. The instance for ‘Fix’, for example, looks like:
type instance Base (Fix f) = f
In particular, a type family like ‘Base’ is a synonym for instances of the family. So using the above example: anywhere you see something like ‘Base (Fix f)’ you can mentally replace it with ‘f’.
Instances of the ‘Base’ type family have a structure like ‘Fix’, but using ‘Base’ enables all the internal machinery of recursion-schemes to work out-of-the-box for types other than ‘Fix’ alone. This has a typical Kmettian flavour: first solve the most general problem, and then recover useful, specific solutions to it automatically.
Predictably, ‘Fix f’ is an instance of ‘Base’, ‘Foldable’, and ‘Unfoldable’ for some functor ‘f’, so if you use it, you can freely use all of recursion-schemes’s innards without needing to manually write any instances for your own data types. But as mentioned above, it’s worth noting that you can exploit the various typeclass & type family machinery to get by without using ‘Fix’ at all: see i.e. Danny Gratzer’s recursion-schemes post for an example of this.
Some Useful Schemes
So, with some discussion of the internals out of the way, we can look at some of the more common and useful recursion schemes. I’ll concentrate on the following four, as they’re the ones I’ve found the most use for:
- catamorphisms, implemented via ‘cata’, are generalized folds.
- anamorphisms, implemented via ‘ana’, are generalized unfolds.
- hylomorphisms, implemented via ‘hylo’, are anamorphisms followed by catamorphisms (corecursive production followed by recursive consumption).
- paramorphisms, implemented via ‘para’, are generalized folds with access to the input argument corresponding to the most recent state of the computation.
Let me digress slightly on nomenclature.
Yes, the names of these things are celebrations of the ridiculous. There’s no getting around it; they look like self-parody to almost anyone not pre-acquainted with categorical concepts. They have been accused — probably correctly — of being off-putting.
That said, they communicate important technical details and are actually not so bad when you get used to them. It’s perfectly fine and even encouraged to arm-wave about folds or unfolds when speaking informally, but the moment someone distinguishes one particular style of fold from another via a prefix like i.e. para, I know exactly the relevant technical distinctions required to understand the discussion. The names might be silly, but they have their place.
Anyway.
There are many other more exotic schemes that I’m sure are quite useful (see Tim Williams’s recursion schemes talk, for example), but I haven’t made use of any outside of these four just yet. The recursion-schemes library contains a plethora of unfamiliar schemes just waiting to be grokked, but in the interim even cata and ana alone will get you plenty far.
Now let’s use the motley crew of schemes to do some useful computation on our example data types.
Catamorphisms
Take our natural numbers type, ‘Nat’. To start, we can use a catamorphism to represent a ‘Nat’ as an ‘Int’ by summing it up.
natsum :: Nat -> Int
natsum = cata alg where
alg ZeroF = 0
alg (SuccF n) = n + 1
Here ‘alg’ refers to ‘algebra’, which is the function that we use to define our reducing semantics. Notice that the semantics are not defined recursively! The recursion present in ‘Nat’ has been decoupled and is handled for us by ‘cata’. And as a plus, we still don’t have to deal with the ‘Fix’ constructor anywhere.
As a brief aside: I like to write my recursion schemes in this way, but your mileage may vary. If you’d like to enable the ‘LambdaCase’ extension, then another option is to elide mentioning the algebra altogether using a very simple case statement:
{-# LANGUAGE LambdaCase #-}
natsum :: Nat -> Int
natsum = cata $ \case ->
ZeroF -> 0
SuccF n -> n + 1
Some people find this more readable.
To understand how we used ‘cata’ to build this function, take a look at its type:
cata :: Foldable t => (Base t a -> a) -> t -> a
The ‘Base t a -> a’ term is the algebra; ‘t’ is our recursive datatype (i.e. ‘Nat’), and ‘a’ is whatever type we’re reducing a value to.
Historically I’ve found ‘Base’ here to be confusing, but here’s a neat trick to help reason through it.
Remember that ‘Base’ is a type family, so for some appropriate ‘t’ and ‘a’, ‘Base t a’ is going to be a synonym for some other type. To figure out what ‘Base t a’ corresponds to for some concrete ‘t’ and ‘a’, we can ask GHCi via this lesser-known command that evaluates type families:
> :kind! Base Nat Int
Base Nat Int :: *
= NatF Int
So in the ‘natsum’ example the algebra used with ‘cata’ must have type ‘NatF Int -> Int’. This is pretty obvious for ‘cata’, but I initially found that figuring out what type should be replaced for ‘Base’ exactly could be confusing for some of the more exotic schemes.
As another example, we can use a catamorphism to implement ‘filter’ for our list type:
filterL :: (a -> Bool) -> List a -> List a
filterL p = cata alg where
alg NilF = nil
alg (ConsF x xs)
| p x = cons x xs
| otherwise = xs
It follows the same simple pattern: we define our semantics by interpreting recursion-less constructors through an algebra, then pump it through ‘cata’.
Anamorphisms
These running examples are toys, but even here it’s really annoying to have to type ‘succ (succ (succ (succ (succ (succ zero)))))’ to get a natural number corresponding to six for debugging or what have you.
We can use an anamorphism to build a ‘Nat’ value from an ‘Int’:
nat :: Int -> Nat
nat = ana coalg where
coalg n
| n <= 0 = ZeroF
| otherwise = SuccF (n - 1)
Just as a small detail: to be descriptive, here I’ve used ‘coalg’ as the argument to ‘ana’, for ‘coalgebra’.
Now the expression ‘nat 6’ will do the same for us as the more verbose example above. As always, recursion is not part of the semantics; to have the integer ‘n’ we pass in correspond to the correct natural number, we use the successor value of ‘n — 1’.
Paramorphisms
As an example, try to express a factorial on a natural number in terms of ‘cata’. It’s (apparently) doable, but an implementation is not immediately clear.
A paramorphism will operate on an algebra that provides access to the input argument corresponding to the running state of the recursion. Check out the type of ‘para’ below:
para :: Foldable t => (Base t (t, a) -> a) -> t -> a
If we’re implementing a factorial on ‘Nat’ values then ‘t’ is going to correspond to ‘Nat’ and ‘a’ is going to correspond to (say) ‘Integer’. Here it might help to use the ‘:kind!’ trick to help reason through the requirements of the algebra. We can ask GHCi to help us out:
> :kind! Base Nat (Nat, Int)
Base Nat (Nat, Int) :: *
= NatF (Nat, Int)
Side note: after doing this trick a few times you’ll probably find it much easier to reason about type families sans-GHCi. In any case, we can now implement an algebra corresponding to the required type:
natfac :: Nat -> Int
natfac = para alg where
alg ZeroF = 1
alg (SuccF (n, f)) = natsum (succ n) * f
Here there are some details to point out.
The type of our algebra is ‘NatF (Nat, Int) -> Int’; the value with the ‘Nat’ type, ‘n’, holds the most recent input argument used to compute the state of the computation, ‘f’.
If you picture a factorial defined as
0! = 1
(k + 1)! = (k + 1) * k!
Then ‘n’ corresponds to ‘k’ and ‘f’ corresponds to ‘k!’. To compute the factorial of the successor to ‘n’, we just convert ‘succ n’ to an integer (via ‘natsum’) and multiply it by ‘f’.
Paramorphisms tend to be pretty useful for a lot of mundane tasks. We can easily implement ‘pred’ on natural numbers via ‘para’:
natpred :: Nat -> Nat
natpred = para alg where
alg ZeroF = zero
alg (SuccF (n, _)) = n
We can also implement ‘tail’ on lists. To check the type of the required algebra we can again get some help from GHCi; here I’ll evaluate a general type family, for illustration:
> :set -XRankNTypes
> :kind! forall a b. Base (List a) (List a, b)
forall a b. Base (List a) (List a, b) :: *
= forall a b. ListF a (Fix (ListF a), b)
Providing an algebra of the correct structure lets ‘tailL’ fall out as follows:
tailL :: List a -> List a
tailL = para alg where
alg NilF = nil
alg (ConsF _ (l, _)) = l
You can check that ‘tailL’ indeed returns the tail of its argument.
Hylomorphisms
Hylomorphisms can express general computation — corecursive production followed by recursive consumption. Compared to the other type signatures in recursion-schemes, ‘hylo’ is quite simple:
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
It doesn’t even require the full structure built up for i.e. ‘cata’ and ‘ana’; just very simple F-{co}algebras.
My favourite example hylomorphism is an absolutely beautiful implementation of mergesort. I think it helps illustrate how recursion schemes can help tease out incredibly simple structure in what could otherwise be a more involved problem.
Our input will be a Haskell list containing some orderable type. We’ll use it to build a balanced binary tree via an anamorphism and then tear it down with a catamorphism, merging lists together and sorting them as we go.
The resulting code looks like this:
mergeSort :: Ord a => [a] -> [a]
mergeSort = hylo alg coalg where
alg EmptyF = []
alg (LeafF c) = [c]
alg (NodeF l r) = merge l r
coalg [] = EmptyF
coalg [x] = LeafF x
coalg xs = NodeF l r where
(l, r) = splitAt (length xs `div` 2) xs
What’s more, the fusion achieved via this technique is really quite lovely.
Wrapping Up
Hopefully this article helps fuel any ‘programming via structured recursion’ trend that might be ever-so-slowly growing.
When programming in a language like Haskell, a very natural pattern is to write little embedded languages and mini-interpreters or compilers to accomplish tasks. Typically these tiny embedded languages have a recursive structure, and when you’re interpreting a recursive structure, you have use all these lovely off-the-shelf strategies for recursion available to you to keep your programs concise, modular, and efficient. The recursion-schemes library really has all the built-in machinery you need to start using these things for real.
If you’re interested about hearing about using recursion schemes ‘for real’ I recommend Tim Williams’s Exotic Tools For Exotic Trades talk (for a motivating example for the use of recursion schemes in production) or his talk on recursion schemes from the London Haskell User’s Group a few years ago.
So happy recursing! I’ve dropped the code from this article into a gist.
Thanks to Maciej Woś for review and helpful comments. | https://medium.com/@jaredtobin/practical-recursion-schemes-c10648ec1c29 | CC-MAIN-2017-34 | refinedweb | 3,027 | 57.3 |
Hello
I've attached an example, and the ContentTemplateSelector will cause a designer exception.
Build and debug works fine.
First. In using navigation I want to control everything by binding in MainWindowViewModel.
Each UserControl is a different screen.
However, the combo box and button are the same.
So, when you move the view, the data should remain the same.
This is actually implemented using RadTabControl.
Tab control does not have a hierarchy, so I want to use navigation.
[TabControl Sample.xaml] : This is automatically set to the datacontext in the window if you do not set the datacontext in the usercontrol.
<telerik:RadTabItem <telerik:RadTabItem.Content> <userControl:UserControlView/> </telerik:RadTabItem.Content> </telerik:RadTabItem>
Second. In the current source, Hierarchy 1 and 2 are set to View(UserControlMain - This is not really necessary.).
I want to open only the hierarchy by clicking on the hierarchy. (view maintained, only IsExpanded)
However, if you don't put a View in the selector, an exception will be thrown.
"Must disconnect specific child from current parent Visual before attaching to new parent Visual."
This made it happen when you choose Hierarchy 3.
Images are also attached for easy understanding.
I will be glad for your help.
thank you.
5 Answers, 1 is accepted
Hello,
Thank you for the updated project and image.
The reason why the bindings do not work as expected is that two separate instances of the MainWindowViewModel class are created and used - one in the Resources collection of the MainWindow (in XAML) and one in the constructor of the window. By making the following modification and ensuring that the MainWindow and the views use the same instance, the bindings start functioning as desired at my end:
public MainWindow() { InitializeComponent(); this.DataContext = this.Resources["MainWindowViewModel"]; }
As for the selection of the hierarchy items, I believe you can achieve the desired result by setting the IsSelectable property of these items to False:
<telerik:RadNavigationViewItem
I'm attaching the modified version of the project with these two updates. Please let me know if it now functions as expected or if there's something of importance which I've missed.
Regards,
Dilyan Traykov
Progress Telerik
Love the Telerik and Kendo UI products and believe more people should try them? Invite a fellow developer to become a Progress customer and each of you can get a $50 Amazon gift voucher.
Hello Psyduck,
Thank you for the provided images and project.
To resolve your first issue, you can define the MainWindowViewModel as a static resource in the MainWindow and pass this instance as the DataContext of the different templates:
<local:MainWindowViewModel x: <DataTemplate x: <local:UserControlMain </DataTemplate> <DataTemplate x: <local:UserControlA </DataTemplate> <DataTemplate x: <local:UserControlB </DataTemplate> <DataTemplate x: <local:UserControlC </DataTemplate>
As for the exception thrown in the NavigationViewContentTemplateSelector, you need to ensure that a template is returned in each scenario. You can add an additional empty template and use this if you find it possible:
public override DataTemplate SelectTemplate(object item, DependencyObject container) { var navigationItemModel = item as RadNavigationViewItem; switch (navigationItemModel.Content) { case "MainView": return this.UserControlMain; case "Hierarchy 1": return this.UserControlMain; case "Hierarchy 2": return this.UserControlMain; case "UserControrl A": return this.ATemplate; case "UserControrl B": return this.BTemplate; case "UserControrl C": return this.CTemplate; default: return this.EmptyTemplate; } }
Finally, if you wish to modify the behavior of the items when they are clicked or expanded, you can inherit the class and override its OnClick and OnIsExpandedChanged methods:
public class CustomNavigationViewItem : RadNavigationViewItem { protected override void OnIsExpandedChanged() { } protected override void OnClick() { } }
I hope you find this information helpful. For your convenience, I've also attached a modified version of the project which I believe should work as you expect. Please have a look and let me know if I've missed something of importance.
Regards,
Dilyan Traykov.
Your example worked perfectly, but my source is not binding.
I reattach the error sample source.
MainWindow still throws a "NullReferenceException" exception was thrown.
It's only a designer crash.
1. The difference is the way MainView creates a StartView before opening it and binds it.
I did the initial binding, but it seems to be initialized continuously when I do the Selector. I'd appreciate it if you check it out.
2. In your source, the hierarchy of navigation items is now linked to UserControlMainView.
I want to keep these as the previous View, not the MainView. (If you click Hierarchy 3 in UC-B-View, UC-B-View will be maintained.)
Is this also possible with a template?
Thanks.
Thank you. This works perfectly the way I want.
But still MainWindow view throws Exception. (Designer crash)
The example sent above is the same. Is there any solution to this?
"NullReferenceException: Object reference not set to an instance of an object."
Hello.
Null-checked to avoid null exceptions. And if that doesn't apply, you want an empty view or previous view.
public override DataTemplate SelectTemplate(object item, DependencyObject container) { if (item is Telerik.Windows.Controls.RadNavigationViewItem navigationItemModel) { switch (navigationItemModel.Content) { case "MainView": return this.UserControlMain; case "UserControrl A": return this.ATemplate; case "UserControrl B": return this.BTemplate; case "UserControrl C": return this.CTemplate; default: return this.EmptyViewTemplate; } } else { return this.EmptyViewTemplate; } }
I created EmptView.xaml and used return.
I want to simplify this. Is emptyview.xaml absolutely necessary?
Designer crashes if you put 'null' in EmptyViewTemplate.
"Must disconnect specific child from current parent Visual before attaching to new parent Visual."
I am asking for a solution to this.
Thanks.
At my end, returning null both in the switch statement and in the else block does not result in any exceptions either during runtime or in the designer. I'm using Visual Studio 16.9.4.
Nonetheless, if you do expect the default case of the switch statement to be hit, you can define the EmptyViewTemplate property as follows and delete the .xaml and .xaml.cs files:
public DataTemplate EmptyViewTemplate = new DataTemplate();
Please let me know if this works for you.
Simply use the EmptyViewtemplate in the way you suggested and it works fine.
When EmptyView returns, does it appear as a blank screen?
Thanks. | https://www.telerik.com/forums/navigation-content-template-selector-and-hierarchy-binding-question | CC-MAIN-2021-43 | refinedweb | 1,017 | 50.53 |
This is your resource to discuss support topics with your peers, and learn from each other.
12-06-2012 05:55 AM
hi every one,
when i run my application then i get a message on console
free malloc object that is not allocated:../../dlist.c:1078
and my application is forced close i dont know what is it please help me.
02-24-2013 12:37 PM
mohdfarhanakram wrote:
hi every one,
when i run my application then i get a message on console
free malloc object that is not allocated:../../dlist.c:1078
and my application is forced close i dont know what is it please help me.
since today I'm getting
free malloc object that is not allocated:../../dlist.c:1096
my app is running fine till the end and just before exit() I'm getting this error
have not found out the reason yet
02-25-2013 03:12 AM
02-25-2013 03:24 AM
simon_hain wrote:
we also have the same issue and were unable to resolve it so far.
due to the message i suspect the list implementation to have an issue with cleaning up, maybe the deconstructor is called twice internally.
good to hear that I'm not alone
in this case it also happens without creating/opening a Page with ListView
it's a NavigationPane
perhaps we have to wait for next OS
I have only tried on 10.0.9.xxx SDK from 10.1 Beta IDE
have you tried with Q10 Simulator and 10.10. ?
02-25-2013 03:38 AM
02-25-2013 04:30 AM
simon_hain wrote:
no, i try to avoid simulators wherever i can, guess it must be some history with bb java development
same for me ;-)
02-25-2013 04:50 AM
what i noticed is that when clean my project the message didn't return ( and indeed i think this is a internal error )
02-25-2013 04:52 AM
xtravanta wrote:
what i noticed is that when clean my project the message didn't return ( and indeed i think this is a internal error )
already did this
clean project didn't help
02-25-2013 07:59 AM
just tested with Q10 Simulator running 10.1 OS
used same SDK 10.0.9.2372
same error happens on 10.1 but exit is fast - no delay noticable
taking a look at the logs:
Z10 Device, 10.0.9: .core size 89'812'992
Q10 Simulator, 10.10: .core size only 749'568
so I really believe this error is an internal one and I spent my sunday for nothing
aaah no: looking for errors always means learning new things - in this case about the .core files Peter Hansen mentioned in the other thread ;-)
03-17-2013 11:40 PM
I just spent the last two hours chasing this error down, here's what the problem was for me:
I was using my class definition to allocation my GUI objects, here's a simple example to illustrate (this probably won't compile):
class app : public QObject { Q_OBJECT public: app(); ListView list; ImageView img; }
Then I was adding these objects (list, img) list this:
container->add(&list);
container->add(&img);
class app : public QObject { Q_OBJECT public: app(); ListView *list; ImageView *img; } .... list = new ListView(); img = new ImageView(); container->add(list); container->add(img);
When I'd shutdown the application I'd get the "free malloc object" error message.
By switching to definiting object pointers in the class definition and instantiating the objects with the new operator in the code before using the object I managed to avoid the shutdown error messages.
I'd love to hear a technical explanation as to why this is happening...
I certainly hope this helps avoid some tedius debugging for others.
Cheers,
Eric | https://supportforums.blackberry.com/t5/Native-Development/error-free-malloc-object-that-is-not-allocated-dlist-c-1078/m-p/2190895/highlight/true | CC-MAIN-2016-30 | refinedweb | 635 | 62.61 |
Roland McGrath wrote:>>If you don't like this idea at all, please let me know if there any other >>way of solving the invisible threads problem, short of taking out >>->permission() altogether from proc_task_inode_operations.> > > Have you investigated my suggestion to move __exit_fs from do_exit to> release_task?Roland,No, I had missed this completely. Sorry.I just gave it a quick try and it seems to be working fine. I have only moved __exit_fs to the top of release_task, not moved exit_namespace after it. I will try to run some tests to see if this is working fine. Thanks a lot.Thanks and regards,Sripathi.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2005/9/14/432 | CC-MAIN-2014-15 | refinedweb | 133 | 67.65 |
Java groups classes into packages. Packages are logical groupings of related classes.
When we use a class in our code, we need to tell Java which package to find it in. Import statements tell Java which packages to search for classes.
import java.util.Random; // import tells us where to find Random

public class Main {
  public static void main(String[] args) {
    Random r = new Random();
    System.out.println(r.nextInt(10)); // print a number between 0 and 9
  }
}
The import statement tells the compiler which package to look in to find a class.
Package names are hierarchical, like a mailing address.
If a package begins with java or javax, it came with the JDK.
If it starts with something else, it likely shows where it came from using the website name in reverse.
For example, com.java2s.javaexam tells us the code came from java2s.com.
After the website name, you can add whatever you want.
For example, com.java2s.java8.myname also came from java2s.com.
Java calls more detailed packages child packages.
com.java2s.javaexam is a child package of com.java2s.
We can use a wildcard to import all the classes in a package:
import java.util.*; // imports java.util.Random among other things

public class Main {
  public static void main(String[] args) {
    Random r = new Random();
    System.out.println(r.nextInt(10));
  }
}
In the code above, we imported java.util.Random and a list of other classes.
The * is a wildcard that matches all classes in the package.
Every class in the java.util package is available to this program when Java compiles it.
It doesn't import child packages, fields, or methods; it imports only classes.
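To make that last rule concrete, here is a small program showing that a wildcard import covers java.util but does not reach its child package java.util.concurrent (the class name WildcardDemo is made up for this example):

```java
import java.util.*; // covers java.util, but not java.util.concurrent

public class WildcardDemo {
    static int demo() {
        List<Integer> list = new ArrayList<>(); // resolved by the wildcard
        // A class from the child package java.util.concurrent needs its own
        // import or a fully qualified name -- the wildcard does not reach it.
        java.util.concurrent.ConcurrentHashMap<String, Integer> map =
                new java.util.concurrent.ConcurrentHashMap<>();
        list.add(7);
        map.put("x", list.get(0));
        return map.get("x");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 7
    }
}
```

Removing the fully qualified name and relying on the java.util.* wildcard alone would make ConcurrentHashMap fail to compile.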
There's one special package in the Java world called java.lang.
This package is automatically imported.
You can still type this package in an import statement, but you don't have to.
In the following code, how many of the imports do you think are redundant?
import java.lang.System;
import java.lang.*;
import java.util.Random;
import java.util.*;

public class Main {
  public static void main(String[] args) {
    Random r = new Random();
    System.out.println(r.nextInt(10));
  }
}
Three of the imports are redundant.
import java.lang.System; and import java.lang.*; are redundant because everything in java.lang is automatically imported.
For import java.util.Random; and import java.util.*; you just need one.
Java automatically looks in the current package for other classes, therefore we do not need to import classes from the same folder or package.
We cannot use a wildcard in the middle of a package name, as follows:
import java.nio.*.*;
And we cannot use a wildcard to import methods, as follows:
import java.nio.files.Paths.*;
We can use packages to distinguish two classes with the same name.
Java has two Date classes. One is java.util.Date and the other is java.sql.Date.
When a class is found in multiple packages, Java gives you a compiler error.
If we try to import both Date classes directly, the two import statements collide:

import java.util.Date;
import java.sql.Date;
The following code shows how to use the two Date classes: import one and refer to the other by its fully qualified name.
import java.util.Date;

public class Main {
  Date date;
  java.sql.Date sqlDate;
}
Or you could have neither with an import and always use the fully qualified class name:
public class Main {
  java.util.Date date;
  java.sql.Date sqlDate;
}
The directory structure on your computer is related to the package name.
Suppose we have these two classes:
C:\temp\packagea\ClassA.java
package packagea;

public class ClassA { }
C:\temp\packageb\ClassB.java
package packageb;

import packagea.ClassA;

public class ClassB {
  public static void main(String[] args) {
    ClassA a;
    System.out.println("Got it");
  }
}
When you run a Java program, Java knows where to look for those package names. In this case, running from C:\temp works because both packagea and packageb are underneath it.
Compiling Code with Packages
Create the two files:
C:\temp\packagea\ClassA.java C:\temp\packageb\ClassB.java
Then type this command:
cd C:\temp
Mac/Linux Setup
Create the two files:
/tmp/packagea/ClassA.java /tmp/packageb/ClassB.java
Then type this command:
cd /tmp
To Compile
Type this command:
javac packagea/ClassA.java packageb/ClassB.java
If the command works, two new files will be created:
packagea/ClassA.class and packageb/ClassB.class.
To Run
Type this command:
java packageb.ClassB
You might have noticed we typed ClassB rather than ClassB.class.
You can specify the location of the other files explicitly, using a class path to reference class files located somewhere else or in special JAR files.
A JAR file is like a zip file that mainly contains Java class files.
On Windows, you type the following:
java -cp ".;C:\temp\someOtherLocation;c:\temp\myJar.jar" myPackage.MyClass
And on Mac OS/Linux, you type this:
java -cp ".:/tmp/someOtherLocation:/tmp/myJar.jar" myPackage.MyClass
The dot is here to include the current directory in the class path.
The rest is to look for class files (or packages) in someOtherLocation and within myJar.jar.
Windows uses semicolons to separate parts of the class path; other operating systems use colons.
Finally, you can use a wildcard * to match all the JARs in a directory. Here's an example:
java -cp "C:\temp\directoryWithJars\*" myPackage.MyClass
This command will add all the JARs to the class path that are in directoryWithJars. It won't include any JARs in the class path that are in a subdirectory of directoryWithJars. | http://www.java2s.com/Tutorials/Java/OCA_Java_SE_8_Building_Blocks/0030__Package_Import.htm | CC-MAIN-2017-04 | refinedweb | 962 | 61.22 |
an atom-atom neighborlist object More...
#include <NeighborList.hh>
an atom-atom neighborlist object
The neighborlist is used during minimization to speed atom-atom energy calculations. It stores a list of potentially interacting neighbor atoms for each atom in the system.
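The core idea is simple enough to sketch. The toy illustration below is not Rosetta's implementation (real neighbor lists also store the count_pair weights and use smarter spatial data structures), but it shows the precomputation: for each atom, record the atoms within an interaction cutoff so energy evaluation only loops over plausible partners.

```python
import math

def build_neighbor_list(coords, cutoff):
    """coords: list of (x, y, z) tuples; returns {atom_index: [neighbor indices]}."""
    nblist = {i: [] for i in range(len(coords))}
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])))
            if d <= cutoff:
                # record the pair symmetrically
                nblist[i].append(j)
                nblist[j].append(i)
    return nblist

atoms = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (9.0, 0.0, 0.0)]
print(build_neighbor_list(atoms, cutoff=4.0))  # -> {0: [1], 1: [0], 2: []}
```

During minimization the energy loop then visits only each atom's stored neighbors instead of all N^2 pairs, which is what makes the atom-atom calculations fast.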
The logic for using the nblist is tricky.
Tentative scheme: turn on nblist scoring at start of minimization
at this point, we want the pose to be fully scored, so perhaps a call to scorefxn(pose)? Real const start_score( scorefxn( pose ) );
pose.energies().setup_use_nblist( true );
Real const start_func( func( vars ) ); // nblist setup inside this call
now require that all energy evaluations have an identical set of moving dofs (guaranteed if all energy calculations are inside function evaluations). This is checked inside each score calculation using the nblist.
when using the nblist, the rsd-rsd neighbor information is not updated. This will probably be a good thing in that it will smooth the energy landscape during minimization...
in a nblist score calculation, we do two things: recover cached energies for non-pair-moved positions, and get atom-atom energies for the pairs that are on the nblist. We don't cache 2d energies for moving positions, since we are not looping over rsd nbr links for that score calculation, so the caching would be pretty time-consuming I think.
The nblist has the count_pair weights stored, so no calls to count_pair !!
turn off nblist scoring at the end of minimization. Since we have not been updating rsd-pair energies for moving pairs, and the rsd-rsd nblist is potentially out of date, we reset the neighborgraph at this point to ensure a complete score calculation next time.
fpd | https://www.rosettacommons.org/manuals/archive/rosetta3.4_user_guide/dc/d91/classcore_1_1scoring_1_1_atom_neighbor.html | CC-MAIN-2016-07 | refinedweb | 275 | 53.81 |
dns_cache_lookup
Name
dns_cache_lookup — Check to see if the results for a given query are in the DNS cache
Synopsis
#include "dns-cache.h"
dns_cache_cachenode * dns_cache_lookup(dns_cache_query * query);
Description
Check to see if the results for a given query are in the DNS cache.
- query
The dns cache query. A pointer to a dns_cache_query struct. For documentation of this data structure see “dns_cache_query”
This function returns a reference to a dns_cache_cachenode if present, or NULL if not present. For documentation of this data structure see “dns_cache_cachenode”.
Note
This function never blocks; it either has the answer in the cache or returns NULL.
Note
If the cachenode is returned, it is the caller's responsibility to release its reference by calling dns_cache_free_node when it has finished using it.
It is legal to call this function in any thread. | https://support.sparkpost.com/momentum/3/3-api/apis-dns-cache-lookup | CC-MAIN-2022-21 | refinedweb | 135 | 65.01 |
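Putting the notes above together, a typical call site follows a lookup / use / release pattern. The following is an illustrative sketch only, not compilable code; the fallback path is left abstract, and dns_cache_free_node's role is taken from the note above:

```c
/* Illustrative sketch of the lookup / use / release pattern. */
dns_cache_cachenode *node = dns_cache_lookup(query);
if (node != NULL) {
  /* ... use the cached answer ... */
  dns_cache_free_node(node);   /* release our reference when done */
} else {
  /* not cached: fall back to performing a real DNS resolution */
}
```

Because the function never blocks, this pattern is safe on latency-sensitive paths; the expensive resolution only happens on the NULL branch.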
Devoxx, and all similar conferences, is a place where you make new discoveries, continually. One of these, in my case, at last week's Devoxx, started from a discussion with Jaroslav Bachorik from the VisualVM team. He had presented VisualVM's extensibility in a session at Devoxx. I had heard that, when creating extensions for VisualVM, one can also create new charts using VisualVM's own charting API. Jaroslav confirmed this and we created a small demo together to prove it, i.e., there's a charting API in VisualVM. Since VisualVM is based on the NetBeans Platform, I went further and included the VisualVM charts in a generic NetBeans Platform application.
Then I wondered what the differences are between JFreeChart and VisualVM charts, so asked the VisualVM chart architect, Jiri Sedlacek. He sent me a very interesting answer:
JFreeCharts are great for creating any kind of static graphs (typically for reports). They provide support for all types of existing chart types. The benefit of using JFreeChart is fully customizable appearance and export to various formats. The only problem of this library is that it's not primarily designed for displaying live data. You can hack it to display data in real time, but the performance is poor.
That's why I've created the VisualVM charts. The primary (and so far only) goal is to provide charts optimized for displaying live data with minimal performance and memory overhead. You can easily display a fullscreen graph and it will still scroll smoothly while running and adding new values (when running on physical hardware, virtualized environment may give slightly worse results). There's a real rendering engine behind the charts which ensures that only the changed areas of the chart are repainted (no full-repaints because of a 1px change). Scrolling the chart means moving the already rendered image and only painting the newly displayed area. Last but not least, the charts are optimized for displaying over a remote X session - rendering is automatically switched to low-quality ensuring good response times and interactivity.
The Tracer engine introduced in VisualVM 1.3 further improves performance of the charts. I've intensively profiled and optimized the charts to minimize the cpu cycles/memory allocations for each repaint. As of now, I believe that the VisualVM charts are the fastest real time Java charts with the lowest cpu/memory footprint.
Best of all is that everything described above is in the JDK. That's because VisualVM is in the JDK. Here's a small NetBeans Platform application (though you could also use the VisualVM chart API without using the NetBeans Platform, just include these JARs on your classpath: org-netbeans-lib-profiler-charts.jar, com-sun-tools-visualvm-charts.jar, com-sun-tools-visualvm-uisupport.jar and org-netbeans-lib-profiler-ui.jar) that makes use of the VisualVM chart API outlined above:
The chart that you see above is updated in real time and you can change to full screen and you can scroll through it and, at the same time, there is no lag and it is very performant. Below is all the code (from the unit test package in the VisualVM sources) that you see in the JPanel above:
public class Demo extends JPanel {
    private static final long SLEEP_TIME = 500;
    private static final int VALUES_LIMIT = 150;
    private static final int ITEMS_COUNT = 8;

    private SimpleXYChartSupport support;

    public Demo() {
        // ... chart descriptor and SimpleXYChartSupport setup ...
    }

    private class Generator extends Thread {
        private SimpleXYChartSupport support;

        private Generator(SimpleXYChartSupport support) {
            this.support = support;
        }

        public void run() {
            while (true) {
                try {
                    // ... add freshly generated values to the chart, then sleep ...
                } catch (Exception e) {
                    e.printStackTrace(System.err);
                }
            }
        }
    }
}
Here is the related Javadoc. To get started using the VisualVM charts in your own application, read this blog, and then look in the "lib" folder of the JDK to find the JARs you will need.
And then have fun with real-time data in your Java desktop applications.
A Cursor instance has the following attributes and methods:
Executes a SQL statement. The SQL statement may be parametrized (i. e. placeholders instead of SQL literals). The sqlite3 module supports two kinds of placeholders: question marks (qmark style) and named placeholders (named style).
This example shows how to use parameters with qmark style:
import sqlite3

con = sqlite3.connect("mydb")
cur = con.cursor()

who = "Yeltsin"
age = 72

cur.execute("select name_last, age from people where name_last=? and age=?", (who, age))
print cur.fetchone()
This example shows how to use the named style:
import sqlite3

con = sqlite3.connect("mydb")
cur = con.cursor()

who = "Yeltsin"
age = 72

cur.execute("select name_last, age from people where name_last=:who and age=:age",
            {"who": who, "age": age})
print cur.fetchone()
For SELECT statements, rowcount is always None because we cannot determine the number of rows a query produced until all rows are fetched.
For DELETE statements, SQLite reports rowcount as 0 if you make a DELETE FROM table without any condition.
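A quick way to see these rules in action is the snippet below. It is written with Python 3 print syntax for convenience even though the documentation above targets Python 2.5, and it uses an in-memory database so it has no side effects:

```python
import sqlite3

# Demonstrating the rowcount rules described above.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table people (name_last text, age integer)")
cur.executemany("insert into people values (?, ?)",
                [("Yeltsin", 72), ("Putin", 51)])

cur.execute("select * from people")
print(cur.rowcount)   # indeterminable for a SELECT until rows are fetched

cur.execute("delete from people where age = ?", (72,))
print(cur.rowcount)   # 1: exactly one row matched the condition
```

A DELETE with a WHERE clause reports how many rows it removed; only the bare DELETE FROM table case hits the truncate optimization mentioned above.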
See About this document... for information on suggesting changes.
Ignore imported style files when running in Node
A `babel/register`-style hook to ignore style imports when running in Node. This is for projects that use something like Webpack to enable CSS imports in JavaScript. When you try to run the project in Node (to test in Mocha, for example), you'll see errors like this:
SyntaxError: /Users/brandon/code/my-project/src/components/my-component/style.sass: Unexpected token (1:0)
> 1 | .title
    | ^
  2 |   font-family: serif
  3 |   font-size: 10em
  4 |
To resolve this, require `ignore-styles` with your Mocha tests:
mocha --require ignore-styles
See DEFAULT_EXTENSIONS for the full list of extensions ignored, and send a pull request if you need more.
Note: This is not for use inside Webpack. If you want to ignore extensions in Webpack you'll want to use a loader like ignore-loader. This is for use in Node outside of your normal Webpack build.
$ npm install --save-dev ignore-styles
To use this with multiple Mocha requires:
mocha --require babel-register --require ignore-styles
You can also use it just like `babel/register`:
import 'ignore-styles'
In ES5:
require('ignore-styles')
To customize the extensions used:
import register from 'ignore-styles' register(['.sass', '.scss'])
To customize the extensions in ES5:
require('ignore-styles').default(['.sass', '.scss']);
By default, a no-op handler is used that doesn't actually do anything. If you'd like to substitute your own custom handler to do fancy things, pass it as a second argument:
import register from 'ignore-styles' register(undefined, (module, filename) => { module.exports = {styleName: 'fake_class_name'} })
The first argument to `register` is the list of extensions to handle. Leaving it undefined, as above, uses the default list. The handler function receives two arguments, `module` and `filename`, directly from Node.
Why is this useful? One example is when using something like react-css-modules. You need the style imports to actually return something so that you can test the components, or the wrapper component will throw an error. Use this to provide test class names.
Another use case would be to simply return the filename of an image so that it can be verified in unit tests:
const _ = require('lodash')
const path = require('path')
const register = require('ignore-styles').default

register(undefined, (module, filename) => {
  if (_.some(['.png', '.jpg'], ext => filename.endsWith(ext))) {
    module.exports = path.basename(filename)
  }
})
If the filename ends in '.png' or '.jpg', then the basename of the file is returned as the value of the module on import.
The MIT License (MIT) | https://xscode.com/bkonkle/ignore-styles | CC-MAIN-2020-50 | refinedweb | 413 | 56.15 |
For a while now I’ve been wondering about what might be the minimal set of technologies that allows me to tackle the widest range of projects. The answer I’ve arrived at, for backend development at least, is GraphQL and ArangoDB.
Both of these tools expand my reach as a developer. Projects involving integrations, multiple clients and complicated data that would have been extremely difficult are now within easy reach.
But the minimal set idea is that I can enjoy this expanded range while juggling far fewer technologies than before. Tools that apply in more situations mean fewer things to learn, fewer moving parts and more depth in the learning I do.
While GraphQL and ArangoDB are both interesting technologies individually, it's in using them together that I've been able to realize those benefits; one of those moments where the whole is different from the sum of its parts.
Backend Minimalism
My embrace of Javascript has definitely been part of creating that minimal set. A single language for both front and back end development has been a big part of simplifying my tech stack. Both GraphQL and ArangoDB can be used in many languages, but Javascript support is what you might describe as “first among equals” for both projects.
GraphQL can replace, and for me has replaced, server side frameworks like Rails or Django, leaving me with a handful of Javascript functions and more modular, testable code.
GraphQL also replaces ReST, freeing me from thinking about HATEOAS, bike-shedding over the vagaries of ReST, or needing pages and pages of JSON API documentation to save me from bike-shedding over the vagaries of ReST.
ArangoDB has also reduced the number of things I need to know. For a start it has removed the "need" for an ORM (no relational database, no need for Object Relational Mapping), which never really delivered on it's promise to free you from knowing the underlying SQL.
More importantly it’s replaced not just NoSQL databases with a razor-thin set of capabilities like Mongodb (which stores nested documents but can’t do joins) or Neo4j (which does joins but can’t store nested documents), but also general purpose databases like MySQL or Postgres. I have one query language to learn, and one database whose quirks and characteristics I need to know.
It’s also replaced the deeply unpleasant process of relational data modeling with a seamless blend of documents and graphs that make modeling even really ugly connected datasets anticlimactic. As a bonus, in moving the schema outside the database GraphQL lets us enjoy all the benefits of a schema (making sure there is at least some structure I can rely on) and all the benefits of schemalessness (flexibility, ease of change).
Tools that actually reduce the number of things you need to know don’t come along very often. My goal here is to give a sense of what it looks like to use these two technologies together, and hopefully admiring the trees can let us appreciate the forest.
Show me the code
First we need some data to work with. ArangoDB’s administrative interface has some example graphs it can create, so lets use one to explore.
If we select the “knows” graph, we get a simple graph with 5 vertices.
This graph is going to be the foundation for our little exploration.
Next, the only really meaningful information these vertices have is a name attribute. If we are wanting to create a GraphQL type that represents one of these objects it would look like this:
let Person = new GraphQLObjectType({
  name: 'Person',
  fields: () => ({
    name: { type: GraphQLString }
  })
})
Now that we have a type that describes what a Person object looks like, we can use it in a schema. This schema has a field called person, which has two attributes: type and resolve.
let schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    fields: () => ({
      person: {
        type: Person,
        resolve: () => {
          return {name: 'Mike'}
        },
      }
    })
  })
})
The resolve is a function that will be run whenever GraphQL is asked to produce a person object. type describes the object that the resolve function returns, which in this case is our Person type.
To see if this all works we can write a test using Jest.
import {
  graphql,
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList,
  GraphQLNonNull
} from 'graphql'

describe('returning a hardcoded object that matches a type', () => {
  let Person = new GraphQLObjectType({
    name: 'Person',
    fields: () => ({
      name: { type: GraphQLString }
    })
  })

  let schema = new GraphQLSchema({
    query: new GraphQLObjectType({
      name: 'Query',
      fields: () => ({
        person: {
          type: Person,
          resolve: () => {
            return {name: 'Mike'}
          },
        }
      })
    })
  })

  it('lets you ask for a person', async () => {
    let query = `
      query {
        person {
          name
        }
      }
    `
    let { data } = await graphql(schema, query)
    expect(data.person).toEqual({name: 'Mike'})
  })
})
This test passes, which tells us that we got everything wired together properly and the foundation laid to talk to ArangoDB.
First we’ll use arangojs and create a
db instance and then a function that allows us to get a person using their name.
//src/database.js
import arangojs, { aql } from 'arangojs'

export const db = arangojs({
  url: `http://${process.env.ARANGODB_USER}:${process.env.ARANGODB_PASSWORD}@127.0.0.1:8529`,
  databaseName: 'knows'
})

export async function getPersonByName (name) {
  let query = aql`
    FOR person IN persons
      FILTER person.name == ${ name }
      LIMIT 1
      RETURN person
  `
  let results = await db.query(query)
  return results.next()
}
Now let's use that function with our schema to retrieve real data from ArangoDB.
import {
  graphql,
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList,
  GraphQLNonNull
} from 'graphql'
import { db, getPersonByName } from '../src/database'

describe('queries', () => {
  it('lets you ask for a person from the database', async () => {
    let Person = new GraphQLObjectType({
      name: 'Person',
      fields: () => ({
        name: { type: GraphQLString }
      })
    })

    let schema = new GraphQLSchema({
      query: new GraphQLObjectType({
        name: 'Query',
        fields: () => ({
          person: {
            args: {               // person now accepts args
              name: {             // the arg is called "name"
                type: new GraphQLNonNull(GraphQLString) // name is a string & mandatory
              }
            },
            type: Person,
            resolve: (root, args) => {
              return getPersonByName(args.name)
            },
          }
        })
      })
    })

    let query = `
      query {
        person(name: "Eve") {
          name
        }
      }
    `
    let { data } = await graphql(schema, query)
    expect(data.person).toEqual({name: 'Eve'})
  })
})
Here we have modified our schema to accept a name argument when asking for a person. We access the name via the args object and pass it to our database function to go get the matching person from Arango.
Let’s add a new database function to get the friends of a user given their id.
What’s worth pointing out here is that we are using ArangoDB’s AQL traversal syntax. It allows us to do a graph traversal across outbound edges get the vertex on the other end of the edge.
export async function getFriends (id) {
  let query = aql`
    FOR vertex IN OUTBOUND ${id} knows
      RETURN vertex
  `
  let results = await db.query(query)
  return results.all()
}
Now that we have that function, instead of adding it to the schema, we add a field to the Person type. In the resolve for our new friends field we are going to use the root argument to get the id of the current person object, and then use our getFriends function to do the traversal to retrieve the person's friends.
let Person = new GraphQLObjectType({
  name: 'Person',
  fields: () => ({
    name: { type: GraphQLString },
    friends: {
      type: new GraphQLList(Person),
      resolve(root) {
        return getFriends(root._id)
      }
    }
  })
})
What’s interesting is that because of GraphQL’s recursive nature, this change lets us query for friends:
query { person(name: "Eve") { name friends { name } } }
and also ask for friends of friends (and so on) like this:
query { person(name: "Eve") { name friends { name friends { name } } } }
We can show that with a test.
import {
  graphql,
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLList,
  GraphQLNonNull
} from 'graphql'
import { db, getPersonByName, getFriends } from '../src/database'

describe('queries', () => {
  it('returns friends of friends', async () => {
    let Person = new GraphQLObjectType({
      name: 'Person',
      fields: () => ({
        name: { type: GraphQLString },
        friends: {
          type: new GraphQLList(Person),
          resolve(root) {
            return getFriends(root._id)
          }
        }
      })
    })

    let schema = new GraphQLSchema({
      query: new GraphQLObjectType({
        name: 'Query',
        fields: () => ({
          person: {
            args: {
              name: {
                type: new GraphQLNonNull(GraphQLString)
              }
            },
            type: Person,
            resolve: (root, args) => {
              return getPersonByName(args.name)
            },
          }
        })
      })
    })

    let query = `
      query {
        person(name: "Eve") {
          name
          friends {
            name
            friends {
              name
            }
          }
        }
      }
    `
    let result = await graphql(schema, query)
    let { friends } = result.data.person
    let foaf = [].concat(...friends.map(friend => friend.friends))
    expect([{name: 'Charlie'},{name: 'Dave'},{name: 'Bob'}]).toEqual(expect.arrayContaining(foaf))
  })
})
This test runs a query three levels deep and walks the entire graph. Because we can ask for any combination of any of the things our types defined, we have a whole lot of flexibility with very little code. The code that's there is just a few simple functions, modular and easy to test.
But what did we trade away to get all that? If we look at the queries that get sent to Arango with tcpdump, we can see how that sausage was made.
// getPersonByName('Eve') from the person resolver in our schema {"query":"FOR person IN persons FILTER person.name == @value0 LIMIT 1 RETURN person","bindVars":{"value0":"Eve"}} // getFriends('persons/eve') in Person type -> returns Bob & Alice. {"query":"FOR vertex IN OUTBOUND @value0 knows RETURN vertex","bindVars":{"value0":"persons/eve"}} // now a new request for each friend: // getFriends('persons/bob') {"query":"FOR vertex IN OUTBOUND @value0 knows RETURN vertex","bindVars":{"value0":"persons/bob"}} // getFriends('persons/alice') {"query":"FOR vertex IN OUTBOUND @value0 knows RETURN vertex","bindVars":{"value0":"persons/alice"}}
What we have here is our own version of the famous N+1 problem. If we were to add more people to this graph things would get out of hand quickly.
Facebook, which has been using GraphQL in production for years, is probably even less excited about the prospect of N+1 queries battering their database than we are. So what are they doing to solve this?
Using Dataloader
Dataloader is a small library released by Facebook that solves the N+1 problem by cleverly leveraging the way promises work. To use it, we need to give it a batch loading function and then replace our calls to the database with calls to Dataloader's load method in all our resolves.
What, you might ask, is a batch loading function? The dataloader documentation offers that “A batch loading function accepts an Array of keys, and returns a Promise which resolves to an Array of values.”
We can write one of those.
async function getFriendsByIDs (ids) {
  let query = aql`
    FOR id IN ${ ids }
      let friends = (
        FOR vertex IN OUTBOUND id knows
          RETURN vertex
      )
      RETURN friends
  `
  let response = await db.query(query)
  return response.all()
}
We can then use that in a new test.
import { graphql } from 'graphql'
import DataLoader from 'dataloader'
import { db, getFriendsByIDs } from '../src/database'
import schema from '../src/schema'

describe('Using dataloader', () => {
  it('returns friends of friends', async () => {
    let Person = new GraphQLObjectType({
      name: 'Person',
      fields: () => ({
        name: { type: GraphQLString },
        friends: {
          type: new GraphQLList(Person),
          resolve(root, args, context) {
            return context.FriendsLoader.load(root._id)
          }
        }
      })
    })

    let query = `
      query {
        person(name: "Eve") {
          name
          friends {
            name
            friends {
              name
            }
          }
        }
      }
    `
    const FriendsLoader = new DataLoader(getFriendsByIDs)
    let result = await graphql(schema, query, {}, { FriendsLoader })
    let { person } = result.data
    expect(person.name).toEqual('Eve')
    expect(person.friends.length).toEqual(2)
    let names = person.friends.map(friend => friend.name)
    expect(names).toContain('Alice', 'Bob')
  })
})
The key section of the above test is this:
const FriendsLoader = new DataLoader(getFriendsByIDs)
// schema, query, root, context
let result = await graphql(schema, query, {}, { FriendsLoader })
The context object is passed as the fourth parameter to the graphql function, which is then available as the third parameter in every resolve function. With our FriendsLoader attached to the context object, you can see us accessing it in the resolve function on the Person type.
Let’s see what effect that batch loading has on our queries.
// getPersonByName('Eve') from the person resolver in our schema {"query":"FOR person IN persons FILTER person.name == @value0 LIMIT 1 RETURN person","bindVars":{"value0":"Eve"}} // getFriendsByIDs(["persons/eve"]) -> returns Bob & Alice. {"query":"FOR id IN @value0 let friends = ( FOR vertex IN OUTBOUND id knows RETURN vertex ) RETURN friends","bindVars":{"value0":["persons/eve"]}} // getFriendsByIDs(["persons/alice","persons/bob"]) {"query":"FOR id IN @value0 let friends = ( FOR vertex IN OUTBOUND id knows RETURN vertex ) RETURN friends","bindVars":{"value0":["persons/alice","persons/bob"]}}
Now for a three level query (Eve, her friends, their friends) we are down to just 1 query per level and the N+1 problem is no longer a problem.
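The batching that collapsed those queries can be sketched in a few lines. This is an illustration of the idea only, not DataLoader's actual implementation (which also adds per-key caching and error handling): every load() during one tick of the event loop queues its key, and a single call to the batch function resolves them all together.

```javascript
// Minimal sketch of promise-based batching (not the DataLoader library).
function makeLoader (batchFn) {
  let queue = []
  return function load (key) {
    return new Promise(resolve => {
      queue.push({ key, resolve })
      if (queue.length === 1) {
        // First key of this tick: schedule one batch call for next tick.
        process.nextTick(async () => {
          const batch = queue
          queue = []
          const values = await batchFn(batch.map(item => item.key))
          batch.forEach((item, i) => item.resolve(values[i]))
        })
      }
    })
  }
}

// Usage: two loads in the same tick become one batch call.
let calls = 0
const loader = makeLoader(async ids => {
  calls += 1
  return ids.map(id => `friends of ${id}`)
})

Promise.all([loader('eve'), loader('bob')]).then(results => {
  console.log(calls, results)
})
```

Because resolves all run within one tick of the event loop, every load() issued for one level of the query lands in the same queue, which is why we saw one database query per level above.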
When it’s time to serve your data to the world, express-graphql supplies a middleware that we can pass our schema and loaders to like this:
import express from 'express'
import graphqlHTTP from 'express-graphql'
import schema from './schema'
import DataLoader from 'dataloader'
import { getFriendsByIDs } from '../src/database'

const FriendsLoader = new DataLoader(getFriendsByIDs)
const app = express()
app.use('/graphql', graphqlHTTP({ schema, context: { FriendsLoader }}))
app.listen(3000)
// is up and running!
What we just did
With just those few code examples we’ve built a backend system that provides a query-able API for clients backed by a graph database. Growing it would look like adding a few more functions and a few more types. The code stays modular and testable. Dataloader has ensured that we didn’t even pay a performance penalty for that.
A perfect combination
While geeking out on the technology is fun, it loses sight of what I think is the larger point: The design of both GraphQL and ArangoDB allow you to combine and recombine some really simple primitives to tackle anything you can think of.
With ArangoDB, it’s all just documents, whether you use them like that or treat them as key/value or a graph is up to you. While this approach is marketed as “multi-model” database, the term is unfortunate since it makes the database sound like it’s trying to do lots of things instead of leveraging some fundamental similarity between these types of data. That similarity becomes the “primitive” that makes all this flexibility possible.
For GraphQL, my application is just a bunch of functions in an Abstract Syntax Tree which get combined and recombined by client queries. The parser and execution engine take care of what gets called when.
In each case what I need to understand is simple, the behaviour I can produce is complex.
I’m still honing my minimal set for front end development, but for backend development this is now how I build. These days I’m refocusing my learning to go narrow and deep and it feels good. Infinite width never felt sustainable. It’s not apparent at first, but once that burden is lifted off your shoulders you will realize how heavy it was.
5 thoughts on “ArangoDB and GraphQL”
Hello, nice article. I'd like to start with Arango, Neo4j, or the upcoming Dgraph, but without using AQL for Arango or Cypher for Neo4j, just GraphQL and JavaScript (is that possible with ReactJS?), so that if I someday change database engines I don't need to rewrite my app. Thanks if you have any advice for me.
Hey, that’s very interesting. So you can really make almost anything with this? I still have to learn GraphQL but it seems very interesting to move stuff more to the client side. Does it also make the server load lighter?
I really like your work on ArangoDB and your vision of using it with GraphQL. Are you available for consulting? I have a project I would like to use this setup on but would like to discuss scaling it up first?
Great post Mike! You might find mine interesting as well.
I came to the same conclusion about ArangoDB and GraphQL, but it’s been hard to assess the cost to host at the entry level and figure out the path to scale without hiccups. The AWS calculator hasn’t been very clear so it seems like DigitalOcean is the most straightforward solution, but I still have to setup the container solution and cluster orchestration myself, which apparently needs to be DC/OS.
Hello there! In this article, we'll look at the free version (available to the developers of free and open-source software) of the PVS-Studio static analyzer in action. What we are going to check today is the source code of the Reiser4 file system and its utilities.
This article was originally posted on the Habrahabr website and reposted here by permission of the author.
I hope all of you who are about to read this article have heard, if only in passing, about the static code analyzer PVS-Studio. If you haven't, follow this link to read a brief product description.
The developer company also runs an official blog on Habrahabr where they frequently post reports with the analysis results of various open-source projects.
More information on Reiser4 can be found on the kernel wiki page.
Let's start with the Reiser4 utilities, specifically the libaal library. Then we'll check the reiser4progs tools and round off with a review of the defects found in the kernel code.
We need to install PVS-Studio to get started. The official website provides deb and rpm packages alongside an ordinary installation archive. Choose whatever option is best for you.
Next, we need to activate the free license. Open-source software developers have to insert the following lines in the beginning of each source file (it's not necessary to add them to header files):
// This is an open source non-commercial project. Dear PVS-Studio, please check it.
// PVS-Studio Static Code Analyzer for C, C++ and C#:
Let's write a small bash script so that we don't have to repeat that process by hand for each file. I'll use the sed stream editor to write the script (the following instruction is written in one line):
#!/usr/bin/bash
for str in $(find $1 -name '*.c');
do
    sed -i -e '1 s/^/\/\/ This is an open source non-commercial project. Dear PVS-Studio, please check it.\n\/\/ PVS-Studio Static Code Analyzer for C, C++ and C\#: http:\/\/\n\n/;' $str
done
In addition, let's write another script to facilitate project building and PVS-Studio launch:
#!/usr/bin/bash
pvs-studio-analyzer trace -- make -j9 || exit 1
pvs-studio-analyzer analyze -o log.log -j9 || exit 1
plog-converter -a GA:1,2 -t tasklist log.log || exit 1
We are ready to go. The libaal library comes first.
libaal is a library that provides abstraction of Reiser4 structures and is used by reiser4progs.
If we agree to ignore the warnings dealing with redefinition of the standard data types, then possible bugs are found only in lines 68, 129, and 139 of the src/bitops.c file:

V629 Consider inspecting the 'byte_nr << 3' expression. Bit shifting of the 32-bit value with a subsequent expansion to the 64-bit type.

Lines 129 and 139 contain the following code:
bit_t aal_find_next_set_bit(void *map, bit_t size, bit_t offset)
{
    ....
    unsigned int byte_nr = offset >> 3;
    ....
    unsigned int nzb = aal_find_nzb(b, bit_nr);
    ....
    if (nzb < 8)
        return (byte_nr << 3) + nzb;
    ....
}
This defect can be easily fixed by replacing the unsigned int type with bit_t in the variable declarations.

As for line 68:
bit_t aal_find_first_zero_bit(void *map, bit_t size)
{
    ....
    unsigned char *p = map;
    unsigned char *addr = map;
    ....
    return (p - addr) << 3;
    ....
}
it's a mystery to me why PVS-Studio believes the value of (p - addr) to be 32-bit. Even sizeof() yields the proper 8 bytes (I'm working on amd64).
Now, reiser4progs has much more interesting, and sometimes sadder, things to show. By the way, here's what Edward Shishkin said about these tools: "The author left right after these progs were written, and no one has since looked into that code (except for a couple of times when I was asked to fix fsck). So I'm not surprised by that pile of bugs." Indeed, it's no surprise that such specific bugs are still there after so many years.
The first serious error is found in the plugin/key/key_short/key_short_repair.c file:
V616 The 'KEY_SHORT_BAND_MASK' named constant with the value of 0 is used in the bitwise operation.
errno_t key_short_check_struct(reiser4_key_t *key)
{
    ....
    if (oid & KEY_SHORT_BAND_MASK)
        key_short_set_locality(key, oid & !KEY_SHORT_BAND_MASK);
    ....
}
KEY_SHORT_BAND_MASK is the constant 0xf000000000000000ull, which means the Boolean NOT operation produces false here (in C, all values other than 0 are considered true), i.e., in fact, 0. However, the programmer obviously meant the bitwise NOT (~) operation rather than the Boolean NOT. This warning was triggered several times by different files.
Next comes plugin/hash/tea_hash/tea_hash.c with errors such as this:
V547 Expression 'len >= 16' is always false.
Wait... It's not really an error - it's some sort of black magic or a dirty trick (if you don't believe in magic). Why? Well, would you call the code below clear and straightforward without a deep understanding of the processor's and operating system's inner workings and the programmer's idea?
uint64_t tea_hash_build(unsigned char *name, uint32_t len)
{
    ....
    while (len >= 16) {
        ....
        len -= 16;
        ....
    }
    ....
    if (len >= 12) {
        if (len >= 16)
            *(int *)0 = 0;
        ....
    }
    ....
}
What'd you say? This is not an error, but we'd better leave this code alone unless we know what goes on here. Let's try to figure it out.
The line *(int *)0 = 0; would trigger a SIGSEGV in a regular program. If you search for information about the kernel, you'll find that this statement is used to make the kernel generate an oops. This subject was discussed in the kernel developers' newsgroup (here), and Torvalds himself mentioned it too. So, if an assignment like that happens, in some mysterious ways, to execute inside the kernel code, you'll get an oops. Why check for the "impossible" condition is something that only the author himself knows, but, as I said, we'd better let the thing be unless we know how it works.
The only thing we can safely investigate is why the V547 warning was triggered. The len >= 16 expression is always false. The while loop is executed as long as the value of len is greater than or equal to 16, while the value 16 is subtracted at the end of the loop body at every iteration. This means the variable can be represented as len = 16*n+m, where n and m are integers and m<16. It's obvious that once the loop is over, all the 16*n's will have been subtracted, leaving only m.
The other warnings here follow the same pattern.
The following error is found in the plugin/sdext/sdext_plug/sdext_plug.c file:

V595 The 'stat' pointer was utilized before it was verified against nullptr. Check lines: 18, 21.
static void sdext_plug_info(stat_entity_t *stat)
{
    ....
    stat->info.digest = NULL;
    if (stat->plug->p.id.id != SDEXT_PSET_ID || !stat)
        return;
    ....
}
Either it's a banal typo or the author intended to write something else. The !stat check looks as if it were a nullptr check, but it doesn't make sense for two reasons. Firstly, the stat pointer has already been dereferenced. Secondly, this expression is evaluated from left to right, in accordance with the standard, so if it's really a nullptr check, it should be moved to the beginning of the condition since the pointer is originally dereferenced earlier in that same condition.
The plugin/item/cde40/cde40_repair.c file triggered a number of warnings such as this:
V547 Expression 'pol == 3' is always true.
static errno_t cde40_pair_offsets_check(reiser4_place_t *place,
                                        uint32_t start_pos, uint32_t end_pos)
{
    ....
    if (end_offset == cde_get_offset(place, start_pos, pol) +
                      ENTRY_LEN_MIN(S_NAME, pol) * count)
    {
        return 0;
    }
    ....
}
The programmer must have meant a construct of the A == (B + C) pattern but inadvertently wrote it as (A == B) + C.
upd1. It's my mistake; I confused the precedence of + and ==
The plugin/object/sym40/sym40.c file contains a typo:
V593 Consider reviewing the expression of the 'A = B < C' kind. The expression is calculated as following: 'A = (B < C)'.
errno_t sym40_follow(reiser4_object_t *sym, reiser4_key_t *from,
                     reiser4_key_t *key)
{
    ....
    if ((res = sym40_read(sym, path, size) < 0))
        goto error;
    ....
}
This issue is similar to the previous one. The res variable is assigned the result of a Boolean expression. The programmer is obviously using a C "trick" here, so the expression should be rewritten as (A = B) < C.
Another typo or mistake made from inattention. File libreiser4/flow.c:
V555 The expression 'end - off > 0' will work as 'end != off'.
int64_t reiser4_flow_write(reiser4_tree_t *tree, trans_hint_t *hint)
{
    ....
    uint64_t off;
    uint64_t end;
    ....
    if (end - off > 0) {
        ....
    }
    ....
}
There are two integer variables here. Their difference is ALWAYS greater than or equal to zero because, from the viewpoint of how integers are represented in computer memory, subtraction and addition are, in effect, the same operation to the processor (Two's complement). The condition was more likely meant to check if end > off.
Another probable typo:
V547 Expression 'insert > 0' is always true.
errno_t reiser4_flow_convert(reiser4_tree_t *tree, conv_hint_t *hint)
{
    ....
    for (hint->bytes = 0; insert > 0; insert -= conv) {
        ....
        if (insert > 0) {
            ....
        }
        ....
    }
}
The code is contained in a loop, and the loop body is executed only when insert > 0, so the condition is always true. It's either a mistake, and therefore something else is missing, or a pointless check.
V547 Expression 'ret' is always false.
static errno_t repair_node_items_check(reiser4_node_t *node, place_func_t func,
                                       uint8_t mode, void *data)
{
    ....
    if ((ret = objcall(&key, check_struct) < 0))
        return ret;
    if (ret) {
        ....
    }
    ....
}
The first condition contains a construct of the A = (B < 0) pattern, but what was more likely meant is (A = B) < 0.

The librepair/semantic.c file seems to house another "black magic" thing:

V612 An unconditional 'break' within a loop.
static reiser4_object_t *cb_object_traverse(reiser4_object_t *parent,
                                            entry_hint_t *entry, void *data)
{
    ....
    while (sem->repair->mode == RM_BUILD && !attached) {
        ....
        break;
    }
    ....
}
The while loop here is used as an if statement because the loop body will be executed only once (since there's a break at the end) if the condition is true, or will be skipped otherwise.

Now guess what comes next? Exactly - a typo! The code still looks like it was "abandoned at birth". This time, the problem is in the file libmisc/profile.c:

V528 It is odd that pointer to 'char' type is compared with the '\0' value. Probably meant: *c + 1 == '\0'.
errno_t misc_profile_override(char *override)
{
    ....
    char *entry, *c;
    ....
    if (c + 1 == '\0') {
        ....
    }
    ....
}
Comparing a pointer with a terminal null character is a brilliant idea, no doubt, but the programmer more likely meant the check *(c + 1) == '\0', as the *c + 1 == '\0' version doesn't make much sense.

Now let's discuss a couple of warnings dealing with the use of fprintf(). The messages themselves are straightforward, but we'll need to look in several files at once to understand what's going on. First we'll peek into the libmisc/ui.c file.

V618 It's dangerous to call the 'fprintf' function in such a manner, as the line being passed could contain format specification. The example of the safe code: printf("%s", str);
Here's what we see:
void misc_print_wrap(void *stream, char *text)
{
    char *string, *word;
    ....
    for (line_width = 0; (string = aal_strsep(&text, "\n")); ) {
        for (; (word = aal_strsep(&string, " ")); ) {
            if (line_width + aal_strlen(word) > screen_width) {
                fprintf(stream, "\n");
                line_width = 0;
            }
            fprintf(stream, word);
            ....
        }
        ....
    }
}
Let's find the code using this function. Here it is, in the same file:
void misc_print_banner(char *name)
{
    char *banner;
    ....
    if (!(banner = aal_calloc(255, 0)))
        return;
    aal_snprintf(banner, 255, BANNER);
    misc_print_wrap(stderr, banner);
    ....
}
Now we're looking for BANNER - it's in include/misc/version.h:
#define BANNER \
    "Copyright (C) 2001-2005 by Hans Reiser, " \
    "licensing governed by reiser4progs/COPYING."
So, no injection danger.
Here's another issue of the same kind, this time in the files progs/debugfs/browse.c and progs/debugfs/print.c. They employ the same code, so we'll discuss only browse.c:
static errno_t debugfs_reg_cat(reiser4_object_t *object)
{
    ....
    char buff[4096];
    ....
    read = reiser4_object_read(object, buff, sizeof(buff));
    if (read <= 0)
        break;
    printf(buff);
    ....
}
Looking for the reiser4_object_read() function:
int64_t reiser4_object_read(
    reiser4_object_t *object, /* object entry will be read from */
    void *buff,               /* buffer result will be stored in */
    uint64_t n)               /* buffer size */
{
    ....
    return plugcall(reiser4_psobj(object), read, object, buff, n);
}
Finding out what plugcall() does - it turns out to be a macro:
/* Checks if @method is implemented in @plug and calls it. */
#define plugcall(plug, method, ...) ({                        \
    aal_assert("Method \""#method"\" isn't implemented "      \
               "in "#plug"", (plug)->method != NULL);         \
    (plug)->method(__VA_ARGS__);                              \
})
Again, we need to find out what method() does, and it, in its turn, depends on plug, and plug is reiser4_psobj(object):
#define reiser4_psobj(obj) \
    ((reiser4_object_plug_t *)(obj)->info.pset.plug[PSET_OBJ])
If we dig a bit deeper, we'll find that all of these are constant strings too:
char *pset_name[PSET_STORE_LAST] = {
    [PSET_OBJ]      = "object",
    [PSET_DIR]      = "directory",
    [PSET_PERM]     = "permission",
    [PSET_POLICY]   = "formatting",
    [PSET_HASH]     = "hash",
    [PSET_FIBRE]    = "fibration",
    [PSET_STAT]     = "statdata",
    [PSET_DIRITEM]  = "diritem",
    [PSET_CRYPTO]   = "crypto",
    [PSET_DIGEST]   = "digest",
    [PSET_COMPRESS] = "compress",
    [PSET_CMODE]    = "compressMode",
    [PSET_CLUSTER]  = "cluster",
    [PSET_CREATE]   = "create",
};
Again, no injections possible.
The remaining issues are either errors of the same patterns as discussed above or defects that I don't think to be relevant.
We have finally reached the Reiser4 code in the kernel. To avoid building the entire kernel, let's modify the script we've written for launching PVS-Studio to build only the code of Reiser4:
#!/usr/bin/bash
pvs-studio-analyzer trace -- make SUBDIRS=fs/reiser4 -j9 || exit 1
pvs-studio-analyzer analyze -o log.log -j9 || exit 1
plog-converter -a GA:1,2 -t tasklist log.log || exit 1
Thus we can have it build only the source code located in the folder fs/reiser4.
We'll ignore the warnings dealing with redefinition of the standard types in the headers of the kernel itself since the standard headers are not used in the build; and we are not interested in the kernel code anyway.
The first file to examine is fs/reiser4/carry.c.
V522 Dereferencing of the null pointer 'reference' might take place. The null pointer is passed into 'add_op' function. Inspect the third argument. Check lines: 564, 703.
static carry_op *add_op(carry_level * level,  /* &carry_level to add node to */
                        pool_ordering order,  /* where to insert:
                                                 at the beginning of @level;
                                                 before @reference;
                                                 after @reference;
                                                 at the end of @level */
                        carry_op * reference  /* reference node for insertion */)
{
    ....
    result = (carry_op *) reiser4_add_obj(&level->pool->op_pool, &level->ops,
                                          order, &reference->header);
    ....
}
reference must be checked for NULL because later in the code, you can see the following call to the function declared above:
carry_op *node_post_carry(carry_plugin_info * info  /* carry parameters passed
                                                       down to node plugin */ ,
                          carry_opcode op           /* opcode of operation */ ,
                          znode * node              /* node on which this
                                                       operation will operate */ ,
                          int apply_to_parent_p     /* whether operation will
                                                       operate directly on @node
                                                       or on it parent. */ )
{
    ....
    result = add_op(info->todo, POOLO_LAST, NULL);
    ....
}
where add_op() is explicitly called with the value of reference set to NULL, which results in an oops.

Next error:

V591 Non-void function should return a value.
static cmp_t carry_node_cmp(carry_level * level, carry_node * n1,
                            carry_node * n2)
{
    assert("nikita-2199", n1 != NULL);
    assert("nikita-2200", n2 != NULL);

    if (n1 == n2)
        return EQUAL_TO;
    while (1) {
        n1 = carry_node_next(n1);
        if (carry_node_end(level, n1))
            return GREATER_THAN;
        if (n1 == n2)
            return LESS_THAN;
    }
    impossible("nikita-2201", "End of level reached");
}
This warning tells us that the function is non-void and, therefore, must return some value. In fact, the end of the function is unreachable: the while (1) loop can only be left through one of its return statements, and the impossible() call after it documents exactly that, so this is not a real error.
V560 A part of conditional expression is always true: (result == 0).
int lock_carry_node(carry_level * level /* level @node is in */ ,
                    carry_node * node   /* node to lock */)
{
    ....
    result = 0;
    ....
    if (node->parent && (result == 0)) {
        ....
    }
}
This is straightforward: the value of result doesn't change, so it's okay to omit the check.
V1004 The 'ref' pointer was used unsafely after it was verified against nullptr. Check lines: 1191, 1210.
carry_node *add_new_znode(znode * brother     /* existing left neighbor
                                                 of new node */ ,
                          carry_node * ref    /* carry node after which new
                                                 carry node is to be inserted
                                                 into queue. This affects
                                                 locking. */ ,
                          carry_level * doing /* carry queue where new node is
                                                 to be added */ ,
                          carry_level * todo  /* carry queue where COP_INSERT
                                                 operation to add pointer to
                                                 new node will ne added */ )
{
    ....
    /* There is a lot of possible variations here: to what parent new node
       will be attached and where. For simplicity, always do the following:

       (1) new node and @brother will have the same parent.

       (2) new node is added on the right of @brother */
    fresh = reiser4_add_carry_skip(doing, ref ? POOLO_AFTER : POOLO_LAST, ref);
    ....
    while (ZF_ISSET(reiser4_carry_real(ref), JNODE_ORPHAN)) {
        ....
    }
    ....
}
What happens in this check is that ref is checked for nullptr by the ternary operator and then passed to the reiser4_carry_real() function, where null-pointer dereferencing may take place with no prior nullptr check. However, that never happens. Let's look into the reiser4_carry_real() function:
znode *reiser4_carry_real(const carry_node * node)
{
    assert("nikita-3061", node != NULL);
    return node->lock_handle.node;
}
As you can see, the node pointer is checked for nullptr inside the function body, so everything is OK.
Next comes a probably incorrect check in the file fs/reiser4/tree.c:
V547 Expression 'child->in_parent.item_pos + 1 != 0' is always true.
int find_child_ptr(znode * parent   /* parent znode, passed locked */ ,
                   znode * child    /* child znode, passed locked */ ,
                   coord_t * result /* where result is stored in */ )
{
    ....
    if (child->in_parent.item_pos + 1 != 0) {
        ....
    }
}
We need to find the declaration of item_pos to find out what exactly it is. After searching in a few files we get the following:
struct znode {
    ....
    parent_coord_t in_parent;
    ....
} __attribute__ ((aligned(16)));
....
typedef struct parent_coord {
    ....
    pos_in_node_t item_pos;
} parent_coord_t;
....
typedef unsigned short pos_in_node_t;
In the comments, Andrey Karpov explained what this error is about. The expression is cast to type int in the if statement, so no overflow will occur even if item_pos is assigned the maximum value since casting the expression to int results in the value 0xFFFF + 1 = 0x010000 rather than 0.
All the other bugs either follow one of the patterns discussed above or are false positives, which we have also talked about.
The conclusions are simple.
Firstly, PVS-Studio is cool. A good tool helps you do your job better and faster if you know how to handle it. As a static analyzer, PVS-Studio has more than once proved to be a top-level tool. It provides you with the means to detect and solve hidden problems, typos, and mistakes.
Secondly, be careful writing code. Don't use C "tricks" unless it's the only legal way to implement some feature. Always use additional parentheses in conditions to explicitly state the desired order of calculations because even if you are a super-duper hacker and C ace, you may simply confuse the operators' precedence and make a bunch of mistakes, especially when writing large portions of code at a time.
I'd like to thank the developers for such a wonderful tool! They have done a really great job adapting PVS-Studio to GNU/Linux systems and carefully designing the analyzer's implementation (see the details here). It elegantly integrates into build systems and generates logs. If you don't need integration, you can simply "intercept" compiler launches by running make.
And above all, thank you so much for giving students, open-source projects, and single developers the opportunity to use your tool for free! That's amazing!
In Part 2 I continue my discussion on the use of the JTable. (Part 1, "Mastering the JTable," can be found in the January issue of JDJ, [Vol. 6, issue 1].) I'll briefly review the three major classes you'll need while working with data within the JTable.
1. JTable: Controls the visual presentation of the data; however, it has limited control over where the data comes from. In the simplest of circumstances, the JTable can populate itself with data only if the data is static and doesn't come from a database. In that case, it can be used without any supporting classes.
Usually the data to be populated into a JTable comes from a database. In this case, the JTable must work with the JTableModel class; rarely does it work alone.
2. AbstractTableModel: Controls where the data comes from and how it behaves within the JTable. Although the data can come from just about anywhere, this class is almost always used when the data comes from (via JDBC) a database. AbstractTableModel is used by an extension class that you define and is never used directly. It has some interesting default methods, similar to listeners, that automatically fire when data is altered. Knowing when and how these methods are fired is important to understanding how the AbstractTableModel works. I'll discuss them later.
3. TableModelListener: Not really a class but an interface that's used when you want some action to be performed while the user is interfacing with the data in the JTable. It's misnamed - in general, listeners are automatically fired when some action is performed on a control. A programmer-defined class that implements the TableModelListener isn't really a listener at all as it's never automatically fired. It must be manually fired by the programmer via the fireTableDataChanged method for the TableModel. Since TableModelListeners must be fired manually, they can't be considered a true listener. Not that they aren't valuable. I think the powers that be at Sun purposely made it this way so the programmer controls when the listener is fired.
The code placed in the listener can vary. Popular uses include adding code to update the database. Note: The use of this interface is optional. If you don't really care when data is changed, you don't have to use it.
Figure 1 illustrates the relationship between the three major players.
Part 1 focused on the use and implementation of the JTable. Part 2 covers the use of the table model in depth.
The AbstractTableModel Class
As stated earlier, the job of the AbstractTableModel is to provide the source for the data in the JTable and some built-in methods that are automatically fired when certain things happen within the Table (more on this later). This class isn't used directly but as an ancestor in a programmer-created class. When a programmer-defined class extends the AbstractTableModel class, in Javaspeak we're using a Table Model.
The Table Model can get the data from several places. I'll start with a simple Table Model example and get more complex as we go. Your Table Model might obtain its data in an array, vector, or hashtable, or it might get the data from an outside source such as a database (the most flexible and most complex). If needed, it can even generate data at runtime. In Listing 1 (Listings 1-6 can be found below), the Table Model populates itself with an array of strings. The resulting frame window is displayed in Figure 2.
Table Model Dissected
There are a few immediate advantages to using a Table Model instead of using the JTable alone. In addition to the data, the Table Model informs the JTable of the data type of the column, which leads to some desired formatting behavior. For example, since the JTable knows that a column is numeric, it'll right-justify it within the cell. Also, Boolean values will display as a checkbox (since there are only two possible values). Since the JTable is using a Table Model, it'll display the data in a user-friendly way. In Listing 2 the application doesn't use a Table Model. Notice the difference in how the data is represented (see Figure 3).
Table Model Methods
The methods shown in Listing 1 are used internally by the Table Model. Similar to listeners, these methods are fired when needed, and the programmer can choose to implement them at will. You can use all or none of them; Java doesn't care. The important point is that these methods are fired automatically and usually never by the programmer. Similar to other automatically fired methods, Java lets you decide what kind of code to execute. Java provides the method, but you (surprise!) decide the functionality. The automatic table methods are described below.
getColumnCount()
This method is fired once - at the time the JTable is created. Its sole purpose is to tell the JTable how many columns it has. If the data to be passed to the JTable is contained in an array, simply return its length (see Listing 1). If the data comes from a database, the ResultSetMetaData class can determine how many columns exist by using the following code:
// myResultSet is of the Java type ResultSet and has already been
// filled with data in the program.
int li_cols;
ResultSetMetaData myMetaData = myResultSet.getMetaData();
li_cols = myMetaData.getColumnCount();
return li_cols;
getRowCount()
As with getColumnCount(), this method is fired once - at the time the JTable is created. Its sole purpose is to tell the JTable how many rows it has. If the data to be passed to the JTable is contained in an array, simply return its length (see Listing 1). If the data comes from a database, there are a few options. Later I'll show how to populate database data into a vector. The following code can be used to return the number of rows in a vector. I'll show you how to populate it later.
// Get the number of rows that will be used to build the JTable.
// allRows is of the Java type Vector.
return allRows.size();
getColumnName(int col)
After the Table Model fires off the getColumnCount() method, it calls this method for each column that's been found. For example, if the getColumnCount() method returned a seven (for seven columns), the Table Model will fire this method seven times, passing it the number of the column that it must resolve the column name for each time. The method returns a string that represents the column name. For example:
// colNames is an array of Strings representing the column names for
// the JTable.
return colNames[col];
getValueAt(int row, int col)
After the Table Model determines the number of rows and columns, it needs to populate them with data. This is the job of the getValueAt function, which is called for every column and row intersection (called a cell) that exists in the Table Model. If there are 30 rows with five columns, this method will automatically be fired 150 times - one for each cell. As in the previous methods, you determine which code is executed in this method. Remember, Java provides the method, you provide the functionality. Listing 1 returns objects from a two-dimensional array. The following code can be used if you're returning data from a vector.
// row and allRows are vectors
row = (Vector) allRows.elementAt(row);
return row.elementAt(col);
getColumnClass(int col)
This method is automatically fired and gets the class for each column of the Table Model. The following code gets the data type of each column.
// Return the class for this column
return getValueAt(0, col).getClass();
Next, the table compares the column's data type with a list of data types for which cell renderers (more about renderers in Part 3) are registered. The default classes are Object, Number, Double, Date, Icon, ImageIcon, and Boolean.
isCellEditable(int row, int col)
This method determines which rows and columns the user is allowed to modify. Since this method returns a Boolean, if all cells are editable it simply returns a true. To prevent a JTable from editing a particular column or row value, it returns a false from this method. The following code makes column one read-only while allowing the rest of the columns to be modified.
// Make column one noneditable while allowing
// the user to edit all other columns.
if (col == 1) {
    return false;
}
else {
    return true;
}
public void setValueAt(Object value, int row, int col)
When the user makes changes to an editable cell, the Table Model is notified via this method. The new value, as well as the row and column it occurred in, is passed as arguments to this method. If the original data is coming from a database, this method becomes important. As you'll see, data retrieved from a database is held locally within the Table Model, usually as vectors. When the user changes a cell value in a JTable, the corresponding data in the Table Model isn't automatically changed. It's your responsibility to add code in this event to ensure that the data in the Table Model is the same as the data in the JTable. This becomes important when code is added to update the database. The following code updates the data (held in an array of objects) in the Table Model with the new value that the user just entered in the JTable.
// Update the array of objects with the changes the user has just
// entered in a cell. Then notify all listeners (if any) what column
// and row has changed. Further processing may take place there.
rowData[row][col] = value;
fireTableDataChanged();
Taking the Model Further
Next is an example of Table Models that use JDBC as their data source. By far the most powerful implementation of the JTable with a Table Model uses data from a database (via JDBC). The good news is that adding the JDBC element doesn't significantly change what we've already learned. Admittedly, it does add more complexity to our task, but it doesn't change the way we treat the JTable/Table Model relationship.
Where Does the Data Come From?
A Java program uses JDBC to connect to a database. JDBC is a complex topic in its own right so I'll save the comprehensive overview for a later date. Here I'll explain the relationship between the Table Model and JDBC. The examples will use a Microsoft Access database that can be reached via a locally configured ODBC data source. For simplicity, the data contained within the database will be the same as the data in the previous examples that used arrays as the data source.
Creating the Table Model
The first step is to declare the Table Model and all instance variables that will be needed. All classes needed as well as a description of the instance variables are included in the code below:
// Table Model
import javax.swing.table.AbstractTableModel;
import java.sql.*;
import java.util.Vector;
public class MyTableModel extends AbstractTableModel {
    Connection myConnect;   // Hold the JDBC Connection
    Statement myStatement;  // Will contain the SQL statement
    ResultSet myResultSet;  // Contains the result of my SQL statement
    int li_cols;            // Number of database columns
    Vector allRows;         // Contains all rows of the result set
    Vector row;             // Individual row for a result set
The next step is to add a constructor to the class. The code in the example constructor is broken into two functions. One secures a connection to the database while the other performs the retrieval. This division of functionality is intentional. When this class is instantiated we want to perform the database connection and the data retrieval. However, during the life cycle of this object, we're only interested in the data retrieval since the connection needs to be performed only once. That's why the connection logic is separate from the retrieval logic. For the sake of simplicity, this example is only interested in retrieving data. The actual update of the data will be a topic in Part 3. The code for the constructor is:
public MyTableModel() {
    // connect to database
    try {
        connect();
        getData();
    }
    catch (Exception ex) {
        System.out.println("Error!");
    }
}
The first method called within the constructor establishes the connection to our ODBC data source. Since the connection (myConnect) is an instance variable, the database can be accessed for as long as this object exists.
void connect() throws SQLException {
    try {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    }
    catch (ClassNotFoundException cnf) {
        System.out.println("Driver does not exist!");
    }
    String url = "jdbc:odbc:BradyGirls";
    myConnect = DriverManager.getConnection(url);
}
The second method called in the constructor retrieves data from the database and puts it into a vector (see Listing 3). I won't go into detail about vectors as they're a topic in their own right. Vectors are basically "growable" arrays and can also hold any type of object. The function uses two vectors: one holds an individual row of the result set (result of the SQL statement), the other contains all the "row" vectors and is essentially the "vector of vectors." When we connect a JTable with this table model, the vectors will be used to provide the data to be displayed within the table.
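The shape of that "vector of vectors" can be sketched without a real database. In the actual getData(), the loop body below would run once per result-set row inside a while (myResultSet.next()) loop, with each value coming from the ResultSet; the class and method names here are illustrative only:

```java
import java.util.Vector;

class RowStore {
    // One inner Vector per record: the "vector of vectors".
    Vector allRows = new Vector();

    // In the real getData() this body would run once per ResultSet
    // row, with each value coming from myResultSet.getObject(i).
    void addRow(Object[] columns) {
        Vector row = new Vector();
        for (int i = 0; i < columns.length; i++) {
            row.addElement(columns[i]);
        }
        allRows.addElement(row);
    }
}
```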
The last portion of the Table Model involves the automatic Table Model methods. The code placed in these methods is important. It informs the JTable about the contents of the data and what the data looks like. The automatic Table Model methods are provided in Listing 4.
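Once the vectors are filled, those automatic methods mostly reduce to one-liners over them. A sketch (the class name is mine; li_cols would normally come from the ResultSet metadata):

```java
import java.util.Vector;
import javax.swing.table.AbstractTableModel;

class VectorBackedModel extends AbstractTableModel {
    Vector allRows = new Vector(); // filled by getData() in the real model
    int li_cols = 2;               // normally taken from the ResultSet metadata

    public int getRowCount()    { return allRows.size(); }
    public int getColumnCount() { return li_cols; }

    public Object getValueAt(int row, int col) {
        // Each element of allRows is itself a Vector of column values.
        return ((Vector) allRows.get(row)).elementAt(col);
    }
}
```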
Building the JTable
If you followed the steps provided, you have a Table Model ready to be used. The next (and last) step is to write a Java program that will use it. Listing 5 instantiates a JTable based on the Table Model that's just been built. Listing 6 contains the completed Table Model.
Conclusion
You should now have a basic understanding of how custom Table Models work and how to use JDBC within a Table Model. We're still far away from what I'd consider a robust application, however. Next month I'll focus on listeners and updating the database with changes to the data.
Author Bio
Bob Hendry is a Java instructor at the Illinois Institute of Technology. He is the author of Java as a First Language. [email protected]
Download Associated Source File (~36.0 KB)
Applies to: Windows SharePoint Services 3.0, Microsoft Office SharePoint Server 2007, Microsoft Visual Studio 2005.
Ted Pattison, Ted Pattison Group
May 2007
When developing for Windows SharePoint Services 3.0, you are often required to create and modify XML files that contain Collaborative Application Markup Language (CAML). It is recommended that you configure Microsoft Visual Studio on your development workstation to reference an XML schema file named WSS.XSD so that IntelliSense works properly when you work with CAML-based files.
Create a new text file and name it CAML.xml. Add the following code.
<SchemaCatalog xmlns="">
  <Schema href="C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\XML\wss.xsd"
          targetNamespace=""/>
</SchemaCatalog>
Where does the CAML.XML file go?
Next, save CAML.XML to C:\Program Files\Microsoft Visual Studio 8\Xml\Schemas\CAML.xml.
Shut down and restart Visual Studio. At this point, IntelliSense should be available whenever you work with CAML-based files that rely on the WSS schema's target namespace.
Visual Studio has a special directory (C:\Program Files\Microsoft Visual Studio 8\Xml\Schemas) that it uses to maintain a catalog of XML schemas that drive its IntelliSense feature. The installation of Visual Studio adds to this directory a standard file named catalog.xml that references the standard XML schemas. These schemas drive the default IntelliSense support included with Visual Studio. However, this mechanism is completely extensible. You can simply create another file with an .xml extension that references other, non-standard XML schemas and copy it to the same directory. After copying this file to the correct location, you must restart Visual Studio, because Visual Studio examines all the XML files in this directory at startup to determine which XML schemas it should load to drive IntelliSense.
A while back I worked on a project which required edit controls with very strict behavior. One was a date control which needed to accept dates in mm/dd/yyyy format only. The user would be allowed to enter just the numbers and the slashes would be filled in automatically (unlike Microsoft's Date control). The control would also prevent the user from messing up the format by, for example, deleting the second character. Another edit control was used for entering dollar amounts and would automatically insert commas for the thousands separator.
I went on the hunt for a set of controls that would give me this behavior and I took a look at Masked Edit Control from Dundas Software. Unfortunately it wasn’t specific enough, especially when it came to dates. I needed more than just a masked control; I needed to make sure the value would always be valid. I also didn’t feel like spending time modifying or extending Dundas’s code to make it work like I needed it, so I took a big gulp and decided to do it all from scratch. What you see now is the result of that effort, which I finally got around to publishing. :-)
All classes derive from CEdit and are prefixed with CAMS (AMS are my company’s initials). The following table gives the class names and descriptions:
CAMSEdit - Base class for the classes below. It has some basic functionality.
CAMSAlphanumericEdit - Prohibits input of one or more characters and restricts length.
CAMSNumericEdit - Used to input a decimal number with a maximum number of digits before and/or after the decimal point.
CAMSIntegerEdit - Only allows a whole number to be entered.
CAMSCurrencyEdit - Inserts a monetary symbol in front of the value and a separator for the thousands.
CAMSDateEdit - Allows input of a date in the mm/dd/yyyy or dd/mm/yyyy format, depending on the locale.
CAMSTimeEdit - Allows input of a time with or without seconds and in 12 or 24 hour format, depending on the locale.
CAMSMaskedEdit - Takes a mask containing '#' symbols, each one corresponding to a digit. Any characters between the #s are automatically inserted as the user types. It may be customized to accept additional symbols.
CAMSUniversalEdit - Takes a mask and adopts its behavior to any of the above classes.
What follows is a more detailed description of these classes and their usage. All member functions are public unless otherwise specified.
I decided to give all my CEdit-derived classes a common base class for any common methods they may need. The result is this class which only has minor enhancements to the CEdit class, but serves as a pseudo-namespace for the Behavior classes used by the CAMSEdit-derived classes.
CAMSEdit
void SetText(const CString& strText)
Sets the text for the control. This text is validated against the controls proper format.
CString GetText() const
Retrieves the text in the control.
CString GetTrimmedText() const
Retrieves the text in the control without leading or trailing spaces.
void SetBackgroundColor(COLORREF rgb)
Sets the control's background color.
COLORREF GetBackgroundColor() const
Retrieves the control's background color.
void SetTextColor(COLORREF rgb)
Sets the control's text color.
COLORREF GetTextColor() const
Retrieves the control's text color.
bool IsReadOnly() const
Returns true if the control is in read-only mode.
virtual bool ShouldEnter(TCHAR c) const;
Determines whether the given character may be entered into the control; derived classes override this to implement their input rules.
This class is used for general alphanumeric input, except for any characters that you specify as disallowed. If you need to prevent certain characters from being entered, this class is useful. It can also restrict the length of the text.
void SetInvalidCharacters(const CString& strInvalidChars)
Sets each of the characters contained in the given string as not allowed for input. By default these characters are not allowed: %'*"+?><:\
const CString& GetInvalidCharacters() const
Retrieves the characters not allowed for input.
void SetMaxCharacters(int nMaxChars)
Sets the maximum number of characters allowed. By default there is no limit (0).
int GetMaxCharacters() const
Retrieves the maximum number of characters allowed.
This is the base class for all numeric entry classes. It ensures that the user enters a valid number and provides features such as automatic formatting. It also allows you to restrict what the number can look like, such as how many digits before and after the decimal point, and whether it can be negative or not.
void SetDouble(double dText, bool bTrimTrailingZeros = true)
double GetDouble() const
void SetInt(int nText)
int GetInt() const
void SetMaxWholeDigits(int nMaxWholeDigits)
Sets the maximum number of digits before the decimal point. This is 9 by default.
int GetMaxWholeDigits() const
Retrieves the maximum number of digits before the decimal point.
void SetMaxDecimalPlaces(int nMaxDecimalPlaces)
Sets the maximum number of digits after the decimal point. This is 4 by default.
int GetMaxDecimalPlaces() const
Retrieves the maximum number of digits after the decimal point.
void AllowNegative(bool bAllowNegative = true)
Sets a flag to allow a negative sign or not. By default negatives are allowed.
bool IsNegativeAllowed() const
Determines whether a negative sign can be entered or not.
void SetDigitsInGroup(int nDigitsInGroup)
Sets the size of the groups of digits to be separated before the decimal point. This is 0 by default but is typically set to 3 to separate thousands.
int GetDigitsInGroup() const
Retrieves the number of digits per group shown before the decimal point.
void SetSeparator(TCHAR cDecimal, TCHAR cGroup)
Sets the characters used for the decimal point and the group (thousands) separator.
void GetSeparator(TCHAR* pcDecimal, TCHAR* pcGroup) const
Retrieves the characters used for the decimal point and the group (thousands) separator.
void SetPrefix(const CString& strPrefix)
Sets the characters to automatically display in front of the numeric value entered. This is useful for displaying currency signs. By default it is empty.
const CString& GetPrefix() const
Retrieves the characters to automatically display in front of the numeric value entered.
void SetMask(const CString& strMask)
Parses a string containing '#', comma, and period characters to determine the format of the number that the user may enter. For example: #,###.# means: (1) allow up to 4 digits before the decimal point, (2) insert commas every 3 digits before the decimal point, (3) use a dot as the decimal point, and (4) allow up to one digit after the decimal point.
CString GetMask() const
Retrieves a string containing '#', comma, and period characters representing the format of the numeric value that the user may enter.
void SetRange(double dMin, double dMax)
Sets the range of valid numbers values allowed. By default it's AMS_MIN_NUMBER to AMS_MAX_NUMBER.
void GetRange(double* pdMin, double* pdMax) const
Retrieves the range of valid numbers allowed.
bool IsValid() const
Returns true if the number is valid and falls within the allowed range.
bool CheckIfValid(bool bShowErrorIfNotValid = true)
Returns true if the number is valid and optionally shows an error message if not.
void ModifyFlags(UINT uAdd, UINT uRemove)
Allows adding or removing flags from the control. This may be used to set/clear the flags below to alter the behavior of the control:
AddDecimalAfterMaxWholeDigits
Inserts the decimal symbol automatically when the user enters a digit and all the whole numbers have already been entered.
PadWithZerosAfterDecimalWhenTextChanges
Automatically pads the number with zeros after the decimal point any time the text is changed (using SetWindowText or a key stroke).
PadWithZerosAfterDecimalWhenTextIsSet
Automatically pads the number with zeros after the decimal point any time the text is set (using SetWindowText).
OnKillFocus_PadWithZerosBeforeDecimal
Pads the number with zeros before the decimal symbol up to the max allowed (set by SetMaxWholeDigits).
OnKillFocus_PadWithZerosAfterDecimal
Pads the number with zeros after the decimal symbol up to the max allowed (set by SetMaxDecimalPlaces).
OnKillFocus_DontPadWithZerosIfEmpty
When combined with any of the above two "Pad" flags, the value is not padded if it's empty.
OnKillFocus_Beep_IfInvalid
Beeps if the value is not a valid number.
OnKillFocus_Beep_IfEmpty
Beeps if the no value has been entered.
OnKillFocus_Beep
Beeps if the value is not valid or has not been entered.
OnKillFocus_SetValid_IfInvalid
Changes the value to a valid number if it isn't.
OnKillFocus_SetValid_IfEmpty
Fills in a valid number value if the control is empty.
OnKillFocus_SetValid
Sets the value to a valid number if it's empty or not valid.
OnKillFocus_SetFocus_IfInvalid
Sets the focus back to the control if its value is not valid.
OnKillFocus_SetFocus_IfEmpty
Sets the focus back to the control if it doesn't contain a value.
OnKillFocus_SetFocus
Sets the focus back to the control if it's empty or not valid.
OnKillFocus_ShowMessage_IfInvalid
Shows an error message box if the value is not a valid number.
OnKillFocus_ShowMessage_IfEmpty
Shows an error message box if it doesn't contain a value.
OnKillFocus_ShowMessage
Shows an error message box if it's empty or not valid.
This class is used to allow only integer values to be entered. It is derived from CAMSNumericEdit.
This class is used for entering monetary values. It is also derived from CAMSNumericEdit but it sets the prefix to the currency sign specified in the locale (such as a '$'). It also separates thousands using the character specified in the locale (such as a comma). And it sets the maximum number of digits after the decimal point to two.
This class handles dates in a very specific format: mm/dd/yyyy or dd/mm/yyyy, depending on the locale. As the user enters the digits, the slashes are automatically filled in. The user may only remove characters from the right side of the value entered. This ensures that the value is kept in the proper format. As a bonus, the user may use the up/down arrow keys to increment/decrement the month, day, or year, depending on the location of the caret.
void SetDate(int nYear, int nMonth, int nDay)
Sets the date value.
void SetDate(const CTime& date)
Sets the date value from the given CTime object.
void SetDate(const COleDateTime& date)
Sets the date value from the given COleDateTime object.
void SetDateToToday()
Sets the date value to today's date.
CTime GetDate() const
Retrieves the date (with zeros for the hour, minute, and second).
COleDateTime GetOleDate() const
Retrieves the date as a COleDateTime object.
int GetYear() const
Retrieves the year.
int GetMonth() const
Retrieves the month
int GetDay() const
Retrieves the day
void SetYear(int nYear)
Sets the year.
void SetMonth(int nMonth)
Sets the month
void SetDay(int nDay)
Sets the day
Returns true if the date value is valid.
Returns true if the date is valid and optionally shows an error message if not.
void SetRange(const CTime& dateMin, const CTime& dateMax)
Sets the range of valid date values allowed. By default it's 01/01/1900 to 12/31/9999.
void SetRange(const COleDateTime& dateMin, const COleDateTime& dateMax)
Sets the range of valid date values allowed.
void GetRange(CTime* pDateMin, CTime* pDateMax) const
Retrieves the range of valid dates allowed.
void GetRange(COleDateTime* pDateMin, COleDateTime* pDateMax) const
void SetSeparator(TCHAR cSep)
Sets the character used to separate the date's components. By default it's a slash ('/').
TCHAR GetSeparator() const
Retrieves the character used to separate the date components.
void ShowDayBeforeMonth (bool bDayBeforeMonth = true)
Overrides the locale's format and based on the flag sets the day to be shown before or after the month.
bool IsDayShownBeforeMonth () const
Returns true if the day will be shown before the month (dd/mm/yyyy).
Allows adding or removing flags from the control. This may be used to set/clear the flags below to alter the behavior of the control when it loses focus:
Beeps if the value is not a valid date.
Changes the value to a valid date if it isn't.
Fills in a valid date value (today's date if allowed) if the control is empty.
Sets the value to a valid date if it's empty or not valid.
Shows an error message box if the value is not a valid date.
This class handles times in a very specific format: HH:mm or hh:mm AM, depending on the locale. Seconds are also supported, but not by default. As the user enters the digits, the colons are automatically filled in. The user may only remove characters from the right side of the value entered. This ensures that the value is kept in the proper format. As a bonus, the user may use the up/down arrow keys to increment/decrement the hour, minute, second, or AM/PM, depending on the location of the caret.
void SetTime(int nHour, int nMinute, int nSecond = 0)
Sets the time value.
void SetTime(const CTime& date)
Sets the time value using the time portion of the given date.
void SetTime(const COleDateTime& date)
Sets the time value using the time portion of the given date
void SetTimeToNow()
Sets the time value to the current time.
CTime GetTime() const
Retrieves the time (with Dec 30, 1899 for the date).
COleDateTime GetOleTime() const
Retrieves the time as a COleDateTime object.
int GetHour() const
Retrieves the hour.
int GetMinute() const
Retrieves the minute.
int GetSecond() const
Retrieves the second.
CString GetAMPM() const
Retrieves the AM or PM symbol for the current time value.
void SetHour(int nYear)
Sets the hour.
void SetMinute(int nMonth)
Sets the minute.
void SetSecond(int nDay)
Sets the second.
void SetAMPM(bool bAM)
Sets the time to AM (true) or PM (false).
bool IsValid(bool bCheckRangeAlso = true) const
Returns true if the time value is valid. If bCheckRangeAlso is true, the time value is also checked that it falls between the range of times established by the SetRange function.
Returns true if the time is valid and is in range, and optionally shows an error message if not.
Sets the range of valid time values allowed. By default it's 00:00:00 to 23:59:59.
Note: While the control has focus, the user will be allowed to input any value between 00:00:00 and 23:59:59, regardless of what values are passed to SetRange.
Sets the range of valid time values allowed.
Retrieves the range of valid times allowed.
Sets the character used to separate the time's components. By default it's a colon (':').
void Show24HourFormat (bool bShow24HourFormat = true)
Overrides the locale's format and based on the flag sets the hour to go from 00 to 23 (24 hour format) or from 01 to 12 with the AM/PM symbols.
bool IsShowing24HourFormat () const
Returns true if the hour is being shown in 24 hour format.
void ShowSeconds (bool bShowSeconds = true)
Sets whether the seconds will be shown or not.
bool IsShowingSeconds () const
Returns true if the seconds will be shown.
void SetAMPMSymbols(const CString& strAM, const CString& strPM)
Sets the strings used for the AM and PM symbols.
void GetAMPMSymbols(CString* pStrAM, CString* pStrPM) const
Retrieves the strings used for the AM and PM symbols.
void SetDateTime(int nHour, int nMinute, int nSecond = 0)
Sets the date and time value.
void SetDateTime(const CTime& date)
Sets the date and time value from the given CTime object.
void SetDateTime(const COleDateTime& date)
Sets the date and time value from the given COleDateTime object.
void SetToNow()
Sets the date and time value to the current date and time.
CTime GetDateTime() const
Retrieves the date and time into a CTime object.
COleDateTime GetOleDateTime() const
Retrieves the date and time into a COleDateTime object.
Returns true if the date and time value is valid.
Returns true if the date and time is valid and is in range, and optionally shows an error message if not.
Sets the range of valid date values allowed. By default it's 01/01/1900 00:00:00 to 12/31/9999 23:59:59.
void SetSeparator(TCHAR cSep, bool bDate)
Sets the character used to separate the date or the time components. By default it's a slash ('/') for the date, or the colon (':') for the time.
TCHAR GetSeparator(bool bDate) const
Retrieves the character used to separate the date or the time components.
Allows adding or removing flags from the control. This may be used to set/clear the same flags as in the CAMSDateEdit and CAMSTimeEdit classes, plus these:
DateOnly
Changes this class to only allow a date value, just like the CAMSDateEdit class.
TimeOnly
Changes this class to only allow a time value, just like the CAMSTimeEdit class. Note: If both of these flags are set, the TimeOnly flag is ignored.
This class is useful for values with a fixed numeric format, such as phone numbers, social security numbers, or zip codes.
Sets the format of the values to be entered. By default, each '#' symbol in the mask represents a digit. Any other characters in between the # symbols are automatically filled-in as the user types digits.
Additional characters may be added to be interpreted as special mask symbols. See GetSymbolArray below.
const CString& GetMask() const
Retrieves the mask used to format the value entered by the user.
CString GetNumericText() const
Retrieves the control's value without any non-numeric characters.
SymbolArray& GetSymbolArray ()
Retrieves a reference to the array of symbols that may be found on the mask. By default, this array will contain one element for the # symbol.
Use this function to add, edit, or delete symbols (see the CAMSEdit::MaskedBehavior::Symbol class in amsEdit.h). Here are some examples of how to use this function to add additional symbols:
// Add the '?' symbol to allow alphabetic characters.
m_ctlMasked.GetSymbolArray().Add(
    CAMSMaskedEdit::Symbol('?', _istalpha));

// Add the '{' symbol to allow alphabetic characters and
// then convert them to uppercase.
m_ctlMasked.GetSymbolArray().Add(
    CAMSMaskedEdit::Symbol('{', _istalpha, _totupper));
This class is capable of dynamically taking on the behavior of any of the above classes based on the mask assigned to it. See SetMask below for more details. It contains not only its own member functions but also those of the Alphanumeric, Numeric, Masked, and DateTime classes above. With such high overhead, I recommend you only use this class for controls which must dynamically change from one behavior to another at run time. The default behavior is Alphanumeric.
SetMask
Sets the format of the values to be entered; the mask determines which of the above behaviors the control takes on. Examples of masks:
##/##/#### ##:##:##
##/##/#### ##:##
##/##/####
##:##:##
##:##
###
#,###.###
#
###-####
These classes are designed to be used inside CDialog-derived classes as replacements for the CEdit variables typically created with the ClassWizard. Here's what you need to use them inside your project:
Note: If you need to keep your executable as small as possible, you can change the AMSEDIT_COMPILED_CLASSES macro (defined in amsEdit.h) to compile just the classes you need to use. Also, if you want to build and export these classes in an MFC Extension DLL, look inside amsEdit.h for instructions.
Version 1.0
Apr 9, 2002
Apr 21, 2002
Version 2.0
Jan 28, 2003
CAMSDateTimeEdit
DateBehavior
TimeBehavior
OnKillFocus
NumericBehavior
CAMSCurrencyEdit
IsValidMonth
GetTrimmedText
GetTextAsLong
GetTextAsDouble
GetInt
GetDouble
CAMSNumericEdit
CAMSIntegerEdit
SetInt
SetDouble
Feb 14, 2003
IsReadOnly
Mar 4, 2003
EN_CHANGE
Sep 25, 2003
Version 3.0
Mar 18, 2004
_tsetlocale(LC_ALL, _T(""));
AMSEDIT_EXPORT
CAMSNumericEdit
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
BEGIN_MESSAGE_MAP(CAMSDateEdit, CEdit)
    //{{AFX_MSG_MAP(CAMSDateEdit)
    ON_WM_CHAR()      // Warning occurs on each one of these
    ON_WM_KEYDOWN()   // Warning occurs on each one of these, etc...
    ON_WM_KILLFOCUS()
    //}}AFX_MSG_MAP
    ON_MESSAGE(WM_CUT, OnCut)
    ON_MESSAGE(WM_PASTE, OnPaste)
    ON_MESSAGE(WM_CLEAR, OnClear)
    ON_MESSAGE(WM_SETTEXT, OnSetText)
END_MESSAGE_MAP()
BEGIN_MESSAGE_MAP(CMyExtDateEdit, CMyExtEdit)
    { WM_CHAR, 0, 0, 0, AfxSig_vwww, \
      (AFX_PMSG)(AFX_PMSGW) \
      (static_cast< void (AFX_MSG_CALL CWnd::*)(UINT, UINT, UINT) >(&ThisClass::XD_OnChar)) },
    { WM_KEYDOWN, 0, 0, 0, AfxSig_vwww, \
      (AFX_PMSG)(AFX_PMSGW) \
      (static_cast< void (AFX_MSG_CALL CWnd::*)(UINT, UINT, UINT) >(&ThisClass::XD_OnKeyDown)) },
    { WM_KILLFOCUS, 0, 0, 0, AfxSig_vW, \
      (AFX_PMSG)(AFX_PMSGW) \
      (static_cast< void (AFX_MSG_CALL CWnd::*)(CWnd*) >(&ThisClass::XD_OnKillFocus)) },
CMyExtDateEdit::CMyExtDateEdit() : DateBehavior(this), CMyExtEdit::Behavior(this)
#include <pstensor.h>
Full C matrix (must provide orthonormal MO basis)
Dealias basis set.
Debug level.
Pseudospectral grid.
Minimum eigenvalue to keep in the dealias basis.
Minimum eigenvalue to keep in the primary basis.
Number of active occupieds.
nmo_ + ndealias_
Number of grid points.
Number of active virtuals.
Number of dealias functions felt by the grid.
Number of AO dealias functions.
Number of frozen occupieds.
Number of frozen virtuals.
Number of primary functions felt by the grid.
Number of MO primary functions.
Total number of occupieds.
Number of AO primary functions.
Total number of virtuals.
Omega to use for low-pass smoothing.
options reference
Primary basis set.
Print level.
Target Q tensor (nmo x naux)
Finished augmented collocation matrix (naug x naux)
AO dealias collocation matrix (dealias x naux)
MO dealias collocation matrix (dealias x naux)
Target R tensor (nmo x naux)
AO primary collocation matrix (primary x naux)
MO primary collocation matrix (primary' x naux)
AO-basis overlap (nso x dealias)
Overlap matrix (nmo x dealias)
Use omega integrals or not?
Grid weights (hopefully SPD) | http://psicode.org/psi4manual/doxymaster/classpsi_1_1PSTensor.html | CC-MAIN-2017-22 | refinedweb | 176 | 53.78 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Tue, Feb 14, 2012 at 04:20:17PM -0600, Ryan S. Arnold wrote:
> On Fri, Feb 10, 2012 at 1:24 PM, Kees Cook <kees@outflux.net> wrote:
> > Just checking in on this. Is anyone willing to ACK this patch?
>
> The patch passed make check for PowerPC 32-bit and Libdfp 32-bit
> regression testing of the printf-hooks mechanism.
> > [...]
>
> From my tests it looks like the test-case needs a TIMEOUTFACTOR
> environment variable to give the test time to SEGV on PowerPC64. On a
> system that's not under load a timeoutfactor of 10 seemed to be
> adequate.
>
> in sysdeps/powerpc/powerpc64/Makefile:
>
> ifeq ($(subdir),stdio-common)
> bug-vfprintf-nargs-ENV =
> endif
>
> The problem with this method is that this may still fail with a
> SIGALRM before the SEGV happens on a system under load (for instance
> under a parallel make check).
>
> The other possibility is to change the expected signal to SIGALRM for
> powerpc64 in bug-vfprintf-nargs.c:
>
> #if __WORDSIZE == 32
> # define EXPECTED_STATUS 0
> #elif defined __powerpc64__
> # define EXPECTED_SIGNAL SIGALRM
> #else
> # define EXPECTED_SIGNAL SIGSEGV
> #endif
>
> Of course, on a system that's not under load this may SEGV before the
> timeout is hit and SIGALRM is raised.

Perhaps under 64-bit, it should just skip the test entirely? The 64-bit
case is meaningless anyway.

-Kees

--
Kees Cook @outflux.net
Abstract: Java 5 adds a new way of casting that does not show compiler warnings or errors. Yet another way to shoot yourself in the foot?
Welcome to the 127th edition of The Java(tm) Specialists' Newsletter. I have just returned from a visit to Crete with some friends. It was simply incredible. For example, sitting in a Greek kafeteria sipping iced coffee and discussing the drawbacks and benefits of various design patterns! One of my personal highlights was seeing an octopus whilst snorkelling. What an unforgettable experience ... its entire body changed colour as I approached it. These creatures are beyond weird; you could swear they were dumped in our seas by some alien teenage prankster.
Before we start, have you noticed how Java is striving towards the C Programming Language? We now have static imports, enums and varargs. So as a joke, we'll define a Stdio class. This must be in a package, so that we can statically import the functions.
package com.cretesoft.c; // must be in a package public class Stdio { public static void printf(String format, Object... args) { System.out.printf(format, args); } }
To make the example fun, we make the interface Alien and the concrete classes Octopus and SeaSlug that implement Alien. The 30cm SeaSlug was another incredibly weird creature, flapping various appendages whilst creeping over the rocks.
public interface Alien { void swim(); void glow(); } import static com.cretesoft.c.Stdio.printf; public class Octopus implements Alien { public void swim() { printf("Squirting water from my head.\n"); } public void glow() { printf("I'll be brown, no, yellow, no green.\n"); } } import static com.cretesoft.c.Stdio.printf; public class SeaSlug implements Alien { public void swim() { printf("Flap various appendages.\n"); } public void glow() { printf("Glow with a yellow hue.\n"); } }
In the past, the following cast was illegal and caused a compilation error:
String s = "42"; Integer i = (Integer)s;
Since Java 5, there is a new way of casting that does not emit a warning. Each class has a cast() method that does so without causing compiler errors:
String s = "42"; Integer i = Integer.class.cast(s);
Pre-Java 5, you would have had to first cast the String to an Object, and then down to an Integer, to get the compiler to shut up:
String s = "42"; Integer i = (Integer)((Object)s);
Of course, this cast will always fail, and anyone who uses the Class.cast() method in this way is in need of some R&R in Crete.
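The difference is easy to demonstrate with a hypothetical snippet: the Class.cast() version compiles without complaint, but blows up with a ClassCastException as soon as it runs:

```java
class CastDemo {
    public static void main(String[] args) {
        Object s = "42";
        try {
            Integer i = Integer.class.cast(s); // compiles without complaint...
            System.out.println("never reached: " + i);
        } catch (ClassCastException ex) {
            // ...but fails here at run time: a String is not an Integer
            System.out.println("cast failed at run time");
        }
    }
}
```

Running it prints "cast failed at run time": the error has simply moved from compile time to run time.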
The Class.cast() method is used in only one place in JDK 5, but in nine places in JDK 6. Its popularity is growing! A place where it makes sense is with filtering of collections. Let's say we have a collection of objects, and we want to make a subset of the collection containing only Aliens. Here is a class that does that for you:
import java.util.*; public class CollectionFilter { /** * Filters the src collection and puts the objects matching the * clazz into the dest collection. */ public static <T> void filter(Class<T> clazz, Collection<?> src, Collection<T> dest) { for (Object o : src) { if (clazz.isInstance(o)) { dest.add(clazz.cast(o)); } } } /** * Filters the src collection and puts all matching objects into * an ArrayList, which is then returned. */ public static <T> Collection<T> filter(Class<T> clazz, Collection<?> src) { Collection<T> result = new ArrayList<T>(); filter(clazz, src, result); return result; } }
If instead of the Class.cast() method, we do an explicit cast:
dest.add((T)o) it causes the following warning:
CollectionFilter.java:13: warning: [unchecked] unchecked cast found : java.lang.Object required: T dest.add((T)o); ^ 1 warning
Here is an example of how it can be used to filter out the aliens in the sea:
import java.util.*; public class StavrosLagoon { public static void main(String[] args) { Collection<Object> sea = new ArrayList<Object>(); sea.add("seaweed"); sea.add(new Octopus()); sea.add("rock"); sea.add("fish"); sea.add(new Octopus()); sea.add("sand"); sea.add(new SeaSlug()); sea.add("starfish"); Collection<Alien> aliens = CollectionFilter.filter(Alien.class, sea); for (Alien alien : aliens) { alien.swim(); alien.glow(); } } }
There are a few other places where the Class.cast() can be useful. One of these is in generating dynamic proxy instances if you want to avoid having to deal with the unchecked cast warning.
Another interesting new method in Java 5 is the Class.asSubClass() method. We can use that when we have a class object that we want to bind to a specific subclass instance, like this:
public class SeaSlugTest { public static void main(String[] args) throws Exception { Class<?> someClass = Class.forName("SeaSlug"); Class<? extends Alien> clz = someClass.asSubclass(Alien.class); Alien alien = clz.newInstance(); alien.glow(); } }
It is good to be back in Cape Town. Had to take my Alfa Romeo for its 100'000km service, which went smoothly. I still think that the Alfa Romeo is the best value-for-money car that you can buy, especially if you enjoy driving but have to watch your budget. Their reliability has improved drastically in the last few years. Mine has certainly earned her keep. Just a pity about the speeding fines ...... | https://www.javaspecialists.eu/archive/Issue127-Casting-like-a-Tiger.html | CC-MAIN-2020-45 | refinedweb | 882 | 67.45 |
double or float?
Graham Robinson
Greenhorn
Joined: Nov 23, 2005
Posts: 16
posted
Dec 11, 2005 06:38:00
Hi, this code is basically meant to work out an Olympic dive score and then print out a bar chart using rectangles. But the difficulty can be between 1.2 and 3.5, so an int won't work, and when I change it there are problems with the Rect object, a "loss of precision" error or something: it needs an int. But of course the bar uses the score, which can be anything, so that can't be an int either. I'm really stuck on how to do this; any help would be great, thanks.
Here's the code:
import element.*;
import java.awt.Color;

/**
 * Prompt user for 7 scores for Olympic Diving
 * and produce the lowest, average and overall mark, printed on a bar chart.
 * @Author Robinson
 * @version 11 November 2005
 */
public class DivingScores {
    public static void main(String[] args) {
        // construct new console window
        ConsoleWindow c = new ConsoleWindow();
        // construct a drawing window object
        DrawingWindow d = new DrawingWindow(300, 350);
        // background
        d.setBackground(Color.orange);
        d.clear(d.bounds());
        // variables
        int score, difficult, judge, heaviest = 0, x = 10, height, barTop, x2 = 16;
        final int WIDTH = 20;
        boolean valid, validiff;
        float total = 0.0f, average = 0.0f;
        // axes
        d.setForeground(Color.green);
        d.moveTo(30, 40);
        d.lineTo(30, 250);
        d.lineTo(240, 250);
        // axis labels
        d.setForeground(Color.blue);
        Text judge1 = new Text("Judge", 122, 280);
        judge1.drawOn(d);
        Text score1 = new Text("Score", 15, 20);
        score1.drawOn(d);
        // number the x axis
        for (int count = 1; count <= 7; count++) {
            d.setForeground(Color.cyan);
            x2 = x2 + 30;
            Text m1 = new Text(count, x2, 260);
            m1.drawOn(d);
        }
        // main loop, including bar chart
        for (int count = 0; count < 7; count++) {
            do {
                c.out.println("Please input difficulty level of dive");
                difficult = c.input.readInt();
                validiff = (difficult >= 1.2 && difficult <= 3.5);
                if (!validiff)
                    c.out.println("The entered value isn't within the accepted range.");
                else
                    c.out.println("Please input score of dive");
                score = c.input.readInt(); // score entered by user
                valid = (score >= 0 && score <= 200);
                if (!valid)
                    c.out.println("The value entered isn't valid.");
            } while (!valid);
            total += score; // running total
            x = x + 30;
            height = score;
            barTop = (250 - height);
            // set the bar's colour
            d.setForeground(Color.magenta);
            Rect bar = new Rect(x, barTop, WIDTH, height);
            bar.fillOn(d);
            if (score > heaviest)
                heaviest = score;
            average = total / 12;
        } // end of for
        // drawing window output
        d.setForeground(Color.white);
        Text heavy = new Text("Heaviest Rainfall " + heaviest + "mm", 30, 295);
        heavy.drawOn(d);
        Text avR = new Text("Average Rainfall " + average + "mm", 30, 310);
        avR.drawOn(d);
        // console window output
        c.out.println("Heaviest rainfall = " + heaviest);
        c.out.println("Average rainfall = " + average);
    } // end of main
} // end of class
BTW, you may see things to do with rainfall; I just copied it from an old program, but they are just prompts and comments.
[ December 11, 2005: Message edited by: Graham Robinson ]
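For reference, the root of the trouble above can be shown in a few lines: an int simply cannot hold a value like 1.2, and Java refuses to narrow a double into an int without an explicit cast, which then drops the fraction. A minimal sketch:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        double difficulty = 1.2;
        // int d = difficulty;      // won't compile: "possible loss of precision"
        int d = (int) difficulty;   // an explicit cast compiles, but truncates
        System.out.println(d);      // prints 1, the fraction is gone
    }
}
```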
Keith Lynn
Ranch Hand
Joined: Feb 07, 2005
Posts: 2367
posted
Dec 11, 2005 13:47:00
You might try using a scale where, say, 1000 points is the distance between the bottom of the bar chart and the top of the bar chart. Use a double and round it to 2 places, then multiply it by 100 and that will give you a point in the chart.
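That suggestion can be sketched as follows (the class and method names are mine, not from the thread): round the double score to 2 decimal places, then scale it to an integer chart coordinate, so a score of up to 10 maps onto 1000 chart points.

```java
public class ChartScale {
    // Round to 2 decimal places, then scale by 100 to integer chart points,
    // e.g. a score of 3.5 maps to 350 of the 1000 available points.
    static int toChartPoints(double score) {
        double rounded = Math.round(score * 100) / 100.0;
        return (int) Math.round(rounded * 100);
    }

    public static void main(String[] args) {
        System.out.println(toChartPoints(3.5)); // prints 350
        System.out.println(toChartPoints(2.3)); // prints 230
    }
}
```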
Graham Robinson
Greenhorn
Joined: Nov 23, 2005
Posts: 16
posted
Dec 13, 2005 10:51:00
Thanks, but I managed to do it using an int typecast; it doesn't give as accurate results, but it works.
Layne Lund
Ranch Hand
Joined: Dec 06, 2001
Posts: 3061
posted
Dec 13, 2005 11:09:00
For more accurate results, you might want to try Keith's suggestion. You said that the score can be anything between 1.2 and 3.5. Are you sure about this? The value of PI lies in this range. Is this a possible score value? Somehow I doubt it. In fact, there are probably MANY values between 1.2 and 3.5 that are not likely to be used as a score. However, it might be more helpful to describe what values CAN be used. For example, you can restrict the user input to only a few decimal places (2 for example). In this case, you can use an int that is really 100 times the actual score. This might take some extra processing for getting the input, but it will make drawing the bar chart much easier and more precise.
Alternatively, you can keep the float values that you have and just multiply them by 100 (or 1000 or whatever) before you graph them. The bar chart shows a relative value anyways, so such a scaling won't make any visible difference in the final result. You will still need to cast the result to an int after the multiplication, but it seems you have figured out how to do that.
Layne
Java API Documentation
The Java Tutorial
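Layne's first option, storing 100 times the actual score in an int, might look like this (the names are illustrative, not from the thread):

```java
public class FixedPointScore {
    static final int SCALE = 100; // store scores in hundredths

    // Convert the user's decimal score to hundredths once, up front,
    // so all subsequent chart math is exact integer arithmetic.
    static int toHundredths(double score) {
        return (int) Math.round(score * SCALE);
    }

    public static void main(String[] args) {
        int score = toHundredths(9.75);     // 975 hundredths
        int barHeight = score * 20 / SCALE; // 195 pixels, no float rounding
        System.out.println(score + " " + barHeight); // prints "975 195"
    }
}
```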
Graham Robinson
Greenhorn
Joined: Nov 23, 2005
Posts: 16
posted
Dec 13, 2005 11:29:00
Well, the scores have to be within that range, as it's the dive's difficulty, not the actual score. The bar chart is more of an indication; this is how I did it:
import element.*;
import java.awt.*;

/**
 * Prompt user for 7 scores for Olympic Diving
 * and produce the lowest, average and overall mark, printed on a bar chart.
 * @Author Robinson
 * @version 12 November 2005
 */
public class DivingScores {
    public static void main(String[] args) {
        // construct new console window
        ConsoleWindow c = new ConsoleWindow();
        // construct a drawing window object
        DrawingWindow d = new DrawingWindow(290, 400);
        Color mySilver = new Color(212, 208, 200);  // standard colour for Windows IE task bar etc.
        Color mySilver2 = new Color(188, 188, 188); // lighter grey for gridlines
        // background
        d.setBackground(mySilver);
        d.clear(d.bounds());
        // variables
        int judge, low, lineTt = 30, total2, high, x = 10, x3 = 0, x4 = 0, y5 = 16,
            height, heightLow, heightAv, barTopAv, heightHigh, entry = 0,
            lowEntry = 0, highEntry = 0, barTopLow, barTopHigh, barTop, x2 = 16;
        double score = 0.0d, difficult = 0.0d, heaviest = 0.0d, lowest = 10.0d,
               overall = 0.0d, total = 0.0d, average = 0.0d;
        final int WIDTH = 20, LINET = 30, LINEL = 280;
        boolean valid, validiff;
        d.setForeground(Color.white);
        Rect back = new Rect(LINET, 50, 250, 200);
        back.fillOn(d);
        // gridlines
        for (int count = 1; count <= 11; count++) {
            d.setForeground(mySilver2);
            lineTt = lineTt + 20;
            d.moveTo(LINET, lineTt);
            d.lineTo(LINEL, lineTt);
        }
        // axes
        d.setForeground(Color.black);
        d.moveTo(30, 40);
        d.lineTo(30, 250);
        d.lineTo(280, 250);
        // line at 5
        d.setForeground(Color.black);
        d.moveTo(30, 150);
        d.lineTo(280, 150);
        // line at 10
        d.moveTo(30, 50);
        d.lineTo(280, 50);
        // axis labels
        Text judge1 = new Text("Judge", 140, 275);
        judge1.drawOn(d);
        Text score1 = new Text("Score", 15, 35);
        score1.drawOn(d);
        Text num5 = new Text("5", 22, 155);
        num5.drawOn(d);
        Text num10 = new Text("10", 15, 55);
        num10.drawOn(d);
        Text av = new Text("AV", 257, 260);
        av.drawOn(d);
        // number the x axis
        for (int count = 1; count <= 7; count++) {
            x2 = x2 + 30;
            Text m1 = new Text(count, x2, 260);
            m1.drawOn(d);
        }
        do {
            c.out.println("Please input difficulty level of dive");
            difficult = c.input.readDouble();
            validiff = (difficult >= 1.2 && difficult <= 3.5);
            if (!validiff)
                c.out.println("The entered value isn't within the accepted range.");
        } while (!validiff);
        // main loop, including bar chart
        for (int count = 0; count < 7; count++) {
            do {
                c.out.println("Please input score of dive"); // user enters score
                score = c.input.readDouble();
                valid = (score >= 0 && score <= 10);
                if (!valid)
                    c.out.println("The value entered isn't valid.");
            } while (!valid);
            total += score;
            x = x + 30;
            entry++;
            if (score > heaviest) {
                heaviest = score;
                highEntry = entry;
                x3 = x;
            }
            if (score < lowest) {
                lowest = score;
                lowEntry = entry;
                x4 = x;
            }
            height = (int) score * 20;
            barTop = (250 - height);
            // set the bar's colour
            d.setForeground(Color.yellow);
            Rect bar = new Rect(x, barTop, WIDTH, height);
            bar.fillOn(d);
            average = (total - (lowest + heaviest)) / 5;
            overall = average * difficult;
        } // end of for
        // lowest and highest bars
        heightLow = (int) lowest * 20;
        barTopLow = (250 - heightLow);
        d.setForeground(Color.red);
        Rect barLH = new Rect(x4, barTopLow, WIDTH, heightLow);
        barLH.fillOn(d);
        heightHigh = (int) heaviest * 20;
        barTopHigh = (250 - heightHigh);
        Rect barH = new Rect(x3, barTopHigh, WIDTH, heightHigh);
        barH.fillOn(d);
        heightAv = (int) average * 20;
        barTopAv = (250 - heightAv);
        d.setForeground(Color.blue);
        Rect avy = new Rect(255, barTopAv, WIDTH, heightAv);
        avy.fillOn(d);
        d.moveTo(30, barTopAv);
        d.lineTo(280, barTopAv);
        // drawing window output
        d.setForeground(Color.red);
        Text heavy = new Text("Highest mark " + heaviest, 30, 295);
        heavy.drawOn(d);
        Text lowee = new Text("Lowest mark " + lowest, 30, 310);
        lowee.drawOn(d);
        d.setForeground(Color.blue);
        Text avR = new Text("Average mark (AV) " + average, 30, 325);
        avR.drawOn(d);
        d.setForeground(Color.black);
        Text over = new Text("Overall mark " + overall, 30, 340);
        over.drawOn(d);
        Text judgeLow = new Text("Lowest mark from judge " + lowEntry, 30, 355);
        judgeLow.drawOn(d);
        Text judgeHigh = new Text("Highest mark from judge " + highEntry, 30, 370);
        judgeHigh.drawOn(d);
        // console window output
        c.out.println("Highest mark = " + heaviest);
        c.out.println("Lowest mark = " + lowest);
        c.out.println("Average mark = " + average);
        c.out.println("Overall mark = " + overall);
        c.out.println("total = " + total);
        c.out.println("Highest mark from judge " + highEntry);
        c.out.println("Lowest mark from judge " + lowEntry);
    } // end of main
} // end of class
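One caveat about the `height = (int) score * 20` lines above: in Java the cast binds tighter than the multiplication, so the fraction is thrown away before scaling. Casting after multiplying keeps more of the precision:

```java
public class CastPrecedence {
    public static void main(String[] args) {
        double score = 7.5;
        int castFirst = (int) score * 20;       // (int) 7.5 -> 7, then * 20 = 140
        int multiplyFirst = (int) (score * 20); // 7.5 * 20 = 150.0 -> 150
        System.out.println(castFirst + " " + multiplyFirst); // prints "140 150"
    }
}
```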
Microsoft Longhorn Delayed
skreuzer writes "Microsoft has once again shifted the schedule for the release of "Longhorn," the company's next major version of Windows. The product was originally expected to ship next year. Then in May of this year, officials pushed back the release date to 2005. But now executives are declining to say when they expect the software to ship."
Hmmm... (Score:4, Funny)
It should be:
Windows is dying.
No, no, you have it all wrong!!! (Score:4, Funny)
Joshua... what are you doing ? (Score:5, Interesting)
Microsoft aren't regular 'deadline'-missers; they usually opt to release sub-par software instead, just to reach the deadline.
I'm guessing hardware and licensing deals myself.
Re:Joshua... what are you doing ? (Score:5, Interesting)
Maybe they're just waiting for the economy to get a little bit better. A lot of companies aren't doing so hot right now and probably aren't excited about the prospect of shelling out tens of thousands of dollars to get a new OS for each of their computers.
GMD
Re:Joshua... what are you doing ? (Score:4, Funny)
No, I just think they need to settle things with Playskool first before they release anything anymore...
Re:Joshua... what are you doing ? (Score:4, Funny)
Expose is god.
Re:Joshua... what are you doing ? (Score:5, Interesting)
Consider the delays in 2003 though. It was delayed repeatedly because, they said, they were getting as many bugs out as possible. I think they were stung pretty bad after the release of XP, which was worse than previous Microsoft OSs' beta versions. Maybe, for once, they are just trying to do it right. It's not like a Linux distro where they can release version .0001b7 and then update it every month as they get the code finished.
Re:Joshua... what are you doing ? (Score:5, Interesting)
We were in the XP and 2003 beta, and you are off base. XP was more solid of a release than 2000 even, there were several updates in the first few months but they were based on 'application compatibility' more than anything. (Because of the errors generated when a poorly written app crashed and sent a 'bug report' to Microsoft)
So with these fixes, Microsoft made XP aware of the bugs in the programs instead of forcing the third party manufacturers to rewrite or rerelease fixes to their broken software.
That is why the error reporting tool in XP works so well, is that the OS can be made stronger by fixing and working around bugs in poorly written third party applications.
Windows Server 2003 took longer to release because of the re-written IIS and
Indeed. (Score:5, Informative)
They had plenty of vulnerabilities and many exploits that could have been prevented by patching and such... however, with SQL Slammer, Code Red, and others that had come out, Gates decided, this is it, we have to change some process somewhere. So he overhauled their development process one more time to focus around security in EVERY decision. So they halted development for 6 months, sent every single developer to a school in developing secure code, purchased 200 million in books on secure programming for their developers, and then went back to work. That right there delayed things 6 months alone.
Then, as part of Gates' orders, their next job was a line by line review of every single coded product Microsoft makes. Everything from Windows Server 2003 to the IntelliPoint software. While analyzing that code for common security mistakes, they also founded a new security organization for companies to join to exchange common coding conventions for secure code and publish common mistakes and to allow joint development knowledge to be shared, and hired on 500 people at the company to develop tools that do nothing but scan code. Those tools go out and look at code to find buffer overrun issues (the most common security flaw in existence), and to look for other common security mishaps in code.
After the review, they implemented the changes found therein. Then they ran the new tools that by that time were done being developed, implemented those changes, then got back on track with development and, yes, rewrote the IIS layers to be partially built directly into the kernel for a substantial performance increase. So with all that happening, the review, the tool development, the changes, the security education and reorganization, there were delays, yes. They got it out and look what it has: two known vulnerabilities, BOTH of which are a non-issue out of the box and are in areas that are rarely used.
Re:Joshua... what are you doing ? (Score:5, Interesting)
No it isn't. Win2k is version 5.0 (as in NT), XP is 5.1. That dot rev means more than a new gui, and 3rd party hardware drivers don't enter into it...it means changes to the kernel. Some of which include:
Re:Joshua... what are you doing ? (Score:3, Insightful)
The Blaster worm probably lit a fire under Microsoft to rethink their security practices. At least I hope that's the case.
Re:Joshua... what are you doing ? (Score:4, Insightful)
Re:Joshua... what are you doing ? (Score:5, Informative)
Windows and Windows NT were supposed to converge after 98/NT 4. They didn't. Finally we have Windows XP, how many years later?
Agreed, latterly they have shipped something on time, rather than delay, but the something more often than not has been another interim release, rather than the product actually PowerPointed several years earlier.
What? (Score:5, Insightful)
The whole
Re:What? (Score:5, Insightful)
(Note: I'm obviously using a loose definition of vaporware, as often enough MS does actually eventually produce the product they stated. Usually, it's less than expected, later than expected, and really not worth having waited for. Thankfully games don't interoperate with the OS much or MS would have crushed the PC gaming industry a long time ago.)
Re:What? (Score:5, Interesting)
Let's use this opportunity to finish playing catchup and then surpass them. People have been saying Linux is "ready for the desktop" since 1999, and it's just not, at least not with current offerings. Let's get to work!
Re:What? (Score:5, Funny)
Instead of leaving that up to EA?
Re:What's the deal with .NET? (Score:5, Informative)
Windows XP was released before the
The push will be Longhorn (Score:5, Informative)
They're leaving Win32 behind and going full
There are a lot of very major changes going on with Longhorn. I don't blame them for taking their time with this. From hardware acceleration on the desktop to SQL engine integration to revamping everything to run as
Re:The push will be Longhorn (Score:4, Insightful)
That said, I have no evidence to disagree with any of your statements. The longer they slip, the more PCs will be able to run a deep
Re:What's the deal with .NET? (Score:4, Informative)
Corrections (Score:5, Informative)
Windows 3.1 was released in April 1992.
Windows for Workgroups 3.1 and 3.11 were MAJOR versions; they were released in Oct 1992 and Nov 1993, respectively. Where are the Windows NT entries? v3.51 and v4 were certainly major versions (released during 1994).
Windows 98 and 98SE can be considered MAJOR versions (maybe not under the hood, but still...).
Re:Corrections (Score:4, Informative)
I should have said that 1990 was Windows 3.0, not Windows 3.1; you are correct about that being wrong.
But Windows 3.11 and WFW 3.11, even though they introduced some significant new things, they did not introduce a major revision of the OS. It was still the same OS with some important new features, particularly in the networking department.
Even though some consider Win98/se to be major revisions, they were still updates to win95 and did not give a fundamental change in the OS's operation (except for IE integration) and basically built on what was already there. They were significant updates but I do not count them as major revisions.
As to NT, that does not apply here. I'm talking about desktop OSs. Notice that I mentioned that Win2k was the first 32 bit desktop windows. I do know that NT was out there long before win2k came out.
Re:Corrections (Score:4, Insightful)
Incidentally, NT 3.51 and 4 were intended for use in a server environment and not a desktop environment. Neither was "hugely" adopted either; I've never seen anything before NT 4, and I didn't see NT 4 very much either.
Win98 & 98SE were major revisions to Win95, but were based on the same fundamental code/technology. As such, 98 and 98SE were not fundamental changes to windows.
Win2k represented the first version of NT that was "good", and was also the first version of NT that was widely used beyond a server role.
The original poster's timeline was correct. Major OS release events from Microsoft generally happen every 5 years.
don't forget the real consequences for the web (Score:5, Interesting)
Re:don't forget the real consequences for the web (Score:5, Funny)
Re:don't forget the real consequences for the web (Score:5, Informative)
That's why they're still releasing patches for IE6.01 but won't go the full nine and integrate tabbed browsing or gestures or any other cool feature because they're holding their breath for Longhorn.
Though, with this timeline they may actually just release IE7, but considering that there are existing IE alternatives [avantbrowser.com], I don't expect any new IE stuff until 2005.
Re:don't forget the real consequences for the web (Score:3, Informative)
Re: doubt it (Score:5, Insightful)
I agree that many W3C standards are not well designed and are often for things nobody wants. But Microsoft is a participant in the W3C. That means Microsoft is partly responsible for the bloat and redundancy of those standards.
If Microsoft realizes the problems with W3C standards, they should (and could) throw their weight around to change things. For Microsoft to encourage the development of bad standards on the part of the W3C and then not implement it themselves amounts to sabotage.
Ship date (Score:5, Funny)
When the cows come home, obviously.
Theory #1 (Score:5, Funny)
Theory #2 (Score:4, Funny)
They decided to perfect their work
Well, of course thats why.
From back in the day: "I guess this is why we haven't released Windows 98 yet..." That's Bill Gates at the Windows 98 preview party, right after it crashed on him, on stage, for plugging in a scanner.
Theory #3 (Score:3, Funny)
Re:Theory #1 (Score:4, Funny)
Windows gets delayed, and delayed. Finally, someone from on high decrees that the next version will be named something like...
Here's looking forward to the release of Windows 21st Century Edition.
They have learned many lessons... (Score:3, Funny)
It's no big deal really... (Score:5, Funny)
Re:It's no big deal really... (Score:5, Funny)
And in 2008, KDE will finally do what Longhorn does.
Uh oh, I better put on my pitchfork-proof-vest.
So software gets delayed.... (Score:4, Insightful)
Re:So software gets delayed.... (Score:3, Funny)
You realize we're talking about Microsoft here?
That's Not The Point (Score:5, Interesting)
However... the point here is that Microsoft is creating an incredible window of opportunity here for their competitors. OS X is a better desktop system than Win XP. The open source desktops, perpetually behind, may well have time to catch up. Perhaps more importantly, with no new release of Internet Explorer in the works for the next two or more years, people might start to learn to look for alternatives and download browsers again. We could see a resurgence of competition and innovation in the web browser space -- and we'll probably get more standards compliant browsers in the mix.
In short, yeah, it's great to pillory Microsoft, but the big news here is not the egg on their face. It's the chance to show them up, and take part of their marketshare again, while their product line is aging, their reputation for security is trashed, their licensing policies are painful, I/T budgets are tight, and really, who has actual *affection* left for them anymore?
Error in quote. (Score:4, Funny)
What he really means: "When I'm having my network exploited by obvious vulnerabilities, why does it have to happen on my home machine? Why can't it seamlessly run that vulnerability on the dozen or so machines I have access to that are just sitting there? That's the type of innovation we hope to bring you in the new 'Longhorn' OS."
'My Grid', and 'Grids Close to Me' (Score:5, Funny)
Looks like Microsoft is trying to get on the "Grid Computing" bandwagon, which has been gathering steam ever since The Economist [economist.com] ran an article about it. Oracle [oracle.com] and IBM [ibm.com] both have major Grid Computing initiatives, and Microsoft wants to pretend they can play with the Big Dogs in the Server Room.
Imagine once the Microsofties dumb the concept down to the Windows level... the 'My Grid' and 'Grids Close To Me' icons on an ostensibly well-trained admin's desktop... aaaaarrrggghh!
I know (Score:3, Insightful)
My thoughts (Score:4, Funny)
Less Patches (Score:5, Interesting)
Maybe the "ability to rapidly introduce changes" can be read "ability to patch." I hope they use the extra time to test the security and operability extensively, to patch holes and problems before they reach the consumer.
It's general knowledge that one should not introduce a broken product to market, never mind try to cover it with patches. Let's hope they release a fully stitched quilt, rather than rely on customers to make a run to the local fabric store.
Fine with me. (Score:3, Insightful)
Just A Coincidence? (Score:5, Interesting)
Does this fact seem just a little too much of a coincidence? It would make perfect sense for MS to wait until they can go back to their "old" ways again. That said, it will be a LONG time between product releases, which makes me want to agree with some other posters who have said that this suggests we'll see a Windows XP: Second Edition or something like that.
Microsoft Announces End of Windows Development (Score:5, Funny)
Wow, in Plain English!! (Score:5, Funny)
We made them think they would, but the fine print said they probably wouldn't.
"This is an important consideration that Microsoft's customers take into account when purchasing Software Assurance,
We try to steer around the topic.
which is a long-term, ongoing relationship between Microsoft and its customers, and a great deal of value comes from staying on SA long-term," she said.
As the chef Elzar would say (in an Australian accent): "Try the Microsoft Software Assurance program. It has the biggest profit margin." The great deal of value comes when you give Microsoft money.
Even though I'm using Windows... (Score:4, Informative)
Btw, is anyone else having the problem that, when burning CDs and opening CDs without autorun, it never seems to remember the non-MS default that I select (Nero and "do nothing", respectively), even if I check the appropriate box? I'm sure that wouldn't happen if I went down the One Microsoft Way... The question is, will Longhorn finally annoy me enough to make me jump ship? Oh well, maybe I'll have to wait a year longer for the answer. Boo-hoo.
Kjella
Re:Even though I'm using Windows... (Score:4, Insightful)
The desktop will be hardware accelerated DirectX, so eyecandy won't slow things down.
More "protection from myself".
People always play this card without citing a single example in XP. Can you?
More Messenger, WMP and goodness what else providing "integrated Windows features that can't be removed and keep nagging you".
How do they keep nagging you? I don't ever use WMP, and I removed Messenger at least a year ago.
I'm not having your CD problem at all. I'm using the latest Nero 6.
Re:Even though I'm using Windows... (Score:5, Insightful)
That's not his point; he's suggesting that the new version is eye candy, not extra functionality. When I use XP I immediately go to the "classic" theme and make it show the standard desktop icons just to be able to use the damn thing. I certainly am not alone in that regard.
>People always play this card without citing a single example in XP. Can you?
The above. The "are you sure you want to view these system folders" screen. The crippled search option until you change folder options to show "hidden" and system files. The hiding of tray icons, some of the 'inactive' ones are pretty important.
>How do they keep nagging you?
Here's a default Dell computer with Office. Try to just close, let alone remove, Messenger. "Sorry, another program is using this." Umm, who? It's Outlook, but it won't tell you that. So for millions of people it sits there wasting RAM because they can't close it. More WMP means more browser integration and DRM. Some people don't like that.
>I'm not having your CD problem at all.
This problem is fairly common and a few good google searches brings up a few solutions.
Regardless, I have yet to see a good reason to move from 2000 to XP. System Restore is tempting but not needed. When technophobes ask me why they can't just get Windows 2000, which they know pretty well, on their new computer, I tell them it's because Microsoft doesn't want them to. Learn XP or find your old 2K CD.
The same could be true for Longhorn; the desktop model of computing is actually pretty simple, and more bloat and pretty colors don't help, they hinder. I'd rather see effort put into the applications than the OS. Ideally, the OS shouldn't be the selling point; the apps should be. Pretty colors and 3D shouldn't be applauded; good HCI practices should be.
Re:Even though I'm using Windows... (Score:4, Insightful)
Sure, and there were people who said the same thing about Windows 95 and the "Windows 3.1 look" option that it offered. "I'll never change" they declared. But eventually Microsoft will deprecate the old look and you'll be forced to change.
Every generation goes through the same phases. New and shiny. I'll never change. Remember the good old days. You're in stage 2.
Software Assurance (Score:5, Insightful)
XP came out within 2 years of 2K, but now it looks like 4 years from XP to the next version. I remember some analysts at the time were saying that Software Assurance was only good value if upgrades came out more often than once every 3 years. Now it looks like it would have been cheaper to not buy Software Assurance and just re-buy a new license when the new version becomes available. Or use an OS with less restrictive licensing
;-)
Cheers
VikingBrad
Needs a Better Name (Score:5, Funny)
Instead of calling it "Longhorn",
I think they should call it "Shorthair",
as in the phrase,
"We've got you by the short hairs now."
Re:Needs a Better Name (Score:5, Funny)
So now I have to wear this tin hat and shave my balls? Christ, Linux is not improving my odds with the ladies. Maybe I should get a mac now.
Slight change in business, no big deal (Score:3, Insightful)
Expect to see a lot of other smaller, less significant Microsoft software hitting shelves in the next two years (at least twice as much as usual) while Microsoft targets the datacenter with their R&D budget, and outfits like SCO with their legal purse.
Take as long as you want, Microsoft. (Score:4, Interesting)
IMHO, Win2k is the best OS that Microsoft has ever made.
not that that is saying much
;)
That's fine but... (Score:4, Insightful)
How do you improve? (Score:5, Insightful)
I personally think Windows 2000 Professional is a damn fine operating system. I run it at home and my workplace has standardized with 2K.
XP Pro added nothing of note except more onerous licensing conditions and a confusing UI change. Everyone I've met who uses XP changed the UI back to Windows 2000. Also, the only reason they use XP over 2K is because XP came with their new, name brand computer.
Really, what does Microsoft add to, change about, or remove from its desktop operating system to make it worth upgrading?
Re:How do you improve? (Score:5, Informative)
People misunderstand Windows XP (Score:5, Insightful)
XP is geared for home users, though they offer Professional because it does lend improvements over 2k that warrant it being used for workstations.
Too bad there is no futures markets on software (Score:4, Funny)
And please don't tell me yet again about how economists point out that markets can't predict anything. Nattering nabobs indeed.
Moreover, if we had a futures market on software shipments, then we, as users and managers could lessen risk of software delay or software bugs by buying hedging options.
A futures market in software would also let unemployed, overly expensive, middle-aged with families, but otherwise wise programmers leverage the outsourcing trend. Whether the software is made here or there, certain factors creating delays, etc. will be present and us older and wiser programmers would be able to use our years of experience to arbitrage the market.
Futures markets -- why must our overlords keep us from them?
Copy Apple's Strategy (Score:4, Interesting)
Should Microsoft call it Visual Linux#.NET or OS XP?
I think this time... (Score:5, Insightful)
They will find a significant drop in sales afterward though... people will be unwilling to upgrade if their systems are stable, bug free and secure. It is against their business model to write secure code.
They'll have to come up with a new way to keep people buying Microsoft... who knows what it will be.
Longhorn's probably not vaporware though... more likely they realize after all the crap MS OSs have been through lately... what with being on the top news for being vulnerable, unreliable and close to being the weak point of civilization itself, I guess they are rethinking that "business as usual isn't the play to make this time around."
Do you know what makes people stop using WinNT 4.0? NOTHING. It works well for businesses. Active directory? People STILL don't know what it is or what it's for or how it can improve the way they do business. MS drops support for it and people will STILL continue using it. What terrible thing will happen to Microsoft when they create a secure and stable OS? We know they can -- they have the money to throw at it and if they are willing to delay release of their newest OS project, then I'd take that as a sign they intend to make it secure and stable.
I'd say they CAN do it and they WILL do it. But the question that rings in my mind is what doom it will spell to Microsoft when they do. No more upgrades for a long time... people won't want it or care about it. That's a huge chunk of income for them.
Re:I think this time... (Score:4, Funny)
So what will be the kicker? Perhaps they will push a subscription based model? You can only rent the software, no buying allowed?
Perhaps with Bill & Co selling stock (according to Yahoo [yahoo.com], it looks like Bill dumped ~$309 million worth of MSFT in August) with Bill's plans of being completely sold out by 2006 (or 2008? forgot which..) he is planning on "doing the right thing" and releasing a solid, secure operating system.
Or perhaps the feeling is that quite frankly, the PC in its current form is well umm.. too overly complex and cumbersome. Perhaps with things like tablet PC, wireless broadband, etc, there will be a shift toward application specific embedded platforms and desktop PCs as they exist now are on their way out (I doubt by 2008
What can they really do? (Score:5, Insightful)
They can't integrate much more for risk of annoying the DOJ, all I can see them improving on is the security side of things.
Re:What can they really do? (Score:5, Interesting)
Duh! The beta testing on XP isn't finished yet (Score:5, Funny)
When It's Ready!!! (Score:5, Funny)
Delay is good (Score:4, Interesting)
I think that the reason they are delaying Longhorn is because of all the bad hype they have received this past week. They are beginning to realize that people now are concerned about security. When they have to pay someone like myself $45.00 an hour to remove a stupid worm from their computers, they are pissed. They want to know why this is happening to them, and it is getting easier to explain to them that the Windows code is swiss cheese, since they hear it being confirmed on the 6 o'clock news.
Microsoft is obviously delaying the release due to the fact that they had shit for security in the code they possess now, and they are bringing it to the table to clean it up.
A man can have dreams, can't he?
You know what that means? (Score:4, Interesting)
Of course this is just wishful thinking. I'm sure they'll do something diabolical in the meantime. Maybe they feel like there's enough money to be made yet by the use of licensing press gangs. "You WILL sign up, or we'll sue you into the ground, you dirty corporate pirates!"
What features are "Major" except for hardware? (Score:4, Interesting)
So what could possibly be Major? Yet more restrictive DRM?, A new driver model that sends all the HW vendors to hit the bottle? Eh?
If I were deeply cynical, which of course I'm not, I'd say that 'delays' such as they are, are keyed to the symbiotic relationship they have with Intel. When/if Intel bakes a new batch of chips they need to sell, suddenly a 'new' version of Windows will come along to 'need' them.
Major improvements - don't underestimate!!! (Score:5, Interesting)
Do not underestimate the power of several thousand quality developers fueled by several billions of dollars. They've hired the cream of the crop in the dotcom bust phase and now their workforce is better and more dedicated than ever.
If they're willing to adjust the schedules on top of that, the resulting product may really be scary good.
tell me about it (Score:5, Informative)
Re:tell me about it (Score:5, Funny)
but sp2 will break my copy of xp!!!
Re:tell me about it (Score:5, Funny)
but sp2 will break my copy of xp!!!
Oh my God! You forgot to close the whine tag! All the rest of Slashdot will be whining! (Like we're not used to it.) See, it's already started!
Re:tell me about it (Score:5, Funny)
Re:tell me about it (Score:5, Insightful)
Re:tell me about it (Score:5, Insightful)
1. New up2date available with updated SSL certificate authority file
I have never used SSL. I've used Apache but I've never needed SSL. This patch does not apply to me.
2. Updated Sendmail packages fix vulnerability.
I've never set up a mail server. This patch does not apply to me.
3. Updated pam_smb packages fix remote buffer overflow.
I do use samba, so I guess I'll download this one.
4. GDM allows local user to read any file.
I've used XDM but generally I prefer to boot to a console. This patch does not apply to me.
5. Updated unzip packages fix trojan vulnerability
I guess I could download this one because I probably do have unzip installed, but I can't remember ever using it. Wake me when there's a vulnerability in gzip.
6. Updated Evolution packages fix multiple vulnerabilities
Call me crazy, but I use Mozilla's email client.
What's the point to all of this? Redhat doesn't need a "service pack" because most of the security vulnerabilities do not affect the majority of their users. You can't compare Redhat's patch list to XPs. If you want to make it fair, compare Redhat to the sum of XP, Office, IIS, SQL Server, and whatever else. I think you'll find that XP has a lot more critical issues all by itself and when you add the application software you'll see why the idea of a service pack makes sense in the MS world but not in the Linux world.
Re:tell me about it (Score:5, Informative)
I have never used SSL. I've used Apache but I've never needed SSL. This patch does not apply to me.
FYI, if you don't get the above update, up2date will not run anymore
Re:tell me about it (Score:4, Insightful)
I have never used SSL. I've used Apache but I've never needed SSL. This patch does not apply to me.
Wrong. You DO need this patch. It's used to connect to the up2date server (your SSL connection between you and RedHat).

2. Updated Sendmail packages fix vulnerability.
I've never set up a mail server. This patch does not apply to me.
True, but some distros have sendmail enabled (whether you set it up or not). Make sure it's turned off or you could run into trouble.
Wake me when there's a vulnerability in gzip.
There was a zlib vulnerability about a year ago.
I will agree with you that a service pack is unnecessary. RedHat will release version 9.1 (or 10) in due time, in less time than it takes for MS to release a service pack.
huge differnce (Score:4, Interesting)
Re:huge differnce (Score:5, Interesting)
IIS is produced by microsoft
IE is produced by microsoft
ASP is produced by microsoft
linux is not produced by rh
apache is not produced by rh
mozilla is not produced by rh
php is not produced by rh
Each of the individual groups is responsible for the software it produces. Microsoft is responsible for any security flaws in XP and all the apps you mentioned above. No two of the open source projects mentioned above are maintained by the same group... there is no one person responsible for all of them.
The Microsoft apps, however, and their flaws are all the result of shoddy programming from one shoddy company.
rh doesn't claim mozilla and php are part of the OS. Microsoft DOES claim IE and ASP are. rh doesn't claim apache is part of the OS. Microsoft does claim IIS is. Of course none of these applications are part of the OS (even IE isn't; the OS is the kernel, and not even the shell qualifies), but microsoft claims they are so it can tie them into its monopoly and gain a monopoly in those areas too. If they can't take the heat that comes with that, they should get out of the kitchen.
This is all ridiculous though; the number of patches released for a product is no gauge of how secure or insecure it is... the obviousness of those holes and the damage caused by them are. I think it's fairly clear who wins in this competition.
Re:huge differnce (Score:4, Informative)
Re:bundled with windows (Score:5, Insightful)
Also if you install Windows 2003 and know where to look you can actually find a C# compiler, email server, SQL database engine, etc. etc.
I have installed Windows Server 2003. It came with 0 lines of source code compared to over a GB of source code that came with Red Hat 9, so as far as I am concerned Windows has no source code at all.
2003 came with an SMTP service, but no mail server. Red Hat 9 came with both POP, IMAP mail servers and SMTP services. I haven't checked for the C# compiler, but I know MS gives that away free as part of the
As far as a SQL database engine, maybe. But is that available for use in developing database backed applications? I sure haven't seen any indication of that.
Basically the number of patches issued is about as meaningless an indicator of code quality as number of lines of code per day is a measure of productivity.
Perhaps there is some validity to that statement. I will have to think about it.
Yet another explanation could be that more people use XP so more people find code paths that have bugs.
I think that if you argue that XP has many times the number of lines of source code that Red Hat has, you will have to accept that it also has many times the number of bugs unless you can convince me that MS somehow magically writes higher quality code than everyone else. Since we already know that products like Windows 95 have bug rates per LOC comparable to industry norms, I think you are going to have to come up with some pretty good arguments for this proposition.
Re:huge differnce (Score:5, Interesting)
Which is why, when referring to the operating system, please call it by the name the author chose for it, LINUX. GNU/Linux is a name made up by someone who writes applications which have a port to the Linux operating system.
The reason microsoft gets a few choice applications thrown in is that THEY insist they are part of THEIR operating system. That doesn't make an application like a web browser part of the linux operating system.
For another thing, all of the security holes and bugs in those programs lay at microsoft's feet, they aren't merely bundled by microsoft, they are written by the same shoddy programmers who write the rest of it.
Microsoft has gone further than call those applications part of the operating system; they've made sure you cannot reasonably remove them (no, getting rid of media player shortcuts doesn't qualify as REMOVING it). With linux there is no application, including the GUI itself, that I can't remove... since there is actually an option whether or not to install this or that web browser, those applications stand on their own merit and don't group together as linux. A bug in Mozilla only affects mozilla users (windows or linux mozilla users generally); a bug in IE affects every windows user because they can't get rid of IE even if they want to.
Furthermore, according to mr gates 1/3 of winxp systems crash more than 3 times daily due to bugs in the OPERATING SYSTEM... and that's just the ones who use the error reporting service.
It's not too late to get out of this pit, you can start using your mind today and find the link (i'll give you a hint, it was covered by slashdot) to the interview in which he gave those numbers all by yourself
Re:huge differnce (Score:5, Insightful)
According to microsoft these programs are part of the core OS. They also aren't removable; even if you want to use a different email client, web browser, or media player, you can't get rid of them. Since you can't remove them from the core OS, their bugs are and should be grouped in with it.
"IMHO, the only thing that could possibly rectify this situation is a new code-base, from the ground up."
I agree, a new code base (kernel, new gui, etc) is the way to go. They should contract someone else to write it as well. They also need a new development model... and the only way they'll be able to use that new development model is to figure out a new business model. Somehow I suspect none of this will happen though
Closed source doesn't make them more secure, it merely makes it take longer for the peer review... and most of the peers reviewing have no intention of telling microsoft when they find holes.
Re:tell me about it (Score:4, Interesting)
MSDOS: 20+ years without remote hole in the default install
That's because MSDOS doesn't have any native networking in the default install. Troll.
Re:No big deal (Score:3, Interesting)
Re:No big deal (Score:5, Interesting)
Re:No big deal (Score:3, Funny)
Re:Methinks... (Score:5, Insightful)
Come on guys...
Re:This is one of the worst posts I've seen. (Score:3, Interesting)
Rather, it's the executives telling investors "oh yeah, it'll be done in a year and a half," then turning to the engineers saying "alright, you have to get this done in a year and a half or we lose a LOT of money, and YOU may lose your JOB if that happens."
It's good to see Microsoft delaying a release date rather than rushing the engineers to do things sub-par to meet a quota or deadline.
Re:This is one of the worst posts I've seen. (Score:3, Insightful)
"... don't come out by the time your contract expires and you don't get an upgrade out of the deal?" Gillen asked.
That is one reason Microsoft has been evolving Softw
Re:What technology are they going to hold hostage? (Score:3, Insightful)
Perhaps the problem they are having is there is no nice piece of tech *to* hold back.
Dave
Re:What technology are they going to hold hostage? (Score:3, Insightful)
Don't worry that you can't fill out ??? now - you will be able to replace ??? with some new technology in two or three years when it appears, and blame MS for not supporting it in OS which was released 3-5 years before the technology.
After all, NT was released long before the first USB devices appeared on the market, and Windows 2000 was released long before the first HT-enabled processors appeared (although, contrary to the parent, HT works under W2K; after all, it is a hardware feature, not software)
Longhorn won't require 3D (Score:5, Informative)
This is all covered at WinSuperSite [winsupersite.com], by the way, in the "Road To Longhorn" articles. Whether or not you like Paul Thurrott, he has the sources in Microsoft to get actual information on future versions of Windows.
Re:Maybe (Score:4, Interesting)
What NT needs from Posix is the uniform filename space. This could be done by migrating some of the innards "kernel names" to the FileOpen interface so any normal program can use this and access "unions" or whatever they call them. This would get rid of drive letters and allow at least a form of symbolic link, these are by far the biggest defects in NT from my perspective.
They also need to allocate all communication channels from the same pool of "fd" numbers and fix their damn select mechanism so that it accepts all of them (it is ok if they always report ready or never report ready, but it is inexcusable that I need to send different things to different interfaces).
I would also like them to return '/' from all their interfaces that return pathnames, and to make filenames be raw byte streams rather than a piece of the GUI (ie eliminate case-independence and wide-character interfaces) but these are probably hopeless. (and the case-independent disease has now invaded OSX Unix so we are probably doomed)
A real fork would be nice too.
Re:Maybe (Score:4, Informative)
NT already has a unified namespace, the object manager namespace, which the filesystem is a subset of. IIRC, the path 'C:\WINNT\' is translated into \??\C\WINNT, and \??\C is a symbolic link to \Device\Harddisk0\Partition0, translating it into \Device\Harddisk0\Partition0\WINNT internally.
NT also has the equivalent of UNIX file descriptors, HANDLEs. Instead of select, you have WaitForMultipleObjects. And unlike POSIX select which can only wait on files and sockets, you can wait on practically anything in NT: files, sockets, semaphores, events, timers, etc...
NT isn't UNIX. Don't try to use it like UNIX and you'll tear out a lot less hair. | https://slashdot.org/story/03/09/01/2245222/microsoft-longhorn-delayed | CC-MAIN-2017-17 | refinedweb | 6,861 | 72.76 |
- Control icon creation
- Colorizing options
- Preset and custom attribute creation
- Control clean up options
A tutorial video of the revised script will be posted soon.
How to run:
import rr_main_curves
rr_main_curves.window_creation()
If you have any feedback or suggestions for the tool, please feel free to contact me at conley.setup@gmail.com
Cheers and happy rigging,
Jen
Version 1.1.2- Corrected Lock / Hide checkbox bug
- Updated code for consistency across RigBox Reborn scripts.
Please use the Feature Requests to give me ideas.
Please use the Support Forum if you have any questions or problems.
Please rate and review in the Review section. | https://jobs.highend3d.com/maya/script/rigbox_reborn-curves-tool-for-maya | CC-MAIN-2021-39 | refinedweb | 104 | 67.45 |
LockSupport.parkNanos() Under the Hood and the Curious Case of Parking, Part II: Windows
Learn more about parkNanos() behavior on Windows.
In the previous post, we have seen how LockSupport.parkNanos() is implemented on Linux, what behavior to expect, and how we can tune it. The post was well-received and a few people asked how parkNanos() behaves on Windows. I've used Linux as a daily driver for over a decade and I didn't feel like exploring Windows. It's a closed-source operating system, and that makes it hard to explore. Moreover, my knowledge of the Windows API is virtually non-existent and I felt the learning curve would be too steep to justify the time investment.
However, Tomasz Gawęda re-ran my experiment on Windows 10 and shared his results on Twitter: LockSupport.parkNanos(55_000) took about 1.5 ms on his Windows 10 box. Just a reminder: it took about 100 μs on my Linux box. That's an order of magnitude difference and this really caught my attention!
Reproducer in Java:
It is the same code as last time:
@Override
public void run() {
    long startNanos = System.nanoTime();
    for (long i = 0; i < iterationCount; i++) {
        LockSupport.parkNanos(1);
    }
    long durationMillis = (System.nanoTime() - startNanos) / 1_000_000;
    System.out.println(iterationCount + " iterations in " + durationMillis
            + " ms. This means each iteration took "
            + (durationMillis * 1000 / iterationCount) + " microseconds");
}
I've run the very same experiment on my 2-in-1 Dell XPS with Windows 10. Results were certainly interesting: LockSupport.parkNanos(1) took close to 12 ms!
$ java -jar ./target/testvdso-1.0-SNAPSHOT.jar PARK 1000 1000 iterations in 11609 ms. This means each iteration took 11609 microseconds
Now, we are talking about 2-3 orders of magnitude difference. What is going on? Let's start with the exploration of how parkNanos() is implemented on Windows. This is C++ code that the JDK ultimately calls. I skimmed the method body and line 5,246 looked interesting:

WaitForSingleObject(_ParkEvent, time);

This is what the Windows API documentation says about it: "[...] Waits until the specified object is in the signaled state or the time-out interval elapses. [...]"
This appears to be somewhat similar to pthread_cond_timedwait(), which we know from POSIX threads and discussed in the previous post. There are some differences though:
- The Windows function receives relative time while POSIX works with absolute time
- The time is in milliseconds. There is no struct/parameter to pass microseconds or nanoseconds.
The #1 is not terribly interesting, but #2 is. LockSupport.parkNanos() receives nanoseconds, so how on earth could it be implemented via a system call with milliseconds granularity? I read the function once again, and then, I saw it:
time /= 1000000;  // Must coarsen from nanos to millis
if (time == 0) {  // Wait for the minimal time unit if zero
    time = 1;
}
In other words, the JDK rounds everything to milliseconds. If you recall LockSupport.parkNanos(1), then the JDK will ask Windows to wait 1 ms. That's 6 orders of magnitude difference! Obviously, it's unreasonable to expect the wait to have nanosecond accuracy, but in the previous part, we saw accuracy on Linux was in the small 10s of microseconds. On Windows, we are talking about milliseconds. As bad as it sounds, it still does not explain why my Java reproducer waits about 12 ms. Let's go deeper!
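To make the effect of that coarsening concrete, here is a tiny standalone sketch of the same logic. The class and method names are mine, not the JDK's; the arithmetic mirrors the snippet from os_windows.cpp above:

```java
public class CoarsenDemo {
    // Mirrors the rounding done in hotspot's os_windows.cpp (the helper name is illustrative)
    public static long coarsenToMillis(long nanos) {
        long millis = nanos / 1_000_000;  // must coarsen from nanos to millis
        return millis == 0 ? 1 : millis; // wait for the minimal time unit if zero
    }

    public static void main(String[] args) {
        System.out.println(coarsenToMillis(1));         // parkNanos(1)      -> 1 ms
        System.out.println(coarsenToMillis(55_000));    // parkNanos(55_000) -> 1 ms
        System.out.println(coarsenToMillis(2_500_000)); // parkNanos(2.5 ms) -> 2 ms
    }
}
```

Any request below one millisecond, including parkNanos(1), collapses into a 1 ms wait before Windows even sees it.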
Reproducer in C
I wanted to validate my findings in something closer to the bare metal. This was the part I was rather afraid of — I wrote my last Win32 API code when I was maybe 15. This means a long time ago, just think of it: I used a printed book to learn the API! However, it turned out to be quite easy:
#include <stdio.h>
#include <windows.h>

int main() {
    LARGE_INTEGER frequency, start, end;
    QueryPerformanceFrequency(&frequency);

    // the event is used purely as a timeout vehicle; nobody ever signals it
    HANDLE event = CreateEvent(NULL, FALSE, FALSE, NULL);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000; i++) {
        WaitForSingleObject(event, 1);
    }
    QueryPerformanceCounter(&end);

    double elapsedMillis = (end.QuadPart - start.QuadPart) * 1000.0 / frequency.QuadPart;
    printf("%f ms\n", elapsedMillis);
    return 0;
}
Windows documentation turns out to be really good; it has tons of simple examples and is a joy to learn its API. Anyway, if you are not familiar with Windows API, here is what the program above is doing:
- It creates an event. Events are meant to be used for thread notification
- There is a bunch of QueryPerformanceXY calls — this is used just as a timer to calculate elapsed time
- The only real meat is the loop with the WaitForSingleObject() call. This is the same system call that the JDK uses. It's waiting for an event, but there is no other thread to send it; thus, it's effectively waiting for a timeout.

Let's try to run it:
$ ./wintimertest.exe 15644.657600 ms $ ./wintimertest.exe 16028.830300 ms
Bingo! 1,000 iterations take about 16 seconds. This means each iteration was about 16 ms. That's a similar result as in Java. The result also leads to the following realization: Java rounds 1 ns up to 1 ms and Windows rounds it up again to 16 ms or so. Isn't it great?
Fun With Timers
The very first Google result for the phrase "windows timer frequency" points to this article: Windows Timer Resolution: Megawatts Wasted. I recommend everyone read it to understand the results better, but more importantly, it showed me another Windows API function: timeBeginPeriod(). The timeBeginPeriod() function requests a minimum resolution for periodic timers.
Let's try it out in our C program. I made two changes:
- Add timeBeginPeriod(1) to the beginning of my main() method
- Add the winmm.lib library. If you are using CMake, then it's as simple as target_link_libraries(wintimertest winmm.lib)
main.c:
#include <stdio.h>
#include <windows.h>

int main() {
    timeBeginPeriod(1); // ← the only change

    LARGE_INTEGER frequency, start, end;
    QueryPerformanceFrequency(&frequency);

    HANDLE event = CreateEvent(NULL, FALSE, FALSE, NULL);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000; i++) {
        WaitForSingleObject(event, 1);
    }
    QueryPerformanceCounter(&end);

    double elapsedMillis = (end.QuadPart - start.QuadPart) * 1000.0 / frequency.QuadPart;
    printf("%f ms\n", elapsedMillis);
    return 0;
}
CMake file:
cmake_minimum_required(VERSION 3.13)
project(wintimertest C)

set(CMAKE_C_STANDARD 11)

add_executable(wintimertest main.c)
target_link_libraries(wintimertest winmm.lib)
Let's run it!
dellik@DESKTOP-DTGDFVM MINGW64 ~/CLionProjects/wintimertest $ ./cmake-build-debug/wintimertest.exe 1678.479700 ms dellik@DESKTOP-DTGDFVM MINGW64 ~/CLionProjects/wintimertest $ ./cmake-build-debug/wintimertest.exe 1706.220200 ms dellik@DESKTOP-DTGDFVM MINGW64 ~/CLionProjects/wintimertest $ ./cmake-build-debug/wintimertest.exe 1681.011900 ms
You can see 1,000 iterations took roughly 1.6 seconds. This means that each WaitForSingleObject() call took about 1.6 ms. That's 10x less than without the timeBeginPeriod() call! Moreover, the results are now roughly the same as Tomasz Gawęda reported on Twitter. However, Tomasz did not set the timeBeginPeriod(). So, how come he observed the lower values out of the box? I have two hypotheses:
- Different Windows behave differently and use different timer frequency
- The timer is global and its configuration affects every process running on the same box. It takes a single application to change the timer and everyone else is affected too. The Windows Timer Resolution: Megawatts Wasted post shows quite a few applications do that; hence, it's very likely Tomasz had at least one such application running.
The second hypothesis seems more likely to me. I noticed that when Google Chrome was running and I was scrolling pages up and down, then my application behaved similarly as when the high-resolution timer was activated.
Back to Java
Now that we know that Windows provides a function to increase timer frequency, there is one obvious question: is there a way to call this function from Java? Obviously, one can always use JNI: you could distribute the Windows-specific native library along with your Java application, perhaps embed the library inside JAR and load it when needed. However, this would complicate the build procedure, and JNI is always a bit awkward to use. Is there a simpler way? Perhaps, JDK itself already calls this function? Let's see:
14:52 $ cd /home/jara/devel/oss/jdk/src
14:52 $ [...]
There are quite a few references to our function — this looks promising! os_windows.cpp contains this snippet:
BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved) {
  switch (reason) {
    case DLL_PROCESS_ATTACH:
      vm_lib_handle = hinst;
      if (ForceTimeHighResolution) {
        timeBeginPeriod(1L);
      }
      WindowsDbgHelp::pre_initialize();
      SymbolEngine::pre_initialize();
      break;
    case DLL_PROCESS_DETACH:
      if (ForceTimeHighResolution) {
        timeEndPeriod(1L);
      }
      break;
    default:
      break;
  }
  return true;
}
I don't exactly understand what this code is doing or when exactly it is called, but it appears to be checking a JVM flag ForceTimeHighResolution, and if it is set, then it uses the timeBeginPeriod() function to set the high-resolution timer. The flag description in globals.hpp looks very promising, too:
product(bool, ForceTimeHighResolution, false, "Using high time resolution (for Win32 only)")
Is this it? Just a single flag and my JVM will use high-resolution timers? Well, it turns out that it's not that simple. This flag has been reported to be broken since 2006! However, the very same bug report suggests a very interesting workaround: "Do not use ForceTimeHighResolution, but instead, at the start of the application, create and start a daemon Thread that simply sleeps for a very long time (that isn't a multiple of 10 ms), as this will set the timer period to be 1 ms for the duration of that sleep, which, in this case, is the lifetime of the VM:

new Thread() {
    {
        this.setDaemon(true);
        this.start();
    }

    public void run() {
        while (true) {
            try {
                Thread.sleep(Integer.MAX_VALUE);
            } catch (InterruptedException ex) {
            }
        }
    }
};
What the R$#@#@ have I just read? Let me rephrase it: "if you want a high-resolution timer, then just start a new thread and let it sleep forever." That's simply hilarious! Let's give it a try:
Add the workaround proposed at the Java bug report into our Utils class:
public static void windowsTimerHack() {
    Thread t = new Thread(() -> {
        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            // a delicious interrupt, omm, omm
        }
    });
    t.setDaemon(true);
    t.start();
}
Call the hack from our experiment:
@Override
public void run() {
    Utils.windowsTimerHack();
    long startNanos = System.nanoTime();
    for (long i = 0; i < iterations; i++) {
        LockSupport.parkNanos(PARK_NANOS);
    }
    [...]
And here are the results:
$ java -jar ./target/testvdso-1.0-SNAPSHOT.jar PARK 1000 1000 iterations in 1706 ms. This means each iteration took 1706 microseconds $ java -jar ./target/testvdso-1.0-SNAPSHOT.jar PARK 1000 1000 iterations in 1734 ms. This means each iteration took 1734 microseconds $ java -jar ./target/testvdso-1.0-SNAPSHOT.jar PARK 1000 1000 iterations in 1732 ms. This means each iteration took 1732 microseconds
One iteration took around 1.7 ms while without the hack it was around 16 ms. It means the hack works! The explanation of why it works is simple. There is a class HighResolutionInterval in os_windows.cpp which calls timeBeginPeriod(1L) in its constructor and timeEndPeriod(1L) in the destructor. The class is inserted around the Windows implementation of Thread.sleep(). When you call Thread.sleep(), the HighResolutionInterval sets the high-resolution timer. Unless the sleeping interval is divisible by 10.
¯\_(ツ)_/¯
So it works, but does it feel good? Not really; starting a new thread almost never feels good. Is there a better way? Let's have another look at where timeBeginPeriod() is called from:
$ [...]
Let's focus on the last line: something-something-MidiOut.c. Apparently, the Java Sound API also relies on high-resolution timers. It's not very surprising, as music requires accurate timing and I can imagine 16 ms jitter would be very noticeable. Here is the relevant part of the source code: when a MIDI out device is opened, it enables the high-resolution timer.
Let's try to (ab-)use it: Add a new hack into our Utils class:
public static void windowsTimerHack_midi() {
    MidiOutDeviceProvider provider = new MidiOutDeviceProvider();
    MidiDevice.Info[] deviceInfo = provider.getDeviceInfo();
    if (deviceInfo.length == 0) {
        // no midi, no hack
        return;
    }
    try {
        provider.getDevice(deviceInfo[0]).open();
    } catch (MidiUnavailableException e) {
        // ignored, hack is not applied
    }
}
Add it into the experiment and run it:
$ java -jar ./target/testvdso-1.0-SNAPSHOT.jar PARK 1000 1000 iterations in 1638 ms. This means each iteration took 1638 microseconds $ java -jar ./target/testvdso-1.0-SNAPSHOT.jar PARK 1000 1000 iterations in 1642 ms. This means each iteration took 1642 microseconds $ java -jar ./target/testvdso-1.0-SNAPSHOT.jar PARK 1000 1000 iterations in 1638 ms. This means each iteration took 1638 microseconds
It worked, too! Again, each iteration is about 1.6 ms, 10x less than by default.
Now, the big question is: which hack is better? Or perhaps: which hack is less awful? Starting a thread seems wasteful: it allocates memory for the stack and there is some cost due to OS and JVM accounting, etc. However, I can reason about its possible impacts. On the other hand, the Midi hack is a whole new world for me. I have never used sound in Java; actually, I have never programmed anything with Midi at all. Hence, I cannot really reason about possible side-effects. Thus, I feel somewhat less uncomfortable with the Thread hack. Obviously, in a real project, one has to make sure to properly stop the thread when it's no longer needed to prevent leaks, etc. And as with every performance optimization, you always have to measure the impact on your application.
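One way to keep the thread hack contained, sketched here as an idea rather than code from the article or the JDK (the class and method names are mine), is to wrap it in an AutoCloseable so the sleeping thread can be released when high timer resolution is no longer needed:

```java
// Sketch only: wraps the sleeping-thread workaround so it can be stopped explicitly.
public final class WindowsTimerHack implements AutoCloseable {
    private final Thread sleeper = new Thread(() -> {
        try {
            Thread.sleep(Long.MAX_VALUE); // keeps the 1 ms timer period active on Windows
        } catch (InterruptedException ignored) {
            // interrupted by close(); just exit and let the timer period revert
        }
    });

    public WindowsTimerHack() {
        sleeper.setDaemon(true); // never block JVM shutdown
        sleeper.start();
    }

    public boolean isActive() {
        return sleeper.isAlive();
    }

    @Override
    public void close() {
        sleeper.interrupt(); // Thread.sleep() ends and the high-resolution interval is released
    }
}
```

Using try-with-resources around a latency-critical section would then scope the elevated timer frequency to just that section.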
Practical Applicability
When I was researching the same topic on Linux it was a really cool exercise, but I had doubts about its practical usability. The default granularity on Linux is just 50-60 microseconds and tuning it further felt like something reserved for very special occasions. However, Windows and its 16 ms granularity is a very different beast. It's not uncommon for many applications to use exponential backoff strategy for various worker threads: if there is no work available, then, at first, a thread does a little bit of busy-spinning, but then, perhaps, we have something like
Thread.yield() and then
LockSupport.parkNanos(). However, many of these back-off strategies are based on the assumptions
LockSupport.parkNanos() behaves reasonably. While 50 microseconds granularity on Linux is usually somewhat expected, 16 ms on Windows can be very surprising. It certainly was surprising to me!
Here is an example: Hazelcast Jet is an open-source, in-memory stream processing engine. In its core, it's optimized for low-latency. It can process millions of events per seconds without a drop of sweat. It's also important to say Hazelcast Jet internally uses green threads with the exponential backoff strategy as described above. Most of Jet developers use MacOS X, and while we also test on Windows, our performance tests are centered around Linux, as this is the most common operating system for production Jet deployments. Naturally, while doing this research, I was wondering how all this impacts Hazelcast Jet when running on Windows. So please, bear with me for yet another experiment.
Jet in Action
Jet exposes a high-level API to define stream processing pipelines. Here is an example that I will be using in my experiment:
package info.jerrinot.jetexper; import com.hazelcast.jet.*; import com.hazelcast.jet.config.JetConfig; import com.hazelcast.jet.datamodel.TimestampedItem; import com.hazelcast.jet.function.DistributedFunction; import com.hazelcast.jet.pipeline.*; import static com.hazelcast.jet.aggregate.AggregateOperations.counting; import static com.hazelcast.jet.pipeline.Sinks.logger; import static com.hazelcast.jet.pipeline.WindowDefinition.tumbling; import static com.hazelcast.jet.impl.util.Util.toLocalTime;//dont do this at home import static java.lang.String.format; import static java.util.concurrent.TimeUnit.SECONDS; public class PerfTest { private static final long TEST_DURATION_MILLIS = SECONDS.toMillis(240); private static final DistributedFunction<TimestampedItem, String> FORMATTING_LAMBDA = PerfTest::formatOutput; private static String formatOutput(TimestampedItem tsItem) { return "---------- " + toLocalTime(tsItem.timestamp()) + ", " + format("%,d", tsItem.item()) + " -------"; } public static void main(String[] args) { Pipeline pipeline = Pipeline.create(); pipeline.drawFrom(dummySource()) .withIngestionTimestamps() .window(tumbling(SECONDS.toMillis(10))) .aggregate(counting()) .drainTo(logger(FORMATTING_LAMBDA)); JetConfig config = new JetConfig(); config.getProperties().setProperty("hazelcast.logging.type", "slf4j"); JetInstance jetInstance = Jet.newJetInstance(config); jetInstance.newJob(pipeline).join(); jetInstance.shutdown(); } private static StreamSource dummySource() { return SourceBuilder.stream("dummySource", (c) -> System.currentTimeMillis() + TEST_DURATION_MILLIS) .<Long>fillBufferFn((s, b) -> { long now = System.currentTimeMillis(); if (now < s) { b.add(0L); } else { b.close(); } }) .build(); } }
The pipeline is easy:
- It reads from a dummy source, which always emits number 0
- Then Jet attaches a timestamp to each emitted item
- Then split timeline into 10s windows and collect items into these windows
- Run the
count()aggregation on each window to calculate how many items were collected
- Finally, it writes the output to a log
It looks complicated, but it isn't. Basically, every 10 seconds, it spits out a number showing how many items were generated in the last 10 seconds. This is how the output looks like on my Linux laptop:
[...] ---------- 17:22:40.000, 64,544,125 ------- ---------- 17:22:50.000, 65,867,834 ------- ---------- 17:23:00.000, 68,144,723 ------- ---------- 17:23:10.000, 67,014,642 ------- [...]
So, about 65,000,000 items generated and processed in 10 seconds or about 6-7M items/second. Now, let's run the same code on my Windows laptop:
[...] ---------- 17:45:40.000, 20,391,480 ------- ---------- 17:45:50.000, 22,054,175 ------- ---------- 17:46:00.000, 19,506,898 ------- ---------- 17:46:50.000, 18,348,198 ------- [...]
The Windows numbers are ⅓ of what it did on Linux. Now, my Linux laptop is quite a beast while Windows is a portable 2-in-1 device. Still, it cannot fully explain the performance difference. I already told you Hazelcast Jet uses a backoff strategy with
parkNanos() for worker threads. So, let's try to re-test it on Windows, but with the timer hack activated.
[..] ---------- 17:48:00.000, 42,718,105 ------- ---------- 17:48:10.000, 40,780,790 ------- ---------- 17:48:20.000, 38,068,978 ------- ---------- 17:48:30.000, 38,543,801 ------- [...]
That's quite a bit better! The throughput doubled with the hack! What does it mean?
- Abstractions are leaky. Abstraction over operating systems even more so!
- We have to run automated performance tests on Windows too
What Am I Supposed to Do With This?
At this point, it's a good time to link the post-Windows Timer Resolution: Megawatts Wasted once again. Certainly, I am not advocating for everyone to go and include this hack in all Java applications. There are good reasons why the default timer frequency is rather conservative and I think the article explains why.
That's all folks. So far, we have looked at
LockSupport.parkNanos() on Linux and Windows, so stay tuned — there is more coming!
If you like to poke systems and learn about low-level details, then follow me on Twitter!
Published at DZone with permission of Jaromir Hamala, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/locksupportparknanos-under-the-hood-and-the-curiou | CC-MAIN-2022-21 | refinedweb | 3,044 | 59.3 |
This is the third installment of my Automate the Vim workplace article series. As always, feel free to grab the ideas in this article or, better yet, take inspiration and inspect your workflow to identify such opportunities.
Please note that all that I share below is what I'm using with Vim (more specifically, GVim on Windows). I don't use Neovim (yet) and I can't speak for any of the below for Neovim.
Copy file full path ¶
I work with CSV files quite a bit. I spend a lot of time grooming them, fixing them etc. in Vim and then once they're ready, I need to upload it to an internal tool. For that, the following command has proven to be super useful.
" Command to copy the current file's full absolute path. command CopyFilePath let @+ = expand(has('win32') ? '%:p:gs?/?\\?' : '%:p')
This is one of those commands that feel super-simple and super-obvious once we add it to our workflow. Running this command places the full path of the current buffer's file into the system clipboard. Then, I just go to my browser, click on the upload button and paste the file location. This is much quicker than having to navigate to the folder and selecting the file. It also helps avoid selecting the wrong file (which happened more than once to me).
Squeeze / Expand contiguous blank lines ¶
When building or editing large CSV files, I often end up with several (read: hundreds) of blank lines. This is usually because I select those lines in visual block mode, cut them, and then paste as a new column to some existing rows. Solving that problem is for another day I suppose.
Nonetheless, I needed a quick way to condense several blank lines into a single blank line. The following is the result of that:
nnoremap <silent> dc :<C-u>call <SID>CleanupBlanks()<CR> fun s:CleanupBlanks() abort if !empty(getline('.')) return endif let l:curr = line('.') let l:start = l:curr while l:start > 1 && empty(getline(l:start - 1)) let l:start -= 1 endwhile let l:end = l:curr let l:last_line_num = line('$') while l:end < l:last_line_num && empty(getline(l:end + 1)) let l:end += 1 endwhile if l:end >= l:start + v:count1 exe l:start . '+' . v:count1 . ',' . l:end . 'd_' else call append(l:end, repeat([''], v:count1 - (l:end - l:start) - 1)) endif call cursor(l:start, 1) endfun
This defines the dc mapping, which will condense multiple blank lines under the cursor into a single one.
Then, on a weekend when I was feeling particularly silly, I extended this to accept a number in front of dc which specifies the number of newlines to end up with. So now, this mapping can both condense, and expand vertical blank space to any size I want! Yay silly weekends!
Duplicate Text in Motion ¶
Copy-pasta is a legitimate writing and coding technique. But I do it so mindlessly and often, I started to think of duplicating as a distinct operation, and not as a combination of yanking and then pasting. But if that is so, duplicating some text should not mess with my registers. This was messing with the nice semantic pool my thoughts were swimming in (!).
So I built a mapping that would let me duplicate the text over any motion (like text objects), without touching the registers. Following is how it's built:
" Duplicate text, selected or over motion. nnoremap <silent> <Leader>uu :t.\|silent! call repeat#set('duu', v:count)<CR> nnoremap <silent> <Leader>u :set opfunc=DuplicateText<CR>g@ vnoremap <silent> <Leader>u :<C-u>call DuplicateText('vis')<CR> fun DuplicateText(type) abort let marks = a:type ==? 'vis' ? '<>' : '[]' let [_, l1, c1, _] = getpos("'" . marks[0]) let [_, l2, c2, _] = getpos("'" . marks[1]) if l1 == l2 let text = getline(l1) call setline(l1, text[:c2 - 1] . text[c1 - 1:c2] . text[c2 + 1:]) call cursor(l2, c2 + 1) if a:type ==? 'vis' exe 'normal! v' . (c2 - c1) . 'l' endif else call append(l2, getline(l1, l2)) call cursor(l2 + 1, c1) if a:type ==? 'vis' exe 'normal! V' . (l2 - l1) . 'j' endif endif endfun
Now, what used to be yap}p has become ,uap. That's just one key reduced but a reduction in keys is not what I'm aiming at here. It's cognitive load of "duplicate this text" over "copy this text, go to end of text, paste text". This works in visual mode as well, though I don't use it as often.
Additionally, if triggered in visual mode, the duplicated text is selected again in visual mode. This quickly visually highlights for me the newly inserted text so I can get back on track as to what I intended to do with the duplicated text.
Now, if you're aware of the
:t (or
:copy) command, then what I'm doing above may seem
pointlessly elaborate. To an extent, I agree. In fact, I'm using the
:t command for the
,uu mapping which is for duplicating a single line. The difference is that where
:t
only works line-wise, my implementation above can work character wise as well as line wise. For
example, ,uaw (or just ,uw) will duplicate a single word, just like
,uap will duplicate a paragraph.
Transpose ¶
This is another mapping I created to help me with CSV files. Specifically, this one works with tab-separated files, which are even more awesome to edit in Vim, thanks to the vartabstop option. The next section describes how I use this when editing tab separated files.
This mapping, when applied over lines with tab separated values, will transpose the matrix made of lines and tabs. Check out the GIF below to get a better understanding of how this works.
" Transpose tab separated values in selection or over motion. nnoremap <silent> gt :set opfunc=Transpose<CR>g@ vnoremap <silent> gt :<C-u>call Transpose(1)<CR> fun Transpose(...) abort let vis = get(a:000, 0, 0) let marks = vis ? '<>' : '[]' let [_, l1, c1, _] = getpos("'" . marks[0]) let [_, l2, c2, _] = getpos("'" . marks[1]) let l:lines = map(getline(l1, l2), 'split(v:val, "\t")') py3 <<EOPYTHON import vim from itertools import zip_longest out = list(zip_longest(*vim.eval('l:lines'), fillvalue='')) EOPYTHON let out = map(py3eval('out'), 'join(v:val, "\t")') call append(l2, out) exe l1 . ',' . l2 . 'delete _' endfun
The keys I'm hitting in the GIF is gtip. I'm transposing the lines in the inner paragraph.
Note that I'm using
:py3 for this, so,
+python3 would be required for this to work. I might port
it to Vimscript one of these days, hopefully.
Using
vartabstop to Line Up ¶
The moment I learnt about the
vartabstop option, I jumped on it right away, considering I worked
with tab separated files a lot. I created the following command that would scan the file's contents
and set the value of this option such that all the columns would line up perfectly, almost like a
spreadsheet.
The
vartabstop option is not available in Neovim, which is one of the reasons I don't use it yet.
I just got too used to
vartabstop.
command TabsLineUp call <SID>TabsLineUp() fun s:TabsLineUp() abort py3 <<EOPYTHON import vim lengths = [] for parts in (l.split('\t') for l in vim.current.buffer if '\t' in l): lengths.append([len(c) for c in parts]) vim.current.buffer.options['vartabstop'] = ','.join(str(max(ls) + 3) for ls in zip(*lengths)) EOPYTHON endfun
Note that I implemented this in Python 3, so you'll need
+python3 if you want to yank this one.
Here's a nice GIF showing this off! Note that although it looks like we're just adding a lot of
white space to align stuff, no new space characters are inserted. The document remains unchanged.
It's just the display size of tab characters is what we're changing with
vartabstop.
Finally tab separated files are easier to deal with than comma separated files.
Also, if you're into CSV and tab separated files, I recommend checking out the amazing csv.vim
plugin. It makes similar use of the
vartabstop option.
Strip Trailing Spaces ¶
I know trailing whitespace doesn't bother a lot of people much, but it does upset me. Most of the
solutions I found online to remove trailing whitespace operate on the whole file. I wanted it to
work with the lines over a motion, like inner paragraph etc. Of course, I could just visually select
the text object and then do a
:s/\s\+$//, but that's too much effort!
" Strip all trailing spaces in the selection, or over motion. nnoremap <silent> <Leader>x :set opfunc=StripRight<CR>g@ vnoremap <silent> <Leader>x :<C-u>call StripRight(1)<CR> fun StripRight(...) abort let cp = getcurpos() let marks = get(a:000, 0, 0) ? '<>' : '[]' let [_, l1, c1, _] = getpos("'" . marks[0]) let [_, l2, c2, _] = getpos("'" . marks[1]) exe 'keepjumps ' . l1 . ',' . l2 . 's/\s\+$//e' call setpos('.', cp) endfun
The above snippet defines a mapping, ,x which operates on a motion and removes trailing whitespace. There's some nice additions to this, in that it works in visual mode as well, and that the cursor doesn't move as a result of this operation.
Removing trailing whitespace inside current paragraph is now ,xip!
Append character over motion ¶
This mapping lets me add a character at the end of all lines over a motion. So, like, ga;ip would add a semicolon to every line inside the paragraph.
I use this mostly to add commas or tab characters when working with CSV (or tab-separated files).
" Append a letter to all lines in motion. nnoremap <silent> <expr> ga <SID>AppendToLines('n') xnoremap <silent> ga :<C-u>call <SID>AppendToLines(visualmode())<CR> fun s:AppendToLines(mode) abort let c = getchar() while c == "\<CursorHold>" | let c = getchar() | endwhile let g:_append_to_lines = nr2char(c) if a:mode ==? 'n' exe 'set opfunc=' . s:SID() . 'AppendToLinesOpFunc' return 'g@' else call s:AppendToLinesOpFunc('v') endif endfun fun s:AppendToLinesOpFunc(type) abort let marks = a:type ==? 'v' ? '<>' : '[]' for l in range(line("'" . marks[0]), line("'" . marks[1])) call setline(l, getline(l) . g:_append_to_lines) endfor unlet g:_append_to_lines endfun
This may seem pointless in that, it's not very hard to do this with visual block mode. Sure. On that note, even A is pretty pointless, it can be done with just $a, right? No. The point here is not about having a shorter key sequence to do this, but a more semantic one. Just like A spells "append at end of line", to me, ga;ip spells "adding semicolon to every line in the paragraph". Personally, I think better this way.
Conclusion ¶
Text objects in Vim (and motions, for the most part) have effectively solved the problem of being able expressively select a piece of text to work on. However, in my opinion, the kind of work that can be done on such text is equally (if not more) important. Try to identify what you often do after selecting text with text objects and see if you can turn it into an operator mapping like those in this write-up.
This one is shorter than usual and that's not because of lack of content, it's more because of terrible planning on my part. Nevertheless, stay tuned for more in this series!
View comments at /posts/automating-the-vim-workplace-3/#comments | https://sharats.me/posts/automating-the-vim-workplace-3/ | CC-MAIN-2021-17 | refinedweb | 1,922 | 73.58 |
Hi
I am learning object oriented programming with C++ and I am having trouble understanding virtual functions and polymorphism.
I have this simple program:
I'm building with MinGW on windows 7.I'm building with MinGW on windows 7.Code:test.htest.cpptest.cppPHP Code:
#ifndef TEST_H_#define TEST_H_typedef struct data { int test;} data_t;class base {public: base (); void test (); virtual ~base(); virtual int dispatch (int test);};#endif /* TEST_H_ */test2.htest2.hPHP Code:
#include <iostream>#include "test.h"base::base (){}base::~base(){}void base::test (){ dispatch (5);}int main (void){ //base *b = new base (); //b->test(); return 0;}test2.cpptest2.cppPHP Code:
#ifndef TEST2_H_#define TEST2_H_#include "test.h"class derived : public base {public: derived () {} virtual ~derived (); virtual int dispatch (int test);};#endif /* TEST2_H_ */PHP Code:
#include "test2.h"#include <iostream>#include <stdio.h>using namespace std;int derived::dispatch (int test){ printf ("test"); return 1;}
I get this error message:
I am not sure what i'm doing wrong.I am not sure what i'm doing wrong.test.o:test.cpp
.rdata$_ZTV4base[vtable for base]+0x10): undefined reference to `base::dispatch(int)'.rdata$_ZTV4base[vtable for base]+0x10): undefined reference to `base::dispatch(int)'
collect2: ld returned 1 exit status
All i want to do is to have two classes (one base, one derived) and have virtual functions in both. I want to be able to call the virtual function in the base class and run the code in the derived.
Any help is appreciated
Thanks | http://cboard.cprogramming.com/cplusplus-programming/145943-virtual-functions-not-working.html | CC-MAIN-2013-48 | refinedweb | 250 | 58.28 |
Getting Started with RESTCountries.NET
In this article, Let me show you my open source Nuget package RESTCountries.NET developed in .NET Standard.
RESTCountries.NET is a .NET Standard wrapper library around the API provided by REST Countries().
htpps://restcountries.eu is a RESTful API which provides information about countries such as
name ,
capital city ,
population ,
currencies ,
borders info ,
languages ,
flag ,
calling codes , etc.
With RESTCountries.NET, we can:
- Retrieve a list of countries.
- Get a list of country names in others languages such as German, Spanish, French, Italian, Portuguese, Dutch, Croatian, Japanese, Breton, and Persian language.
- Search by country name.
- Search by capital city.
- Search by ISO 4217 currency code.
- Search by continent: Africa, Americas, Asia, Europe, Oceania.
- Search by regional bloc.
- Apply filters to retrieve what we need.
- etc.
Setup
- The library is available on NuGet at
- Install it into your .NET project(.NET Standard, .NET Core, Xamarin, etc).
Usage
- Add namespace
- Get all countries
Each method return an object of type
Countryor a
Listof
Country. You can apply filters on the returned value to retrieve what you need.
- Get a list of country names
Country names are in English by default.
- Retrieve a list of country names in Spanish
Available languages are:
de(German language),
es(Spanish language),
fr(French language),
ja(Japanese language),
it(Italian language),
br(Breton language),
pt(Portuguese language),
nl(Dutch language),
hr(Croatian language) and
fa(Persian language).
- Search by country partial name or full name
The first method could return a list of countries or a list of one element.
- Search by continent: Africa, Americas, Asia, Europe, Oceania
Possible value of “continent” are Africa, Americas, Asia, Europe and Oceania.
For more information, check out the full documentation at
Conclusion
In web applications(.NET or .NET Core) RESTCountries.NET allows us to populate country select tag options dynamically. Populate Xamarin Picker with a list of countries become easy. | https://medium.com/@lioncoding/getting-started-with-restcountries-net-9e5326b9b86b?utm_campaign=Weekly%2BXamarin&utm_medium=web&utm_source=Weekly_Xamarin_220 | CC-MAIN-2019-47 | refinedweb | 316 | 59.9 |
Libarchivejs
Overview
Libarchivejs is a archive tool for browser which can extract various types of compression, it's a port of libarchive to WebAssembly and javascript wrapper to make it easier to use. Since it runs on WebAssembly performance should be near native. Supported formats: ZIP, 7-Zip, RAR v4, RAR v5, TAR. Supported compression: GZIP, DEFLATE, BZIP2, LZMA
How to use
Install with
npm i libarchive.js and use it as a ES module.
The library consists of two parts: ES module and webworker bundle, ES module part is your interface to talk to library, use it like any other module. The webworker bundle lives in the
libarchive.js/dist folder so you need to make sure that it is available in your public folder since it will not get bundled if you're using bundler (it's all bundled up already) and specify correct path to
Archive.init() method
import {Archive} from 'libarchive.js/main.js'; Archive.init({ workerUrl: 'libarchive.js/dist/worker-bundle.js' }); document.getElementById('file').addEventListener('change', async (e) => { const file = e.currentTarget.files[0]; const archive = await Archive.open(file); let obj = await archive.extractFiles(); console.log(obj); }); // outputs { ".gitignore": {File}, "addon": { "addon.py": {File}, "addon.xml": {File} }, "README.md": {File} }
More options
To get file listing without actually decompressing archive, use one of these methods
await archive.getFilesObject(); // outputs { ".gitignore": {CompressedFile}, "addon": { "addon.py": {CompressedFile}, "addon.xml": {CompressedFile} }, "README.md": {CompressedFile} } await archive.getFilesArray(); // outputs [ {file: {CompressedFile}, path: ""}, {file: {CompressedFile}, path: "addon/"}, {file: {CompressedFile}, path: "addon/"}, {file: {CompressedFile}, path: ""} ]
If these methods get called after
archive.extractFiles(); they will contain actual files as well.
Decompression might take a while for larger files. To track each file as it gets extracted,
archive.extractFiles accepts callback
archive.extractFiles((entry) => { // { file: {File}, path: {String} } console.log(entry); });
Extract single file from archive
To extract a single file from the archive you can use the
extract() method on the returned
CompressedFile.
const filesObj = await archive.getFilesObject(); const file = await filesObj['.gitignore'].extract();
Check for encrypted data
const archive = await Archive.open(file); await archive.hasEncryptedData(); // true - yes // false - no // null - can not be determined
Extract encrypted archive
const archive = await Archive.open(file); await archive.usePassword("password"); let obj = await archive.extractFiles();
How it works
Libarchivejs is a port of the popular libarchive C library to WASM. Since WASM runs in the current thread, the library uses WebWorkers for heavy lifting. The ES Module (Archive class) is just a client for WebWorker. It's tiny and doesn't take up much space.
Only when you actually open archive file will the web worker be spawned and WASM module will be downloaded. Each
Archive.open call corresponds to each WebWorker.
After calling an
extractFiles worker, it will be terminated to free up memory. The client will still work with cached data. | https://opensourcelibs.com/lib/libarchivejs | CC-MAIN-2021-31 | refinedweb | 473 | 61.63 |
/* Expands front end tree to back end RTL for GNU C-Compiler
   Copyright (C) 1987, 1988, 1989 Free Software Foundation, Inc.

This file handles the generation of rtl code from tree structure
above the level of expressions, using subroutines in exp*.c and emit-rtl.c.
It also creates the rtl expressions for parameters and auto variables
and has full responsibility for allocating stack slots.

The functions whose names start with `expand_' are called by the
parser to generate RTL instructions for various kinds of constructs.

Some control and binding constructs require calling several such
functions at different times.  For example, a simple if-then
is expanded by calling `expand_start_cond' (with the condition-expression
as argument) before parsing the then-clause and calling `expand_end_cond'
after parsing the then-clause.  */

#include "config.h"
#include "system.h"
#include "rtl.h"
#include "tree.h"

/* Functions and data structures for expanding case statements.  */

/* Case label structure, used to hold info on labels within case
   statements.  We handle "range" labels; for a single-value label
   as in C, the high and low limits are the same.

   An AVL tree of case nodes is initially created, and later
   transformed to a list linked via the RIGHT fields in the nodes.
   Nodes with higher case values are later in the list.

   Switch statements can be output in one of two forms.  A branch table
   is used if there are more than a few labels and the labels are dense
   within the range between the smallest and largest case value.  If a
   branch table is used, no further manipulations are done with the case
   node chain.

   The alternative to the use of a branch table is to generate a series
   of compare and jump insns.  When that is done, we use the LEFT, RIGHT,
   and PARENT fields to hold a binary tree.  Initially the tree is
   totally unbalanced, with everything on the right.  We balance the tree
   with nodes on the left having lower case values than the parent
   and nodes on the right having higher values.  We then output the tree
   in order.
*/

struct case_node GTY(())
{
  struct case_node *left;	/* Left son in binary tree */
  struct case_node *right;	/* Right son in binary tree; also node chain */
  struct case_node *parent;	/* Parent of node in binary tree */
  tree low;			/* Lowest index value for this label */
  tree high;			/* Highest index value for this label */
  tree code_label;		/* Label to jump to when node matches */
  int balance;
};

typedef struct case_node case_node;
typedef struct case_node *case_node_ptr;

/* These are used by estimate_case_costs and balance_case_nodes.  */

/* This must be a signed type, and non-ANSI compilers lack signed char.  */
static short cost_table_[129];
static int use_cost_table;
static int cost_table_initialized;

/* Special care is needed because we allow -1, but TREE_INT_CST_LOW
   is unsigned.  */
#define COST_TABLE(I)  cost_table_[(unsigned HOST_WIDE_INT) ((I) + 1)]

/* Stack of control and binding constructs we are currently inside.

   These constructs begin when you call `expand_start_WHATEVER'
   and end when you call `expand_end_WHATEVER'.  This stack records
   info about how the construct began that tells the end-function what
   to do.  It also may provide information about the construct
   to alter the behavior of other constructs within the body.
   For example, they may affect the behavior of C `break' and `continue'.

   Each construct gets one `struct nesting' object.
   All of these objects are chained through the `all' field.
   `nesting_stack' points to the first object (innermost construct).
   The position of an entry on `nesting_stack' is in its `depth' field.

   Each type of construct has its own individual stack.
   For example, loops have `loop_stack'.  Each object points to the
   next object of the same type through the `next' field.

   Some constructs are visible to `break' exit-statements and others
   are not.  Which constructs are visible depends on the language.
   Therefore, the data structure allows each construct to be visible
   or not, according to the args given when the construct is started.
The construct is visible if the `exit_label' field is non-null.
   In that case, the value should be a CODE_LABEL rtx.  */

struct nesting GTY(())
{
  struct nesting *all;
  struct nesting *next;
  int depth;
  rtx exit_label;
  enum nesting_desc {
    COND_NESTING,
    LOOP_NESTING,
    BLOCK_NESTING,
    CASE_NESTING
  } desc;
  union nesting_u
    {
      /* For conds (if-then and if-then-else statements).  */
      struct nesting_cond
	{
	  /* Label for the end of the if construct.
	     There is none if EXITFLAG was not set
	     and no `else' has been seen yet.  */
	  rtx endif_label;
	  /* Label for the end of this alternative.
	     This may be the end of the if or the next else/elseif.  */
	  rtx next_label;
	} GTY ((tag ("COND_NESTING"))) cond;
      /* For loops.  */
      struct nesting_loop
	{
	  /* Label at the top of the loop; place to loop back to.  */
	  rtx start_label;
	  /* Label at the end of the whole construct.  */
	  rtx end_label;
	  /* Label before a jump that branches to the end of the whole
	     construct.  This is where destructors go if any.  */
	  rtx alt_end_label;
	  /* Label for `continue' statement to jump to;
	     this is in front of the stepper of the loop.  */
	  rtx continue_label;
	} GTY ((tag ("LOOP_NESTING"))) loop;
      /* For variable binding contours.  */
      struct nesting_block
	{
	  /* Sequence number of this binding contour within the function,
	     in order of entry.  */
	  int block_start_count;
	  /* Nonzero => value to restore stack to on exit.  */
	  rtx stack_level;
	  /* The NOTE that starts this contour.
	     Used by expand_goto to check whether the destination
	     is within each contour or not.  */
	  rtx first_insn;
	  /* Innermost containing binding contour that has a stack level.  */
	  struct nesting *innermost_stack_block;
	  /* List of cleanups to be run on exit from this contour.
	     This is a list of expressions to be evaluated.
	     The TREE_PURPOSE of each link is the ..._DECL node
	     which the cleanup pertains to.  */
	  tree cleanups;
	  /* List of cleanup-lists of blocks containing this block,
	     as they were at the locus where this block appears.
There is an element for each containing block,
	     ordered innermost containing block first.
	     The tail of this list can be 0, if all remaining elements
	     would be empty lists.
	     The element's TREE_VALUE is the cleanup-list of that block,
	     which may be null.  */
	  tree outer_cleanups;
	  /* Chain of labels defined inside this binding contour.
	     For contours that have stack levels or cleanups.  */
	  struct label_chain *label_chain;
	  /* Number of function calls seen, as of start of this block.  */
	  int n_function_calls;
	  /* Nonzero if this is associated with an EH region.  */
	  int exception_region;
	  /* The saved target_temp_slot_level from our outer block.
	     We may reset target_temp_slot_level to be the level of
	     this block, if that is done, target_temp_slot_level
	     reverts to the saved target_temp_slot_level at the very
	     end of the block.  */
	  int block_target_temp_slot_level;
	  /* True if we are currently emitting insns in an area of
	     output code that is controlled by a conditional
	     expression.  This is used by the cleanup handling code to
	     generate conditional cleanup actions.  */
	  int conditional_code;
	  /* A place to move the start of the exception region for any
	     of the conditional cleanups, must be at the end or after
	     the start of the last unconditional cleanup, and before any
	     conditional branch points.  */
	  rtx last_unconditional_cleanup;
	} GTY ((tag ("BLOCK_NESTING"))) block;
      /* For switch (C) or case (Pascal) statements,
	 and also for dummies (see `expand_start_case_dummy').  */
      struct nesting_case
	{
	  /* The insn after which the case dispatch should finally
	     be emitted.  Zero for a dummy.  */
	  rtx start;
	  /* A list of case labels; it is first built as an AVL tree.
	     During expand_end_case, this is converted to a list, and may be
	     rearranged into a nearly balanced binary tree.  */
	  struct case_node *case_list;
	  /* Label to jump to if no case matches.  */
	  tree default_label;
	  /* The expression to be dispatched on.  */
	  tree index_expr;
	  /* Type that INDEX_EXPR should be converted to.
*/ tree nominal_type; } GTY ((tag ("CASE_NESTING"))) case_stmt; } GTY ((desc ("%1.desc"))) data; }; /* Allocate and return a new `struct nesting'. */ #define ALLOC_NESTING() (struct nesting *) ggc_alloc (sizeof (struct nesting)) /* Pop the nesting stack element by element until we pop off the element which is at the top of STACK. Update all the other stacks, popping off elements from them as we pop them from nesting_stack. */ #define POPSTACK(STACK) \ do { struct nesting *target = STACK; \ struct nesting *this; \ do { this = nesting_stack; \ if (loop_stack == this) \ loop_stack = loop_stack->next; \ if (cond_stack == this) \ cond_stack = cond_stack->next; \ if (block_stack == this) \ block_stack = block_stack->next; \ if (stack_block_stack == this) \ stack_block_stack = stack_block_stack->next; \ if (case_stack == this) \ case_stack = case_stack->next; \ nesting_depth = nesting_stack->depth - 1; \ nesting_stack = this->all; } \ while (this != target); } while (0) /* In some cases it is impossible to generate code for a forward goto until the label definition is seen. This happens when it may be necessary for the goto to reset the stack pointer: we don't yet know how to do that. So expand_goto puts an entry on this fixup list. Each time a binding contour that resets the stack is exited, we check each fixup. If the target label has now been defined, we can insert the proper code. */ struct goto_fixup GTY(()) { /* Points to following fixup. */ struct goto_fixup *next; /* Points to the insn before the jump insn. If more code must be inserted, it goes after this insn. */ rtx before_jump; /* The LABEL_DECL that this jump is jumping to, or 0 for break, continue or return. */ tree target; /* The BLOCK for the place where this goto was found. */ tree context; /* The CODE_LABEL rtx that this is jumping to. */ rtx target_rtl; /* Number of binding contours started in current function before the label reference. */ int block_start_count; /* The outermost stack level that should be restored for this jump. Each time a binding contour that resets the stack is exited, if the target label is *not* yet defined, this slot is updated. 
*/ rtx stack_level; /* List of lists of cleanup expressions to be run by this goto. There is one element for each block that this goto is within. The tail of this list can be 0, if all remaining elements would be empty. The TREE_VALUE contains the cleanup list of that block as of the time this goto was seen. The TREE_ADDRESSABLE flag is 1 for a block that has been exited. */ tree cleanup_list_list; }; /* Emit a no-op instruction. */ void emit_nop () { rtx last_insn; last_insn = get_last_insn (); if (!optimize && (GET_CODE (last_insn) == CODE_LABEL || (GET_CODE (last_insn) == NOTE && prev_real_insn (last_insn) == 0))) emit_insn (gen_nop ()); } /* Return the rtx-label that corresponds to a LABEL_DECL, creating it if necessary. */ rtx label_rtx (label) tree label; { if (TREE_CODE (label) != LABEL_DECL) abort (); if (!DECL_RTL_SET_P (label)) SET_DECL_RTL (label, gen_label_rtx ()); return DECL_RTL (label); } /* Add an unconditional jump to LABEL as the next sequential instruction. */ void emit_jump (label) rtx label; { do_pending_stack_adjust (); emit_jump_insn (gen_jump (label)); emit_barrier (); } /* Emit code to jump to the address specified by the pointer expression EXP. */ void expand_computed_goto (exp) tree exp; { rtx x = expand_expr (exp, NULL_RTX, VOIDmode, 0); #ifdef POINTERS_EXTEND_UNSIGNED if (GET_MODE (x) != Pmode) x = convert_memory_address (Pmode, x); #endif emit_queue (); do_pending_stack_adjust (); emit_indirect_jump (x); current_function_has_computed_jump = 1; } /* Handle goto statements and the labels that they can go to. */ /* Specify the location in the RTL code of a label LABEL, which is a LABEL_DECL tree node. This is used for the kind of label that the user can jump to with a goto statement, and for alternatives of a switch or case statement. RTL labels generated for loops and conditionals don't go through here; they are generated directly at the RTL level, by other functions below. 
Note that this has nothing to do with defining label *names*. Languages vary in how they do that and what that even means. */ void expand_label (label) tree label; { struct label_chain *p; do_pending_stack_adjust (); emit_label (label_rtx (label)); if (DECL_NAME (label)) LABEL_NAME (DECL_RTL (label)) = IDENTIFIER_POINTER (DECL_NAME (label)); if (stack_block_stack != 0) { p = (struct label_chain *) ggc_alloc (sizeof (struct label_chain)); p->next = stack_block_stack->data.block.label_chain; stack_block_stack->data.block.label_chain = p; p->label = label; } } /* Declare that LABEL (a LABEL_DECL) may be used for nonlocal gotos from nested functions. */ void declare_nonlocal_label (label) tree label; { rtx slot = assign_stack_local (Pmode, GET_MODE_SIZE (Pmode), 0); nonlocal_labels = tree_cons (NULL_TREE, label, nonlocal_labels); LABEL_PRESERVE_P (label_rtx (label)) = 1; if (nonlocal_goto_handler_slots == 0) { emit_stack_save (SAVE_NONLOCAL, &nonlocal_goto_stack_level, PREV_INSN (tail_recursion_reentry)); } nonlocal_goto_handler_slots = gen_rtx_EXPR_LIST (VOIDmode, slot, nonlocal_goto_handler_slots); } /* Generate RTL code for a `goto' statement with target label LABEL. LABEL should be a LABEL_DECL tree node that was or will later be defined with `expand_label'. */ void expand_goto (label) tree label; { tree context; /* Check for a nonlocal goto to a containing function. */ context = decl_function_context (label); if (context != 0 && context != current_function_decl) { struct function *p = find_function_data (context); rtx label_ref = gen_rtx_LABEL_REF (Pmode, label_rtx (label)); rtx handler_slot, static_chain, save_area, insn; tree link; /* Find the corresponding handler slot for this label. 
*/ handler_slot = p->x_nonlocal_goto_handler_slots; for (link = p->x_nonlocal_labels; TREE_VALUE (link) != label; link = TREE_CHAIN (link)) handler_slot = XEXP (handler_slot, 1); handler_slot = XEXP (handler_slot, 0); p->has_nonlocal_label = 1; current_function_has_nonlocal_goto = 1; LABEL_REF_NONLOCAL_P (label_ref) = 1; /* Copy the rtl for the slots so that they won't be shared in case the virtual stack vars register gets instantiated differently in the parent than in the child. */ { /* Restore frame pointer for containing function. This sets the actual hard register used for the frame pointer to the location of the function's incoming static chain info. The non-local goto handler will then adjust it to contain the proper value and reload the argument pointer, if needed. */ emit_move_insn (hard_frame_pointer_rtx,; } } else expand_goto_internal (label, label_rtx (label), NULL_RTX); } /* Generate RTL code for a `goto' statement with target label BODY. LABEL should be a LABEL_REF. LAST_INSN, if non-0, is the rtx we should consider as the last insn emitted (for the purposes of cleaning up a return). */ static void expand_goto_internal (body, label, last_insn) tree body; rtx label; rtx last_insn; { struct nesting *block; rtx stack_level = 0; if (GET_CODE (label) != CODE_LABEL) abort (); /* If label has already been defined, we can tell now whether and how we must alter the stack level. */ if (PREV_INSN (label) != 0) { /* Find the innermost pending block that contains the label. (Check containment by comparing insn-uids.) Then restore the outermost stack level within that block, and do cleanups of all blocks contained in it. */ for (block = block_stack; block; block = block->next) { if (INSN_UID (block->data.block.first_insn) < INSN_UID (label)) break; if (block->data.block.stack_level != 0) stack_level = block->data.block.stack_level; /* Execute the cleanups for blocks we are exiting. 
*/ if (block->data.block.cleanups != 0) { expand_cleanups (block->data.block.cleanups, NULL_TREE, 1, 1); do_pending_stack_adjust (); } } if (stack_level) { /* Ensure stack adjust isn't done by emit_jump, as this would clobber the stack pointer. This one should be deleted as dead by flow. */ clear_pending_stack_adjust (); do_pending_stack_adjust (); /* Don't do this adjust if it's to the end label and this function is to return with a depressed stack pointer. */ if (label == return_label && (((TREE_CODE (TREE_TYPE (current_function_decl)) == FUNCTION_TYPE) && (TYPE_RETURNS_STACK_DEPRESSED (TREE_TYPE (current_function_decl)))))) ; else emit_stack_restore (SAVE_BLOCK, stack_level, NULL_RTX); } if (body != 0 && DECL_TOO_LATE (body)) error ("jump to `%s' invalidly jumps into binding contour", IDENTIFIER_POINTER (DECL_NAME (body))); } /* Label not yet defined: may need to put this goto on the fixup list. */ else if (! expand_fixup (body, label, last_insn)) { /* No fixup needed. Record that the label is the target of at least one goto that has no fixup. */ if (body != 0) TREE_ADDRESSABLE (body) = 1; } emit_jump (label); } /* Generate if necessary a fixup for a goto whose target label in tree structure (if any) is TREE_LABEL and whose target in rtl is RTL_LABEL. If LAST_INSN is nonzero, we pretend that the jump appears after insn LAST_INSN instead of at the current point in the insn stream. The fixup will be used later to insert insns just before the goto. Those insns will restore the stack level as appropriate for the target label, and will (in the case of C++) also invoke any object destructors which have to be invoked when we exit the scopes which are exited by the goto. Value is nonzero if a fixup is made. */ static int expand_fixup (tree_label, rtl_label, last_insn) tree tree_label; rtx rtl_label; rtx last_insn; { struct nesting *block, *end_block; /* See if we can recognize which block the label will be output in. This is possible in some very common cases. 
If we succeed, set END_BLOCK to that block. Otherwise, set it to 0. */ if (cond_stack && (rtl_label == cond_stack->data.cond.endif_label || rtl_label == cond_stack->data.cond.next_label)) end_block = cond_stack; /* If we are in a loop, recognize certain labels which are likely targets. This reduces the number of fixups we need to create. */ else if (loop_stack && (rtl_label == loop_stack->data.loop.start_label || rtl_label == loop_stack->data.loop.end_label || rtl_label == loop_stack->data.loop.continue_label)) end_block = loop_stack; else end_block = 0; /* Now set END_BLOCK to the binding level to which we will return. */ if (end_block) { struct nesting *next_block = end_block->all; block = block_stack; /* First see if the END_BLOCK is inside the innermost binding level. If so, then no cleanups or stack levels are relevant. */ while (next_block && next_block != block) next_block = next_block->all; if (next_block) return 0; /* Otherwise, set END_BLOCK to the innermost binding level which is outside the relevant control-structure nesting. */ next_block = block_stack->next; for (block = block_stack; block != end_block; block = block->all) if (block == next_block) next_block = next_block->next; end_block = next_block; } /* Does any containing block have a stack level or cleanups? If not, no fixup is needed, and that is the normal case (the only case, for standard C). */ for (block = block_stack; block != end_block; block = block->next) if (block->data.block.stack_level != 0 || block->data.block.cleanups != 0) break; if (block != end_block) { /* Ok, a fixup is needed. Add a fixup to the list of such. */ struct goto_fixup *fixup = (struct goto_fixup *) ggc_alloc (sizeof (struct goto_fixup)); /* In case an old stack level is restored, make sure that comes after any pending stack adjust. */ /* ?? If the fixup isn't to come at the present position, doing the stack adjust here isn't useful. Doing it with our settings at that location isn't useful either. 
Let's hope someone does it! */ if (last_insn == 0) do_pending_stack_adjust (); fixup->target = tree_label; fixup->target_rtl = rtl_label; /* Create a BLOCK node and a corresponding matched set of NOTE_INSN_BLOCK_BEG and NOTE_INSN_BLOCK_END notes at this point. The notes will encapsulate any and all fixup code which we might later insert at this point in the insn stream. Also, the BLOCK node will be the parent (i.e. the `SUPERBLOCK') of any other BLOCK nodes which we might create later on when we are expanding the fixup code. Note that optimization passes (including expand_end_loop) might move the *_BLOCK notes away, so we use a NOTE_INSN_DELETED as a placeholder. */ { fixup->block_start_count = current_block_start_count; fixup->stack_level = 0; fixup->cleanup_list_list = ((block->data.block.outer_cleanups || block->data.block.cleanups) ? tree_cons (NULL_TREE, block->data.block.cleanups, block->data.block.outer_cleanups) : 0); fixup->next = goto_fixup_chain; goto_fixup_chain = fixup; } return block != 0; } /* Expand any needed fixups in the outputmost binding level of the function. FIRST_INSN is the first insn in the function. */ void expand_fixups (first_insn) rtx first_insn; { fixup_gotos (NULL, NULL_RTX, NULL_TREE, first_insn, 0); } /* When exiting a binding contour, process all pending gotos requiring fixups. THISBLOCK is the structure that describes the block being exited. STACK_LEVEL is the rtx for the stack level to restore exiting this contour. CLEANUP_LIST is a list of expressions to evaluate on exiting this contour. FIRST_INSN is the insn that began this contour. Gotos that jump out of this contour must restore the stack level and do the cleanups before actually jumping. DONT_JUMP_IN nonzero means report error there is a jump into this contour from before the beginning of the contour. This is also done if STACK_LEVEL is nonzero. 
*/ static void fixup_gotos (thisblock, stack_level, cleanup_list, first_insn, dont_jump_in) struct nesting *thisblock; rtx stack_level; tree cleanup_list; rtx first_insn; int dont_jump_in; { struct goto_fixup *f, *prev; /* F is the fixup we are considering; PREV is the previous one. */ /* We run this loop in two passes so that cleanups of exited blocks are run first, and blocks that are exited are marked so afterwards. */ for (prev = 0, f = goto_fixup_chain; f; prev = f, f = f->next) { /* Test for a fixup that is inactive because it is already handled. */ if (f->before_jump == 0) { /* Delete inactive fixup from the chain, if that is easy to do. */ if (prev != 0) prev->next = f->next; } /* Has this fixup's target label been defined? If so, we can finalize it. */ else if (PREV_INSN (f->target_rtl) != 0) {) && INSN_UID (first_insn) > INSN_UID (f->before_jump) && ! DECL_ERROR_ISSUED (f->target)) { error_with_decl (f->target, "label `%s' used before containing binding contour"); /* Prevent multiple errors for one label. */ DECL_ERROR_ISSUED (f->target) = 1; } /* We will expand the cleanups into a sequence of their own and then later on we will attach this new sequence to the insn stream just ahead of the actual jump insn. */ start_sequence (); /* Temporarily restore the lexical context where we will logically be inserting the fixup code. We do this for the sake of getting the debugging information right. */ (*lang_hooks.decls.pushlevel) (0); (*lang_hooks.decls.set_block) (f->context); /* Expand the cleanups for blocks this jump exits. */ if (f->cleanup_list_list) { tree lists; for (lists = f->cleanup_list_list; lists; lists = TREE_CHAIN (lists)) /* Marked elements correspond to blocks that have been closed. Do their cleanups. */ if (TREE_ADDRESSABLE (lists) && TREE_VALUE (lists) != 0) { expand_cleanups (TREE_VALUE (lists), NULL_TREE, 1, 1); /* Pop any pushes done in the cleanups, in case function is about to return. 
*/ do_pending_stack_adjust (); } } /* Restore stack level for the biggest contour that this jump jumps out of. */ if (f->stack_level && ! (f->target_rtl == return_label && ((TREE_CODE (TREE_TYPE (current_function_decl)) == FUNCTION_TYPE) && (TYPE_RETURNS_STACK_DEPRESSED (TREE_TYPE (current_function_decl)))))) emit_stack_restore (SAVE_BLOCK, f->stack_level, f->before_jump); /* Finish up the sequence containing the insns which implement the necessary cleanups, and then attach that whole sequence to the insn stream just ahead of the actual jump insn. Attaching it at that point insures that any cleanups which are in fact implicit C++ object destructions (which must be executed upon leaving the block) appear (to the debugger) to be taking place in an area of the generated code where the object(s) being destructed are still "in scope". */ cleanup_insns = get_insns (); (*lang_hooks.decls.poplevel) (1, 0, 0); end_sequence (); emit_insn_after (cleanup_insns, f->before_jump); f->before_jump = 0; } } /* For any still-undefined labels, do the cleanups for this block now. We must do this now since items in the cleanup list may go out of scope when the block ends. */ for (prev = 0, f = goto_fixup_chain; f; prev = f, f = f->next) if (f->before_jump != 0 && PREV_INSN (f->target_rtl) == 0 /* Label has still not appeared. If we are exiting a block with a stack level to restore, that started before the fixup, mark this stack level as needing restoration when the fixup is later finalized. */ && thisblock != 0 /* Note: if THISBLOCK == 0 and we have a label that hasn't appeared, it means the label is undefined. That's erroneous, but possible. */ && (thisblock->data.block.block_start_count <= f->block_start_count)) { tree lists = f->cleanup_list_list; rtx cleanup_insns; for (; lists; lists = TREE_CHAIN (lists)) /* If the following elt. corresponds to our containing block then the elt. must be for this block. 
*/ if (TREE_CHAIN (lists) == thisblock->data.block.outer_cleanups) { start_sequence (); (_after (cleanup_insns, f->before_jump); f->cleanup_list_list = TREE_CHAIN (lists); } if (stack_level) f->stack_level = stack_level; } } /* Return the number of times character C occurs in string S. */ static int n_occurrences (c, s) int c; const char *s; { int n = 0; while (*s) n += (*s++ == c); return n; } /* Generate RTL for an asm statement (explicit assembler code). ',': break; /* Whether or not a numeric constraint allows a register is decided by the matching constraint, and so there is no need to do anything special with them. We must handle them in the default case, so that we don't unnecessarily force operands to memory. */ case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': { in TREE_PURPOSE. CLOBBERS is a list of STRING_CST nodes each naming a hard register that is clobbered by this insn. Not all kinds of lvalue that may appear in OUTPUTS can be stored directly. Some elements of OUTPUTS may be replaced with trees representing temporary values. The caller should copy those temporary values to the originally specified lvalues. VOL nonzero means the insn is volatile; don't optimize it. */ void expand_asm_operands (string, outputs, inputs, clobbers, vol, filename, line) tree string, outputs, inputs, clobbers; int vol; tail; int i; /* Vector of RTX's of evaluated output operands. */ rtx *output_rtx = (rtx *) alloca (noutputs * sizeof (rtx)); int *inout_opnum = (int *) alloca (noutputs * sizeof (int)); rtx *real_output_rtx = (rtx *) alloca (noutputs * sizeof (rtx)); enum machine_mode *inout_mode = (enum machine_mode *) alloca (noutputs * sizeof (enum machine_mode)); (); /* If an output operand is not a decl or indirect ref and our constraint allows a register, make a temporary to act as an intermediate. Make the asm insn write into that, then our caller will copy it to the real output operand. Likewise for promoted variables. 
*/); emit_move_insn (memloc, op); op = memloc; } else if (GET_CODE (op) == MEM && MEM_VOLATILE_P (op)) { /* We won't recognize volatile memory as available a memory_operand at this point. Ignore it. */ } else if (queued_subexp_p (op)) ; else /* ??? Leave this only until we have experience with what happens in combine and elsewhere when constraints are not satisfied. */ warning ("asm operand %d probably doesn't match constraints", i +]; insn = emit_insn (gen_rtx_SET (VOIDmode, output_rtx[0], body)); } else if (noutputs == 0 && nclobbers == 0) { /* No output operands: put in a raw ASM_OPERANDS rtx. */ insn = emit_insn (body); } else { rtx obody = body; int num = noutputs; if (num == 0) num = 1; body = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (num + nclobbers)); /* For each output operand, store a SET. */ for (i = 0, tail = outputs; tail; tail = TREE_CHAIN (tail), i++) { XVECEXP (body, 0, i) = gen_rtx_SET (VOIDmode, output_rtx[i], gen_rtx_ASM_OPERANDS (GET_MODE (output_rtx[i]), TREE_STRING_POINTER (string), constraints[i], i, argvec, constraintvec, filename, line)); MEM_VOLATILE_P (SET_SRC (XVECEXP (body, 0, i))) = vol; } /* If there are no outputs (but there are some clobbers) store the bare ASM_OPERANDS into the PARALLEL. */ if (i == 0) XVECEXP (body, 0, i++) = obody; /* Store (clobber REG) for each clobbered register specified. */ for (tail = clobbers; tail; tail = TREE_CHAIN (tail)) { const char *regname = TREE_STRING_POINTER (TREE_VALUE (tail)); int j = decode_reg_name (regname); rtx clobbered_reg; if (j < 0) { if (j == -3) /* `cc', which is not a register */ continue; if (j == -4) /* `memory', don't cache memory across asm */ { XVECEXP (body, 0, i++) = gen_rtx_CLOBBER (VOIDmode, gen_rtx_MEM (BLKmode, gen_rtx_SCRATCH (VOIDmode))); continue; } /* Ignore unknown register, error already signaled. 
*/ continue; } /* Use QImode since that's guaranteed to clobber just one); } insn = emit_insn (body); } /* For any outputs that needed reloading into registers, spill them back to where they belong. */ for (i = 0; i < noutputs; ++i) if (real_output_rtx[i]) emit_move_insn (real_output_rtx[i], output_rtx[i]); free_temp_slots (); } /* (exp))) return 0; switch (TREE_CODE (exp)) { case PREINCREMENT_EXPR: case POSTINCREMENT_EXPR: case PREDECREMENT_EXPR: case POSTDECREMENT_EXPR: case MODIFY_EXPR: case INIT_EXPR: case TARGET_EXPR: case CALL_EXPR: case METHOD_CALL_EXPR: case RTL_EXPR: case TRY_CATCH_EXPR: case WITH_CLEANUP_EXPR: case EXIT_EXPR: return 0; case BIND_EXPR: /* For a binding, warn if no side effect within it. */ return warn_if_unused_value (TREE_OPERAND (exp, 1)); case SAVE_EXPR: return warn_if_unused_value (TREE_OPERAND (exp, 1)); case TRUTH_ORIF_EXPR: case TRUTH_ANDIF_EXPR: /* In && or ||, warn if 2nd operand has no side effect. */ return warn_if_unused_value (TREE_OPERAND (exp, 1)); case COMPOUND_EXPR: if (TREE_NO_UNUSED_WARNING (exp)) return 0; if (warn_if_unused_value (TREE_OPERAND (exp, 0))) return 1; /* Let people do `(foo (), 0)' without a warning. */ if (TREE_CONSTANT (TREE_OPERAND (exp, 1))) return 0; return warn_if_unused_value (TREE_OPERAND (exp, 1)); case NOP_EXPR: case CONVERT_EXPR: case NON_LVALUE_EXPR: /* Don't warn about conversions not explicit in the user's program. */ if (TREE_NO_UNUSED_WARNING (exp)) return 0; /* Assignment to a cast usually results in a cast of a modify. Don't complain about that. There can be an arbitrary number of casts before the modify, so we must loop until we find the first non-cast expression and then test to see if that is a modify. 
*/ { tree tem = TREE_OPERAND (exp, 0); while (TREE_CODE (tem) == CONVERT_EXPR || TREE_CODE (tem) == NOP_EXPR) tem = TREE_OPERAND (tem, 0); if (TREE_CODE (tem) == MODIFY_EXPR || TREE_CODE (tem) == INIT_EXPR || TREE_CODE (tem) == CALL_EXPR) return 0; } goto maybe_warn; case INDIRECT_REF: /* Don't warn about automatic dereferencing of references, since the user cannot control it. */ if (TREE_CODE (TREE_TYPE (TREE_OPERAND (exp, 0))) == REFERENCE_TYPE) return warn_if_unused_value (TREE_OPERAND (exp, 0)); /* Fall through. */ default: /* Referencing a volatile value is a side effect, so don't warn. */ if ( (); NO_DEFER_POP; expr_stmts_for_value++; return t; } /* Restore the previous state at the end of a statement that returns a value. Returns a tree node representing the statement's value and the insns to compute the value. The nodes of that expression have been freed by now, so we cannot use them. But we don't want to do that anyway; the expression has already been evaluated and now we just want to use the value. So generate a RTL_EXPR with the proper type and RTL value. If the last substatement was not an expression, return something with type `void'. */ tree expand_end_stmt_expr (t) tree t; { OK_DEFER_POP; if (! last_expr_value || ! last_expr_type) { last_expr_value = const0_rtx; last_expr_type = void_type_node; } else if (GET_CODE (last_expr_value) != REG && ! CONSTANT_P (last_expr_value)) /* Remove any possible QUEUED. */ last_expr_value = protect_from_queue (last_expr_value, 0); emit_queue (); TREE_TYPE (t) = last_expr_type; RTL_EXPR_RTL (t) = last_expr_value; RTL_EXPR_SEQUENCE (t) = get_insns (); rtl_expr_chain = tree_cons (NULL_TREE, t, rtl_expr_chain); end_sequence (); /* Don't consider deleting this expr or containing exprs at tree level. */ TREE_SIDE_EFFECTS (t) = 1; /* Propagate volatility of the actual RTL expr. 
*/ TREE_THIS_VOLATILE (t) = volatile_refs_p (last_expr_value); clear_last_expr (); expr_stmts_for_value--; return t; } /* Generate RTL for the start of an if-then. COND is the expression whose truth should be tested. If EXITFLAG is nonzero, this conditional is visible to `exit_something'. */ void expand_start_cond (cond, exitflag) tree cond; int exitflag; { struct nesting *thiscond = ALLOC_NESTING (); /* Make an entry on cond_stack for the cond we are entering. */ thiscond->desc = COND_NESTING; thiscond->next = cond_stack; thiscond->all = nesting_stack; thiscond->depth = ++nesting_depth; thiscond->data.cond.next_label = gen_label_rtx (); /* Before we encounter an `else', we don't need a separate exit label unless there are supposed to be exit statements to exit this conditional. */ thiscond->exit_label = exitflag ? gen_label_rtx () : 0; thiscond->data.cond.endif_label = thiscond->exit_label; cond_stack = thiscond; nesting_stack = thiscond; do_jump (cond, thiscond->data.cond.next_label, NULL_RTX); } /* Generate RTL between then-clause and the elseif-clause of an if-then-elseif-.... */ void expand_start_elseif (cond) tree cond; { if (cond_stack->data.cond.endif_label == 0) cond_stack->data.cond.endif_label = gen_label_rtx (); emit_jump (cond_stack->data.cond.endif_label); emit_label (cond_stack->data.cond.next_label); cond_stack->data.cond.next_label = gen_label_rtx (); do_jump (cond, cond_stack->data.cond.next_label, NULL_RTX); } /* Generate RTL between the then-clause and the else-clause of an if-then-else. */ void expand_start_else () { if (cond_stack->data.cond.endif_label == 0) cond_stack->data.cond.endif_label = gen_label_rtx (); emit_jump (cond_stack->data.cond.endif_label); emit_label (cond_stack->data.cond.next_label); cond_stack->data.cond.next_label = 0; /* No more _else or _elseif calls. */ } /* After calling expand_start_else, turn this "else" into an "else if" by providing another condition. 
*/ void expand_elseif (cond) tree cond; { cond_stack->data.cond.next_label = gen_label_rtx (); do_jump (cond, cond_stack->data.cond.next_label, NULL_RTX); } /* Generate RTL for the end of an if-then. Pop the record for it off of cond_stack. */ void expand_end_cond () { struct nesting *thiscond = cond_stack; do_pending_stack_adjust (); if (thiscond->data.cond.next_label) emit_label (thiscond->data.cond.next_label); if (thiscond->data.cond.endif_label) emit_label (thiscond->data.cond.endif_label); POPSTACK (cond_stack); clear_last_expr (); } /* Generate RTL for the start of a loop. EXIT_FLAG is nonzero if this loop should be exited by `exit_something'. This is a loop for which `expand_continue' will jump to the top of the loop. Make an entry on loop_stack to record the labels associated with this loop. */ struct nesting * expand_start_loop (exit_flag) int exit_flag; { struct nesting *thisloop = ALLOC_NESTING (); /* Make an entry on loop_stack for the loop we are entering. */ thisloop->desc = LOOP_NESTING; thisloop->next = loop_stack; thisloop->all = nesting_stack; thisloop->depth = ++nesting_depth; thisloop->data.loop.start_label = gen_label_rtx (); thisloop->data.loop.end_label = gen_label_rtx (); thisloop->data.loop.alt_end_label = 0; thisloop->data.loop.continue_label = thisloop->data.loop.start_label; thisloop->exit_label = exit_flag ? thisloop->data.loop.end_label : 0; loop_stack = thisloop; nesting_stack = thisloop; do_pending_stack_adjust (); emit_queue (); emit_note (NULL, NOTE_INSN_LOOP_BEG); emit_label (thisloop->data.loop.start_label); return thisloop; } /* Like expand_start_loop but for a loop where the continuation point (for expand_continue_loop) will be specified explicitly. 
*/ struct nesting * expand_start_loop_continue_elsewhere (exit_flag) int exit_flag; { struct nesting *thisloop = expand_start_loop (exit_flag); loop_stack->data.loop.continue_label = gen_label_rtx (); return thisloop; } /*, NOTE_INSN_LOOP_CONT); emit_label (loop_stack->data.loop.continue_label); } /* Finish a loop. Generate a jump back to the top and the loop-exit label. Pop the block off of loop_stack. */ void expand_end_loop () { rtx start_label = loop_stack->data.loop.start_label; rtx) { if (--eh_regions < 0) /* We've come to the end of an EH region, but never saw the beginning of that region. That means that an EH region begins before the top of the loop, and ends in the middle of it. The existence of such a situation violates a basic assumption in this code, since that would imply that even when EH_REGIONS is zero, we might move code out of an exception region. */ abort (); } /*; /* If the start label is preceded by a NOTE_INSN_LOOP_CONT note, then we want to move this note also. */ if (GET_CODE (PREV_INSN (start_move)) == NOTE && NOTE_LINE_NUMBER (PREV_INSN (start_move)) == NOTE_INSN_LOOP_CONT) start_move = PREV_INSN (start_move); emit (); } /* Generate a jump to the current loop's continue-point. This is usually the top of the loop, but may be specified explicitly elsewhere. If not currently inside a loop, return 0 and do nothing; caller will print an error message. */ int expand_continue_loop (whichloop) struct nesting *whichloop; { /* Emit information for branch prediction. */ rtx note; if (flag_guess_branch_prob) { note = emit_note (NULL, NOTE_INSN_PREDICTION); NOTE_PREDICTION (note) = NOTE_PREDICT (PRED_CONTINUE, IS_TAKEN); } clear_last_expr (); if (whichloop == 0) whichloop = loop_stack; if (whichloop == 0) return 0; expand_goto_internal (NULL_TREE, whichloop->data.loop.continue_label, NULL_RTX); return 1; } /* Generate a jump to exit the current loop. If not currently inside a loop, return 0 and do nothing; caller will print an error message. 
*/ int expand_exit_loop (whichloop) struct nesting *whichloop; { clear_last_expr (); if (whichloop == 0) whichloop = loop_stack; if (whichloop == 0) return 0; expand_goto_internal (NULL_TREE, whichloop->data.loop.end_label, NULL_RTX); return 1; } /* Generate a conditional jump to exit the current loop if COND evaluates to zero. If not currently inside a loop, return 0 and do nothing; caller will print an error message. */ int expand_exit_loop_if_false (whichloop, cond) struct nesting *whichloop; tree cond; { rtx label = gen_label_rtx (); rtx last_insn; clear_last_expr (); if (whichloop == 0) whichloop = loop_stack; if (whichloop == 0) return 0; /* In order to handle fixups, we actually create a conditional jump around an unconditional branch to exit the loop. If fixups are necessary, they go before the unconditional branch. */ do_jump (cond, NULL_RTX, label); last_insn = get_last_insn (); if (GET_CODE (last_insn) == CODE_LABEL) whichloop->data.loop.alt_end_label = last_insn; expand_goto_internal (NULL_TREE, whichloop->data.loop.end_label, NULL_RTX); emit_label (label); return 1; } /* if we should preserve sub-expressions as separate pseudos. We never do so if we aren't optimizing. We always do so if -fexpensive-optimizations. Otherwise, we only do so if we are in the "early" part of a loop. I.e., the loop may still be a small one. */ int preserve_subexpressions_p () { rtx insn; if (flag_expensive_optimizations) return 1; if (optimize == 0 || cfun == 0 || cfun->stmt == 0 || loop_stack == 0) return 0; insn = get_last_insn_anywhere (); return (insn && (INSN_UID (insn) - INSN_UID (loop_stack->data.loop.start_label) < n_non_fixed_regs * 3)); } /* Generate a jump to exit the current loop, conditional, binding contour or case statement. Not all such constructs are visible to this function, only those started with EXIT_FLAG nonzero. Individual languages use the EXIT_FLAG parameter to control which kinds of constructs you can exit this way. 
   If not currently inside anything that can be exited, return 0 and
   do nothing; caller will print an error message.  */

int
expand_exit_something ()
{
  struct nesting *n;
  clear_last_expr ();
  for (n = nesting_stack; n; n = n->all)
    if (n->exit_label != 0)
      {
	expand_goto_internal (NULL_TREE, n->exit_label, NULL_RTX);
	return 1;
      }

  return 0;
}

/* Generate RTL to return from the current function, with no value.
   (That is, we do not do anything about returning any value.)  */

void
expand_null_return ()
{
  /* ... */ ();
  expand_goto_internal (NULL_TREE, end_label, last_insn);
}

/* Generate RTL to evaluate the expression RETVAL and return it from
   the current function.  */

void
expand_return (retval)
     tree retval;
{
  /* If there are any cleanups to be performed, then they will be
     inserted following LAST_INSN.  It is desirable that the last_insn,
     for such purposes, should be the last insn before computing the
     return value.  Otherwise, cleanups which call functions can
     clobber the return value.  */
  /* ??? rms: I think that is erroneous, because in C++ it would run
     destructors on variables that might be used in the subsequent
     computation of the return value.  */
  rtx last_insn = 0;
  /* ... */
  last_insn = get_last_insn ();

  /* Distribute return down conditional expr if either of the sides
     may involve tail recursion (see test below).  This enhances the
     number of tail recursions we see.  Don't do this always since it
     can produce sub-optimal code in some cases and we distribute
     assignments into conditional expressions when it would help.
     */

  if (optimize
      && retval_rhs != 0
      && frame_offset == 0
      && TREE_CODE (retval_rhs) == COND_EXPR
      && (TREE_CODE (TREE_OPERAND (retval_rhs, 1)) == CALL_EXPR
	  || TREE_CODE (TREE_OPERAND (retval_rhs, 2)) == CALL_EXPR))
    {
      rtx label = gen_label_rtx ();
      tree expr;

      do_jump (TREE_OPERAND (retval_rhs, 0), label, NULL_RTX);
      start_cleanup_deferral ();
      expr = build (MODIFY_EXPR, TREE_TYPE (TREE_TYPE (current_function_decl)),
		    DECL_RESULT (current_function_decl),
		    TREE_OPERAND (retval_rhs, 1));
      TREE_SIDE_EFFECTS (expr) = 1;
      expand_return (expr);
      emit_label (label);

      expr = build (MODIFY_EXPR, TREE_TYPE (TREE_TYPE (current_function_decl)),
		    DECL_RESULT (current_function_decl),
		    TREE_OPERAND (retval_rhs, 2));
      TREE_SIDE_EFFECTS (expr) = 1;
      expand_return (expr);
      end_cleanup_deferral ();
      return;
    }

  result_rtl = DECL_RTL (DECL_RESULT (current_function_decl));

  /* If the result is an aggregate that is being returned in one (or
     more) registers, load the registers here.  The compiler currently
     can't handle copying a BLKmode value into registers.  We could put
     this code in a more general area (for use by everyone instead of
     just function call/return), but until this feature is generally
     usable it is kept here (and in expand_call).

     The value must go into a pseudo in case there are cleanups that
     will clobber the real return register.  */

  if (retval_rhs != 0
      && TYPE_MODE (TREE_TYPE (retval_rhs)) == BLKmode
      && GET /* ... */)),
		 BITS_PER_WORD);
      rtx *result_pseudos = (rtx *) alloca (sizeof (rtx) * n_regs);
      rtx result_reg, src = NULL_RTX, dst = NULL_RTX;
      rtx result_val = expand_expr (retval_rhs, NULL_RTX, VOIDmode, 0);
      enum machine_mode tmpmode, result_reg_mode;

      if (bytes == 0)
	{
	  expand_null_return ();
	  return;
	}

      /* Structures whose size is not a multiple of a word are aligned
	 to the least significant byte (to the right).  On a
	 BYTES_BIG_ENDIAN machine, this means we must skip the empty
	 high order bytes when calculating the bit offset.
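*/

/* The big-endian correction computed just below is plain arithmetic
   and is worth seeing with concrete numbers.  A sketch, assuming an
   illustrative 32-bit word (BITS_PER_WORD = 32, UNITS_PER_WORD = 4,
   BITS_PER_UNIT = 8); the names mirror the code but the values are
   assumptions, not a target definition.  */

```c
#include <assert.h>

enum { BITS_PER_UNIT = 8, UNITS_PER_WORD = 4,
       BITS_PER_WORD = UNITS_PER_WORD * BITS_PER_UNIT };

/* Structures whose size is not a multiple of a word are aligned to
   the least significant byte, so on a big-endian machine the empty
   high-order bits of the last word must be skipped.  */
static int big_endian_correction (int bytes)
{
  if (bytes % UNITS_PER_WORD)
    return BITS_PER_WORD - (bytes % UNITS_PER_WORD) * BITS_PER_UNIT;
  return 0;
}
```

/*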
	 */

      if (BYTES_BIG_ENDIAN && bytes % UNITS_PER_WORD)
	big_endian_correction = (BITS_PER_WORD
				 - ((bytes % UNITS_PER_WORD)
				    * BITS_PER_UNIT));

      /* Copy the structure BITSIZE bits at a time.  */
      for (bitpos = 0, xbitpos = big_endian_correction;
	   bitpos < bytes * BITS_PER_UNIT;
	   bitpos += bitsize, xbitpos += bitsize)
	{
	  /* We need a new destination pseudo each time xbitpos is on
	     a word boundary and when xbitpos == big_endian_correction
	     (the first time through).  */
	  if (xbitpos % BITS_PER_WORD == 0
	      || xbitpos == big_endian_correction)
	    {
	      /* Generate an appropriate register.  */
	      dst = gen_reg_rtx (word_mode);
	      result_pseudos[xbitpos / BITS_PER_WORD] = dst;

	      /* Clear the destination before we move anything into it.  */
	      emit_move_insn (dst, CONST0_RTX (GET_MODE (dst)));
	    }

	  /* We need a new source operand each time bitpos is on a word
	     boundary.  */
	  if (bitpos % BITS_PER_WORD == 0)
	    src = operand_subword_force (result_val,
					 bitpos / BITS_PER_WORD,
					 BLKmode);

	  /* Use bitpos for the source extraction (left justified) and
	     xbitpos for the destination store (right justified).  */
	  store_bit_field (dst, bitsize, xbitpos % BITS_PER_WORD, word_mode,
			   extract_bit_field (src, bitsize,
					      bitpos % BITS_PER_WORD, 1,
					      NULL_RTX, word_mode, word_mode,
					      BITS /* ... */, tmpmode);

      if (GET_MODE_SIZE (tmpmode) < GET_MODE_SIZE (word_mode))
	result_reg_mode = word_mode;
      else
	result_reg_mode = tmpmode;
      result_reg = gen_reg_rtx (result_reg_mode);

      emit_queue ();
      for (i = 0; i < n_regs; i++)
	emit_move_insn (operand_subword (result_reg, i, 0, result_reg_mode),
			result_pseudos[i]);

      if (tmpmode != result_reg_mode)
	result_reg = gen_lowpart (tmpmode, result_reg);

      expand_value_return (result_reg);
    }
  else if /* ... */);
      val = expand_expr (retval_rhs, val, GET_MODE (val), 0);
      val = force_not_mem (val);
      emit_queue ();
      /* Return the calculated value, doing cleanups first.  */
      expand_value_return (val);
    }
  else
    {
      /* No cleanups or no hard reg used; calculate value into hard
	 return reg.
	 */
      expand_expr (retval, const0_rtx, VOIDmode, 0);
      emit_queue ();
      expand_value_return (result_rtl);
    }
}

/* Return 1 if the end of the generated RTX is not a barrier.  This
   means code already compiled can drop through.  */

int
drop_through_at_end_p ()
{
  rtx insn = get_last_insn ();
  while (insn && GET_CODE (insn) == NOTE)
    insn = PREV_INSN (insn);
  return insn && GET_CODE (insn) != BARRIER;
}

/* ... */
      , DECL_ARGUMENTS (current_function_decl)))
    {
      if (tail_recursion_label == 0)
	{
	  tail_recursion_label = gen_label_rtx ();
	  emit_label_after (tail_recursion_label, tail_recursion_reentry);
	}
      emit_queue ();
      expand_goto_internal (NULL_TREE, tail_recursion_label, last_insn);
      emit_barrier ();
      return 1;
    }

  return 0;
}

/* Emit code to alter this function's formal parms for a tail-recursive
   call.  ACTUALS is a list of actual parameter expressions (chain of
   TREE_LISTs).  FORMALS is the chain of decls of formals.  Return 1 if
   this can be done; otherwise return 0 and do not emit any code.  */

static int
tail_recursion_args (actuals, formals)
     tree actuals, formals;
{
  tree a = actuals, f = formals;
  int i;
  rtx *argvec;

  /* Check that number and types of actuals are compatible with the
     formals.  This is not always true in valid C code.  Also check
     that no formal needs to be addressable and that all formals are
     scalars.  */

  /* Also count the args.  */

  for (a = actuals, f = formals, i = 0; a && f;
       a = TREE_CHAIN (a), f = TREE_CHAIN (f), i++)
    {
      if (TYPE_MAIN_VARIANT (TREE_TYPE (TREE_VALUE (a)))
	  != TYPE_MAIN_VARIANT (TREE_TYPE (f)))
	return 0;
      if (GET_CODE (DECL_RTL (f)) != REG || DECL_MODE (f) == BLKmode)
	return 0;
    }
  if (a != 0 || f != 0)
    return 0;

  /* Compute all the actuals.  */

  argvec = (rtx *) alloca (i * sizeof (rtx));

  for (a = actuals, i = 0; a; a = TREE_CHAIN (a), i++)
    argvec[i] = expand_expr (TREE_VALUE (a), NULL_RTX, VOIDmode, 0);

  /* Find which actual values refer to current values of previous
     formals.  Copy each of them now, before any formal is changed.
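*/

/* Why the copy matters: storing the formals in order without copying
   first would clobber values still needed by later actuals, the
   classic parallel-assignment hazard.  A sketch in plain C with
   invented names, not the compiler's own data structures.  */

```c
#include <assert.h>

/* Model the formal stores for a hypothetical tail call f (f1, f0 + f1):
   every actual is computed from the ORIGINAL formal values before any
   formal is overwritten, mirroring the copy_to_reg step above.  */
static void tail_call_assign (int *f0, int *f1)
{
  int a0 = *f1;        /* actual 0 */
  int a1 = *f0 + *f1;  /* actual 1, uses formal 0 before it is clobbered */
  *f0 = a0;            /* now the stores are safe */
  *f1 = a1;
}

static int demo_second_formal (void)
{
  int f0 = 3, f1 = 5;
  tail_call_assign (&f0, &f1);
  return f1;           /* computed from the original f0 = 3 and f1 = 5 */
}
```

/*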
     */

  for (a = actuals, i = 0; a; a = TREE_CHAIN (a), i++)
    {
      int copy = 0;
      int j;
      for (f = formals, j = 0; j < i; f = TREE_CHAIN (f), j++)
	if (reg_mentioned_p (DECL_RTL (f), argvec[i]))
	  {
	    copy = 1;
	    break;
	  }
      if (copy)
	argvec[i] = copy_to_reg (argvec[i]);
    }

  /* Store the values of the actuals into the formals.  */

  for (f = formals, a = actuals, i = 0; f;
       f = TREE_CHAIN (f), a = TREE_CHAIN (a), i++)
    {
      if (GET_MODE (DECL_RTL (f)) == GET_MODE (argvec[i]))
	emit_move_insn (DECL_RTL (f), argvec[i]);
      else
	{
	  /* ... */

  if (block_stack
      && !(block_stack->data.block.cleanups == NULL_TREE
	   && block_stack->data.block.outer_cleanups == NULL_TREE))
    thisblock->data.block.outer_cleanups
      = tree_cons (NULL_TREE, block_stack->data.block.cleanups,
		   block_stack->data.block.outer_cleanups);
  else
    thisblock->data.block.outer_cleanups = 0;
  thisblock->data.block.label_chain = 0;
  thisblock->data.block.innermost_stack_block = stack_block_stack;
  thisblock->data.block.first_insn = note;
  thisblock->data.block.block_start_count = ++current_block_start_count;
  thisblock->exit_label = exit_flag ? gen_label_rtx () : 0;
  block_stack = thisblock;
  nesting_stack = thisblock;

  /* Make a new level for allocating stack slots.  */
  push_temp_slots ();
}

/* Specify the scope of temporaries created by TARGET_EXPRs.  Similar
   to CLEANUP_POINT_EXPR, but handles cases when a series of calls to
   expand_expr are made.  After we end the region, we know that all
   space for all temporaries that were created by TARGET_EXPRs will be
   destroyed and their space freed for reuse.  */

void
expand_start_target_temps ()
{
  /* This is so that even if the result is preserved, the space
     allocated will be freed, as we know that it is no longer in
     use.  */
  push_temp_slots ();

  /* Start a new binding layer that will keep track of all cleanup
     actions to be performed.  */
  expand_start_bindings /* ... */;
}

/* Emit a handler label for a nonlocal goto handler.  Also emit code to
   store the handler label in SLOT before BEFORE_INSN.
   */

static rtx
expand_nl_handler_label (slot, before_insn)
     rtx slot, before_insn;
{
  rtx insns;
  rtx handler_label = gen_label_rtx ();

  /* Don't let cleanup_cfg delete the handler.  */
  LABEL_PRESERVE_P (handler_label) = 1;

  start_sequence ();
  emit_move_insn (slot, gen_rtx_LABEL_REF (Pmode, handler_label));
  insns = get_insns ();
  end_sequence ();
  emit_insn_before (insns, before_insn);

  emit_label (handler_label);

  return handler_label;
}

/* Emit code to restore vital registers at the beginning of a nonlocal
   goto handler.  */

static void
expand_nl_goto_receiver ()
{
  /* ... */
  ARG_POINTER_REGNUM != HARD_FRAME_POINTER_REGNUM
    /* ... */
    (virtual_incoming_args_rtx,
     copy_to_reg (get_arg_pointer_save_area (cfun)));
    }
  }
#endif

#ifdef HAVE_nonlocal_goto_receiver
  if (HAVE_nonlocal_goto_receiver)
    emit_insn (gen_nonlocal_goto_receiver ());
#endif
}

/* Make handlers for nonlocal gotos taking place in the function calls
   in block THISBLOCK.  */

static void
expand_nl_goto_receivers (thisblock)
     struct nesting *thisblock;
{
  tree link;
  rtx afterward = gen_label_rtx ();
  rtx insns, slot;
  rtx label_list;
  int any_invalid;

  /* Record the handler address in the stack slot for that purpose,
     during this block, saving and restoring the outer value.  */
  if (thisblock->next != 0)
    for (slot = nonlocal_goto_handler_slots; slot; slot = XEXP (slot, 1))
      {
	rtx save_receiver = gen_reg_rtx (Pmode);
	emit_move_insn (XEXP (slot, 0), save_receiver);

	start_sequence ();
	emit_move_insn (save_receiver, XEXP (slot, 0));
	insns = get_insns ();
	end_sequence ();
	emit_insn_before (insns, thisblock->data.block.first_insn);
      }

  /* Jump around the handlers; they run only when specially invoked.  */
  emit_jump (afterward);

  /* Make a separate handler for each label.  */
  link = nonlocal_labels;
  slot = nonlocal_goto_handler_slots;
  label_list = NULL_RTX;
  for (; link; link = TREE_CHAIN (link), slot = XEXP (slot, 1))
    /* Skip any labels we shouldn't be able to jump to from here, we
       generate one special handler for all of them below which just
       calls abort.  */
    if (!
	DECL_TOO_LATE (TREE_VALUE (link)))
      {
	rtx lab;
	lab = expand_nl_handler_label (XEXP (slot, 0),
				       thisblock->data.block.first_insn);
	label_list = gen_rtx_EXPR_LIST (VOIDmode, lab, label_list);

	expand_nl_goto_receiver ();

	/* Jump to the "real" nonlocal label.  */
	expand_goto (TREE_VALUE (link));
      }

  /* A second pass over all nonlocal labels; this time we handle those
     we should not be able to jump to at this point.  */
  link = nonlocal_labels;
  slot = nonlocal_goto_handler_slots;
  any_invalid = 0;
  for (; link; link = TREE_CHAIN (link), slot = XEXP (slot, 1))
    if (DECL_TOO_LATE (TREE_VALUE (link)))
      {
	rtx lab;
	lab = expand_nl_handler_label (XEXP (slot, 0),
				       thisblock->data.block.first_insn);
	label_list = gen_rtx_EXPR_LIST (VOIDmode, lab, label_list);
	any_invalid = 1;
      }

  if (any_invalid)
    {
      expand_nl_goto_receiver ();
      /* ... */
    }

/* Generate RTL code to terminate a binding contour.  VARS is the
   chain of VAR_DECL nodes for the variables bound in this contour.
   There may actually be other nodes in this chain, but any nodes
   other than VAR_DECLS are ignored.  MARK_ENDS is nonzero if we
   should put a note at the beginning and end of this binding contour.
   DONT_JUMP_IN is nonzero if it is not valid to jump into this
   contour.  (That is true automatically if the contour has a saved
   stack level.)  */

void
expand_end_bindings (vars, mark_ends, dont_jump_in)
     tree vars;
     int mark_ends;
     int dont_jump_in;
{
  /* ... */
  if (/* ... */ && nonlocal_labels
      /* Make handler for outermost block if there were any nonlocal
	 gotos to this function.  */
      && (thisblock->next == 0
	  ? current_function_has_nonlocal_label
	  /* Make handler for inner block if it has something special
	     to do when you jump out of it.  */
	  : (thisblock->data.block.cleanups != 0
	     || thisblock->data.block.stack_level != 0)))
    expand_nl_goto_receivers (thisblock);

  /* Don't allow jumping into a block that has a stack level.
     Cleanups are allowed, though.
     */
  if (dont_jump_in || thisblock->data.block.stack_level != 0)
    {
      struct label_chain *chain;

      /* Any labels in this block are no longer valid to go to.  Mark
	 them to cause an error message.  */
      for (chain = thisblock->data.block.label_chain; chain;
	   chain = chain->next)
	{
	  DECL_TOO_LATE (chain->label) = 1;
	  /* If any goto without a fixup came to this label, that must
	     be an error, because gotos without fixups come from
	     outside all saved stack-levels.  */
	  if (TREE_ADDRESSABLE (chain->label))
	    error_with_decl (chain->label,
			     "label `%s' used before containing binding contour");
	}
    }

  /* Restore stack level in effect before the block (only if
     variable-size objects allocated).  */
  /* Perform any cleanups associated with the block.  */

  if (thisblock->data.block.stack_level != 0
      || thisblock->data.block.cleanups != 0)
    {
      /* ... */

      /* Do the cleanups.  */
      expand_cleanups (thisblock->data.block.cleanups, NULL_TREE, 0,
		       reachable);
      if (reachable)
	do_pending_stack_adjust ();

      expr_stmts_for_value = old_expr_stmts_for_value;
      last_expr_value = old_last_expr_value;
      last_expr_type = old_last_expr_type;

      /* Restore the stack level.  */

      if (reachable && thisblock->data.block.stack_level != 0)
	{
	  emit_stack_restore (thisblock->next ? SAVE_BLOCK : SAVE_FUNCTION,
			      thisblock->data.block.stack_level, NULL_RTX);
	  if (nonlocal_goto_handler_slots != 0)
	    emit_stack_save (SAVE_NONLOCAL, &nonlocal_goto_stack_level,
			     NULL_RTX);
	}

      /* Any gotos out of this block must also do these things.  Also
	 report any gotos with fixups that came to labels in this
	 level.  */
      fixup_gotos (thisblock,
		   thisblock->data.block.stack_level,
		   thisblock->data.block.cleanups,
		   thisblock->data.block.first_insn,
		   dont_jump_in);
    }

  /* Mark the beginning and end of the scope if requested.  We do this
     now, after running cleanups on the variables just going out of
     scope, so they are in scope for their cleanups.  */

  if (mark_ends)
    {
      /* ... */
      ), NULL_RTX, VOIDmode, 0);
  free_temp_slots ();

  /* Allocate space on the stack for the variable.
     Note that DECL_ALIGN says how the variable is to be aligned and we
     cannot use it to conclude anything about the alignment of the
     size.  */

  address = allocate_dynamic_stack_space (size, NULL_RTX,
					  TYPE_ALIGN (TREE_TYPE (decl)));

  /* Reference the variable indirect through that rtx.  */
  /* ... */

  /* ... */ || TREE_STATIC (decl))
    return;

  /* Compute and store the initial value now.  */

  if (DECL_INITIAL (decl) == error_mark_node)
    {
      enum tree_code code = TREE_CODE (TREE_TYPE (decl));

      if (code == INTEGER_TYPE || code == REAL_TYPE || code == ENUMERAL_TYPE
	  || code == POINTER_TYPE || code == REFERENCE_TYPE)
	expand_assignment (decl,
			   convert (TREE_TYPE (decl), integer_zero_node),
			   0, 0);
      emit_queue ();
    }
  else if (DECL_INITIAL (decl)
	   && TREE_CODE (DECL_INITIAL (decl)) != TREE_LIST)
    {
      emit_line_note (DECL_SOURCE_FILE (decl), DECL_SOURCE_LINE (decl));
      expand_assignment (decl, DECL_INITIAL (decl), 0, 0);
      emit_queue ();
    }

  /* Don't let the initialization count as "using" the variable.  */
  TREE_USED (decl) = was_used;

  /* Free any temporaries we made while initializing the decl.  */
  preserve_temp_slots (NULL_RTX);
  free_temp_slots ();
}

/* CLEANUP is an expression to be executed at exit from this binding
   contour; for example, in C++, it might call the destructor for this
   variable.  We wrap CLEANUP in an UNSAVE_EXPR node, so that we can
   expand the CLEANUP multiple times, and have the correct semantics.
   This happens in exception handling, for gotos, returns, breaks that
   leave the current scope.

   If CLEANUP is nonzero and DECL is zero, we record a cleanup that is
   not associated with any particular variable.  */

int
expand_decl_cleanup (decl, cleanup)
     tree decl, cleanup;
{
  struct nesting *thisblock;

  /* Error if we are not in any block.  */
  if (cfun == 0 || block_stack == 0)
    return 0;

  thisblock = block_stack;

  /* Record the cleanup if there is one.
     */

  if (cleanup != 0)
    {
      tree t;
      rtx seq;
      tree *cleanups = &thisblock->data.block.cleanups;
      int cond_context = conditional_context ();

      if (cond_context)
	{
	  rtx flag = gen_reg_rtx (word_mode);
	  rtx set_flag_0;
	  tree cond;

	  start_sequence ();
	  emit_move_insn (flag, const0_rtx);
	  set_flag_0 = get_insns ();
	  end_sequence ();

	  thisblock->data.block.last_unconditional_cleanup
	    = emit_insn /* ... */;
	}
      /* ... */
}

/* DECL is an anonymous union.  CLEANUP is a cleanup for DECL.
   DECL_ELTS is the list of elements that belong to DECL's type.  In
   each, the TREE_VALUE is a VAR_DECL, and the TREE_PURPOSE a
   cleanup.  */

void
expand_anon_union_decl (decl, cleanup, decl_elts)
     tree decl, cleanup, decl_elts;
{
  struct nesting *thisblock = /* ... */_ALIGN (decl);

      /* If the element has BLKmode and the union doesn't, the union is
	 aligned such that the element doesn't need to have BLKmode, so
	 change the element's mode to the appropriate one for its
	 size.  */
      if (mode == BLKmode && DECL_MODE (decl) != BLKmode)
	DECL_MODE (decl_elt) = mode
	  = mode_for_size_tree /* ... */;
    }
}

/* Expand a list of cleanups LIST.  Elements may be expressions or may
   be nested lists.

   If DONT_DO is nonnull, then any list-element whose TREE_PURPOSE
   matches DONT_DO is omitted.  This is sometimes used to avoid a
   cleanup associated with a value that is being returned out of the
   scope.

   If IN_FIXUP is nonzero, we are generating this cleanup for a fixup
   goto and handle protection regions specially in that case.

   If REACHABLE, we emit code, otherwise just inform the exception
   handling code about this finalization.  */

static void
expand_cleanups (list, dont_do, in_fixup, reachable)
     tree list;
     tree dont_do;
     int in_fixup;
     int reachable;
{
  tree tail;
  for (tail = list; tail; tail = TREE_CHAIN (tail))
    if (dont_do == 0 || TREE_PURPOSE (tail) != dont_do)
      {
	if (TREE_CODE (TREE_VALUE (tail)) == TREE_LIST)
	  expand_cleanups (TREE_VALUE (tail), dont_do, in_fixup, reachable);
	else
	  {
	    if (!
		in_fixup && using_eh_for_cleanups_p)
	      expand_eh_region_end_cleanup (TREE_VALUE (tail));

	    if (reachable && !CLEANUP_EH_ONLY (tail))
	      {
		/* Cleanups may be run multiple times.  For example,
		   when exiting a binding contour, we expand the
		   cleanups associated with that contour.  When a goto
		   within that binding contour has a target outside
		   that contour, it will expand all cleanups from its
		   scope to the target.  Though the cleanups are
		   expanded multiple times, the control paths are
		   non-overlapping so the cleanups will not be executed
		   twice.  */

		/* We may need to protect ...  */
		/* ... */
		free_temp_slots ();
	      }
	  }
      }
}

/* Mark the context we are emitting RTL for as a conditional context,
   so that any cleanup actions we register with expand_decl_init will
   be properly conditionalized when those cleanup actions are later
   performed.  Must be called before any expression (tree) is expanded
   that is within a conditional context.  */

void
start_cleanup_deferral ()
{
  /* block_stack can be NULL if we are inside the parameter list.  It
     is OK to do nothing, because cleanups aren't possible here.  */
  if (block_stack)
    ++block_stack->data.block.conditional_code;
}

/* Mark the end of a conditional region of code.  Because cleanup
   deferrals may be nested, we may still be in a conditional region
   after we end the currently deferred cleanups; only after we end all
   deferred cleanups are we back in unconditional code.  */

void
end_cleanup_deferral ()
{
  /* block_stack can be NULL if we are inside the parameter list.  It
     is OK to do nothing, because cleanups aren't possible here.  */
  if (block_stack)
    --block_stack->data.block.conditional_code;
}

/* Move all cleanups from the current block_stack to the containing
   block_stack, where they are assumed to have been created.  If
   anything can cause a temporary to be created, but not expanded for
   more than one level of block_stacks, then this code will have to
   change.
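*/

/* The flag protocol used for cleanups registered in a conditional
   context (the cond_context branch of expand_decl_cleanup above) can
   be sketched in C.  This is an assumed shape with invented names:
   FLAG starts at 0, is set on the path that actually creates the
   guarded object, and the cleanup runs at scope exit only if FLAG is
   nonzero.  */

```c
#include <assert.h>

static int run_scope (int take_branch)
{
  int flag = 0;        /* emit_move_insn (flag, const0_rtx) */
  int cleanups_run = 0;

  if (take_branch)
    flag = 1;          /* the conditional path that creates the object */

  /* Scope exit: the registered cleanup, conditionalized on FLAG.  */
  if (flag)
    cleanups_run++;

  return cleanups_run;
}
```

/*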
   */

void
move_cleanups_up ()
{
  struct nesting *block = block_stack;
  struct nesting *outer = block->next;

  outer->data.block.cleanups
    = chainon (block->data.block.cleanups, outer->data.block.cleanups);
  block->data.block.cleanups = 0;
}

tree
last_cleanup_this_contour ()
{
  if (block_stack == 0)
    return 0;

  return block_stack->data.block.cleanups;
}

/* Return 1 if there are any pending cleanups at this point.  If
   THIS_CONTOUR is nonzero, check the current contour as well.
   Otherwise, look only at the contours that enclose this one.  */

int
any_pending_cleanups (this_contour)
     int this_contour;
{
  struct nesting *block;

  if (cfun == NULL || cfun->stmt == NULL || block_stack == 0)
    return 0;

  if (this_contour && block_stack->data.block.cleanups != NULL)
    return 1;

  if (block_stack->data.block.cleanups == 0
      && block_stack->data.block.outer_cleanups == 0)
    return 0;

  for (block = block_stack->next; block; block = block->next)
    if (block->data.block.cleanups != 0)
      return 1;

  return 0;
}

/* Enter a case (Pascal) or switch (C) statement.  Push a block onto
   case_stack and nesting_stack to accumulate the case-labels that are
   seen and to record the labels generated for the statement.

   EXIT_FLAG is nonzero if `exit_something' should exit this case
   stmt.  Otherwise, this construct is transparent for
   `exit_something'.

   EXPR is the index-expression to be dispatched on.  TYPE is its
   nominal type.  We could simply convert EXPR to this type, but
   instead we take short cuts.  */

void
expand_start_case (exit_flag, expr, type, printname)
     int exit_flag;
     tree expr;
     tree type;
     const char *printname;
{
  struct nesting *thiscase = ALLOC_NESTING ();

  /* Make an entry on case_stack for the case we are entering.  */

  thiscase->desc = CASE_NESTING;
  thiscase->next = case_stack;
  thiscase->all = nesting_stack;
  thiscase->depth = ++nesting_depth;
  thiscase->exit_label = exit_flag ?
    gen_label_rtx () : 0;
  thiscase->data.case_stmt.case_list = 0;
  thiscase->data.case_stmt.index_expr = expr;
  thiscase->data.case_stmt.nominal_type = type;
  thiscase->data.case_stmt.default_label = 0;
  thiscase->data.case_stmt.printname = printname;
  thiscase->data.case_stmt.line_number_status = force_line_numbers ();
  case_stack = thiscase;
  nesting_stack = thiscase;

  do_pending_stack_adjust ();

  /* Make sure case_stmt.start points to something that won't need any
     transformation before expand_end_case.  */
  if (GET_CODE (get_last_insn ()) != NOTE)
    emit_note (NULL, NOTE_INSN_DELETED);

  thiscase->data.case_stmt.start = get_last_insn ();

  start_cleanup_deferral ();
}

/* Start a "dummy case statement" within which case labels are invalid
   and are not connected to any larger real case statement.  This can
   be used if you don't want to let a case statement jump into the
   middle of certain kinds of constructs.  */

void
expand_start_case_dummy ()
{
  struct nesting *thiscase = ALLOC_NESTING ();

  /* Make an entry on case_stack for the dummy.  */

  thiscase->desc = CASE_NESTING;
  thiscase->next = case_stack;
  thiscase->all = nesting_stack;
  thiscase->depth = ++nesting_depth;
  thiscase->exit_label = 0;
  thiscase->data.case_stmt.case_list = 0;
  thiscase->data.case_stmt.start = 0;
  thiscase->data.case_stmt.nominal_type = 0;
  thiscase->data.case_stmt.default_label = 0;
  case_stack = thiscase;
  nesting_stack = thiscase;
  start_cleanup_deferral ();
}

/* End a dummy case statement.  */

void
expand_end_case_dummy ()
{
  end_cleanup_deferral ();
  POPSTACK (case_stack);
}

/* Return the data type of the index-expression of the innermost case
   statement, or null if none.  */

tree
case_index_expr_type ()
{
  if (case_stack)
    return TREE_TYPE (case_stack->data.case_stmt.index_expr);
  return 0;
}

static void
check_seenlabel ()
{
  /* If this is the first label, warn if any insns have been emitted.
     */
  if (case_stack->data.case_stmt.line_number_status >= 0)
    {
      rtx insn;

      restore_line_number_status
	(case_stack->data.case_stmt.line_number_status);
      case_stack->data.case_stmt.line_number_status = -1;

      for (insn = case_stack->data.case_stmt.start;
	   insn;
	   insn = NEXT_INSN (insn))
	{
	  if (GET_CODE (insn) == CODE_LABEL)
	    break;
	  if (GET_CODE (insn) != NOTE
	      && (GET_CODE (insn) != INSN
		  || GET_CODE (PATTERN (insn)) != USE))
	    {
	      do
		insn = PREV_INSN (insn);
	      while (insn && (GET_CODE (insn) != NOTE
			      || NOTE_LINE_NUMBER (insn) < 0));

	      /* If insn is zero, then there must have been a syntax
		 error.  */
	      if (insn)
		warning_with_file_and_line
		  (NOTE_SOURCE_FILE (insn), NOTE_LINE_NUMBER (insn),
		   "unreachable code at beginning of %s",
		   case_stack->data.case_stmt.printname);
	      break;
	    }
	}
    }
}

/* Accumulate one case or default label inside a case or switch
   statement.  VALUE is the value of the case (a null pointer, for a
   default label).  The function CONVERTER, when applied to arguments
   T and V, converts the value V to the type T.

   If not currently inside a case or switch statement, return 1 and do
   nothing.  The caller will print a language-specific error message.
   If VALUE is a duplicate or overlaps, return 2 and do nothing except
   store the (first) duplicate node in *DUPLICATE.  If VALUE is out of
   range, return 3 and do nothing.  If we are jumping into the scope
   of a cleanup or var-sized array, return 5.  Return 0 on success.

   Extended to handle range statements.  */

int
pushcase (value, converter, label, duplicate)
     tree value;
     /* ... */

  /* Convert VALUE to the type in which the comparisons are nominally
     done.  */
  if (value != 0)
    value = (*converter) (nominal_type, value);

  check_seenlabel ();

  /* Fail if this value is out of range for the actual type of the
     index (which may be narrower than NOMINAL_TYPE).  */
  if (value != 0
      && (TREE_CONSTANT_OVERFLOW (value)
	  || !
	      int_fits_type_p (value, index_type)))
    return 3;

  return add_case_node (value, value, label, duplicate);
}

/* Like pushcase but this case applies to all values between VALUE1
   and VALUE2 (inclusive).  If VALUE1 is NULL, the range starts at the
   lowest value of the index type and ends at VALUE2.  If VALUE2 is
   NULL, the range starts at VALUE1 and ends at the highest value of
   the index type.  If both are NULL, this case applies to all
   values.

   The return value is the same as that of pushcase but there is one
   additional error code: 4 means the specified range was empty.  */

int
pushcase_range (value1, value2, converter, label, duplicate)
     tree value1, value2;
     /* ... */

  check_seenlabel ();

  /* Convert VALUEs to type in which the comparisons are nominally
     done and replace any unspecified value with the corresponding
     bound.  */
  if (value1 == 0)
    value1 = TYPE_MIN_VALUE (index_type);
  if (value2 == 0)
    value2 = TYPE_MAX_VALUE (index_type);

  /* Fail if the range is empty.  Do this before any conversion since
     we want to allow out-of-range empty ranges.  */
  if /* ... */);

  value2 = (*converter) (nominal_type, value2);

  /* Fail if these values are out of range.  */
  if (TREE_CONSTANT_OVERFLOW (value1)
      || ! int_fits_type_p (value1, index_type))
    return 3;

  if (TREE_CONSTANT_OVERFLOW (value2)
      || ! int_fits_type_p (value2, index_type))
    return 3;

  return add_case_node (value1, value2, label, duplicate);
}

/* Do the actual insertion of a case label for pushcase and
   pushcase_range into case_stack->data.case_stmt.case_list.  Use an
   AVL tree to avoid slowdown for large switch statements.  */

/* ... */
    }

  q = &case_stack->data.case_stmt.case_list;
  p = *q;

  while ((r = *q))
    {
      p = r;

      /* Keep going past elements distinctly greater than HIGH.  */
      if (tree_int_cst_lt (high, p->low))
	q = &p->left;

      /* or distinctly less than LOW.  */
      else if (tree_int_cst_lt (p->high, low))
	q = &p->right;

      else
	{
	  /* We have an overlap; this is an error.  */
	  *duplicate = p->code_label;
	  return 2;
	}
    }

  /* Add this label to the chain, and succeed.
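*/

/* The descent test above reduces to simple interval arithmetic, shown
   here on plain integers (the real code compares INTEGER_CST trees
   with tree_int_cst_lt).  A new range [low, high] goes left of a node
   when it is distinctly below it, right when distinctly above, and
   otherwise overlaps, which is the duplicate-label error (return
   code 2).  */

```c
#include <assert.h>

enum dir { GO_LEFT, GO_RIGHT, OVERLAP };

static enum dir classify (long low, long high,
                          long node_low, long node_high)
{
  if (high < node_low)   /* tree_int_cst_lt (high, p->low)  */
    return GO_LEFT;
  if (node_high < low)   /* tree_int_cst_lt (p->high, low)  */
    return GO_RIGHT;
  return OVERLAP;        /* neither: the ranges intersect    */
}
```

/*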
     */

  r = (struct case_node *) ggc_alloc (sizeof (struct case_node));
  r->low = low;

  /* If the bounds are equal, turn this into the one-value case.  */
  if (tree_int_cst_equal (low, high))
    r->high = r->low;
  else
    r->high = high;

  r->code_label = label;
  expand_label (label);

  *q = r;
  r->parent = p;
  r->left = 0;
  r->right = 0;
  r->balance = 0;

  while (p)
    {
      struct case_node *s;

      if (r == p->left)
	{
	  int b;

	  if (! (b = p->balance))
	    /* Growth propagation from left side.  */
	    p->balance = -1;
	  else if (b < 0)
	    {
	      if (r->balance < 0)
		{
		  /* R-Rotation */
		  if ((p->left = s = r->right))
		    s->parent = p;

		  r->right = p;
		  p->balance = 0;
		  r->balance = 0;
		  s = p->parent;
		  p->parent = r;

		  if ((r->parent = s))
		    {
		      if (s->left == p)
			s->left = r;
		      else
			s->right = r;
		    }
		  else
		    case_stack->data.case_stmt.case_list = r;
		}
	      else
		/* r->balance == +1 */
		{
		  /* LR-Rotation */
		  int b2;
		  struct case_node *t = r->right;

		  if ((p->left = s = t->right))
		    s->parent = p;

		  t->right = p;
		  if ((r->right = s = t->left))
		    s->parent = r;

		  t->left = r;
		  b = t->balance;
		  b2 = b < 0;
		  p->balance = b2;
		  b2 = -b2 - b;
		  r->balance = b2;
		  t->balance = 0;
		  s = p->parent;
		  p->parent = t;
		  r->parent = t;

		  if ((t->parent = s))
		    {
		      if (s->left == p)
			s->left = t;
		      else
			s->right = t;
		    }
		  else
		    case_stack->data.case_stmt.case_list = t;
		}
	      break;
	    }
	  else
	    {
	      /* p->balance == +1; growth of left side balances the
		 node.  */
	      p->balance = 0;
	      break;
	    }
	}
      else
	/* r == p->right */
	{
	  int b;

	  if (! (b = p->balance))
	    /* Growth propagation from right side.
	       */
	    p->balance++;
	  else if (b > 0)
	    {
	      if (r->balance > 0)
		{
		  /* L-Rotation */
		  if ((p->right = s = r->left))
		    s->parent = p;

		  r->left = p;
		  p->balance = 0;
		  r->balance = 0;
		  s = p->parent;
		  p->parent = r;

		  if ((r->parent = s))
		    {
		      if (s->left == p)
			s->left = r;
		      else
			s->right = r;
		    }
		  else
		    case_stack->data.case_stmt.case_list = r;
		}
	      else
		/* r->balance == -1 */
		{
		  /* RL-Rotation */
		  int b2;
		  struct case_node *t = r->left;

		  if ((p->right = s = t->left))
		    s->parent = p;

		  t->left = p;
		  if ((r->left = s = t->right))
		    s->parent = r;

		  t->right = r;
		  b = t->balance;
		  b2 = b < 0;
		  r->balance = b2;
		  b2 = -b2 - b;
		  p->balance = b2;
		  t->balance = 0;
		  s = p->parent;
		  p->parent = t;
		  r->parent = t;

		  if ((t->parent = s))
		    {
		      if (s->left == p)
			s->left = t;
		      else
			s->right = t;
		    }
		  else
		    case_stack->data.case_stmt.case_list = t;
		}
	      break;
	    }
	  else
	    {
	      /* p->balance == -1; growth of right side balances the
		 node.  */
	      p->balance = 0;
	      break;
	    }
	}

      r = p;
      p = p->parent;
    }

  return 0;
}

/* Returns the number of possible values of TYPE.  Returns -1 if the
   number is unknown ...  */
/* ... */
	  ++;
	}
    }
  return count;
}

#define BITARRAY_TEST(ARRAY, INDEX) \
  ((ARRAY)[(unsigned) (INDEX) / HOST_BITS_PER_CHAR] \
   & (1 << ((unsigned) (INDEX) % HOST_BITS_PER_CHAR)))
#define BITARRAY_SET(ARRAY, INDEX) \
  ((ARRAY)[(unsigned) (INDEX) / HOST_BITS_PER_CHAR] \
   |= 1 << ((unsigned) (INDEX) % HOST_BITS_PER_CHAR))

/* Set the elements of the bitstring CASES_SEEN (which has length
   COUNT), with the case values we have seen, assuming the case
   expression has the given TYPE.  SPARSENESS is as determined by
   all_cases_count.

   The time needed is proportional to COUNT, unless SPARSENESS is 2,
   in which case quadratic time is needed.  */

void
mark_seen_cases (type, cases_seen, count, sparseness)
     tree type;
     unsigned char *cases_seen;
     /* ... */
  HOST_WIDE_INT xlo;

  /* This less efficient loop is only needed to handle duplicate case
     values (multiple enum constants with the same value).
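*/

/* The BITARRAY_TEST / BITARRAY_SET macros above are a compact bitmap
   over unsigned char.  A self-contained sketch (HOST_BITS_PER_CHAR is
   taken to be CHAR_BIT here, which is what GCC's configuration
   normally yields) marking a few "seen" case indices, as
   mark_seen_cases fills CASES_SEEN.  */

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

#define HOST_BITS_PER_CHAR CHAR_BIT

#define BITARRAY_TEST(ARRAY, INDEX) \
  ((ARRAY)[(unsigned) (INDEX) / HOST_BITS_PER_CHAR] \
   & (1 << ((unsigned) (INDEX) % HOST_BITS_PER_CHAR)))
#define BITARRAY_SET(ARRAY, INDEX) \
  ((ARRAY)[(unsigned) (INDEX) / HOST_BITS_PER_CHAR] \
   |= 1 << ((unsigned) (INDEX) % HOST_BITS_PER_CHAR))

/* Report whether case index IDX was marked seen; indices 0, 9 and 31
   are set once on first use.  */
static int seen (int idx)
{
  static unsigned char cases_seen[4];   /* room for 32 selector values */
  static int initialized;
  if (!initialized)
    {
      memset (cases_seen, 0, sizeof cases_seen);
      BITARRAY_SET (cases_seen, 0);
      BITARRAY_SET (cases_seen, 9);
      BITARRAY_SET (cases_seen, 31);
      initialized = 1;
    }
  return BITARRAY_TEST (cases_seen, idx) != 0;
}
```

/*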
     */
  TREE_TYPE (val) = TREE_TYPE (root->low);

  for (t = TYPE_VALUES (type), xlo = 0;
       t != NULL_TREE;
       t = TREE_CHAIN (t), xlo++)
    {
      TREE_INT_CST_LOW (val) = TREE_INT_CST_LOW (TREE_VALUE (t));
      TREE_INT_CST_HIGH (val) = TREE_INT_CST_HIGH (TREE_VALUE (t));
      n = root;
      do
	{
	  /* Keep going past elements distinctly greater than VAL.  */
	  if (tree_int_cst_lt (val, n->low))
	    n = n->left;

	  /* or distinctly less than VAL.  */
	  else if (tree_int_cst_lt (n->high, val))
	    n = n->right;

	  else
	    {
	      /* We have found a matching range.  */
	      BITARRAY_SET (cases_seen, xlo);
	      break;
	    }
	}
      while (n);
    }
}
  else
    {
      if (root->left)
	case_stack->data.case_stmt.case_list = root
	  = case_tree2list (root, 0);

      for (n = root; n; n = n->right)
	{
	  TREE_INT_CST_LOW (val) = TREE_INT_CST_LOW (n->low);
	  TREE_INT_CST_HIGH (val) = TREE_INT_CST_HIGH (n->low);
	  while (! tree_int_cst_lt (n->high, val))
	    {
	      /* Calculate (into xlo) the "offset" of the integer
		 (val).  The element with lowest value has offset 0,
		 the next smallest element has offset 1, etc.  */

	      unsigned HOST_WIDE_INT xlo;
	      HOST_WIDE_INT xhi;
	      tree t;

	      if (sparseness && TYPE_VALUES (type) != NULL_TREE)
		{
		  /* The TYPE_VALUES will be in increasing order, so
		     start searching where we last ended.  */
		  t = next_node_to_try;
		  xlo = next_node_offset;
		  xhi = 0;
		  for (;;)
		    {
		      if (t == NULL_TREE)
			{
			  t = TYPE_VALUES (type);
			  xlo = 0;
			}
		      if (tree_int_cst_equal (val, TREE_VALUE (t)))
			{
			  next_node_to_try = TREE_CHAIN (t);
			  next_node_offset = xlo + 1;
			  break;
			}
		      xlo++;
		      t = TREE_CHAIN (t);
		      if (t == next_node_to_try)
			{
			  xlo = -1;
			  break;
			}
		    }
		}
	      else
		{
		  t = TYPE_MIN_VALUE (type);
		  if (t)
		    neg_double (TREE_INT_CST_LOW (t), TREE_INT_CST_HIGH (t),
				&xlo, &xhi);
		  else
		    xlo = xhi = 0;
		  add_double (xlo, xhi,
			      TREE_INT_CST_LOW (val),
			      TREE_INT_CST_HIGH (val),
			      &xlo, &xhi);
		}

	      if (xhi == 0 && xlo < /* ... */

  /* True iff the selector type is a numbered set mode.  */
  int sparseness = 0;

  /* The number of possible selector values.  */
  HOST_WIDE_INT size;

  /* For each possible selector value, a one iff it has been matched
     by a case value alternative.  */
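/* The enumeration-coverage diagnostics that follow ("enumeration
   value not handled in switch" and "case value not in enumerated
   type") reduce to two set-membership scans.  A sketch on plain long
   arrays with invented names; the real code walks TYPE_VALUES chains
   and the case_node list.  */

```c
#include <assert.h>

/* Count enumerators with no matching case value.  */
static int count_unhandled (const long *enum_vals, int n_enum,
                            const long *case_vals, int n_case)
{
  int i, j, missing = 0;
  for (i = 0; i < n_enum; i++)
    {
      for (j = 0; j < n_case; j++)
        if (case_vals[j] == enum_vals[i])
          break;
      if (j == n_case)
        missing++;      /* "enumeration value `%s' not handled in switch" */
    }
  return missing;
}

/* Count case values that are not enumerators of the type.  */
static int count_bogus_cases (const long *enum_vals, int n_enum,
                              const long *case_vals, int n_case)
{
  int i, j, bogus = 0;
  for (j = 0; j < n_case; j++)
    {
      for (i = 0; i < n_enum; i++)
        if (case_vals[j] == enum_vals[i])
          break;
      if (i == n_enum)
        bogus++;        /* "case value `%ld' not in enumerated type" */
    }
  return bogus;
}

/* enum { A = 0, B = 1, C = 2 } switched with cases 1 and 7.  */
static int demo_unhandled (void)
{
  static const long ev[] = { 0, 1, 2 };
  static const long cv[] = { 1, 7 };
  return count_unhandled (ev, 3, cv, 2);
}

static int demo_bogus (void)
{
  static const long ev[] = { 0, 1, 2 };
  static const long cv[] = { 1, 7 };
  return count_bogus_cases (ev, 3, cv, 2);
}
```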
*/ unsigned char *cases_seen; /* The allocated size of cases_seen, in chars. */ an ENUMERAL_TYPE whose values do not increase monotonically, O(N*log(N)) time may be needed. */ mark_seen_cases (type, cases_seen, size, sparseness); for (i = 0; v != NULL_TREE && i < size; i++, v = TREE_CHAIN (v)) if (BITARRAY_TEST (cases_seen, i) == 0) warning ("enumeration value `%s' not handled in switch", IDENTIFIER_POINTER (TREE_PURPOSE (v))); free (cases_seen); } /* Now we go the other way around; we warn if there are case expressions that don't correspond to enumerators. This can occur since C and C++ don't enforce type-checking of assignments to enumeration variables. */ if (case_stack->data.case_stmt.case_list && case_stack->data.case_stmt.case_list->left) case_stack->data.case_stmt.case_list = case_tree2list (case_stack->data.case_stmt.case_list, 0); for (n = case_stack->data.case_stmt.case_list; n; n = n->right) { for (chain = TYPE_VALUES (type); chain && !tree_int_cst_equal (n->low, TREE_VALUE (chain)); chain = TREE_CHAIN (chain)) ; if (!chain) { if (TYPE_NAME (type) == 0) warning ("case value `%ld' not in enumerated type", (long) TREE_INT_CST_LOW (n->low)); else warning ("case value `%ld' not in enumerated type `%s'", (long) TREE_INT_CST_LOW (n->low), IDENTIFIER_POINTER ((TREE_CODE (TYPE_NAME (type)) == IDENTIFIER_NODE) ? TYPE_NAME (type) : DECL_NAME (TYPE_NAME (type)))); } if (!tree_int_cst_equal (n->low, n->high)) { for (chain = TYPE_VALUES (type); chain && !tree_int_cst_equal (n->high, TREE_VALUE (chain)); chain = TREE_CHAIN (chain)) ; if (!chain) { if (TYPE_NAME (type) == 0) warning ("case value `%ld' not in enumerated type", (long) TREE_INT_CST_LOW (n->high)); else warning ("case value `%ld' not in enumerated type `%s'", (long) TREE_INT_CST_LOW (n->high), IDENTIFIER_POINTER ((TREE_CODE (TYPE_NAME (type)) == IDENTIFIER_NODE) ? 
TYPE_NAME (type) : DECL_NAME (TYPE_NAME (type)))); } } } } /* a spurious warning in the presence of a syntax error; it could be fixed by moving the call to check_seenlabel after the check for error_mark_node, and copying the code of check_seenlabel that deals with case_stack->data.case_stmt.line_number_status / restore_line_number_status in front of the call to end_cleanup_deferral; However, this might miss some useful warnings in the presence of non-syntax errors. */ check_seenlabel (); /* An ERROR_MARK occurs for various reasons including invalid data type. */ if (index_type != error_mark_node) { /*"); /* If we don't have a default-label, create one here, after the body of the switch. */ if (thiscase->data.case_stmt.default_label == 0) { thiscase->data.case_stmt.default_label = build_decl (LABEL_DECL, NULL_TREE, NULL_TREE); expand_label (thiscase->data.case_stmt.default_label); } default_label = label_rtx (thiscase->data.case_stmt.default_label); before_case = get_last_insn (); if (thiscase->data.case_stmt.case_list && thiscase->data.case_stmt.case_list->left) thiscase->data.case_stmt.case_list = case_tree2list (thiscase->data.case_stmt.case_list, 0); /* Simplify the case-list before we count it. */ group_case_nodes (thiscase->data.case_stmt.case_list); /* Get upper and lower bounds of case values. Also convert all the case values to the index expr's data type. */ count = 0; for (n = thiscase->data.case_stmt.case_list; n; n = n->right) { /* Check low and high label values are integers. */ if (TREE_CODE (n->low) != INTEGER_CST) abort (); if (TREE_CODE (n->high) != INTEGER_CST) abort (); n->low = convert (index_type, n->low); n->high = convert (index_type, n->high); /* Count the elements and track the largest and smallest of them (treating them as signed even if they are not). 
*/ if (count++ == 0) { minval = n->low; maxval = n->high; } else { if (INT_CST_LT (n->low, minval)) minval = n->low; if (INT_CST_LT (maxval, n->high)) maxval = n->high; } /* A range counts double, since it requires two compares. */ if (! tree_int_cst_equal (n->low, n->high)) count++; } /* Compute span of values. */ if (count != 0) range = fold (build (MINUS_EXPR, index_type, maxval, minval)); end_cleanup_deferral (); if (count == 0) { expand_expr (index_expr, const0_rtx, VOIDmode, 0); emit_queue (); emit_jump (default_label); } /* If range of values is much bigger than number of values, make a sequence of conditional branches instead of a dispatch. If the switch-index is a constant, do it this way because we can optimize it. */ || (TREE_CODE (index_expr) == COMPOUND_EXPR && TREE_CODE (TREE_OPERAND (index_expr, 1)) == INTEGER_CST)) { index = expand_expr (index_expr, NULL_RTX, VOIDmode, 0); /* If the index is a short or char that we do not have an insn to handle comparisons directly, convert it to a full integer now, rather than letting each comparison generate the conversion. */ if (GET_MODE_CLASS (GET_MODE (index)) == MODE_INT && ! have_insn_for (COMPARE, GET_MODE (index))) { enum machine_mode wider_mode; for (wider_mode = GET_MODE (index); wider_mode != VOIDmode; wider_mode = GET_MODE_WIDER_MODE (wider_mode)) if (have_insn_for (COMPARE, wider_mode)) { index = convert_to_mode (wider_mode, index, unsignedp); break; } } emit_queue (); do_pending_stack_adjust (); index = protect_from_queue (index, 0); if (GET_CODE (index) == MEM) index = copy_to_reg (index); if (GET_CODE (index) == CONST_INT || TREE_CODE (index_expr) == INTEGER_CST) { /* Make a tree node with the proper constant value if we don't already have one. */ if (TREE_CODE (index_expr) != INTEGER_CST) { index_expr = build_int_2 (INTVAL (index), unsignedp || INTVAL (index) >= 0 ? 
0 : -1); index_expr = convert (index_type, index_expr); } /* For constant index expressions we need only issue an unconditional branch to the appropriate target code. The job of removing any unreachable code is left to the optimisation phase if the "-O" option is specified. */ for (n = thiscase->data.case_stmt.case_list; n; n = n->right) if (! tree_int_cst_lt (index_expr, n->low) && ! tree_int_cst_lt (n->high, index_expr)) break; if (n) emit_jump (label_rtx (n->code_label)); else emit_jump (default_label); } else { /* If the index expression is not constant we generate a binary decision tree to select the appropriate target code. This is done as follows: The list of cases is rearranged into a binary tree, nearly optimal assuming equal probability for each case. The tree is transformed into RTL, eliminating redundant test conditions at the same time. If program flow could reach the end of the decision tree an unconditional jump to the default code is emitted. */ use_cost_table = (TREE_CODE )); } /* Fill in the gaps with the default. */ for (i = 0; i < ncases; i++) if (labelvec[i] == 0) labelvec[i] = gen_rtx_LABEL_REF (Pmode, default_label); /* Output the table */ emit_label (table_label); if (CASE_VECTOR_PC_RELATIVE || flag_pic) emit_jump_insn (gen_rtx_ADDR_DIFF_VEC (CASE_VECTOR_MODE, gen_rtx_LABEL_REF (Pmode, table_label), gen_rtvec_v (ncases, labelvec), const0_rtx, const0_rtx)); else emit_jump_insn (gen_rtx_ADDR_VEC (CASE_VECTOR_MODE, gen_rtvec_v (ncases, labelvec))); /* If the case insn drops through the table, after the table we must jump to the default-label. Otherwise record no drop-through after the table. 
*/ #ifdef CASE_DROPS_THROUGH emit_jump (default_label); #else emit_barrier (); #endif } before_case = NEXT_INSN (before_case); end = get_last_insn (); if (squeeze_notes (&before_case, &end)) abort (); reorder_insns (before_case, end, thiscase->data.case_stmt.start); } else end_cleanup_deferral (); if (thiscase->exit_label) emit_label (thiscase->exit_label); POPSTACK (case_stack); free_temp_slots (); } /* Convert the tree NODE into a list linked by the right field, with the left field zeroed. RIGHT is used for recursion; it is a list to be placed rightmost in the resulting list. */ static struct case_node * case_tree2list (node, right) struct case_node *node, *right; { struct case_node *left; if (node->right) right = case_tree2list (node->right, right); node->right = right; if ((left = node->left)) { node->left = 0; return case_tree2list (left, node); } return node; } /* Generate code to jump to LABEL if OP1 and OP2 are equal. */ static void do_jump_if_equal (op1, op2, label, unsignedp) rtx op1, op2, label; int unsignedp; { if (GET_CODE (op1) == CONST_INT && GET_CODE (op2) == CONST_INT) { if (INTVAL (op1) == INTVAL (op2)) emit_jump (label); } else emit_cmp_and_jump_insns (op1, op2, EQ, NULL_RTX, (GET_MODE (op1) == VOIDmode ? GET_MODE (op2) : GET_MODE (op1)), unsignedp, label); } /* Not all case values are encountered equally. This function uses a heuristic to weight case labels, in cases where that looks like a reasonable thing to do. Right now, all we try to guess is text, and we establish the following weights: chars above space: 16 digits: 16 default: 12 space, punct: 8 tab: 4 newline: 2 other "\" chars: 1 remaining chars: 0 If we find any cases in the switch that are not either -1 or in the range of valid ASCII characters, or are control characters other than those commonly used with "\", don't treat this switch scanning text. Return 1 if these nodes are suitable for cost estimation, otherwise return 0. 
*/ static int estimate_case_costs (node) case_node_ptr node; { tree min_asci; for (i = 0; i < 128; i++) { if (ISALNUM (i)) COST_TABLE (i) = 16; else if (ISPUNCT (i)) COST_TABLE (i) = 8; else if (ISCNTRL (i)) COST_TABLE (i) = -1; } COST_TABLE (' ') = 8; COST_TABLE ('\t') = 4; COST_TABLE ('\0') = 4; COST_TABLE ('\n') = 2; COST_TABLE ('\f') = 1; COST_TABLE ('\v') = 1; COST_TABLE ('\b') = 1; } /* See if all the case expressions look like text. It is text if the constant is >= -1 and the highest constant is <= 127. Do all comparisons as signed arithmetic since we don't want to ever access cost_table with a value less than -1. Also check that none of the constants in a range are strange control characters. */ for (n = node; n; n = n->right) { if ((INT_CST_LT (n->low, min_ascii)) || INT_CST_LT (max_ascii, n->high)) return 0; for (i = (HOST_WIDE_INT) TREE_INT_CST_LOW (n->low); i <= (HOST_WIDE_INT) TREE_INT_CST_LOW (n->high); i++) if (COST_TABLE (i) < 0) return 0; } /* All interesting values are within the range of interesting ASCII characters. */ return 1; } /* Scan an ordered list of case nodes combining those with consecutive values or ranges. Eg. three separate entries 1: 2: 3: become one entry 1..3: */ static void group_case_nodes (head) case_node_ptr head; { case_node_ptr node = head; while (node) { rtx lb = next_real_insn (label_rtx (node->code_label)); rtx lb2; case_node_ptr np = node; /* Try to group the successors of NODE with NODE. */ while (((np = np->right) != 0) /* Do they jump to the same place? */ && ((lb2 = next_real_insn (label_rtx (np->code_label))) == lb || (lb != 0 && lb2 != 0 && simplejump_p (lb) && simplejump_p (lb2) && rtx_equal_p (SET_SRC (PATTERN (lb)), SET_SRC (PATTERN (lb2))))) /* Are their ranges consecutive? */ && tree_int_cst_equal (np->low, fold (build (PLUS_EXPR, TREE_TYPE (node->high), node->high, integer_one_node))) /* An overflow is not consecutive. 
*/ && tree_int_cst_lt (node->high, fold (build (PLUS_EXPR, TREE_TYPE (node->high), node->high, integer_one_node)))) { node->high = np->high; } /* NP is the first node after NODE which can't be grouped with it. Delete the nodes in between, and move on to that node. */ node->right = np; node = np; } } /* Take an ordered list of case nodes and transform them into a near optimal binary tree, on the assumption that any target code selection value is as likely as any other. The transformation is performed by splitting the ordered list into two equal sections plus a pivot. The parts are then attached to the pivot as left and right branches. Each branch is then transformed recursively. */ static void balance_case_nodes (head, parent) case_node_ptr *head; case_node_ptr parent; { case_node_ptr np; np = *head; if (np) { int cost = 0; int i = 0; int ranges = 0; case_node_ptr *npp; case_node_ptr left; /* Count the number of entries on branch. Also count the ranges. */ while (np) { if (!tree_int_cst_equal (np->low, np->high)) { ranges++; if (use_cost_table) cost += COST_TABLE (TREE_INT_CST_LOW (np->high)); } if (use_cost_table) cost += COST_TABLE (TREE_INT_CST_LOW (np->low)); i++; np = np->right; } if (i > 2) { /* Split this list if it is long enough for that to help. */ npp = head; left = *npp; if (use_cost_table) { /* Find the place in the list that bisects the list's total cost, Here I gets half the total cost. */ int n_moved = 0; i = (cost + 1) / 2; while (1) { /* Skip nodes while their cost does not reach that amount. */ if (!tree_int_cst_equal ((*npp)->low, (*npp)->high)) i -= COST_TABLE (TREE_INT_CST_LOW ((*npp)->high)); i -= COST_TABLE (TREE_INT_CST_LOW ((*npp)->low)); if (i <= 0) break; npp = &(*npp)->right; n_moved += 1; } if (n_moved == 0) { /* Leave this branch lopsided, but optimize left-hand side and fill in `parent' fields for right-hand side. 
*/ np = *head; np->parent = parent; balance_case_nodes (&np->left, np); for (; np->right; np = np->right) np->right->parent = np; return; } } /* If there are just three nodes, split at the middle one. */ else if (i == 3) npp = &(*npp)->right; else { /* Find the place in the list that bisects the list's total cost, where ranges count as 2. Here I gets half the total cost. */ i = (i + ranges + 1) / 2; while (1) { /* Skip nodes while their cost does not reach that amount. */ if (!tree_int_cst_equal ((*npp)->low, (*npp)->high)) i--; i--; if (i <= 0) break; npp = &(*npp)->right; } } *head = np = *npp; *npp = 0; np->parent = parent; np->left = left; /* Optimize each of the two split parts. */ balance_case_nodes (&np->left, np); balance_case_nodes (&np->right, np); } else { /* Else leave this branch as one level, but fill in `parent' fields. */ np = *head; np->parent = parent; for (; np->right; np = np->right) np->right->parent = np; } } } /* Search the parent sections of the case node tree to see if a test for the lower bound of NODE would be redundant. INDEX_TYPE is the type of the index expression. The instructions to generate the case decision tree are output in the same order as nodes are processed so it is known that if a parent node checks the range of the current node minus one that the current node is bounded at its lower span. Thus the test would be redundant. */ static int node_has_low_bound (node, index_type) case_node_ptr node; tree index_type; { tree low_minus_one; case_node_ptr pnode; /* If the lower bound of this node is the lowest value in the index type, we need not test it. */ if (tree_int_cst_equal (node->low, TYPE_MIN_VALUE (index_type))) return 1; /* If this node has a left branch, the value at the left must be less than that at this node, so it cannot be bounded at the bottom and we need not bother testing any further. 
*/ if (node->left) return 0; low_minus_one = fold (build (MINUS_EXPR, TREE_TYPE (node->low), node->low, integer_one_node)); /* If the subtraction above overflowed, we can't verify anything. Otherwise, look for a parent that tests our value - 1. */ if (! tree_int_cst_lt (low_minus_one, node->low)) return 0; for (pnode = node->parent; pnode; pnode = pnode->parent) if (tree_int_cst_equal (low_minus_one, pnode->high)) return 1; return 0; } /* Search the parent sections of the case node tree to see if a test for the upper bound of NODE would be redundant. INDEX_TYPE is the type of the index expression. The instructions to generate the case decision tree are output in the same order as nodes are processed so it is known that if a parent node checks the range of the current node plus one that the current node is bounded at its upper span. Thus the test would be redundant. */ static int node_has_high_bound (node, index_type) case_node_ptr node; tree index_type; { tree high_plus_one; case_node_ptr pnode; /* If there is no upper bound, obviously no test is needed. */ if (TYPE_MAX_VALUE (index_type) == NULL) return 1; /* If the upper bound of this node is the highest value in the type of the index expression, we need not test against it. */ if (tree_int_cst_equal (node->high, TYPE_MAX_VALUE (index_type))) return 1; /* If this node has a right branch, the value at the right must be greater than that at this node, so it cannot be bounded at the top and we need not bother testing any further. */ if (node->right) return 0; high_plus_one = fold (build (PLUS_EXPR, TREE_TYPE (node->high), node->high, integer_one_node)); /* If the addition above overflowed, we can't verify anything. Otherwise, look for a parent that tests our value + 1. */ if (! 
tree_int_cst_lt (node->high, high_plus_one)) return 0; for (pnode = node->parent; pnode; pnode = pnode->parent) if (tree_int_cst_equal (high_plus_one, pnode->low)) return 1; return 0; } /* Search the parent sections of the case node tree to see if both tests for the upper and lower bounds of NODE would be redundant. */ static int node_is_bounded (node, index_type) case_node_ptr node; tree index_type; { return (node_has_low_bound (node, index_type) && node_has_high_bound (node, index_type)); } /* Emit an unconditional jump to LABEL unless it would be dead code. */ static void emit_jump_if_reachable (label) rtx label; { if (GET_CODE (get_last_insn ()) != BARRIER) emit_jump (label); } /* Emit step-by-step code to select a case for the value of INDEX. The thus generated decision tree follows the form of the case-node binary tree NODE, whose nodes represent test conditions. INDEX_TYPE is the type of the index of the switch. Care is taken to prune redundant tests from the decision tree by detecting any boundary conditions already checked by emitted rtx. (See node_has_high_bound, node_has_low_bound and node_is_bounded, above.) Where the test conditions can be shown to be redundant we emit an unconditional jump to the target code. As a further optimization, the subordinates of a tree node are examined to check for bounded nodes. In this case conditional and/or unconditional jumps as a result of the boundary check for the current node are arranged to target the subordinates associated code for out of bound conditions on the current node. We can assume that when control reaches the code generated here, the index value has already been compared with the parents of this node, and determined to be on the same side of each parent as this node is. Thus, if this node tests for the value 51, and a parent tested for 52, we don't need to consider the possibility of a value greater than 51. If another parent tests for the value 50, then this node need not test anything. 
*/ static void emit_case_nodes (index, node, default_label, index_type) rtx index; case_node_ptr node; rtx default_label; tree index_type; { /* If INDEX has an unsigned type, we must make unsigned branches. */ int unsignedp = TREE_UNSIGNED (index_type); enum machine_mode mode = GET_MODE (index); enum machine_mode imode = TYPE_MODE (index_type); /* See if our parents have already tested everything for us. If they have, emit an unconditional jump for this node. */ if (node_is_bounded (node, index_type)) emit_jump (label_rtx (node->code_label)); else if (tree_int_cst_equal (node->low, node->high)) { /* Node is single valued. First see if the index expression matches this node and then check our children, if any. */ do_jump_if_equal (index, convert_modes (mode, imode, expand_expr (node->low, NULL_RTX, VOIDmode, 0), unsignedp), label_rtx (node->code_label), unsignedp); if (node->right != 0 && node->left != 0) { /* This node has children on both sides. Dispatch to one side or the other by comparing the index value with this node's value. If one subtree is bounded, check that one first, so we can avoid real branches in the tree. */ if (node_is_bounded (node->right, index_type)) { emit_cmp_and_jump_insns (index,, label_rtx (node->left->code_label)); emit_case_nodes (index, node->right, default_label, index_type); } else { /* Neither node is bounded. First distinguish the two sides; then emit the code for one side at a time. */ tree test_label = build_decl (LABEL_DECL, NULL_TREE, NULL_TREE); /* See if the value is on the right. */ emit_cmp_and_jump_insns (index, convert_modes (mode, imode, expand_expr (node->high, NULL_RTX, VOIDmode, 0), unsignedp), GT, NULL_RTX, mode, unsignedp, label_rtx (test_label)); /* Value must be on the left. Handle the left-hand subtree. */ emit_case_nodes (index, node->left, default_label, index_type); /* If left-hand subtree does nothing, go to default. */ emit_jump_if_reachable (default_label); /* Code branches here for the right-hand subtree. 
*/ expand_label (test_label); emit_case_nodes (index, node->right, default_label, index_type); } } else if (node->right != 0 && node->left == 0) { /* Here we have a right child but no left so we issue conditional branch to default and process the right child. Omit the conditional branch to default if we it avoid only one right child; it costs too much space to save so little time. */ if (node->right->right || node->right->left || !tree_int_cst_equal (node->right->low, node->right->high)) { if (!node_has_low_bound (node, index_type)) { emit_cmp_and_jump_insns (index,), label_rtx (node->left->code_label), unsignedp); } } else { /* Node is a range. These cases are very similar to those for a single value, except that we do not start by testing whether this node is the one to branch to. */ if (node->right != 0 && node->left != 0) { /* Node has subtrees on both sides. If the right-hand subtree is bounded, test for it first, since we can go straight there. Otherwise, we need to make a branch in the control structure, then handle the two subtrees. */ tree test_label = 0; if (node_is_bounded (node->right, index_type)) /* Right hand node is fully bounded so we can eliminate any testing and branch directly to the target code. */ emit_cmp_and_jump_insns (index,)); /* Handle the left-hand subtree. */ emit_case_nodes (index, node->left, default_label, index_type); /* If right node had to be handled later, do that now. */ if (test_label) { /* If the left-hand subtree fell through, don't let it fall into the right-hand subtree. */ emit_jump_if_reachable (default_label); expand_label (test_label); emit_case_nodes (index, node->right, default_label, index_type); } } else if (node->right != 0 && node->left == 0) { /* Deal with values to the left of this node, if they are possible. 
*/ if (!node_has_low_bound (node, index_type)) { emit_cmp_and_jump_insns (index,, label_rtx (node->code_label)); emit_case_nodes (index, node->right, default_label, index_type); } else if (node->right == 0 && node->left != 0) { /* Deal with values to the right of this node, if they are possible. */ if (!node_has_high_bound (node, index_type)) { emit_cmp_and_jump_insns (index,
Ravenbrook / Projects / Perforce Defect Tracking Integration / Version 2.2 Product Sources / Design
Perforce Defect Tracking Integration Project
This document describes a Python extension that provides an interface to the TeamTrack API. It follows the design document procedure [RB 2000-10-05] and the design document structure [RB 2000-08-30].
This document is obsolete. The P4DTI no longer supports integration with TeamTrack. This document is retained for reference purposes, but will not be maintained.
The purpose of this document was to make it possible for people to maintain the extension, and to use the Python interface.
This document will not be modified as the product is developed.
The readership of this document is the product developers.
This document is not confidential.
The Python interface to TeamTrack is designed to work in all the supported configurations.
There are four dimensions to consider in a TeamTrack configuration:
The TeamTrack server version and build number. We support release 4.5 (various builds), 5.0 (build 5034), and 5.5 (build 55012). We also work with release 5.02 (build 50207), but do not support it.
The TeamTrack API version. We support versions 2 and 3.
The TeamTrack database schema. We support all TeamTrack database schemas from version 21.
Whether the TeamTrack database was created in TeamTrack prior
to 5.0, with a
TS_CASES table, or in TeamTrack 5.0 or
later, with a
TTT_ISSUES table.
Table 1 sets out the compatibility between TeamTrack API versions and TeamTrack releases, as far as we know. "T" in the table means that we have tested that the combination works. "Y" means that we believe the combination works. "N" means that we know the combination does not work. "?" means we don't know.
Notes:
TeamShare plan to move all customers to TeamTrack 5.5 by 2002-05-31. So after that point we may be able to drop support for older TeamTrack server versions and older TeamTrack API revisions. See [Shaw 2001-12-19].
To support both TeamTrack 4.5 and TeamTrack 5.*, we build two versions of the Python interface to TeamTrack:
teamtrack45.pyd is the interface to TeamTrack 4.5 servers.
teamtrack50.pyd is the interface to TeamTrack 5.* servers.
The module
teamtrack.py
calls either
from teamtrack45 import * or
from
teamtrack50 import * according to the value of the
teamtrack_version
configuration parameter. That way, the P4DTI can just
import teamtrack as before.
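The version dispatch described above can be pictured with a small sketch. This is illustrative only, not the actual P4DTI source: the helper function name extension_module_for is invented here, while the module names teamtrack45 and teamtrack50 and the teamtrack_version parameter come from the text.

```python
# Sketch of how teamtrack.py might map the teamtrack_version
# configuration parameter to the compiled extension to load.

def extension_module_for(teamtrack_version):
    # TeamTrack 4.5 servers need the 4.5 build of the extension;
    # 5.0 and 5.5 servers need the 5.* build.
    if teamtrack_version.startswith('4.'):
        return 'teamtrack45'
    elif teamtrack_version.startswith('5.'):
        return 'teamtrack50'
    else:
        raise ValueError('unsupported TeamTrack version: %r'
                         % teamtrack_version)

# teamtrack.py then does the equivalent of:
#   from teamtrack45 import *      (or teamtrack50)
# so that client code can simply "import teamtrack" as before.
```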
Note that there's no way to determine a TeamTrack server version through the API. If you have an incompatible API revision, then you may not be able to connect at all. See [Shaw 2001-07-02] and job000458.
In TeamTrack 4.5 (and TeamTrack 5.* with a database upgraded from
TeamTrack 4.5) the cases table is called
TS_CASES and its
table number is 1. In TeamTrack 5.0, the cases table is called
TTT_ISSUES and its table number is 1000. This can't be
determined by reference to the API version or by the database version
(since these are the same in TeamTrack 5.0 whether the database was
upgraded from TeamTrack 4.5 or not). Instead, one must check to see if
the
TS_TABLES table has a record with
TS_ID =
1 (if so, that table is the cases table), or if there is no such
record, run the query
SELECT * FROM TS_TABLES WHERE TS_SNAME LIKE
'Issue' and take the
TS_ID and
TS_DBNAME fields of the record returned. This is the
method suggested by TeamShare in [Shaw
2001-06-27] and [Shaw
2001-06-28].
This interface supports these configurations portably using server
methods
case_table_id
and
case_table_name.
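The detection rule described above can be sketched as follows. This is a hedged illustration written against a generic query(sql) callable (a stand-in for whatever SQL layer the extension uses internally), not the extension's actual implementation; the real interface hides this logic behind the case_table_id and case_table_name server methods.

```python
# Sketch: find the cases table in a TeamTrack database, whether the
# database was created in TeamTrack 4.5 or in TeamTrack 5.0 or later.

def find_cases_table(query):
    # query(sql) returns a list of records, each a dict of field values.
    records = query("SELECT * FROM TS_TABLES WHERE TS_ID = 1")
    if records:
        # Database created in TeamTrack 4.5: table 1 is TS_CASES.
        return records[0]['TS_ID'], records[0]['TS_DBNAME']
    # Database created in TeamTrack 5.0 or later: no record with
    # TS_ID = 1, so look the table up by its singular name.
    records = query("SELECT * FROM TS_TABLES WHERE TS_SNAME LIKE 'Issue'")
    return records[0]['TS_ID'], records[0]['TS_DBNAME']
```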
The interface defines one module,
teamtrack (use
import teamtrack). There are two classes, representing TeamTrack
servers and records from the TeamTrack database. These classes don't have
names in Python: the only way to make a server object is to connect to a
server using
teamtrack.connect, and the only way to make
record objects (at present) is to query the server.
Some features are not supported by every version of the TeamTrack
API. The
teamtrack module indicates which features it
supports using the
feature
dictionary.
Errors are always indicated by throwing a Python exception:
teamtrack.error for errors in the
interface; or
teamtrack.tsapi_error for
errors in the TeamTrack API. The interface never indicates errors by
returning an exceptional value from a function. Exceptions of both types
are associated with a message. For example:
try:
# do teamtrack stuff
except teamtrack.error, message:
print 'teamtrack interface error: ', message
except teamtrack.tsapi_error, message:
print 'TeamTrack API error: ', message
The
teamtrack module can throw other exceptions than
teamtrack.error and
teamtrack.tsapi_error,
notably
KeyError (when a field name is not found in a
record).
Connect to a TeamTrack server on hostname (use the format
"
host:8080" to specify a non-default port number) with the
specified userid and password. If successful, return a server object
representing the connection. For example:
import socket
server = teamtrack.connect('joe', '', socket.gethostname())
teamtrack.error is the Python error object for errors
that occur in the teamtrack module.
A dictionary that maps the name of a feature to 1 if the feature is
supported. (So you can test if the Python interface to TeamTrack
supports feature
foo by evaluating
teamtrack.feature.get('foo').) The following feature names
are defined:
submit
Present if record objects support the submit method.
A dictionary that maps the name of a table in the TeamTrack database
(minus the initial
TS_) to its table identifier (a small
integer). For example
teamtrack.table['CASES']
is the table identifier for the
TS_CASES table.
A dictionary that maps the name of a TeamTrack field type to its identifier (a small integer). For example
teamtrack.field_type['TEXT']
is the field type for a text field.
teamtrack.tsapi_error is the Python error object for errors
that occur in the TeamTrack API.
Returns the table id of the table containing the cases (this is table 1 in databases created in TeamTrack 4.5 and 1000 in databases created in TeamTrack 5.0).
Returns the name of the table containing the cases (this is
TS_CASES in databases created in TeamTrack 4.5 and
TTT_ISSUES in databases created in TeamTrack 5.0).
Delete the record with the specified identifier from the specified
table (which must be one of the table identifiers specified in
teamtrack.table).
Returns a new record object. The record has the fields in the schema
for the specified table (which must be one of the table identifiers
specified in
teamtrack.table), and is suitable for
adding or submitting to that table.
Execute an SQL query on the specified table (which must be one of the
table identifiers specified in
teamtrack.table) of the
form
SELECT * FROM table
(if
where_clause is the empty string), or
SELECT * FROM table WHERE where_clause
(otherwise). Return the records matching the query as a list of record objects.
Remember to use the right field names in the where clause: the
returned record may contain a field called
foo, but the
database field is probably
TS_FOO. See [TeamShare
2000-01-20] for details of the TeamTrack database.
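To make the naming convention concrete, here is a hedged sketch of a query call. The db_column helper is invented for illustration and is not part of the interface; the server.query call itself is shown only in a comment, since it needs a live connection.

```python
# The records returned by query use field names without the TS_
# prefix, but the where clause is raw SQL against the database
# columns, which do carry the prefix.

def db_column(field_name):
    # Record field 'TITLE' corresponds to database column TS_TITLE.
    return 'TS_' + field_name.upper()

# Usage against a connected server (not runnable here):
#   cases = server.query(teamtrack.table['CASES'],
#                        "%s > 10" % db_column('ID'))
#   for c in cases:
#       print c['ID'], c['TITLE']
```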
Read the record from the specified table (which must be one of the
table identifiers specified in
teamtrack.table) with the specified
record identifier. If successful, return a record object representing
the record. For example, the call
record = server.read_record(teamtrack.table['CASES'], 17)
is roughly equivalent to the SQL query
SELECT * FROM TS_CASES WHERE TS_ID = 17
Return a list of states which are available to cases in the given
workflow. If the
include_parent argument is 1, then the
list includes states inherited from the parent workflow.
The returned list is a list of record objects from the
TS_STATES table.
Return a list of transitions which are available to cases in the given project.
The returned list is a list of record objects from the
TS_TRANSITIONS table.
Records present (part of) the Python dictionary interface. To look up a field in a record object, index the record object with the field name. For example:
# Get the title of case 17
record = server.read_record(teamtrack.table['CASES'], 17)
title = record['TITLE']
To update a field in a record object, assign to the index expression.
For example,
record['TITLE'] = 'Foo'.
As for ordinary Python dictionaries, the
has_key
method determines if a field is present in the record, and the
keys method returns a list of names of fields in the
record.
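Because records present the dictionary protocol described above, generic dictionary code works on them unchanged. The following sketch dumps any mapping-like object's fields; a plain dict stands in for a record object here, since building a real one needs a server connection.

```python
# Sketch: list a record's fields in a stable order. Works on any
# object supporting keys() and indexing, including record objects.

def dump_record(record):
    lines = []
    for name in sorted(record.keys()):
        lines.append('%s = %r' % (name, record[name]))
    return '\n'.join(lines)

# With a live server:
#   case = server.read_record(server.case_table_id(), 17)
#   print dump_record(case)
```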
Add the record to its table in the TeamTrack database. Update the record object so that it matches the new record in the table. Return the record object.
Add a field to the database schema, by adding a record to the
TS_FIELDS table and using the information in that record to
add the field to the appropriate table. Fields can only be added to the
following tables: Cases, Incidents, Companies, Contacts, Merchandise,
Products, Problems, Resolutions, and Service Agreements.
The record object must be in the right format for adding to the
TS_FIELDS table. Its
TABLEID field is used to
determine which table the field should be added to, and its
FLDTYPE field is used to determine the type of the added
field (it should be one of the vaues in the
teamtrack.field_type
dictionary.
For example, to add a field to the
TS_CASES table:
f = server.new_record(teamtrack.table['FIELDS'])
f['TABLEID'] = teamtrack.table['CASES']
f['NAME'] = 'Cost to fix (in euros)'
f['DBNAME'] = 'COST'
f['FLDTYPE'] = teamtrack.field_type['NUMERIC']
f.add_field()
See [TeamShare
2000-01-20] for details of the fields in the
TS_FIELDS
table and what they mean.
This method is supported if and only if the submit feature is supported.
Submit a new record into the workflow. The
login_id
argument must be a string containing the user name of the user on whose
behalf the record is being submitted (this must correspond to the
TS_LOGINID field for a record in the
TS_USERS
table).
The remaining arguments are optional. The
project_id
argument is the project the record should belong to; if it is not
supplied then it is taken to be the
TS_PROJECTID field in
the submitted record. The
folder_id argument corresponds
to the
nFolderId argument to the
TSServer::Submit method. I don't know what it means. It
defaults to zero if not supplied.
Returns the record identifier of the submitted record.
For example, this fragment of code takes a copy of case 1 and submits it as a new case on behalf of user Joe:
# Make a copy of case 1
old_id = 1
new = server.new_record(teamtrack.table['CASES'])
old = server.read_record(teamtrack.table['CASES'], old_id)
fields_to_copy = ['TITLE', 'DESCRIPTION', 'ISSUETYPE', 'PROJECTID', 'STATE', 'ACTIVEINACTIVE', 'PRIORITY', 'SEVERITY']
for f in fields_to_copy:
new[f] = old[f]
# Submit the copy.
new_id = new.submit('Joe')
Return the table identifier of the table in the TeamTrack database to
which this record corresponds. (For records retrieved from the TeamTrack
database this is the table they came from; for records created using the
server's
new_record
method, this is the table whose schema the record matches.)
Transition a record in the workflow. In version 1.2 of the API,
only records in the
TS_CASES and
TS_INCIDENTS
tables can be transitioned. The
login_id argument must be
a string containing the user name of the user on whose behalf the
transition is being made (this must correspond to the
TS_LOGINID field for a record in the
TS_USERS
table). The
transition argument is an integer identifying
the transition to be carried out. It must be the
TS_ID
field of a record in the
TS_TRANSITIONS table.
The
project_id argument is the project the record should
belong to after the transition; if it is not supplied then it is taken
to be the
TS_PROJECTID field in the record.
It is not straightforward to pick a transition that applies to a case if you don't know the transition's number. Transitions are local to workflows (and so there may be several transitions with a given name), but workflows form a hierarchy and inherit transitions from their parent.
The algorithm shown below constructs a map from workflow id and transition name to the transition record corresponding to that name in that workflow, and a map from project id to workflow id.
# Get all the transitions and workflows from the TeamTrack database.
transitions = server.query(teamtrack.table['TRANSITIONS'], '')
workflows = server.query(teamtrack.table['WORKFLOWS'], '1=1 ORDER BY TS_SEQUENCE')
# transition_map is a map from workflow id and transition name to
# the transition corresponding to that name in that workflow.
transition_map = {}
for t in transitions:
# This is really a workflow id, not a project id (see the TeamTrack
# schema documentation).
w = t['PROJECTID']
if not transition_map.has_key(w):
transition_map[w] = {}
if not transition_map[w].has_key(t['NAME']):
transition_map[w][t['NAME']] = t
# Now go through all the workflows and add transitions they inherit from
# their parent workflow. This works because we've used the TS_SEQUENCE
# field to put the workflows in pre-order, so that all of a workflow's
# ancestors are considered before the workflow itself is considered.
for w in workflows:
if not transition_map.has_key(w['ID']):
transition_map[w['ID']] = {}
if w['PARENTID']:
for name, transition in transition_map[w['PARENTID']].items():
if not transition_map[w['ID']].has_key(name):
transition_map[w['ID']][name] = transition
# project_workflow is a map from project id to workflow id.
projects = server.query(teamtrack.table['PROJECTS'], '')
project_workflow = {}
for p in projects:
project_workflow[p['ID']] = p['WORKFLOWID']
With these maps, we can apply the "Resolve" transition to case 12 on behalf of user Newton:
case = server.read_record(teamtrack.table['CASES'], 12)
transition = transition_map[project_workflow[case['PROJECTID']]]['Resolve']
case.transition('newton', transition['ID'])
Update the record in the TeamTrack database that corresponds to this
record object. Update the record object so that it matches the updated
record in the table. Return the record object. If unsuccessful, raise
a
teamtrack.error
exception.
For example, to add some text to the description of case 2:
case = server.read_record(teamtrack.table['CASES'], 2)
case['DESCRIPTION'] = case['DESCRIPTION'] + '\nAdditional text.'
case.update()
Python extension modules are described in [Lutz 1996, 14]. Additional details with respect to building Python extensions using Visual C++ on Windows are given in [Hammond 2000, 22].
The TeamTrack API is described in [TeamShare 2001-12-12].
I have only built the extension under Windows NT and Windows 2000 using Microsoft Visual C++. I believe it should build and run anywhere that Python and the TeamTrack API run.
TeamShare provide two versions of their library:
tsapi.lib and
TSApiWin32.dll. I can build
extensions using the former but not using the latter. I guess that the
former is suitable for console applications and the latter for MFC
applications.
There are two places where I have used Windows-specific code (in both
cases the code is protected by
#if defined(WIN32)
... #endif):
Sockets on Windows need to be initialized. The TeamTrack API
provides the function
TSInitializeWinsock to do this. I
call this from
initteamtrack in
teamtrackmodule.cpp.
Functions that are exported from a DLL need either to have the
declarator
__declspec(dllexport) or to be mentioned in a
/DLLEXPORT:foo compiler option. We use the former method,
defining the macro
TEAMTRACK_EXPORTED for this purpose.
The only exported function is
initteamtrack in
teamtrackmodule.cpp.
In the Developer Studio project for the Python interface to TeamTrack, there are three configurations. The "Release" configuration is normal. The "Debug" configuration builds a debugging interface but links with the non-debugging Python libraries. The "Python Debug" configuration builds a debugging interface and links with the debugging Python libraries. To use the third configuration you have to build a debugging version of Python (the binary distribution doesn't come with one).
Note that Python extensions have to be linked with the same Python library that the Python interpreter and all other extensions are linked with. So you can't build one extension with the debugging Python libraries and expect it to work with other extensions linked with the release Python libraries. This is explained in [Hammond 2000, 22].
Reference count management is briefly introduced in [Lutz 1996, page 585], but there's a much better account in [van Rossum 1999-04-13, 1.10].
I've commented each use of
Py_DECREF with one of:
the new owner of the object;
the location in the code of the corresponding
Py_INCREF if I am decrementing a reference count I
incremented; or
"Delete" if the intention is to delete the object.
Where a
Py_DECREF would be expected (because the object
has been passed to a new owner) but is not needed because the new owner
does not increment the reference count, I have added a note to say so.
This applies to objects passed to
PyList_SetItem and
PyTuple_SetItem (I guess that these functions are optimized
for the case where a newly-created object is added to the structure).
See [van
Rossum 1999-04-13, 1.10.2].
Python objects returned from functions need to have an extra reference count since they will be immediately put onto Python's stack without their reference count being incremented. Returning newly-created objects is safe, since they are always created with a reference count of 1. Other returned objects need to have their reference counts incremented.
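The effect of taking a new reference can be observed from the Python side with sys.getrefcount (CPython-specific; absolute counts vary between versions, so only the relative change matters):

```python
import sys

obj = object()
before = sys.getrefcount(obj)   # includes the temporary reference getrefcount itself holds

alias = obj                     # a second owner appears
after = sys.getrefcount(obj)

print(after - before)           # 1: exactly one new reference
```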
The TeamTrack API is very memory-hungry: a record from the
TS_CASES table can take more than 600kb to represent in
memory in the API client [Shaw
2001-04-16].
The TeamTrack API makes no attempt to check that memory allocation succeeds [GDR 2000-09-11, 2.2.2].
These two defects mean that running out of memory is commonplace and that this won't be detected, leading quickly to memory corruption and crashing.
TeamShare recommend two approaches to work around these defects:
Don't select very many records at a time. This is proposed in [Shaw 2001-04-16] and analyzed in [GDR 2001-05-16]. This can be implemented in Python, so isn't considered any further here.
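The first workaround ("don't select very many records at a time") is easy to express on the Python side: page the record identifiers and fetch a bounded number per query. The helper below is hypothetical (it is not part of this interface); the commented usage assumes the server.query call described earlier:

```python
def in_batches(ids, batch_size):
    """Yield successive slices of ids so that at most batch_size
    records need to be materialized in API memory at a time."""
    for i in range(0, len(ids), batch_size):
        yield ids[i:i + batch_size]

# Usage sketch (server and teamtrack as described in this document):
#   for batch in in_batches(all_case_ids, 100):
#       clause = 'TS_ID IN (%s)' % ', '.join(map(str, batch))
#       for record in server.query(teamtrack.table['CASES'], clause):
#           process(record)

print(list(in_batches([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
```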
Install a C++ memory exception handler and catch out-of-memory
exceptions in the client code [Schreiber
2001-04-09] (this won't actually catch all memory allocation errors,
since the TeamTrack API uses a mixture of C memory allocation
(
malloc and
free) and C++ memory allocation
(
new and
delete); the memory exception handler
won't catch failed calls to
malloc). An example of the
second approach is given in [TeamShare
2001-04-09].
I tried out this approach, and discovered the following:
You can't add a memory exception handler when writing a Python extension library: Python installs its own memory exception handler and complains if you try to install your own.
Python's memory exception handler catches memory allocation
failures in the TeamTrack API and reports them as
MemoryError exceptions.
In our recommended configuration, where the P4DTI runs on the
same machine as the TeamTrack server, it's the server that fails when
memory is low, not the client (a client call to
recv on the socket blocks and eventually times out).
So maybe our advice is poor? See job000321.
Ravenbrook / Projects / Perforce Defect Tracking Integration / Version 2.2 Product Sources / Design | http://www.ravenbrook.com/project/p4dti/version/2.2/design/python-teamtrack-interface/ | crawl-003 | refinedweb | 3,173 | 57.67 |
How to catch a signal emitted from cpp class in Qml ?
Hi all,
I am using a signal-slot connection for C++ and QML interaction, but when I emit a signal from the C++ class I am not able to catch it in QML... can anyone guide me on this matter... the code is as follows...
thanks.
my .h code
#ifndef HANDLETEXTFIELD_H
#define HANDLETEXTFIELD_H

#include <QObject>
#include <QDebug>

class HandleTextField : public QObject
{
    Q_OBJECT
public:
    explicit HandleTextField(QObject *parent = 0);

signals:
    void setTextField();

public slots:
    Q_INVOKABLE void handleSubmitTextField();
};

#endif // HANDLETEXTFIELD_H
my .cpp code
#include "handletextfield.h"

HandleTextField::HandleTextField(QObject *parent) : QObject(parent)
{
}

void HandleTextField::handleSubmitTextField()
{
    qDebug() << "c++: HandleTextField::handleSubmitTextField:" << endl;
    emit setTextField();
}
my main.cpp code (excerpt)

HandleTextField handleTextField;
QObject *topLevel = engine.rootObjects().value(0);
QQuickWindow *window = qobject_cast<QQuickWindow *>(topLevel);
// QObject::connect(window, SIGNAL(submitTextField(QString)),
//                  &handleTextField, SLOT(handleSubmitTextField(QString)));
QObject::connect(&handleTextField, SIGNAL(setTextField()),
                 window, SLOT(setTextField()));
return app.exec();
}
my main.qml code

import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.2
import com.handletextfield 1.0

Window {
    visible: true
    width: 360
    height: 360

    HandleTextField {
        id: testsignal
    }

    MouseArea {
        anchors.fill: parent
        onClicked: {
            testsignal.handleSubmitTextField()
        }
    }

    function setTextField() {
        console.log("setTextField signal: catched ")
    }
}
QQuick creates signal handlers for your cpp classes like:
HandleTextField {
    id: testsignal
    onSetTextField: {
        // your code
    }
}
You have 2 different objects of class HandleTextField in your code (one created in main(), the other in QML), and you connect/send a signal from different ones. For your code to work, you have to connect the HandleTextField that you created in your QML code, and you don't need the object in main() anymore; you don't do anything with it anyway.
In QML this would look like:
HandleTextField {
    id: testsignal
    onSetTextField: {
        console.log("signal called")
    }
}
@Jagh but if I don't create an object of HandleTextField in main.cpp, how can I connect the signal and slot?
@Jagh You mean to say the following part is not required in main.cpp ??
HandleTextField handleTextField;
QObject *topLevel = engine.rootObjects().value(0);
QQuickWindow *window = qobject_cast<QQuickWindow *>(topLevel);
// QObject::connect(window, SIGNAL(submitTextField()),
//                  &handleTextField, SLOT(handleSubmitTextField()));
QObject::connect(&handleTextField, SIGNAL(setTextField()),
                 window, SLOT(setTextField()));
I only need to emit the signal and catch it in QML... in the same way as you have done?
- You connect objects, not classes with signal-slot mechanism. So if you connect one object HandleTextField to something, this connection will not automatically transfer to other objects of the same class.
- Exactly, that part is not required, and in fact, it doesn't do anything. Connections for one object in your C++ code don't transfer to unrelated objects in QML.
- Yep, for each signal an object emits, QML exposes a property named on<SignalName>, in your case this would be onSetTextField. You can set this property as i have done, but AFAIK it should also be possible to set it from outside, having
testsignal.onSetTextField: { console.log("signal called") }
somewhere in your Qml should suffice
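The point that you connect objects, not classes, can be illustrated with a minimal observer sketch in plain Python standing in for Qt's signal-slot mechanism (this Signal class is illustrative, not a Qt API):

```python
class Signal:
    """Minimal stand-in for a Qt signal: connections live on the instance."""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self):
        for slot in self._slots:
            slot()


class HandleTextField:
    def __init__(self):
        self.setTextField = Signal()      # every object carries its own signal

    def handleSubmitTextField(self):
        self.setTextField.emit()


calls = []
a = HandleTextField()                     # the object we connect
b = HandleTextField()                     # an unrelated object of the same class
a.setTextField.connect(lambda: calls.append('handler ran'))

b.handleSubmitTextField()                 # emitting on b reaches nobody
print(calls)                              # []

a.handleSubmitTextField()                 # emitting on a runs the handler
print(calls)                              # ['handler ran']
```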
@Jagh thank you for your reply...
now I have removed that part from main.cpp... but when I emit a signal I am still not able to catch it in QML...
is the way I am registering the HandleTextField class correct...???
here is the code...
main.qml
import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.2
import com.handletextfield 1.0

Window {
    visible: true
    width: 360
    height: 360

    HandleTextField {
        id: testsignal
        onSetTextField: {
            console.log("Signal catched successfully...")
        }
    }
}
handletextfield.cpp
#include "handletextfield.h"

HandleTextField::HandleTextField(QObject *parent) : QObject(parent)
{
    handleSubmitTextField();
}

void HandleTextField::handleSubmitTextField()
{
    qDebug() << "c++: HandleTextField::handleSubmitTextField:" << endl;
    emit setTextField();
    qDebug() << "Signal emitted..." << endl;
}
You are sending the signal way too early, before there was any chance for a connection to be established. In the constructor the object hasn't been fully initialized yet; in particular, the QML interpreter hasn't had a chance to initialize the object's properties from the QML side yet.
@Jagh Okay...
then when should I emit the signal... so that I can catch it over there... is calling my function in the constructor the mistake??
I changed your code a bit so that handleSubmitTextField() is called at a more appropriate time (actually I just added a Button and called this function in its onClicked handler), and the signal handler was successfully called.
As to when: anytime between QML object initialization (implement Component.onCompleted if you want to catch that moment) and object destruction is OK.
@Jagh can you please post the code so that I will get a clear picture of what I need to do... thanks...
And if you really want to execute some code as soon as possible and have a guarantee that a QML object was fully initialized by the time your code is running, do it in Component.onCompleted handler of that object.
OK thanks... I will try that also... :-)
Hi, I have a doubt regarding catching a signal emitted from a C++ file:
if there are more than 2 .qml files, do they all catch the signal?
For example, I have one .cpp and 4 .qml files, such as mainform.qml, radio.qml, music.qml, setting.qml,
so if I emit a signal and try to catch it in setting.qml, does it work??
- VincentLiu
@VincentLiu but in my case it is not working....
my code is...
this is the .cpp where I am emitting the signal...
if (!WordList.isEmpty())
{
    WordList.removeFirst();
    WordList.removeLast();
    // QMessageBox m_popupmsgbox;
    // m_popupmsgbox.setWindowTitle("Voice Recognizer");
    // Phoneme = WordList.join(" ");
    // qDebug() << "firstword" << Phoneme << endl;
    // QSpacerItem* horizontalSpacer = new QSpacerItem(500, 100, QSizePolicy::Minimum, QSizePolicy::Expanding);
    // m_popupmsgbox.setText(Phoneme);
    // QGridLayout* layout = (QGridLayout*)m_popupmsgbox.layout();
    // layout->addItem(horizontalSpacer, layout->rowCount(), 0, 1, layout->columnCount());
    // m_popupmsgbox.exec();
    // emitSignalFunc();
    // m_emitSignal = new EmitSignalClass;
    // m_emitSignal->emitSignalmethod();
    VoiceRecognition voice;
    voice.playmusicsignal();
}
else
{
    qDebug() << "List is empty" << endl;
}
my .qml is
Rectangle {
    id: voicerecRect
    width: settings_main_rect.width/5
    height: settings_main_rect.height/4
    color: "transparent"

    VoiceRecognition {
        id: voiceRecognizer
        onPlaymusicsignal: {
            console.log("Signal catched...")
        }
    }

    Image {
        id: vr_image
        source: "qrc:/AppbarIcon.png"
        //source: "qrc:/Voice-Recoder-icon.png"
        width: parent.width-10
        height: parent.height-10
        smooth: true
        fillMode: Image.PreserveAspectFit
        anchors.centerIn: parent

        Text {
            id: vrtext
            anchors.top: parent.bottom
            anchors.horizontalCenter: vr_image.horizontalCenter
            text: qsTr("Voice Recorder")
            color: "white"
            font.pixelSize: parent.height * (2 / 9)
        }

        MouseArea {
            anchors.fill: parent
            onClicked: {
                //popup.open()
                voicerecRect.color = 'green'
                voiceRecognizer.vstartVoiceRecognition()
            }
        }
    }
}
I have registered the class using qmlRegisterType and I am using a signal handler to catch the signal...
- VincentLiu
@Naveen_D
Hi, first of all, I should say that I use different ways to do this. However, based on some similar experience with it, I guess you shouldn't create a new VoiceRecognition object in your .cpp file. I don't think distinct objects of the same class can communicate this way. Please correct me if I am wrong. Thanks
@VincentLiu the function in which I am emitting the signal is a global function, so I need to create an object of that class and emit.
@VincentLiu what are the different ways...?
Hello, can anyone please help me out with this...
thanks | https://forum.qt.io/topic/74076/how-to-catch-a-signal-emitted-from-cpp-class-in-qml | CC-MAIN-2017-43 | refinedweb | 1,149 | 52.56 |
TAKING DIRECT ACTION TO SAVE PARADISE
Our ships, the Esperanza and the Arctic Sunrise,
were making waves this winter in the Southern Ocean confronting
Japanese whalers. Now, all eyes are on the third member of our fleet,
the Rainbow Warrior, as it takes on a company responsible for forest destruction.
Indonesia’s
forests are being destroyed faster than any other on Earth and at least
76 percent of the logging is illegal. Our research has shown that one
of Indonesia’s largest logging companies, Kayu Lapis Indonesia (KLI),
is responsible for much of the illegal logging.
For weeks, the Rainbow Warrior
has been on “Forest Crime Patrol” in Indonesia’s waters. Today our
activists sprang into action – unfurling a banner in front of the ship Ardhianto. The Ardhianto is
carrying plywood equivalent of up to 4,500 trees – logged from KLI’s
Henrison Iriana mill in Sorong. We know that this mill received timber
from dubious and potentially illegal sources in recent years.
“KLI
and a handful of other logging companies have already wiped out much of
the Paradise Forests. If they carry on logging at these rates, they
will destroy all of Indonesia’s large intact forests within 20 years,”
said Greenpeace forests campaigner, Hapsoro. “To protect these and
other ancient forests from companies like this, governments of
countries that produce timber must work together with countries that
import wood products, to ban the trade in illegal and destructively
logged timber.”
We’re asking KLI for proof that all timber
entering its mills is from legal, well-managed sources and to provide
documents that show exactly where each tree was cut to make sure they
are from responsible logging operations and not from pristine forest
areas.
Take Action!
Although this destruction seems a world away, the illegal wood is being sold right here in America. Contact Argo Fine Imports
- the largest U.S. importer of KLI’s plywood and use your power as a
consumer to demand that Argo sever its relationship with the illegal
loggers NOW. | http://tonernews.com/forums/topic/webcontent-archived-13826/ | CC-MAIN-2016-44 | refinedweb | 350 | 56.89 |
The future is here. It's just not widely distributed yet. - William Gibson
To start, I'll lay out some groundwork so you can understand how the article projects will be set up, and why. First, I do all my ASP.NET web projects as "Web Application Projects" running under IIS (not the built-in web dev server). I do this because the WAP under IIS is the closest you can get to an actual deployed app in production. Many developers and authors use the Web Site model, and use SQLEXPRESS with User Instance connection strings that point to the MDF database file in the App_Data folder. While this arrangement is very handy, I have seen that it also causes a great deal of confusion with newer developers when they attempt to deploy these, as "real life" hosted IIS accounts don't play well at all with these arrangements. All my databases are full SQL Server databases that are attached; no User Instance connections are employed.
Each solution in this series uses a subset of my Quotations database. This "stripped down" database has 1,000 famous quotations arranged in three tables (my original has 44,000 quotations, a little too big for a sample download!).
The database also includes a series of stored procedures sufficient to build a complete "Quotations" web site or Silverlight app. The SQL Script to create this database is in the /SQL subfolder of the solution. So first, you'll want to create a new SQL Server database named "QUOTES", and run the SQL Script to populate everything.
For our first solution, we will create an ASMX WebService with a GetRandomQuote WebMethod. The method will accept two parameters, an int "numberofquotes" to return, and an optional string subject that corresponds to the Subject column in the Subjects table. We will then create a Silverlight app that consumes this webservice and binds the returned data to a DataGrid.
Create a new ASP.NET Web Application. Now add a Web Service (ASMX) to it with "Add, New Item". Now let's create a Quotation class that will mirror and contain the data returned from the database:
namespace SilverlightWeb
{
[Serializable]
public class Quotation
{
public string AuthorName { get; set; }
public string Subject { get; set; }
public string AuthorInfo { get; set; }
public string Quote { get; set; }
public Quotation( string authorName, string subject, string quote, string authorInfo)
{
this.AuthorName = authorName;
this.Subject = subject;
this.Quote = quote;
this.AuthorInfo = authorInfo;
}
public Quotation()
{
}
}
}
[WebMethod]
public List<Quotation> GetRandomQuote(int numQuotes, string subject)
{
var cnString = ConfigurationManager.ConnectionStrings["quotes"].ConnectionString;
SqlDataReader rdr = PAB.Data.Utils.SqlHelper.ExecuteReader(cnString,
"dbo.GetRandomQuote", numQuotes, subject);
var quotes = new List<Quotation>();
while (rdr.Read())
{
var AuthorName = (string)rdr["AuthorFirstName"] + " " +(string)rdr["AuthorLastname"];
var authorInfo = (string)rdr["AuthorInfo"];
var theQuote = (string)rdr["quotation"];
var subj = (string)rdr["Subject"];
var quote = new Quotation(AuthorName, subj, theQuote,authorInfo);
quotes.Add(quote);
}
rdr.Close();
return quotes;
}
Note that I'm using the handy SqlHelper class here. In later articles, we'll be using the LINQ to SQL generators. But for now, let's just keep it simple in order to get to "First Base".
Your WebService is now complete. Of course, you can look at the stored procs in the database and add additional WebMethods later. Test your WebService by right-clicking on the Service1.asmx page and selecting "View in Browser". You should see the standard ASMX webservice discovery page listing our one method. We are now ready to create our Silverlight Consumer App.
Add a new Silverlight Application to your Solution and accept the offer to add the Silverlight Test page to your existing Web Application. I called my app "quoter". Now let's add a Service Reference to our SL App. In Solution Explorer, under Service References, right-click and "Add Service Reference". You should be able to Discover the existing WebService in the solution and the ServiceReference1 will be added.
Now we will consume and display the results of the GetRandomQuote WebMethod in a Silverlight DataGrid. In your Page.xaml markup, add the following code inside the default Grid usercontrol:
<Controls:DataGrid x:Name="Grid1" AutoGenerateColumns="True">
</Controls:DataGrid>
public partial class Page : UserControl
{
public Page()
{
InitializeComponent();
var c = new WebService1SoapClient();
c.GetRandomQuoteCompleted += new EventHandler<GetRandomQuoteCompletedEventArgs>(c_GetRandomQuoteCompleted);
c.GetRandomQuoteAsync(2, "");
}
void c_GetRandomQuoteCompleted(object sender, GetRandomQuoteCompletedEventArgs e)
{
ObservableCollection<Quotation> q = e.Result;
this.Grid1.ItemsSource = q;
}
}
In the Page constructor, we create an instance of the WebService1SoapClient proxy. Then we set the GetRandomQuoteCompleted callback (you can type "+=" and hit the Tab key to stub this out automatically). Then we call the method, passing in a 2 to get 2 random quotes, and a null string for the subject, meaning "any subject".
In the callback method, we are going to get an ObservableCollection from our Generic List in the WebMethod. If your DataGrid has AutoGenerateColumns set, you can bind this directly to the DataGrid's ItemsSource property, and you are done!
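This register-a-Completed-callback-then-call-Async shape is a general pattern. The sketch below mimics it in Python with a background thread; all names are invented for illustration (the real proxy class is generated by Visual Studio):

```python
import threading

class QuoteServiceClient:
    """Illustrative mimic of the async web-service proxy pattern."""

    def __init__(self):
        self.get_random_quote_completed = None      # callback slot, set by the caller

    def get_random_quote_async(self, num_quotes, subject):
        def worker():
            # Stand-in for the web-service round trip.
            result = [('Author %d' % i, 'Quote %d' % i) for i in range(num_quotes)]
            if self.get_random_quote_completed is not None:
                self.get_random_quote_completed(result)
        threading.Thread(target=worker).start()


done = threading.Event()
received = []

client = QuoteServiceClient()

def on_completed(result):                           # analogue of c_GetRandomQuoteCompleted
    received.extend(result)
    done.set()

client.get_random_quote_completed = on_completed    # 1. hook the Completed callback
client.get_random_quote_async(2, '')                # 2. fire the async call

done.wait(timeout=5)                                # the UI thread would keep running instead
print(len(received))                                # 2
```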
The display should look something like this: | http://www.nullskull.com/a/904/silverlight-2-beta-2--doing-data-part-i.aspx | CC-MAIN-2017-30 | refinedweb | 815 | 56.96 |
Using JSFUnit to test 2 communicating webappsWim Vandenhaute Mar 21, 2008 9:35 AM
I am trying to use JSFUnit on 2 webapps simultaneously.
The scenario is like this :
The first webapp submits a login action, which redirects to a second webapp, the authentication webapp.
This one handles all authentication and then redirects back to the original webapp with the auth credentials (it's a SAML request and response that are being communicated between those two).
What would be the preferred way to use JSFUnit on the 'first' webapp?
Everything that needs to be tested requires the user to be logged in, so the nicest way would be being able to actually submit my SAML request to the authentication webapp, use JSFUnit also on the authentication webapp, and return with a SAML response to the first webapp, doing further tests on all pages.
Is such a scenario possible with JSFUnit? Given we have 2 separate WARs here which need to communicate...
Or should I look further into emulating the SAML response containing the authn assertions getting sent to a servlet in my 'first' webapp?
Regards,
W.
1. Re: Using JSFUnit to test 2 communicating webappsStan Silvert Mar 24, 2008 12:59 PM (in response to Wim Vandenhaute)
If the client uses the same jsessionid for both webapps then there will be no problem.
Otherwise, you will have a similar problem as the one described in this thread:
You might be able to use the solution I proposed in that thread, but clearly we need more research into this kind of scenario.
Stan
2. Re: Using JSFUnit to test 2 communicating webappsWim Vandenhaute Mar 26, 2008 6:23 AM (in response to Wim Vandenhaute)
Hey Stan,
I solved it by emulating my authentication webapp with a single servlet.
This servlet receives the authn request, authenticates the user and sends back the authn response. As this is al done within the same session I can continue further as a logged in user.
One thing I have not found out yet is if it is possible to run more then 1 test under the same session. I will have to look further on how to handle that.
Thanks for the quick reply,
W.
3. Re: Using JSFUnit to test 2 communicating webappsStan Silvert Mar 26, 2008 8:27 AM (in response to Wim Vandenhaute)
Great! If you have time would you be willing to share your experience on our wiki? Maybe you could add/update or link from here:
Regarding more than one test under the same session. You can certainly run more than one test:
public class JSFUnitTest extends org.apache.cactus.ServletTestCase
{
    public void testFoo() throws IOException, SAXException
    {
        // foo test
    }

    public void testBar() throws IOException, SAXException
    {
        // bar test
    }
}
What you can not do is have more than one JSFClientSession active at the same time:
public class JSFUnitTest extends org.apache.cactus.ServletTestCase
{
    public void testFoo() throws IOException, SAXException
    {
        JSFClientSession client1 = new JSFClientSession("/foo.jsf");
        JSFClientSession client2 = new JSFClientSession("/bar.jsf");
        client2.submit(); // this works
        client1.submit(); // this won't work, but it can be fixed in the future
    }
}
To see why, look at this wiki:
Stan
4. Re: Using JSFUnit to test 2 communicating webappsWim Vandenhaute Mar 26, 2008 12:38 PM (in response to Wim Vandenhaute)
Hey Stan,
Well, I don't know if my solution is actually valuable enough to add to the wiki. From a JSFUnit point of view I only redirect to a certain URL; a filter is activated that sends out a SAML request to the servlet I added.
This one picks it up, authenticates the user to the container and sends a SAML response to a second servlet, which in turn redirects to the original URL (where the filter was activated).
So basically I don't post to my second webapp but to the local servlet which handles the authentication.
But if you think it will give added value, sure, I'll add some documentation on my approach.
As for the multiple tests: first I was looking for a way to re-use my authenticated session in multiple tests, as at the moment a new session was always created. But in the test flows I will implement, I will probably have no need for that. To achieve it, I probably just have to re-use the client and server session I used during authentication, right?
Kind regards,
W.
Virtual Scrolling with Angular
Yaser Adel Mehraban
If you have followed my series on web performance, you would've stumbled upon my image optimisation post where I went through a series of steps to lazy load images on your page.
Problem
When you have many items on your page, regardless of their nature (text, images, video, etc.), they tend to slow down your page tremendously. There are ways to get around this, but you would have to put in too much effort to get it done.
When it comes to Angular, it gets even worse, since this can cause really slow scrolling, plus Angular has to dirty check each of these nodes.
But fear not, I have some good news for you: since Angular v7.0.0 there is a new feature called Virtual Scroll which allows you to display large lists of elements in a performant way by only rendering the items that fit on screen. This may seem trivial, but I am here to prove otherwise.
DOM nodes
To prove its benefits, I created an application which has a list of 1000 items, each containing a title and an image.
Then I used the virtual scroll feature to see how many DOM nodes are created at any time while I scroll, and here is the result:
As you can see from the picture, it only loads 5 items at a time no matter where I am in the list. This is especially good if you want to implement infinite scrolling behaviour on mobile 🔥.
Network calls
To make it even better, I used the site Lorem Picsum to give each item a different image (to prevent the browser from caching the image), and since only five DOM nodes are created at a time, the network calls are also made accordingly.
Remember we had to use the Intersection API to achieve this. It's very convenient, isn't it? 👌
How to do it
Ok, let's get to how to implement this. First let's create a new project using Angular CLI:
ng new virtual-scroll
With the newer versions of the CLI, it prompts you to specify whether you will need a routing module and what the default style file format should be (CSS/SCSS, etc.).
Now you will need to add the CDK package:
npm i -s @angular/cdk
Note: You will have to navigate to the virtual-scroll folder first.
Once done, open the created folder with your editor of choice #VSCode 😁, and open your app.module.ts file.
Import the ScrollingModule and ScrollDispatcher from the CDK and add them to your module:
import {
  ScrollingModule,
  ScrollDispatcher,
} from '@angular/cdk/scrolling';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    ScrollingModule,
    MatListModule,
    MatToolbarModule,
    MatCardModule,
  ],
  providers: [ScrollDispatcher],
  bootstrap: [AppComponent],
})
export class AppModule {}
Note: I am using Material Design and that's why I have more imports.
Now open your app.component.ts file (feel free to create a new component if you like, I am just hacking something together 🤷) and create an array of 1000 items, each containing a title and an image:
import { Component } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

interface IImage {
  title: string;
  src: string;
}

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss'],
})
export class AppComponent {
  images: IImage[] = Array.from(new Array(1000), (x, i) => ({
    title: `Image #${i}`,
    src: `https://picsum.photos/200/300?image=${i}`,
  }));

  observableImages = new BehaviorSubject<IImage[]>(this.images);
}
I am using a BehaviorSubject from RxJs just to simulate having an observable and loading data asynchronously from a server.
Now, in app.component.html, we need to add the cdk-virtual-scroll-viewport and give it an itemSize, which has a pixel unit.
This is basically where everything is glued together.
When all items are the same fixed size (in this case all cards have the same height), you can use the FixedSizeVirtualScrollStrategy. This can be easily added to your viewport using the itemSize directive. The advantage of this constraint is that it allows for better performance, since items do not need to be measured as they are rendered.
<cdk-virtual-scroll-viewport itemSize="200">
  <div *cdkVirtualFor="let image of observableImages | async">
    <mat-card>
      <mat-card-header>
        <div mat-card-avatar></div>
        <mat-card-title>{{ image.title }}</mat-card-title>
        <mat-card-subtitle>WoW</mat-card-subtitle>
      </mat-card-header>
      <img mat-card-image [src]="image.src" alt="Random photo" />
      <mat-card-content>
        <p>
          This is a random image selected from LoremPicsum.
        </p>
      </mat-card-content>
      <mat-card-actions>
        <button mat-button>LIKE</button>
        <button mat-button>SHARE</button>
      </mat-card-actions>
    </mat-card>
  </div>
</cdk-virtual-scroll-viewport>
All I have here is a container with 200px as itemSize. Inside I am creating a div in a loop over my list asynchronously and giving it a title and an image.
The HTML for the card is from the Angular Material examples.
And that's it. Now run ng serve in a VS Code terminal, open up a browser and navigate to localhost:4200.
Voila
And that's it. Look at how easy it is to implement a lazy loading strategy for items in a list in Angular with their new Virtual Scroll feature, with so little code required.
You can read more about this feature on the Angular Material website, and the code for this example is available on my GitHub repository.
For reasons I still don't understand, T-Mobile effectively prevented me from preordering the iPhone X. This had something to do with Apple's payment plans and T-Mobile's stipulations. It ultimately meant that if I wanted an iPhone X I'd have to go in store on November 3rd and purchase everything there. In person. No thanks. I don't really consider myself an introvert, but I'll avoid crowds if I can find a way. Unfortunately, that meant just deciding that I would stick with my 6s+ for a while longer. Which isn't that bad at all.
I called Apple out of curiosity a couple of days ago and asked what the iPhone X stock situation looked like around my area. The rep mentioned to me that Apple stores get shipments every morning. Those shipments sometimes contain iPhone X's, but they are first come, first served. She also mentioned that I could check the stock online at apple.com. If I found a store that had some in stock, I could reserve my device online then go in store and pick it up. She said the best hours to check this were between 4am and 6am. I usually get up at 4am anyways (I know…) so that's what I did the next morning.
Checking the stock of the iPhone X online went exactly how I thought it would. Nothing. I did find one store with iPhones in stock in Minnesota, though. That didn't help much considering I'm in southern California. I did notice something interesting while using Apple's tool, though.
It’s just making json requests.
I decided to build a Python module for this. I've open-sourced it here. The usage is pretty simple. To install it, it's just
pip install iphone-checker
After that, you can either use it as a module:
In [1]: from iphone_checker import check_availability

In [2]: check_availability('tmobile', '92620')
Out[2]: [] # Too accurate
or as a CLI via the checkx command.
$ checkx -z 92620
No stores near 92620 have stock. :(

$ checkx --help
Usage: checkx [OPTIONS]

Options:
  -c, --carrier TEXT  Which carrier do you need?
  -z, --zipcode TEXT  What zipcode to search in?
  --help              Show this message and exit.
This will search the 12 nearest Apple stores and return the stores that have any iPhone X model (for the given carrier) in stock.
I’m using this with the wonderful notifiers library to run constantly and alert me via Pushover when it finds one.
import os

from iphone_checker import check_availability
from notifiers import get_notifier

PUSHOVER_TOKEN = os.environ.get('PUSHOVER_TOKEN', None)
PUSHOVER_USER = os.environ.get('PUSHOVER_USER', None)


def main():
    pushover = get_notifier('pushover')
    results = check_availability('tmobile', '92620')

    for store in results:
        # list(...) so this also works on Python 3, where dict.values()
        # returns a view that cannot be indexed directly
        device_name = list(store.get('partsAvailability', {}).values())[0].get('storePickupProductTitle')
        message = '{product}\t{store}\t{phone_number}\t{url}'.format(
            product=device_name,
            store=store.get('storeName'),
            url=store.get('reservationUrl'),
            phone_number=store.get('phoneNumber')
        )
        pushover.notify(
            user=PUSHOVER_USER,
            token=PUSHOVER_TOKEN,
            title='iPhone X Located',
            message=message
        )


if __name__ == '__main__':
    main()
I’m just running this with:
$ watch -n 600 python checkx_task.py
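watch re-runs the command on an interval; on a machine without watch (e.g. Windows), a tiny Python loop can stand in for it. This sketch is mine, not from the post, and the iterations parameter exists only to keep the loop testable:

```python
import time

def run_every(seconds, fn, iterations=None):
    """Call fn repeatedly, sleeping `seconds` between calls.

    iterations=None loops forever, much like watch(1) does.
    """
    count = 0
    while iterations is None or count < iterations:
        fn()
        count += 1
        time.sleep(seconds)

ticks = []
run_every(0, lambda: ticks.append('tick'), iterations=3)
assert ticks == ['tick', 'tick', 'tick']
```

Calling run_every(600, check_and_notify) would then mirror the watch command above.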
It has actually worked, too! I got alerted that a store near me had an iPhone in stock, but by the time I called it was sold. :(
I tend to just jump into projects head first and start writing without looking to see if it’s been invented already. That was the case here and when I was almost done with this I found this CLI written in node - carlosesilva/iphone-x-availability-node-cli. I want to thank the author of that library because without it, finding all of the part numbers would have been tedious.
I hope this helps!
Interestingly enough, I was in a conference call with a colleague of mine, and during the call I got a notification on my phone saying that an iPhone was located about 10 miles away, so I reserved it.
At lunch, I picked up the phone and drove home. I'm still very impressed with how well my script worked. Without it, I would have had to wait 2-3 weeks. Not the end of the world but thankful that I didn't have to wait. :)
Inner Classes Change Access Rights Of Parent Members?!?

access inner classes variables & methods from outer classes
lonelyplanet999, Nov 13, 2003, in forum: Java
Replies: 1    Views: 2,319
VisionSet, Nov 13, 2003
Debate: Inner classes or public classes with package access?
Christian Bongiorno, Aug 27, 2004, in forum: Java
Replies: 5    Views: 585
Chris Uppal, Aug 30, 2004
access to private members from inner classes
Wolfgang Jeltsch, Aug 14, 2003, in forum: C++
Replies: 7    Views: 447
Josephine Schafer, Aug 15, 2003
inner classes in python as inner classes in Java
Carlo v. Dango, Oct 15, 2003, in forum: Python
Replies: 14    Views: 1,143
Alex Martelli, Oct 19, 2003
failing to instantiate an inner class because of order of inner classes
Pyenos, Dec 27, 2006, in forum: Python
Replies: 2    Views: 434
Pyenos, Dec 27, 2006
> Could.)

Most likely this can't be seen on an elisp profile.
To get a C profile, add "-pg -DPROFILING=1" to your CFLAGS.
I use the patch below (under GNU/Linux) to provide a `moncontrol' elisp
file that allows me to start/stop C profiling so I can start it once I
get into a "strangely slow" state.

        Stefan

--- orig/src/emacs.c
+++ mod/src/emacs.c
@@ -1769,7 +1777,7 @@
 	 defined on all systems now.  */
       monstartup (safe_bcopy, &etext);
     }
-  else
+  /* else */
     moncontrol (0);
 #endif
 #endif
@@ -1791,6 +1799,14 @@
   return 0;
 }

+DEFUN ("moncontrol", Fmoncontrol, Smoncontrol, 1, 1, 0,
+       /* doc: toggle profiling.  */)
+     (arg)
+     Lisp_Object arg;
+{
+  return moncontrol (!NILP (arg)) ? Qt : Qnil;
+}
+
 /* Sort the args so we can find the most important ones
    at the beginning of argv.  */
@@ -2450,6 +2466,7 @@
   defsubr (&Sinvocation_name);
   defsubr (&Sinvocation_directory);
+  defsubr (&Smoncontrol);

   DEFVAR_LISP ("command-line-args", &Vcommand_line_args,
     doc: /* Args passed by shell to Emacs, as a list of strings.
On Fri, Dec 17, 2010 at 1:17 PM, Antoine Pitrou <report@bugs.python.org> wrote:
..
>> A better question is why datetime.utcfromtimestamp(s) exists given
>> that it is actually longer than equivalent EPOCH + timedelta(0, s)?
>
> ??? EPOCH is not even a constant in the datetime module.
>
No, and it does not belong there. A higher level library that uses
seconds since epoch for interchange may define it (and make a decision
whether it should be a naive datetime(1970, 1, 1) or datetime(1970, 1,
1, tzinfo=timezone.utc)).
>".
>
I don't see anything obvious about the choice between
utcfromtimestamp(s), fromtimestamp(s) and utcfromtimestamp(s,
timezone.utc).
datetime(1970, 1, 1) + timedelta(seconds=s)
is obvious, self-contained, short and does not require any knowledge
other than elementary school arithmetic to understand. Compared to
this, "utcfromtimestamp" is a monstrosity that suggests that something
non-trivial, such as UTC leap seconds, is being taken care of.
>> > And returning a (seconds, microseconds) tuple does retain the precision.
>> >
>>
>> It does, but it does not help much those who want a float - they would
>> still need another line of code.
>
> Yes, but a very obvious one at least.
>
Let's see:
def floattimestamp(t):
    s, us = t.totimestamp()
    return s + us * 1e-6
and
def floattimestamp(t):
    s, us = t.totimestamp()
    return s + us / 1000000
which one is *obviously* correct? Are they *obviously* equivalent?
Note that when timedelta.total_seconds() was first committed, it
contained a numerical bug. See issue8644.
>>.
>
Sure, but if the goal is to implement json serialization of datetime
objects, maybe stdlib should provide a high-level tool for *that* job?
Using float representation of datetime is probably the worst option
for json: it is non-standard, may either lose information or
introduce spurious differences, and is not human-readable.
In any case, you ignore the hard question about totimestamp():
fromtimestamp() is not invertible in most real life timezones. If you
have a solution that does not restrict totimestamp() to UTC, I would
like to hear it. Otherwise, I don't see any problem with (t -
datetime(1970, 1, 1)).total_seconds() expression. Maybe we can add
this recipe to utcfromtimestamp() documentation.
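To make the arithmetic in this thread concrete, here is a small sketch (Python 3 semantics, where / is true division; under Python 2 the us / 1000000 variant would silently truncate the microseconds, which is exactly the kind of bug being discussed):

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)

def timestamp(t):
    """Seconds since the epoch as a float, by plain arithmetic."""
    d = t - EPOCH
    return d.days * 86400 + d.seconds + d.microseconds / 1000000

t = datetime(2009, 2, 13, 23, 31, 30, 250000)
s = timestamp(t)
assert s == 1234567890.25

# EPOCH + timedelta inverts it exactly; no utcfromtimestamp needed.
assert EPOCH + timedelta(seconds=s) == t
```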
There are n piles of stones. On each turn a player takes at least one stone from a single pile, and may split that pile's remaining stones into two piles. The player who takes the last stone loses.
Look closely and it plays out exactly like the Nim game ... a bare Anti-SG (misère Nim) problem.
#include <iostream>
#include <cstdio>
#include <cstring>
#include <algorithm>
#include <cmath>
using namespace std;
typedef long long ll;
const int N=1e6;
inline int read(){
char c=getchar();int x=0,f=1;
while(c<'0'||c>'9'){if(c=='-')f=-1;c=getchar();}
while(c>='0'&&c<='9'){x=x*10+c-'0';c=getchar();}
return x*f;
}
int n,a;
int main(){
//freopen("in","r",stdin);
while(scanf("%d",&n)!=EOF){
int sg=0,flag=0;
for(int i=1;i<=n;i++) a=read(),sg^=a,flag|=(a>1);
if( (sg==0 && !flag) || (sg!=0 && flag) ) puts("Yes");
else puts("No");
}
}
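The blog reduces "Be the Winner" to plain misère (anti-)Nim: the first player wins iff the xor of pile sizes is non-zero and some pile exceeds 1, or the xor is zero and no pile exceeds 1. That misère-Nim rule is easy to sanity-check by brute force (this sketch is mine, not from the blog):

```python
import itertools
from functools import lru_cache

# Misère Nim: players alternately remove stones from one pile;
# whoever takes the LAST stone loses.
@lru_cache(maxsize=None)
def first_player_wins(piles):
    piles = tuple(sorted(p for p in piles if p > 0))
    if not piles:
        return True  # the opponent just took the last stone, so they lost
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            rest = piles[:i] + (p - take,) + piles[i + 1:]
            if not first_player_wins(rest):
                return True  # found a move that leaves the opponent losing
    return False

def formula(piles):
    # sg = xor of pile sizes, flag = "some pile is larger than 1",
    # mirroring the variables in the C++ solution above
    sg = 0
    flag = False
    for p in piles:
        sg ^= p
        flag |= p > 1
    return (sg == 0 and not flag) or (sg != 0 and flag)

# Exhaustively compare the formula against game-tree search.
for n in range(1, 4):
    for piles in itertools.product(range(5), repeat=n):
        assert first_player_wins(piles) == formula(piles)
print("formula verified")  # prints: formula verified
```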
PTHREAD_CREATE(3)          BSD Programmer's Manual          PTHREAD_CREATE(3)

NAME
     pthread_create - create a new thread

SYNOPSIS
     #include <pthread.h>

     int
     pthread_create(pthread_t *thread, const pthread_attr_t *attr,
             void *(*start_routine)(void *), void *arg);

DESCRIPTION
     The pthread_create() function is used to create a new thread, with
     attributes specified by attr, within a process.  If attr is NULL, the
     default attributes are used.  If the attributes specified by attr are
     modified later, the thread's attributes are not affected.  Upon
     successful completion pthread_create() will store the ID of the created
     thread in the location specified by thread.

     The thread is created executing start_routine with arg as its sole
     argument.  If the start_routine returns, the effect is as if there was
     an implicit call to pthread_exit() using the return value of
     start_routine as the exit status.  Note that the thread in which main()
     was originally invoked differs from this.  When it returns from main(),
     the effect is as if there was an implicit call to exit() using the
     return value of main() as the exit status.

     The signal state of the new thread is initialized as:

           •   The signal mask is inherited from the creating thread.

           •   The set of signals pending for the new thread is empty.

RETURN VALUES
     If successful, the pthread_create() function will return zero.
     Otherwise an error number will be returned to indicate the error.

ERRORS
     pthread_create() will fail if:

     [EAGAIN]      The system lacked the necessary resources to create
                   another thread, or the system-imposed limit on the total
                   number of threads in a process [PTHREAD_THREADS_MAX]
                   would be exceeded.

     [EINVAL]      The value specified by attr is invalid.

SEE ALSO
     fork(2), pthread_attr_init(3), pthread_attr_setdetachstate(3),
     pthread_attr_setstackaddr(3), pthread_attr_setstacksize(3),
     pthread_cleanup_pop(3), pthread_cleanup_push(3), pthread_exit(3),
     pthread_join(3)

STANDARDS
     pthread_create() conforms to ISO/IEC 9945-1:1996 ("POSIX").

MirOS BSD #10-current                                                April 4,
C++ Program to Display Factors of a Number
In this post you will learn how to Display Factors of a Number
This lesson will teach you how to display the factors of a number with a for loop, mathematical operations and a decision-making statement, using the C++ language. Let's look at the source code below.
How to Display Factors of a Number?
Source Code
#include <iostream>
using namespace std;

int main()
{
    int n, i;
    cin >> n;
    cout << "Enter a positive integer: " << n << endl;
    cout << "\nFactors of " << n << " are: ";
    for (i = 1; i <= n; ++i)
    {
        if (n % i == 0)
            cout << i << " ";
    }
    return 0;
}
Input
10
Output
Enter a positive integer: 10

Factors of 10 are: 1 2 5 10
The statements #include <iostream>, using namespace std, and int main are the main elements that support the function of the source code. Now we can look into the working and layout of the code.
- Factors of a number are the integers that divide the original number exactly. For example, 8 is a factor of 24, since it divides 24 into three equal parts: 24/8 = 3. Here we are building a program to find the factors of a number.
- First, declare the variables n and i as integers, collect the number from the user and store it in n using cin >>, and display the value using cout << with the extraction and insertion operators '>>' and '<<'.
- Use an output statement to display the answer.
- Use a for loop with the condition (i = 1; i <= n; ++i); its body is an if statement whose condition checks whether the remainder of n divided by i equals 0, with an output statement that runs when the condition is true.
- The if statement checks whether i is a factor of the number, since a factor leaves a remainder of 0; the for loop increments the value of i and repeats the loop body until its condition is no longer satisfied.
- The output statement is displayed.
DIP50
Contents
- 1 Abstract
- 2 Example
- 3 Rationale
- 4 Formal Definition
- 5 Bonus
- 6 Use cases
Abstract
The basic concept of AST (Abstract Syntax Tree) macros, or just syntax macros, is fairly simple. A macro is just like any other function or method except that it will only run at compile time. When a macro is called, instead of evaluating its arguments and then calling the function, an AST is created for each argument passed to the function. The macro will then return a new AST which is injected and type checked at the call site. This means that the call to the macro will be replaced with the AST returned by the macro.
Example
macro myAssert (Context context, Ast!(bool) val, Ast!(string) str = null)
{
    auto message = str ? "Assertion failure: " ~ str.eval : val.toString();
    auto msgExpr = literal(constant(message));

    return <[
        if (!$val)
            throw new AssertError($msgExpr);
    ]>;
}

void main ()
{
    myAssert(1 + 2 == 4);
}
Compiling and running the above program would result in the following assert error:
core.exception.AssertError@main(13): Assertion failure: 1 + 2 == 4
The interesting part here is that the assert message contains the actual expression that failed.
Rationale
AST macros can be used to extend the language with new semantics without changing the actual language. Instead of adding new features to the language, AST macros can be a general solution for implementing language changes in library code. Many existing language features could have been implemented with AST macros, like scope, foreach and similar language constructs.
Formal Definition
Declaring a Macro
A macro is always declared with the macro keyword followed by its name and a parameter list. The first parameter of a macro is always of the type Context; therefore the parameter list cannot be empty. The rest of the parameters are always of the type Ast. A macro always needs to return either void or a value of the type Ast.
macro foo (Context context, Ast!(string) str)
{
    return str;
}
Calling a Macro
A macro is called just like any other function. The first parameter, which is of the type Context, is passed implicitly by the compiler. The rest of the arguments are passed like in a regular function call. You won't pass arguments of the type Ast, though; regular values are passed instead, and the compiler creates an AST of each argument and passes them as Ast arguments to the macro.
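To make the call mechanics concrete, here is a sketch in the proposed syntax (the trace macro is invented for illustration; literal and constant are the helper functions used by the myAssert example earlier):

```d
macro trace (Context context, Ast!(int) expr)
{
    // expr is the AST of the argument, not its evaluated value
    auto name = literal(constant(expr.toString()));
    return <[ writeln($name, " = ", $expr); ]>;
}

void main ()
{
    int x = 3;
    trace(x + 4);
    // the call site is replaced with the returned AST:
    // writeln("x + 4", " = ", x + 4);
}
```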
The Context Parameter
The first parameter of a macro declaration is always of the type Context. This parameter is mostly passed implicitly by the compiler. It's also possible to pass the context parameter manually. This is useful when helper functions for a macro need to retain the context given to the original macro.
The context parameter contains information about the surrounding context where the macro was called. This can be information like the surrounding method and class from which the macro was called.
Bonus
This context parameter also contains information about the complete compilation environment, like:
- The arguments used when the compiler was invoked
- Functions for emitting messages of various verbosity levels, like error, warning and info
- Functions for querying various types of settings/options, like which versions are defined, whether "debug" or "release" is defined, and so on
- In general, providing as much as possible of what the compiler knows about the compile run
- The context should have an associative array with references to all scoped variables at initiation point.
This has the benefit of enabling a macro to check variables that are "passed" to it as well as modify them. The this keyword value should be available to the macro if it is invoked from either a class or a struct.
Semantics
Since macros can only be called at compile time the compiler will strip out all macros before the code generating phase.
Quasi-Quoting
Quasi-quoting is basically a form of syntax sugar for creating syntax trees. It can be thought of as AST literals. In all examples in this text the following syntax is used for quasi-quoting:
<[ writeln("asd"); ]>
Splicing
Splicing is a syntax used for dynamically inserting a piece of an AST in quasi-quotes. In all examples in this text a dollar sign, $, is used for splicing.
<[ writeln($expr); ]>
The syntax for quasi-quoting and splicing is just an abstraction of what the syntax would actually look like. Regardless of what syntax is used there's always the option to manually create syntax trees using an API.
The AST Macro
The ast macro is an option for implementing quasi-quoting in a library macro. The ast macro takes an arbitrary expression and transforms it into an AST. It also supports splicing. The ast macro has a couple of overloads, and their declarations look as follows:
macro ast (T) (Context context, Ast!(T) expr)
{
    // ...
}

macro ast (Context context, Ast!(void delegate ()) block)
{
    // ...
}
The first overload takes an arbitrary expression and converts it into an AST. The second overload takes a delegate; this makes it possible to convert a whole block of code to an AST.
Bonus
Calling a Macro
Macros are extended to be callable from anywhere it's possible to use a mixin.
Statement Macros
A statement macro is a macro that takes a Statement as its last parameter. The difference compared to regular macros is the calling syntax: statement macros are called with the same syntax used for statements, like the example below:
macro foo (Context context, Statement block)
{
    return block;
}

macro bar (Context context, Ast!(int) arg, Statement block)
{
    return block;
}

void main ()
{
    foo
    {
        writeln("foo");
        writeln("foo again");
    }

    foo writeln("foo2");

    bar(3)
    {
        writeln("bar");
        writeln("bar again");
    }

    bar(3) writeln("bar2");
}
Just like with many of the built-in statements, the braces are optional when there's only a single expression in the statement. Since the statement is always the last parameter in the macro declaration and is always passed outside the regular argument list, it's legal to have parameters with default arguments or a variadic parameter list before the statement parameter.
macro foo (Context context, Ast!(string)[] arg ..., Statement block)
{
    return block;
}

macro bar (Context context, Ast!(string) fmt = null, Statement block)
{
    return block;
}
Declaration macros
A declaration macro is a macro that acts like a user defined attribute. It can be applied to any declaration. When a declaration macro is used, the macro is called and the AST of the declaration is passed as the last parameter to the macro. The declaration is replaced with whatever syntax tree the macro returns.
A declaration macro always takes a Declaration as its last parameter. The same rules about default arguments and variadic parameters that apply to statement macros apply to declaration macros as well.
macro attr (Context context, Declaration decl)
{
    auto attrName = decl.name;
    auto type = decl.type;

    return <[
        private $decl.type _$decl.name;

        $decl.type $decl.name ()
        {
            return _$decl.name;
        }

        $decl.type $decl.name ($decl.type value)
        {
            return _$decl.name = value;
        }
    ]>;
}

class Foo
{
    @attr int bar;
}
Use cases
Examples of usage of AST macros that would be useful for extending the language.
Linq
Linq is a .NET library that incorporates searching and manipulation of data. A C# example is:
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] array = { 1, 2, 3, 6, 7, 8 };

        var elements = from element in array
                       where element > 5
                       select element;

        foreach (var element in elements)
        {
        }
    }
}
This could be implemented by an end user as:
import linq;
import std.stdio;

void main()
{
    int[] array = [1, 2, 3, 6, 7, 8];
    int[] data;

    query
    {
        from element in array
        where element > 5
        add element to data
    }
}
That code would be converted to:
import linq;

void main()
{
    int[] array = [1, 2, 3, 6, 7, 8];
    int[] data;

    foreach (element; array)
    {
        if (element > 5)
            data ~= element;
    }
}
C#'s ability to specify the variable to assign to is not required, at least for this example. However, it should be possible to specify it, e.g.:
query
{
    int data
    from element in array
    where element > 5
    select element
}
This would be closer to C#'s syntax.
That code would be converted to:
import linq;

void main()
{
    int[] array = [1, 2, 3, 6, 7, 8];
    int[] data;

    foreach (element; array)
    {
        if (element > 5)
            data ~= element;
    }
}
As an improvement, it is suggested that macros be able to inspect the variables currently declared in scope. This would make it possible to check whether a referenced variable (e.g. the array) is actually defined and, if not, to give a good compiler error. It would also make it possible to infer the element type from the array that is given, instead of requiring it to be specified explicitly.
Reflection
class Person
{
    macro where (Context context, Statement statement)
    {
        // ...
    }
}

auto foo = "John";
auto result = Person.where(e => e.name == foo);

// is replaced by

auto foo = "John";
auto result = Person.query("select * from person where person.name = " ~ sqlQuote(foo) ~ ";");
Calculation
Consider a simple macro example that adds two numbers together and returns the result: the requested values must be available at expansion time. Using scoped variables, accessible through the context, this is possible.
func(1, 2); // example args

void func(int i, int i2)
{
    foo
    {
        output i, i2
    }
}

macro foo (Context context, Ast!(string) str)
{
    string outputVariable = // get the output variable through str
    string name1 = // get i through str
    string name2 = // get i2 through str

    return "auto " ~ outputVariable ~ " = " ~
        text(context.scopeVariables!int(name1) + context.scopeVariables!int(name2)) ~ ";";
}
When unrolled it will become:
void func(int i, int i2)
{
    auto output = 3;
}
This essentially emulates pure functions. As stated in the Linq example, it would also enable checking of variables and types as required.
C++ Namespaces (issue 7961)
Bugzilla issue 7961 talks about adding support for C++ namespaces.
This should be possible to solve with library code, especially since we already have
pragma(mangle):
What we have today, declaration of a C++ function, without namespace:
extern (C++) void x ();
Namespaces in C++ are all about the mangling of symbols. Since we already have pragma(mangle), one could think that it would be possible to solve this with library code. Unfortunately this causes some problems:
string namespace (string namespace)
{
    // mangle the namespace ...
}

pragma(mangle, namespace("foo::bar"))
extern (C++) void x ();
In the above example the namespace is properly mangled, but we're missing the mangling of "x". That's not something we want to do manually. Next try:
string namespace (string namespace, alias func) ()
{
    // mangle the namespace ...
}

pragma(mangle, namespace!("foo::bar", x))
extern (C++) void x ();
The above doesn't work either because of forward references of "x". Next try:
string namespace (string namespace, T, string name) ()
{
    // mangle the namespace ...
}

pragma(mangle, namespace!("foo::bar", void function (), "x"))
extern (C++) void x ();
The above would most likely work. But now we're duplicating the signature and the name of "x". This is error prone and will lead to hard-to-find bugs or irritating linker errors. Not something we want to do.
Instead we can solve it with AST macros:
string mangle_cpp (string namespace, T, string name) ()
{
    // mangle the declaration ...
}

macro namespace (Context context, Ast!(string) namespace, Declaration declaration)
{
    auto name = declaration.name;
    auto type = declaration.type;
    auto mangledName = mangle_cpp(namespace.eval(), type.eval(), name.eval());
    auto mangledNameAst = literal(constant(mangledName));

    return <[
        pragma(mangle, $mangledNameAst)
        $declaration;
    ]>;
}
Usage:
@namespace("foo::bar") extern (C++) void x ();
This can also be used to look more like a real namespace in C++:
@namespace("foo::bar") extern (C++) { void x (); void y (); }
Attribute inference
Currently, attributes are inferred automatically for template functions. This shows an example of automatically inferring attributes for a non-template function based on the attributes of another symbol [1].
macro inferAttributes (Context context, Ast!(Symbol) symbol, Declaration decl)
{
    foreach (attr ; symbol.attributes)
        decl.attributes ~= attr;

    return decl;
}
Usage:
class Foo (T)
{
    @inferAttributes(T.foo)
    void thisIsSoPolymorphic () { }
}
How to port an awk script to Python | Opensource.com
Porting an awk script to Python is more about code style than transliteration.
Scripts are potent ways to solve a problem repeatedly, and awk is an excellent language for writing them. It excels at easy text processing in particular, and it can bring you through some complicated rewriting of config files or reformatting file names in a directory.
When to move from awk to Python
At some point, however, awk's limitations start to show. It has no real concept of breaking files into modules, it lacks quality error reporting, and it's missing other things that are now considered fundamentals of how a language works. When these rich features of a programming language are helpful to maintain a critical script, porting becomes a good option.
My favorite modern programming language that is perfect for porting awk is Python.
Standard awk to Python functionality
The following Python functionality is useful to remember:
with open(some_file_name) as fpin:
    for line in fpin:
        pass # do something with line
This code will loop through a file line-by-line and process the lines.
If you want to access a line number (equivalent to awk's NR), you can use the following code:
with open(some_file_name) as fpin:
    for nr, line in enumerate(fpin):
        pass # do something with line
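One detail worth remembering: `enumerate` counts from 0 by default, while awk's NR starts at 1; passing a start value of 1 lines the two up. A small sketch, using `io.StringIO` to stand in for an open file:

```python
import io

# A fake three-line "file"
fake_file = io.StringIO("alpha\nbeta\ngamma\n")

# enumerate(..., 1) makes the counter match awk's 1-based NR
numbered = [(nr, line.rstrip("\n")) for nr, line in enumerate(fake_file, 1)]
print(numbered)  # → [(1, 'alpha'), (2, 'beta'), (3, 'gamma')]
```

Without the second argument, the first line would be numbered 0.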
awk-like behavior over multiple files in Python
If you need to be able to iterate through any number of files while keeping a persistent count of the number of lines (like awk's FNR), this loop can do it:
def awk_like_lines(list_of_file_names):
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from fpin
    yield from enumerate(_all_lines())
This syntax uses Python's generators and yield from to build an iterator that loops through all lines and keeps a persistent count.
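To see the persistent numbering in action, here is a quick sketch that runs the loop over two throwaway files (the file names come from `tempfile` and are incidental; the function body is the one from above):

```python
import tempfile

def awk_like_lines(list_of_file_names):
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from fpin
    yield from enumerate(_all_lines())

# Create two temporary files with two lines each
names = []
for chunk in (["a\n", "b\n"], ["c\n", "d\n"]):
    tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt")
    tmp.writelines(chunk)
    tmp.close()
    names.append(tmp.name)

# The counter keeps incrementing across the file boundary, like awk's NR
result = [(nr, line.strip()) for nr, line in awk_like_lines(names)]
print(result)  # → [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')]
```

Note the numbering does not reset between the first and second file, which is exactly the NR-like behavior.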
If you need the equivalent of both FNR and NR, here is a more sophisticated loop:
def awk_like_lines(list_of_file_names):
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from enumerate(fpin)
    for nr, (fnr, line) in enumerate(_all_lines()):
        yield nr, fnr, line
More complex awk functionality with FNR, NR, and line
The question remains if you need all three: FNR, NR, and line. If you really do, using a three-tuple where two of the items are numbers can lead to confusion. Named parameters can make this code easier to read, so it's better to use a dataclass:
import dataclasses

@dataclasses.dataclass(frozen=True)
class AwkLikeLine:
    content: str
    fnr: int
    nr: int

def awk_like_lines(list_of_file_names):
    def _all_lines():
        for filename in list_of_file_names:
            with open(filename) as fpin:
                yield from enumerate(fpin)
    for nr, (fnr, line) in enumerate(_all_lines()):
        yield AwkLikeLine(nr=nr, fnr=fnr, content=line)
You might wonder, why not start with this approach? The reason to start elsewhere is that this is almost always too complicated. If your goal is to make a generic library that makes porting awk to Python easier, then consider doing so. But writing a loop that gets you exactly what you need for a specific case is usually easier to do and easier to understand (and thus maintain).
Understanding awk fields
Once you have a string that corresponds to a line, if you are converting an awk program, you often want to break it up into fields. Python has several ways of doing that. This will return a list of strings, splitting the line on any number of consecutive whitespaces:
line.split()
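One subtlety: `split()` with no argument collapses runs of whitespace and ignores leading and trailing whitespace, which matches awk's default field splitting; `split(" ")` does not, and keeps empty fields:

```python
line = "  alpha   beta\tgamma\n"

print(line.split())     # → ['alpha', 'beta', 'gamma']  (awk-like)
print(line.split(" "))  # → ['', '', 'alpha', '', '', 'beta\tgamma\n']  (empty fields kept)
```

So when porting awk's default FS behavior, the no-argument form is usually the one you want.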
If another field separator is needed, something like this will split the line by :; the rstrip method is needed to remove the last newline:
line.rstrip("\n").split(":")
After doing the following, the list parts will have the broken-up string:
parts = line.rstrip("\n").split(":")
This split is good for choosing what to do with the parameters, but we are in an off-by-one error scenario. Now parts[0] will correspond to awk's $1, parts[1] will correspond to awk's $2, etc. This off-by-one is because awk starts counting the "fields" from 1, while Python counts from 0. In awk, $0 is the whole line, equivalent to line.rstrip("\n"), and awk's NF (number of fields) is more easily retrieved as len(parts).
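To make the off-by-one concrete, here is the mapping for an /etc/passwd-style line (the sample line is made up):

```python
line = "alice:x:1001:1001:Alice:/home/alice:/bin/bash\n"
parts = line.rstrip("\n").split(":")

print(parts[0])    # → alice        (awk's $1)
print(parts[6])    # → /bin/bash    (awk's $7)
print(len(parts))  # → 7            (awk's NF)
```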
Porting awk fields in Python
As an example, let's convert the one-liner from "How to remove duplicate lines from files with awk" to Python.
The original in awk is:
awk '!visited[$0]++' your_file > deduplicated_file
An "authentic" Python conversion would be:
import collections
import sys

visited = collections.defaultdict(int)
for line in open("your_file"):
    did_visit = visited[line]
    visited[line] += 1
    if not did_visit:
        sys.stdout.write(line)
However, Python has more data structures than awk. Instead of counting visits (which we do not use, except to know whether we saw a line), why not record the visited lines?
import sys

visited = set()
for line in open("your_file"):
    if line in visited:
        continue
    visited.add(line)
    sys.stdout.write(line)
Making Pythonic awk code
The Python community advocates for writing Pythonic code, which means it follows a commonly agreed-upon code style. An even more Pythonic approach will separate the concerns of uniqueness and input/output. This change would make it easier to unit test your code:
def unique_generator(things):
    visited = set()
    for thing in things:
        if thing in visited:
            continue
        visited.add(thing)
        yield thing

import sys

for line in unique_generator(open("your_file")):
    sys.stdout.write(line)
Putting all logic away from the input/output code leads to better separation of concerns and more usability and testability of code.
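The testability claim is easy to demonstrate: because `unique_generator` takes any iterable, no files are needed to exercise it. The function below is the one from the article; the assertions are a hypothetical micro-test:

```python
def unique_generator(things):
    visited = set()
    for thing in things:
        if thing in visited:
            continue
        visited.add(thing)
        yield thing

# No files involved: any iterable works, which is what makes this testable
deduped = list(unique_generator(["a", "b", "a", "c", "b"]))
print(deduped)  # → ['a', 'b', 'c']
```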
Conclusion: Python can be a good choice
Porting an awk script to Python is often more a matter of reimplementing the core requirements while thinking about proper Pythonic code style than a slavish transliteration of condition/action by condition/action. Take the original context into account and produce a quality Python solution. While there are times when a Bash one-liner with awk can get the job done, Python coding is a path toward more easily maintainable code.
Also, if you're writing awk scripts, I am confident you can learn Python as well! Let me know if you have any questions in the comments.
6 Comments
All good so far.
Awk has that pattern-action way of coding that also Maps to Python well.
Excellent article. I think there's a really strong use case for Python over Awk when the input is spread across multiple files. Correlating between input sources is non-trivial in Awk.
Call me old fashioned, but in my mind the best replacement for awk is still Perl. But I get it, nobody learns Perl anymore these days.
I'm still thinking about learning Perl.
What's the consensus, these days? Perl 5 or 6?
Thanks Moshe, nice article.
I've been using awk for a while and agree with the benefits of porting a script to Python.
Would be interesting to see some performance comparison.
Hi, I think you should also mention "fileinput" from the Python standard library. It counts line numbers for you, it can automatically get filenames from the command line arguments if you want, and it can even do Perl-style in-place editing of text files (with optional back-ups).
with fileinput.input() as f:
    for line in f:
        parts = line.rstrip("\n").split()
        if parts:
            print(parts[0])
Hanumantha Rao Jun 04, 2007
In the OOP world, polymorphism is the ability to inherit the methods and properties of a base class into a derived class and redefine them there.

An example of polymorphism is:

public class Address_Details
{
    // here comes the default constructor

    // Now I define one method
    public virtual void add_name()
    {
    }
}
There are two kinds: 1. function overloading; 2. runtime polymorphism, using an interface reference. For example, an interface I declares Method(), and classes A and B implement it: given I ref; A objA = new A(); B objB = new B();, the calls ref = objA; ref.Method(); and ref = objB; ref.Method(); dispatch at runtime to the implementation in A or B respectively.
Polymorphism means poly = many, morphism = form. Polymorphism is achieved by using function overloading.
Polymorphism means the ability to take more than one form. VB.NET and C# achieve polymorphism through inheritance. When a generalized class is inherited into a specialized class, the specialized class includes the behavior of the generalized class. But if we need a different kind of implementation than the one provided by the top class, we can implement it in our specialized class without making any changes to the name. Polymorphism comes into action with dynamic binding, so it's important to identify which object has which behavior.
Polymorphism refers to the ability to assume different forms. In OOP, it indicates a language’s ability to handle objects differently based on their runtime type. When objects communicate with one another, we say that they send and receive messages. The advantage of polymorphism is that the sender of a message doesn’t need to know which class the receiver is a member of. It can be any arbitrary class. The sending object only needs to be aware that the receiving object can perform a particular behavior.
A classic example of polymorphism can be demonstrated with geometric shapes. Suppose we have a Triangle, a Square, and a Circle. Each class is a Shape and each has a method named Draw that is responsible for rendering the Shape to the screen. With polymorphism, you can write a method that takes a Shape object or an array of Shape objects as a parameter (as opposed to a specific kind of Shape). We can pass Triangles, Circles, and Squares to these methods without any problems, because referring to a class through its parent is perfectly legal. In this instance, the receiver is only aware that it is getting a Shape that has a method named Draw, but it is ignorant of the specific kind of Shape. If the Shape were a Triangle, then Triangle’s version of Draw would be called. If it were a Square, then Square’s version would be called, and so on.
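The shapes example above can be sketched in Python (the class and method names follow the description; this is an illustration of the idea, not C# code):

```python
class Shape:
    def draw(self):
        raise NotImplementedError

class Triangle(Shape):
    def draw(self):
        return "drawing a triangle"

class Circle(Shape):
    def draw(self):
        return "drawing a circle"

def render_all(shapes):
    # The caller only knows it has Shapes; dynamic dispatch picks the right draw()
    return [shape.draw() for shape in shapes]

print(render_all([Triangle(), Circle()]))
# → ['drawing a triangle', 'drawing a circle']
```

`render_all` never checks the concrete type of any shape, which is the point of the example.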
// Now I define a new class that inherits the Address_Details class
public class New_Address_Details : Address_Details
{
    public override void add_name()
    {
        Console.WriteLine("This is an example of polymorphism");
    }
}
Poly means many and morph means form. Thus, polymorphism refers to being able to use many forms of a type without regard to the details.
Polymorphism -> one name, many forms.
There are 2 types: 1. compile time, 2. run time.

1. Compile time (also called overloading) has 3 types: 1. constructor overloading (the same constructor name with a different number of arguments, different data types, or both); 2. function overloading (the same function name with a different number of arguments, different data types, or both); 3. operator overloading, for example:

String s = "James";
s = s + "Bond";

Runtime polymorphism is achieved by overriding a parent class method. This is called dynamic method dispatch.

regards
Nizam
Multiple uses of a single form is known as polymorphism...
If I remember correctly, it is something like when you have the same method names but different implementations.
For example, in the base class there is a method called PrintMessage(). In the derived class there is the same method name.
The way that this can be achieved without errors is to have virtual in the base class and override in the derived class for the method names.
I think this should give you a basic understanding of polymorphism. I am not too sure though.
©2014
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/Interviews/answer/587/what-is-polymorphism-how-does-VB-NetC-Sharp-achieve-polymorphis | CC-MAIN-2014-52 | refinedweb | 683 | 63.9 |
Scala: An attempt to eradicate the if
In a previous post I included a code sample where we were formatting a page range differently depending on whether the start and end pages were the same.
The code looked like this:
trait PageAware {
  def firstPage: String
  def lastPage: String

  def pageRange =
    if (firstPage == lastPage) "page %s".format(firstPage)
    else "pages %s-%s".format(firstPage, lastPage)
}
Looking at the if statement on the last line we were curious whether it would be possible to get rid of it and replace it with something else.
In Java we could use the ternary operator:
public class PageAware {
    public String pageRange() {
        return firstPage.equals(lastPage)
            ? String.format("page %s", firstPage)
            : String.format("pages %s-%s", firstPage, lastPage);
    }
}
The if/else statement in Scala is supposed to replace that as far as I understand but I think the ternary operator looks neater.
Beyond defining that we played around with some potential alternatives.
We could use a Map to store the true and false values as keys:
trait PageAware {
  def pageRange =
    Map(true -> "page %s", false -> "pages %s-%s")(firstPage == lastPage)
      .format(firstPage, lastPage)
}
Uday came up with a pattern matching solution which looks like this:
trait PageAware {
  def pageRange = ((firstPage, lastPage) match {
    case (`firstPage`, `firstPage`) => "page %s"
    case _ => "pages %s-%s"
  }).format(firstPage, lastPage)
}
Unfortunately both of these solutions are significantly less readable than the if/else one so it seems like this is one of the situations where it doesn’t actually make sense to get rid of it. | https://markhneedham.com/blog/2011/07/12/scala-an-attempt-to-eradicate-the-if/ | CC-MAIN-2020-24 | refinedweb | 255 | 58.62 |