text: string (lengths 454 to 608k)
url: string (lengths 17 to 896)
dump: string (91 classes)
source: string (1 class)
word_count: int64 (101 to 114k)
flesch_reading_ease: float64 (50 to 104)
These are chat archives for astropy/astropy

Cadair: bsipocz: you have been on a PR / issue closing rampage I see :D
bsipocz: I blame @hamogu to start this, but he is right, some stuff needs to be closed.
Cadair: :D
Cadair: you can come do SunPy as well if you want!!!
bsipocz: We still have 117 open PRs ;)
Cadair: I am sad that SunPy has cleared 60
bsipocz: don't worry, you will catch up, at least the monthly stat predicts it ()
Cadair: +5
Cadair: I must close some!!
bsipocz: -13 :)
bsipocz: (without the ones closed without merge)
Cadair: impressive!
mohanagr (on Freenode): Cadair I was looking into your PR no. #5770 for drafting my GSoC application. 4 plugins have been separated out. The idea is basically to include all of them in a separate package, right?
mohanagr (on Freenode): I'll try and fix that PR since some tests are failing for Python 2, as pllim mentioned. Also, Cadair/bsipocz, I'll be glad if one of you could tell me if the mailing list is active. I'll be thankful if I could receive some guidance for my application for the Test helper package.
Cadair: My original idea was not to have them in a separate package, but to allow packages that depend on astropy to pick and choose which ones they want to use
Cadair: But it enables that as well
bsipocz: what do you have for the following command (to check which forks you have): git remote -v
bsipocz: Then first fetch everything to be up to date:
bsipocz: git fetch --all
bsipocz: git rebase -i astropy/master
bsipocz: this will bring up an editor windows
bsipocz: window
bsipocz: oh, wait
bsipocz: making a copy of the branch may be useful before you do a rebase
bsipocz: however many of those are not relevant, as they are due to a previous rebase attempt
bsipocz: so in the editor remove those commits, I think that will help a bit
bsipocz: yes. those are not your original code changes, and most of them are removed in later commits anyway.
bsipocz: also, while we're at it, remove all traces of the big test file from the history
bsipocz: for that you move the last commit (with pick f136e96c2 Removed test file)
bsipocz: underneath the commit where you've added the tests (pick 9fc481ff2 added tests)
bsipocz: change the "pick" of the latter commit to "fixup"
bsipocz: so it will merge the two commits together, basically getting rid of the file
bsipocz: (we have to repeat this for removing the png file, too, but we don't yet have a commit for that).
bsipocz: let me know if you are ready
bsipocz: yes, you added the file in 9fc481ff2 and removed it in f136e96c2. These two should ideally be merged.
bsipocz: if ready, you shall see 8 non-commented lines in the editor
bsipocz: save and exit
bsipocz: the first commit will result in a conflict
bsipocz: both modified: docs/io/ascii/fast_ascii_io.rst
bsipocz: (sorry if I'm rushing, but I've got to go in 15 mins)
bsipocz: nope, you haven't pushed all of these up (thus I didn't see them)
bsipocz: move 308b787 under 0ce5838 and change it to "fixup" too
bsipocz: also remove the last merge conflict commit (as I assume it was just the side effect of another rebase gone wrong)
bsipocz: so you have 9 commits
bsipocz: nope
bsipocz: and also remove the last docstring formatting one, as I see that's another remnant of a rebase
bsipocz: save and exit the editor

error: could not apply 9c21649... Fixes issue 5845
When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".
Could not apply 9c216497277a0dfef2f8530b0372db4040c3397d... Fixes issue 5845

bsipocz: it will show that there is a conflict in one file
bsipocz: both modified: docs/io/ascii/fast_ascii_io.rst
bsipocz: open that file in an editor to resolve the conflict
bsipocz: that's where the conflict starts

<<<<<<< HEAD
.. _fast_conversion_opts:
=======
In order to optimize memory usage, specify the dtype of columns when reading in data using the fast readers. This can be done using the converters parameter, which is consistent with the converters parameter used in basic read.

import numpy as np
converters = {'col1': [ascii.convert_numpy(np.uint)],
...           'col2': [ascii.convert_numpy(np.float32)]}
ascii.read('file.dat', format='fast_basic', converters=converters)
>>>>>>> 9c21649... Fixes issue 5845

bsipocz: figure out what's the best place for it in the new modified version
bsipocz: don't remove it, just try to find the right place for it.
bsipocz: What you need to remove are the strings "<<<<<<< HEAD" and "=======" and, at the bottom, ">>>>>>> 9c2164972... Fixes issue 5845"
bsipocz: currently you've inserted the example right under the title here:
bsipocz: if you think that's a good place for it then you can leave it, otherwise move it around in the file wherever it fits best
bsipocz: I've got to go in 5 mins, so will rush through the next steps
bsipocz: alternatively I can come back to gitter in about 2 hours time
bsipocz: 1) when you're happy with the file, save it and exit the editor
git add docs/io/ascii/fast_ascii_io.rst
git rebase --continue
bsipocz: repeat this sequence until no commit is left
bsipocz: (there is a conflict coming in the next one as well)
bsipocz: I'll be back in ~2 hours

5 files changed, 70 insertions(+), 14 deletions(-)
error: could not apply 09ccb30... minor changes
When you have resolved this problem, run "git rebase --continue".
If you prefer to skip this patch, run "git rebase --skip" instead.
To check out the original branch and stop rebasing, run "git rebase --abort".
Could not apply 09ccb30496fe8ec8e46c59901cf4d272da02c542... minor changes

Cadair: I assume you can just test the plugins in a normal fashion with pytest
Cadair: Till Sunday then, UTC+1 because daylight savings
Cadair: You might be better off emailing the astropy gsoc list, pllim had the original idea to move the code out of astropy.
Cadair: Ahhh fits :D
Cadair: OK maybe she is not active on the list either XD
Cadair: Excellent! That would be a great help, I have completely run out of time on that particular side quest
Cadair: makes sense, I don't think I tested it locally with 2.x and I have been coding 3 exclusively for a few months at this point. I forget what I can't do XD
Cadair: I can't remember off the top of my head.
Cadair: Ok
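Pulled together, the rebase workflow described in the chat above amounts to roughly the following (a sketch reconstructed from the conversation; the backup branch name is illustrative, the commit hashes and file path are the ones from the chat):

git remote -v                      # check which remotes/forks you have
git branch rebase-backup           # safety copy before rewriting history
git fetch --all                    # bring all remotes up to date
git rebase -i astropy/master       # interactive rebase onto upstream master

# In the editor: delete leftover commits from earlier rebase attempts, move
#   pick f136e96c2 Removed test file
# directly under
#   pick 9fc481ff2 added tests
# and change the "pick" of the removal commit to "fixup" so the add/remove
# pair squashes away and the big test file disappears from the history.

# For each conflict: edit the file, remove the <<<<<<< / ======= / >>>>>>>
# markers while keeping the content in the right place, then:
git add docs/io/ascii/fast_ascii_io.rst
git rebase --continue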
https://gitter.im/astropy/astropy/archives/2017/03/23
CC-MAIN-2019-26
refinedweb
1,035
64.14
Form Validation With Java Script

JavaScript is used to validate forms, that means checking that proper values have been entered before the form is submitted.

Related discussions on this site: validation in java script; java script validation; struts java script validation is not working; form validation with javascript (contact details); Java Script (displaying an error message for email validation); Form Validation (validating special characters and white spaces); password validation with special character; Time validation (checking a time value is in the correct format); struts validation; java script - Struts (date validation); Java script (capturing the screen); why we are using java script validation - JSP-Servlet.

Comments:

heysam - January 27, 2012 at 1:54 PM
howz it

shweta - April 20, 2012 at 11:43 AM
Why we use form[0] in document.form.(fieldname).value

Yogesh - May 7, 2012 at 5:57 PM - To validate url using javascript favelets
Sir, I am working in the University of Pune. We are going to develop a web accessibility evaluation tool for visually impaired people, so I would like to know how we can attach JavaScript favelets to our validate-URL form... let me know please, I am waiting for a reply.

bharat patel - July 19, 2012 at 3:44 PM
gender validation java script

rajkmar - August 21, 2012 at 3:26 PM
want latest updates in MySQL and Java. Thankful to ROSEINDIA.NET.

CatalinaTurner - May 20, 2012 at 10:25 PM
I had a dream to start my own organization, however I didn't have enough money to do that. Thank goodness my close fellow advised me to utilize a loan. So I used the secured loan and made my old dream real.

Post your Comment
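To illustrate what the form[0] question in the comments is about, here is a minimal sketch (the form and field names are hypothetical): document.forms is the page's collection of <form> elements, which can be indexed either by position or by name.

<form name="signup">
  <input type="text" name="email">
</form>
<script>
  // document.forms[0] is the first form on the page;
  // document.forms["signup"] is the same form looked up by its name.
  var byIndex = document.forms[0].elements["email"].value;
  var byName  = document.forms["signup"].elements["email"].value;
</script>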
http://roseindia.net/discussion/24088-Form-Validation-With-Java-Script.html
CC-MAIN-2013-20
refinedweb
974
54.52
Git Internals - Environment Variables

Environment Variables

Git always runs inside a bash shell, and uses a number of shell environment variables to determine how it behaves. Occasionally, it comes in handy to know what these are, and how they can be used to make Git behave the way you want it to. This isn't an exhaustive list of all the environment variables Git pays attention to, but we'll cover the most useful.

Global Behavior

Some of Git's general behavior as a computer program depends on environment variables.

GIT_EXEC_PATH determines where Git looks for its sub-programs (like git-commit, git-diff, and others). You can check the current setting by running git --exec-path.

HOME isn't usually considered customizable (too many other things depend on it), but it's where Git looks for the global configuration file. If you want a truly portable Git installation, complete with global configuration, you can override HOME in the portable Git's shell profile.

PREFIX is similar, but for the system-wide configuration. Git looks for this file at $PREFIX/etc/gitconfig.

GIT_CONFIG_NOSYSTEM, if set, disables the use of the system-wide configuration file. This is useful if your system config is interfering with your commands, but you don't have access to change or remove it.

GIT_PAGER controls the program used to display multi-page output on the command line. If this is unset, PAGER will be used as a fallback.

GIT_EDITOR is the editor Git will launch when the user needs to edit some text (a commit message, for example). If unset, EDITOR will be used.

Repository Locations

Git uses several environment variables to determine how it interfaces with the current repository.

GIT_DIR is the location of the .git folder. If this isn't specified, Git walks up the directory tree until it gets to ~ or /, looking for a .git directory at every step.

GIT_CEILING_DIRECTORIES controls the behavior of searching for a .git directory. If you access directories that are slow to load (such as those on a tape drive, or across a slow network connection), you may want to have Git stop trying earlier than it might otherwise, especially if Git is invoked when building your shell prompt.

GIT_WORK_TREE is the location of the root of the working directory for a non-bare repository. If not specified, the parent directory of $GIT_DIR is used.

GIT_INDEX_FILE is the path to the index file (non-bare repositories only).

GIT_OBJECT_DIRECTORY can be used to specify the location of the directory that usually resides at .git/objects.

GIT_ALTERNATE_OBJECT_DIRECTORIES is a colon-separated list (formatted like /dir/one:/dir/two:…) which tells Git where to check for objects if they aren't in GIT_OBJECT_DIRECTORY. If you happen to have a lot of projects with large files that have the exact same contents, this can be used to avoid storing too many copies of them.

Pathspecs

A "pathspec" refers to how you specify paths to things in Git, including the use of wildcards. These are used in the .gitignore file, but also on the command-line (git add *.c).

GIT_GLOB_PATHSPECS and GIT_NOGLOB_PATHSPECS control the default behavior of wildcards in pathspecs. If GIT_GLOB_PATHSPECS is set to 1, wildcard characters act as wildcards (which is the default); if GIT_NOGLOB_PATHSPECS is set to 1, wildcard characters only match themselves, meaning something like *.c would only match a file named "*.c", rather than any file whose name ends with .c. You can override this in individual cases by starting the pathspec with :(glob) or :(literal), as in :(glob)*.c.
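As a quick sketch of the difference (assuming a repository that contains a file such as hello.c):

$ GIT_NOGLOB_PATHSPECS=1 git log --oneline -- '*.c'         # matches only a file literally named "*.c"
$ GIT_NOGLOB_PATHSPECS=1 git log --oneline -- ':(glob)*.c'  # :(glob) re-enables wildcards for this one pathspec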
GIT_LITERAL_PATHSPECS disables both of the above behaviors; no wildcard characters will work, and the override prefixes are disabled as well.

GIT_ICASE_PATHSPECS sets all pathspecs to work in a case-insensitive manner.

Committing

The final creation of a Git commit object is usually done by git-commit-tree, which uses these environment variables as its primary source of information, falling back to configuration values only if these aren't present.

GIT_AUTHOR_NAME is the human-readable name in the "author" field.
GIT_AUTHOR_EMAIL is the email for the "author" field.
GIT_AUTHOR_DATE is the timestamp used for the "author" field.
GIT_COMMITTER_NAME sets the human name for the "committer" field.
GIT_COMMITTER_EMAIL is the email address for the "committer" field.
GIT_COMMITTER_DATE is used for the timestamp in the "committer" field.

EMAIL is the fallback email address in case the user.email configuration value isn't set. If this isn't set, Git falls back to the system user and host names.

Networking

Git uses the curl library to do network operations over HTTP, so GIT_CURL_VERBOSE tells Git to emit all the messages generated by that library. This is similar to doing curl -v on the command line.

GIT_SSL_NO_VERIFY tells Git not to verify SSL certificates. This can sometimes be necessary if you're using a self-signed certificate to serve Git repositories over HTTPS, or you're in the middle of setting up a Git server but haven't installed a full certificate yet.

If the data rate of an HTTP operation is lower than GIT_HTTP_LOW_SPEED_LIMIT bytes per second for longer than GIT_HTTP_LOW_SPEED_TIME seconds, Git will abort that operation. These values override the http.lowSpeedLimit and http.lowSpeedTime configuration values.

GIT_HTTP_USER_AGENT sets the user-agent string used by Git when communicating over HTTP. The default is a value like git/2.0.0.

Diffing and Merging

GIT_DIFF_OPTS is a bit of a misnomer. The only valid values are -u<n> or --unified=<n>, which controls the number of context lines shown in a git diff command.

GIT_EXTERNAL_DIFF is used as an override for the diff.external configuration value. If it's set, Git will invoke this program when git diff is invoked.

GIT_DIFF_PATH_COUNTER and GIT_DIFF_PATH_TOTAL are useful from inside the program specified by GIT_EXTERNAL_DIFF or diff.external. The former represents which file in a series is being diffed (starting with 1), and the latter is the total number of files in the batch.

GIT_MERGE_VERBOSITY controls the output for the recursive merge strategy. The allowed values are as follows:

0 outputs nothing, except possibly a single error message.
1 shows only conflicts.
2 also shows file changes.
3 shows when files are skipped because they haven't changed.
4 shows all paths as they are processed.
5 and above show detailed debugging information.

The default value is 2.

Debugging

Want to really know what Git is up to? Git has a fairly complete set of traces embedded, and all you need to do is turn them on. The possible values of these variables are as follows:

"true", "1", or "2" - the trace category is written to stderr.
An absolute path starting with / - the trace output will be written to that file.

GIT_TRACE controls general traces, which don't fit into any specific category. This includes the expansion of aliases, and delegation to other sub-programs.
$ GIT_TRACE=true git lga
20:12:49.877982 git.c:554               trace: exec: 'git-lga'
20:12:49.878369 run-command.c:341       trace: run_command: 'git-lga'
20:12:49.879529 git.c:282               trace: alias expansion: lga => 'log' '--graph' '--pretty=oneline' '--abbrev-commit' '--decorate' '--all'
20:12:49.879885 git.c:349               trace: built-in: git 'log' '--graph' '--pretty=oneline' '--abbrev-commit' '--decorate' '--all'
20:12:49.899217 run-command.c:341       trace: run_command: 'less'
20:12:49.899675 run-command.c:192       trace: exec: 'less'

GIT_TRACE_PACK_ACCESS controls tracing of packfile access. The first field is the packfile being accessed, the second is the offset within that file:

$ GIT_TRACE_PACK_ACCESS=true git status
20:10:12.081397 sha1_file.c:2088        .git/objects/pack/pack-c3fa...291e.pack 12
20:10:12.081886 sha1_file.c:2088        .git/objects/pack/pack-c3fa...291e.pack 34662
20:10:12.082115 sha1_file.c:2088        .git/objects/pack/pack-c3fa...291e.pack 35175
# […]
20:10:12.087398 sha1_file.c:2088        .git/objects/pack/pack-e80e...e3d2.pack 56914983
20:10:12.087419 sha1_file.c:2088        .git/objects/pack/pack-e80e...e3d2.pack 14303666
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean

GIT_TRACE_PACKET enables packet-level tracing for network operations.

$ GIT_TRACE_PACKET=true git ls-remote origin
20:15:14.867043 pkt-line.c:46           packet: git< # service=git-upload-pack
20:15:14.867071 pkt-line.c:46           packet: git< 0000
20:15:14.867079 pkt-line.c:46           packet: git< 97b8860c071898d9e162678ea1035a8ced2f8b1f HEAD\0multi_ack thin-pack side-band side-band-64k ofs-delta shallow no-progress include-tag multi_ack_detailed no-done symref=HEAD:refs/heads/master agent=git/2.0.4
20:15:14.867088 pkt-line.c:46           packet: git< 0f20ae29889d61f2e93ae00fd34f1cdb53285702 refs/heads/ab/add-interactive-show-diff-func-name
20:15:14.867094 pkt-line.c:46           packet: git< 36dc827bc9d17f80ed4f326de21247a5d1341fbc refs/heads/ah/doc-gitk-config
# […]

GIT_TRACE_PERFORMANCE controls logging of performance data. The output shows how long each particular git invocation takes.

$ GIT_TRACE_PERFORMANCE=true git gc
20:18:19.499676 trace.c:414             performance: 0.374835000 s: git command: 'git' 'pack-refs' '--all' '--prune'
20:18:19.845585 trace.c:414             performance: 0.343020000 s: git command: 'git' 'reflog' 'expire' '--all'
Counting objects: 170994, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (43413/43413), done.
Writing objects: 100% (170994/170994), done.
Total 170994 (delta 126176), reused 170524 (delta 125706)
20:18:23.567927 trace.c:414             performance: 3.715349000 s: git command: 'git' 'pack-objects' '--keep-true-parents' '--honor-pack-keep' '--non-empty' '--all' '--reflog' '--unpack-unreachable=2.weeks.ago' '--local' '--delta-base-offset' '.git/objects/pack/.tmp-49190-pack'
20:18:23.584728 trace.c:414             performance: 0.000910000 s: git command: 'git' 'prune-packed'
20:18:23.605218 trace.c:414             performance: 0.017972000 s: git command: 'git' 'update-server-info'
20:18:23.606342 trace.c:414             performance: 3.756312000 s: git command: 'git' 'repack' '-d' '-l' '-A' '--unpack-unreachable=2.weeks.ago'
Checking connectivity: 170994, done.
20:18:25.225424 trace.c:414             performance: 1.616423000 s: git command: 'git' 'prune' '--expire' '2.weeks.ago'
20:18:25.232403 trace.c:414             performance: 0.001051000 s: git command: 'git' 'rerere' 'gc'
20:18:25.233159 trace.c:414             performance: 6.112217000 s: git command: 'git' 'gc'

GIT_TRACE_SETUP shows information about what Git is discovering about the repository and environment it's interacting with.

$ GIT_TRACE_SETUP=true git status
20:19:47.086765 trace.c:315             setup: git_dir: .git
20:19:47.087184 trace.c:316             setup: worktree: /Users/ben/src/git
20:19:47.087191 trace.c:317             setup: cwd: /Users/ben/src/git
20:19:47.087194 trace.c:318             setup: prefix: (null)
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean

Miscellaneous

GIT_SSH, if specified, is a program that is invoked instead of ssh when Git tries to connect to an SSH host. It is invoked like $GIT_SSH [username@]host [-p <port>] <command>. Note that this isn't the easiest way to customize how ssh is invoked; it won't support extra command-line parameters, so you'd have to write a wrapper script and set GIT_SSH to point to it. It's probably easier just to use the ~/.ssh/config file for that.

GIT_ASKPASS is an override for the core.askpass configuration value. This is the program invoked whenever Git needs to ask the user for credentials, which can expect a text prompt as a command-line argument, and should return the answer on stdout. (See Credential Storage for more on this subsystem.)

GIT_NAMESPACE controls access to namespaced refs, and is equivalent to the --namespace flag. This is mostly useful on the server side, where you may want to store multiple forks of a single repository in one repository, only keeping the refs separate.

GIT_FLUSH can be used to force Git to use non-buffered I/O when writing incrementally to stdout. A value of 1 causes Git to flush more often, a value of 0 causes all output to be buffered. The default value (if this variable is not set) is to choose an appropriate buffering scheme depending on the activity and the output mode.

GIT_REFLOG_ACTION lets you specify the descriptive text written to the reflog. Here's an example:

$ GIT_REFLOG_ACTION="my action" git commit --allow-empty -m 'my message'
[master 9e3d55a] my message
$ git reflog -1
9e3d55a HEAD@{0}: my action: my message
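To make the GIT_SSH wrapper-script idea above concrete, here is a minimal sketch (the script path, key file, port, and host are all placeholders of my own):

$ cat ~/bin/ssh-wrapper.sh
#!/bin/sh
# forward Git's arguments to ssh, adding our own extra options
exec ssh -i ~/.ssh/other_key -p 2222 "$@"

$ chmod +x ~/bin/ssh-wrapper.sh
$ GIT_SSH=~/bin/ssh-wrapper.sh git clone user@example.com:project.git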
https://git-scm.com/book/cs/v2/Git-Internals-Environment-Variables
CC-MAIN-2022-05
refinedweb
2,051
59.6
[Design] view is gone

- Sunday, June 03, 2012 2:48 PM

I worked on a VB project using Visual Studio 11 Beta, getting all of the controls situated on the form. The only actual "code" that was created was to create the controls and connect a few drop-down boxes to an Access database. When I went to reopen my project today, my main interface form could no longer access [Design] view, so I have no way to continue adding controls to my form. I've tried multiple solutions I found on these forums, including adding "Public Class Form1 End Class" to my .vb form, to no avail. Currently, the "Public Class Form1 End Class" is all I have in my code file; my .designer.vb file is full of components that I added to the form, I just can't view them in Design mode. I have no errors on my form; however, I did have some when I originally opened my project that were not there when I closed it last night. I did correct these errors though, and can run unit tests without any errors. But no matter what I do I cannot get the [Design] view to come back. I know that you can normally get to design view by going to View > Designer from the toolbar or by right-clicking on the .vb file in the solution explorer and clicking "View Designer", but for this form those options no longer exist. This is really frustrating because the form worked fine last night and I could access it without a problem, and for some reason this morning it does not work. I haven't done anything to the form since last night when I closed it, so I don't know why it suddenly won't allow me to get to the [Design] view. I'm starting to question even using VB for this project at all if I keep having problems like this. It's too frustrating and time consuming. If anyone can think of a solution that I am missing, I would greatly appreciate it.

All Replies

- Sunday, June 03, 2012 6:35 PM

In solution explorer I double left click on the form's .vb file and it brings the form up in the design tab. I am also using MS VS 11 Beta but cannot recreate your problem. Hopefully one of the Microsoft engineers will see your post sometime today and maybe have an answer for you.

- Sunday, June 03, 2012 6:45 PM

I don't know the problem. Did you by the way know that the VB beta has been replaced by the RC? The name has changed to 2012; the VB version stays of course VB11. But like monkeyboy says, I did not have your behaviour. Check if you have changed something in xx.designer.vb by accident.

Success
Cor

- Sunday, June 03, 2012 6:46 PM

Like mr. Monkeyboy says, is this normal; check if you changed something in xx.designer.vb. Did you know by the way that Version 2012 RC is there? The VB version in that is of course VB11.

Success
Cor

- Sunday, June 03, 2012 7:00 PM

This may be a solution -
http://social.msdn.microsoft.com/Forums/en-US/vbgeneral/thread/6585c383-ab7c-4e18-b54d-5093f3835b1a
CC-MAIN-2013-20
refinedweb
564
77.16
For this post I will write a simple implementation of a 1-dimensional cellular automaton in Python.

Implementation

I will write a simple console-based output, but the project is designed in a way which allows other UIs to be "plugged in". Some patterns produced can be much more interesting than others.

Coding

I have covered everything you need to know to code an elementary cellular automaton, so create a new folder and within it create the following empty files. You can download the source code as a zip or clone/download from Github if you prefer.

- ca1d.py
- ca1dview.py
- main.py

The code for printing the cellular automaton will be kept separate from the CA code itself, with a printing function being passed to the CA. This means that any visual output code can be used with the same CA code just by passing a pointer to the required function. For example the console-based output included with this post could be replaced with a Tkinter implementation with no change to ca1d.py.

Firstly let's look at ca1d.py itself.

ca1d.py

class CA1D(object):
    def __init__(self, cell_count, init_pattern, rule, iterations, on_change):
        """
        Creates attributes with values from arguments or defaults.
        Set initial state of cells from init_pattern and then calls
        the on_change function to let whatever UI has been plugged in
        update the output.
        """
        self.cell_count = cell_count
        self.init_pattern = init_pattern
        self.rule = rule
        self.iterations = iterations
        self.on_change = on_change
        self.iteration = 0
        self.cells = []
        self.__next_state = []
        self.rule_binary = format(self.rule, '08b')

        # set cells from init pattern
        for c in self.init_pattern:
            if c == "0":
                self.cells.append("0")
            elif c == "1":
                self.cells.append("1")
            self.__next_state.append("0")

        # call on_change to let UI know CA has been created
        self.on_change(self)

    def start(self):
        """
        Loop for specified number of iterations, calculating
        next state and updating UI
        """
        for i in range(0, self.iterations):
            self.iteration += 1
            self.__calculate_next_state()
            self.on_change(self)

    def __calculate_next_state(self):
        """
        For each cell, calculate that cell's next state depending on
        the current rule. Then copy the next state to the current state
        """
        # visit every cell, wrapping the neighbourhood round at both ends
        for c in range(0, self.cell_count):
            if c == 0:
                # roll beginning round to end
                prev_index = self.cell_count - 1
            else:
                prev_index = c - 1

            if c == (self.cell_count - 1):
                # roll end round to beginning
                next_index = 0
            else:
                next_index = c + 1

            neighbourhood = self.cells[prev_index] + self.cells[c] + self.cells[next_index]

            if neighbourhood == "111":
                self.__next_state[c] = self.rule_binary[0]
            elif neighbourhood == "110":
                self.__next_state[c] = self.rule_binary[1]
            elif neighbourhood == "101":
                self.__next_state[c] = self.rule_binary[2]
            elif neighbourhood == "100":
                self.__next_state[c] = self.rule_binary[3]
            elif neighbourhood == "011":
                self.__next_state[c] = self.rule_binary[4]
            elif neighbourhood == "010":
                self.__next_state[c] = self.rule_binary[5]
            elif neighbourhood == "001":
                self.__next_state[c] = self.rule_binary[6]
            elif neighbourhood == "000":
                self.__next_state[c] = self.rule_binary[7]

        for c in range(0, self.cell_count):
            self.cells[c] = self.__next_state[c]

The cellular automaton code is implemented as a class, and in __init__ we add the necessary properties to self. Some of these are set directly from the arguments, including the on_change function, and we also create a couple of lists for the current and next states. Next we iterate init_pattern, adding cells to the list with the corresponding state.
Finally we call the on_change function to let whatever UI we have plugged in know it needs to output the current state.

Next we have the start method. This is a simple loop to the required number of iterations, calling __calculate_next_state and then on_change to update the UI.

Finally we have a private function __calculate_next_state which does all the hard work. Within a loop through the cells it firstly sets the indexes of the previous and next cells relative to the current one; we need a couple of if/elses here to allow for rolling round the first and last cell. We can then set the neighbourhood to the current cell and its two neighbours. Next comes a pile of if/elifs, setting __next_state according to the rule. Lastly we just copy __next_state to cells, i.e. the current state.

Now let's move on to ca1dview.py.

ca1dview.py

class CA1Dview(object):
    """
    Provides a UI for a CA1D object.
    Can be replaced by any class providing the same methods.
    """
    def __init__(self, off_color, on_color):
        """
        These cryptic attributes use ANSI terminal codes to print a space
        in either the off colour or the on colour, resetting the colour
        at the end.
        """
        self.off_color = "\033[0;" + off_color + "m \033[0m"
        self.on_color = "\033[0;" + on_color + "m \033[0m"

    def print_ca(self, ca):
        """
        Before the first iteration calls show_properties.
        Then prints the iteration number as a row heading.
        Finally iterates the cells, printing either the off_color
        or on_color string.
        """
        if ca.iteration == 0:
            self.show_properties(ca)

        print(str(ca.iteration).ljust(2) + " ", end='')

        for c in ca.cells:
            if c == "0":
                print(self.off_color, end='')
            else:
                print(self.on_color, end='')

        print("")

    def show_properties(self, ca):
        """
        Short utility function to output the cellular automaton's attributes
        """
        print("cell_count: " + str(ca.cell_count))
        print("init_pattern: " + ca.init_pattern)
        print("rule: " + str(ca.rule))
        print("iterations: " + str(ca.iterations) + "\n")

Again this is a class, and __init__ takes as arguments the colours of off and on cells which are then used to initialise ANSI terminal codes for later use.

The central method print_ca first checks to see if this is the 0th iteration; if so it calls show_properties to output a few details of the cellular automaton. It then prints the iteration before looping through the cells, printing a space on the appropriate background colour.

Most of the coding is now completed so let's write a main function to put it to use.

main.py

import ca1d
import ca1dview

def main():
    """
    Demonstration of the CA1D and CA1Dview classes.
    Creates a CA1Dview and then a CA1D.
    The CA1Dview's print_ca method is passed to CA1D
    to provide a "plug-in" front end.
    """
    print("-------------------------")
    print("| codedrome.com         |")
    print("| Cellular Automaton 1D |")
    print("-------------------------\n")

    # Valid colours
    # 40 black, 41 red, 42 green, 43 yellow, 44 blue, 45 purple, 46 cyan, 47 white
    cav = ca1dview.CA1Dview("47", "40")

    ca = ca1d.CA1D(cell_count = 66,
                   init_pattern = "000000000000000000000000000000001000000000000000000000000000000000",
                   rule = 22,
                   iterations = 30,
                   on_change = cav.print_ca)

    ca.start()

main()

Firstly we create a CA1Dview object. The valid colours which can be used here are listed in the comment. Next we create a CA1D object, passing the CA1Dview's print_ca method as the last argument, before kicking everything off with the start method.
Running the program

Now run the program with this command:

python3.7 main.py

Which should give you something like this, depending on the arguments used.

[screenshot of the program's output]

This shows one of the more interesting rules. You might like to spend a bit of time experimenting with different rules, and also with various starting patterns.
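As a quick aside on how the rule number drives those if/elifs: the eight bits of the rule written in binary give the next state for each of the eight possible neighbourhoods, ordered from 111 down to 000, which is exactly what rule_binary indexes into. A small sketch for rule 22:

# rule 22 -> '00010110'; one bit per neighbourhood, ordered 111..000
rule_binary = format(22, '08b')

neighbourhoods = ["111", "110", "101", "100", "011", "010", "001", "000"]
for n, bit in zip(neighbourhoods, rule_binary):
    print(n, "->", bit)   # only 100, 010 and 001 produce a live cell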
https://www.codedrome.com/one-dimensional-cellular-automata-in-python/
CC-MAIN-2020-34
refinedweb
1,138
50.73
Events - Thursday: Hardware Hacking Meeting today in this room @ 12:30. Some pivots have experience with this working pretty well!

Draper/CI Reporter wasn't our problem. Not entirely sure what was – but signs point to a Spork/ResqueSpec loader order problem on Linux only.

Sometimes when you're paginating you have to run 2 queries, one for the first n records, and another for the total count. It's possible to get both in one query using OVER().

SELECT *, COUNT(*) OVER() AS total_count
FROM blog_posts
ORDER BY created_at DESC
LIMIT 25;

Each record will come back with a total_count attribute, so you will have to look at one of the records to figure out the count.

Some good links and resources in this article, including a sweet color scheme generator, FontAwesome, Kickstrap, and more!

Eventbrite Invite: It looks like there's a lot of them – does anyone have any personal recommendations?

This goes back to the Help a couple of weeks ago re: JSON being undefined. We blame Draper & its RSpec integration, but don't yet have a solution.

Come and talk about programming, and other aspects of XP. 6:30pm.

brew update to get new packages.

Clones and inits and updates all the submodules. Seems like it will be useful for something.

A new(ish) Javascript library similar to Underscore but that extends objects instead of just providing a function namespace.

Any tips for reading HttpPerf?

LogReplay: any good gems or tips to replay requests from server logs?

Jeff Dean mentioned some strategies for dealing with week-based reporting, and the oddities that arise when week 0 bleeds into the next year. (Use Date#strftime.)

One mistake I've seen made a few times is the notion that CSS's nth-child pseudoselector acts like jQuery's :eq pseudoselector. jQuery's :eq(n) pseudoselector gives you a single element that is at index n out of all matched elements. While this is certainly a useful selector to have, it's unfortunately not supported in standard CSS. If you find yourself repeatedly using :eq in your jQuery code, be careful that you are not relying too heavily on :eq to the point where your styles are difficult to match in "pure" CSS.

If we were to express :nth-child in terms of the :eq selector, it would be like using :eq scoped to all of the immediate descendant contents of a single element. Or in my own words, :nth-child adds a constraint to the selector that the matched element must be the nth element in its parent container. So if we had a snippet of HTML like

<div>
  <div id="bar1" class="foo"></div>
  <div id="bar2" class="foo"></div>
  <div id="bar3" class="foo"></div>
</div>

Then the selector .foo:nth-child(2) will match the div #bar2. If we insert another element at the front of the container:

<div>
  <p>Shift!</p>
  <div id="bar1" class="foo"></div>
  <div id="bar2" class="foo"></div>
  <div id="bar3" class="foo"></div>
</div>

And again we select .foo:nth-child(2), we match the div #bar1 because the 2nd child of the container also matches .foo. Thus, in this second example, if we try .foo:nth-child(1) or the equivalent .foo:first-child, we will not match any elements because the first child element in that container — the p tag — does not match .foo. Likewise, :nth-child can match children in multiple containers.
In the HTML snippet:

<div>
  <p>Shift!</p>
  <div id="bar1" class="foo"></div>
  <div id="bar2" class="foo"></div>
  <div id="bar3" class="foo"></div>
</div>
<div>
  <div id="quux" class="foo"></div>
</div>

the selector .foo:last-child will match the divs #bar3 and #quux; but .foo:first-child or .foo:nth-child(1) will only match #quux, because the first child of the first container is, again, not a .foo.

Correlating debugging errors to source files: currently using the Sprockets template name in-lined. Any better solutions? Suggestions:
http://pivotallabs.com/2012/page/11/
CC-MAIN-2013-48
refinedweb
664
65.32
Thomas Goirand <zigo <at> debian.org> writes: > > You have to define what problem we are trying to solve. And this still > hasn't been defined yet in this list. What for? Seriously. There are a whole lot of features in systemd which I, for one, do NOT want to do without any longer. Decent process state reporting. Decent and comprehensive logging. Service termination that works reliably. The ability to run processes in their own namespace without jumping through hoops and spending a day debugging the stuff. Socket activation. Udev activation. Replacing a whole heap of somewhat-buggy sysvinit scripts. I could go on. systemd's feature list is impressive, and so is the list of distros which have adopted it. Ship basic sysvinit scripts for the handful of non-Linux-kernel Debian users if you have to, but otherwise transition to systemd, dammit. Any other option makes no sense whatsoever. IMHO. Among the heap of computers I own, every single one runs with systemd (even the outdated v44 in Wheezy is better than sysvinit), and I'd rather switch distros than going back. Seriously. -- -- Matthias
https://lists.debian.org/debian-devel/2013/07/msg00575.html
CC-MAIN-2015-22
refinedweb
185
67.04
The main elf forgot a critical password, and we have to hack his password. You can find the puzzle here.

To do this, we will brute force every option there is. Luckily for us, there are only a couple of options, since they always used the same format, which is: A-000. The A can be A-Z, and the 000 can loop to 999. This makes it a bit easier for us.

Thinking about a solution

Another thing that helps us out of the box: the passwords are hashed via SHA1. This means we know how to hash our tries and match them against the existing hashed password. If the hashes match, it must mean that is the password.

Example: The SHA1 for A-000 is 8b066367bcbce6be1fe09450994b00c703918e23. So if we hash A-000, this should be the output.

Another great thing is that Node.js comes with a crypto library already out of the box, so no need to install anything else.

Brute forcing the password in JavaScript

We can use the crypto package that comes with Node.js, so let's import it.

import crypto from 'crypto';

Then we need a way to loop through all of the letters of the alphabet. There are multiple ways of doing this. I chose to highlight a funny one, as you might not know this is possible. (Note: you could also just create an array with all the letters.)

for (let i = 0; i < 26; i++) {
  const letter = String.fromCharCode(65 + i);
}

This is a pretty unique way, and it loops 26 times, once for each letter of the alphabet. We use the fromCharCode function and pass 65-90, which represents A-Z.

Then we need to loop from 000-999. As you can imagine, we can again use a standard loop for this. A normal for loop is actually the quickest option here. We can break out of them efficiently, so they don't keep running in the background like a forEach would, for instance.

for (let i = 0; i < 26; i++) {
  const letter = String.fromCharCode(65 + i);
  for (let n = 0; n < 1000; n++) {
    // todo
  }
}

This will give us 0-999, but we miss all the prefix zeroes. For this we can use the padStart function. This function takes a string and adds padding in front.

const paddedCode = n.toString().padStart(3, '0');
// When testing on `0` we get: `000`
// On `10` we get `010`

Then we can construct the password by combining the letter and the padded code.

const password = `${letter}-${paddedCode}`;

The next step is to convert this password into a test hash.

const testHash = crypto.createHash('sha1').update(password).digest('hex');

The last thing we need to do is check if this matches the hash we received.

if (testHash === hash) {
  return password;
}

And that's it. This function will loop for all possible options until we hit the password that matches. Let's see if we succeeded by running the tests.

Thank you for reading, and let's connect!

Thank you for reading my blog. Feel free to subscribe to my email newsletter and connect on Facebook or Twitter
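Putting the post's pieces together, the whole brute-force routine might look like the sketch below (the function name crackPassword and its parameter are my own; the example hash is the SHA1 of A-000 quoted above):

import crypto from 'crypto';

// Try every A-000 ... Z-999 combination until one hashes to `hash`.
function crackPassword(hash) {
  for (let i = 0; i < 26; i++) {
    const letter = String.fromCharCode(65 + i); // char codes 65-90 => A-Z
    for (let n = 0; n < 1000; n++) {
      const paddedCode = n.toString().padStart(3, '0'); // 7 => '007'
      const password = `${letter}-${paddedCode}`;
      const testHash = crypto.createHash('sha1').update(password).digest('hex');
      if (testHash === hash) return password;
    }
  }
  return null; // no match found
}

console.log(crackPassword('8b066367bcbce6be1fe09450994b00c703918e23')); // 'A-000'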
https://daily-dev-tips.com/posts/public-solving-hacking-santas-password/
CC-MAIN-2022-21
refinedweb
525
83.96
22 March 2012 07:47 [Source: ICIS news]

SINGAPORE (ICIS)--Consolidated sales in 2011 rose by 21.2% to €2.14bn compared with the same period a year earlier, primarily driven by higher average selling prices in its core fibre business and higher fibre shipment volumes, the Austrian cellulosic fibres, plastics and engineering group Lenzing said in a statement.

The company's earnings before interest, tax, depreciation and amortization (EBITDA) for 2011 grew by 45.3% year on year to €480.3m, it added.

According to Lenzing's preliminary estimates, global fibre production rose by 4.1% to 79.1m tonnes in 2011.

The company's plastics products segment showed an EBITDA margin of 9.5%, while its engineering segment achieved an EBITDA margin of 8.4%, it said.

In 2012 the company expects sales to rise to a level between €2.2bn-2.3bn, while EBITDA is expected to be in the range of €400m-480m, depending on the development of fibre and raw material prices, as well as the overall global economic environment.
http://www.icis.com/Articles/2012/03/22/9543870/austrias-lenzing-2011-net-profit-up-62.6-sales-up-21.2.html
CC-MAIN-2014-42
refinedweb
175
60.82
Really? So you're saying if I'm implementing a network service, and I'm reading a data stream where the first byte in the header is the command type, rather than converting the byte to an enumeration and using a switch to handle dispatching, it's better to instead use polymorphism here? How do you plan on determining which object type to instantiate without using a switch?

You're not listening. A wrapper won't work if the right plumbing isn't exposed by your encapsulated object. If the object doesn't expose the underlying ID because the ID is abstracted by the object model, then you can't expose a new ID without source to the underlying library being linked to. If the 3rd party framework doesn't expose the functionality you need, you can't use it in a case block just as much as you can't call it in a handler function; there's absolutely no difference there.

ProtocolHandler - ABCShared - A, B, C
ProtocolHandler - DEFShared - D, E, F

works perfectly fine. No need to ever need MI here.

Again, you're not listening. Several is not the same as 2. Show me how to have this relationship without multiple inheritance:

Handler - ABCShared - A, B, C
Handler - DEFShared - D, E, F
Handler - AFShared - A, F

A and F cannot derive two different classes.

Well, exactly the same way you solve the absence of MI in such languages for every other problem: less decomposition, composition instead of inheritance, mixins... Worst case you end up with a class that does more work than it ought to semantically. On the other hand a switch makes sure all the work is in one place, so in the worst case it's just as bad as that.

Again, you're not listening. First of all, as long as something is modular and you break your tasks into smaller manageable tasks it's easy to test. Whether it uses polymorphism or not is irrelevant.

Absolutely true, modular makes testing much easier... and a giant switch is the perfect example of non-modular code.

Return <self> if nothing else is returned? That is what Smalltalk does, and it makes the language easier to work with. There is always something returned; if nothing else, then <self>. :-)

- Jesper

"Return <self> if nothing else is returned? That is what Smalltalk does, and it makes the language easier to work with. There is always something returned; if nothing else, then <self>. :-) - Jesper"

Smalltalk also typically uses Nil for error conditions - so if something goes wrong the result should be nil.

Regards,
A 13 year Smalltalk veteran.

Exceptions are not hard to work with if you took the time to understand object oriented programming and the particular language's implementation of the paradigm. Who catches an exception? Really, you have to ask this? The answer is: your code catches it and deals with it. Your annoying question can be forgiven if you happen to use a language that does not believe in checked exceptions.

What about methods that return an output like an int or string (or any type really)? How will you communicate error information to the caller?

If all you are doing is instantiating a class, using a switch or a hashmap are basically the same. A switch uses an integral indexer under the hood, just like a hashmap, except a hashmap runs a hash function over the key before indexing it, so you won't gain any performance boost by using a hashmap. Granted, the hashmap can be strongly typed, but that really doesn't matter if all we're doing is instantiating an object.

This is a good solution, but doesn't address my main point about unnecessary complexity.
Also, you'll need more than the interface specification; you need to have access to the class factory. That's how most of the libraries are set up in Java and C#. Of course you can get around this if you want to take advantage of reflection. But like I said, one of my previous points was about maintainability. When you go back in after the fact, or someone else has to look at your code, it's very easy to walk a switch statement to see which object types are instantiated for which enumerations, etc. If you use a HashMap, you have to inspect every class to see its declared ID. Now don't take it as me saying that switches are awesome and they make everything more maintainable; I'm talking about a very narrow use case.

Thank you, captain obvious. If you have a guy that wrote a library that uses enumerations to identify operations (which is what I was talking about this entire time, since the conversation started about things that didn't have an abstracted object model), then of course the API is exposing this, since that's what I said at the very beginning. I was never arguing that you should go and design everything from the ground up using enumerations. I was talking about "possibility", since the original thread started when someone was talking about there being an impossibility of one solution being necessary, so I gave an example...

To back up and give you a higher-level picture, I was talking about scenarios where the person you inherited the code from didn't think their design all the way through... If you design your code well, then YES, you can make VERY good OO code. I was saying that I found it common (especially with greenhorns or outsourced code) for people to improperly design their object model. For example, the HashMap solution you mentioned. That's a very good solution, and is how lots of things in .NET/Java already work. However, I see few people actually think their designs through and actually design their code this way. All too often, I see people thinking of ways to abstract anything and everything, instead of thinking their architecture through and deciding what actually needs to be abstracted and how/why it should be abstracted. Typically you want to abstract things to reduce the error plane, or to simplify things. But then you need to take into account whether you would actually reduce the error plane in the particular scenario, and/or whether you are simplifying things in the right manner, and for the right purposes/reasons. For the particular example we were talking about, you aren't reducing the error plane by introducing abstraction, and for the particular problem that was being solved, you'd be adding a small amount of complexity.

No it's not. You can write test cases to test each branch individually, and you make sure each branch has tasks that are broken down into smaller tasks that can not only be tested individually, but can be reused.

Last edited by a_v_s on Mon Oct 21, 2013 12:11 pm
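As a sketch of the hashmap-dispatch alternative debated above (Java-flavoured; the command byte values, the Dispatcher class, and the LoginHandler name are all illustrative, not from any particular library):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface Handler { void handle(byte[] payload); }

class Dispatcher {
    // maps the command byte from the stream header to a handler factory
    private final Map<Byte, Supplier<Handler>> registry = new HashMap<>();

    void register(byte commandType, Supplier<Handler> factory) {
        registry.put(commandType, factory);
    }

    Handler create(byte commandType) {
        Supplier<Handler> factory = registry.get(commandType);
        if (factory == null) {
            throw new IllegalArgumentException("Unknown command: " + commandType);
        }
        return factory.get();
    }
}

// usage: dispatcher.register((byte) 0x01, LoginHandler::new);
//        dispatcher.create(headerByte).handle(payload);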
A common thing that is done, similar to COM/DCOM, is to return error status and pass all your outputs by reference. Some people return status enumerations, others return a boolean, etc...

When you have an exception, you can then specify the return type as Either and you'll get a Left or a Right. Great, so how does this make a programmer's life easier? Most modern languages include lambdas. Once you have a lambda you can use this Either type as follows: on Either, declare an abstract method called map. For the concrete Left type, do nothing and return a Left; for the concrete Right type, define a function called map that takes the type parameter for Right, performs some action, and then rewraps it in an Either. So, now you can do something like:

def foo: Either = {..}
def bar(a: Either): Either = {..}

foo.map(wrappedType => bar(wrappedType))

and this would produce an Either. The nice part about this is that it abstracts all of the exception handling and just lets you do the stuff that's specific to your app.

This seems to be a case of "When all you have is a hammer, everything looks like a nail." I'm glad to learn you can express exception-like behavior using ADTs. But can you honestly say you think this is clearer than languages with explicit support for exceptions? And what are the performance implications of implementing exception behavior in this way? (This makes the assumption that there is a language implementing these features).

"A common thing that is done, similar to COM/DCOM, is to return error status, and pass all your outputs by reference."

Yeah, I've never liked that too much. For one, it does not facilitate the sensible use of method chaining.
http://arstechnica.com/information-technology/2013/10/are-there-reasons-for-returning-exception-objects-instead-of-throwing-them/?comments=1&start=40
CC-MAIN-2014-15
refinedweb
1,461
61.97
Overview

This chapter helps you to create your first Odoo module and deploy it in your Odoo.sh project.

This tutorial requires you to have created a project on Odoo.sh, and to know your Github repository's URL. Basic use of Git and Github is explained.

The below assumptions are made:

- ~/src is the directory where the Git repositories related to your Odoo projects are located,
- odoo is the Github user,
- odoo-addons is the Github repository,
- feature-1 is the name of a development branch,
- master is the name of the production branch,
- my_module is the name of the module.

Replace these by the values of your choice.

Create the development branch

From Odoo.sh

In the branches view:

- hit the + button next to the development stage,
- choose the branch master in the Fork selection, type feature-1 in the To input.

Once the build is created, you can access the editor and browse to the folder ~/src/user to access the code of your development branch.

From your computer

Clone your Github repository on your computer:

$ mkdir ~/src
$ cd ~/src
$ git clone
$ cd ~/src/odoo-addons

Create a new branch:

$ git checkout -b feature-1 master

Create the module structure

Scaffolding the module

While not necessary, scaffolding avoids the tedium of setting up the basic Odoo module structure. You can scaffold a new module using the executable odoo-bin.

From the Odoo.sh editor, in a terminal:

$ odoo-bin scaffold my_module ~/src/user/

Or, from your computer, if you have an installation of Odoo:

$ ./odoo-bin scaffold my_module ~/src/odoo-addons/

If you do not want to bother installing Odoo on your computer, you can also download this module structure template, in which you replace every occurrence of my_module with the name of your choice.

The below structure will be generated:

my_module
├── __init__.py
├── __manifest__.py
├── controllers
│   ├── __init__.py
│   └── controllers.py
├── demo
│   └── demo.xml
├── models
│   ├── __init__.py
│   └── models.py
├── security
│   └── ir.model.access.csv
└── views
    ├── templates.xml
    └── views.xml

Warning

Do not use special characters other than the underscore ( _ ) for your module name, not even a hyphen ( - ). This name is used for the Python classes of your module, and having class names with special characters other than the underscore is not valid in Python.

Uncomment the content of the files:

- models/models.py, an example of a model with its fields,
- views/views.xml, a tree and a form view, with the menus opening them,
- demo/demo.xml, demo records for the above example model,
- controllers/controllers.py, an example of a controller implementing some routes,
- views/templates.xml, two example qweb views used by the above controller routes,
- __manifest__.py, the manifest of your module, including for instance its title, description and data files to load. You just need to uncomment the access control list data file:

# 'security/ir.model.access.csv',
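For reference, a minimal __manifest__.py along the lines of what scaffolding produces might look like this (a sketch; the summary, author, and website strings are placeholders, and the exact fields the scaffold emits may differ slightly):

{
    'name': "my_module",
    'summary': "Short one-line summary of the module's purpose",
    'description': """Longer description of the module""",
    'author': "My Company",
    'website': "https://www.example.com",
    'category': 'Uncategorized',
    'version': '0.1',
    # modules necessary for this one to work correctly
    'depends': ['base'],
    # data files always loaded at installation
    'data': [
        'security/ir.model.access.csv',
        'views/views.xml',
        'views/templates.xml',
    ],
    # data files only loaded in demonstration mode
    'demo': [
        'demo/demo.xml',
    ],
}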
Manually

If you want to create your module structure manually, you can follow Build an Odoo module to understand the structure of a module and the content of each file.

Push the development branch

Stage the changes to be committed:

$ git add my_module

Commit your changes:

$ git commit -m "My first module"

Push your changes to your remote repository.

From an Odoo.sh editor terminal:

$ git push https HEAD:feature-1

The above command is explained in the section Commit & Push your changes of the Online Editor chapter. It includes the explanation regarding the fact you will be prompted to type your username and password, and what to do if you use two-factor authentication.

Or, from your computer terminal:

$ git push -u origin feature-1

You need to specify -u origin feature-1 for the first push only. From that point, to push your future changes from your computer, you can simply use:

$ git push

Test your module

Your branch should appear in your development branches in your project. In the branches view of your project, you can click on your branch name in the left navigation panel to access its history. You can see here the changes you just pushed, including the comment you set.

Once the database is ready, you can access it by clicking the Connect button.

If your Odoo.sh project is configured to install your module automatically, you will directly see it amongst the database apps. Otherwise, it will be available in the apps to install. You can then play around with your module, create new records and test your features and buttons.

Test with the production data

You need to have a production database for this step. You can create it if you do not have it yet. Once you have tested your module in a development build with the demo data and believe it is ready, you can test it with the production data using a staging branch.

You can either:

- Make your development branch a staging branch, by drag and dropping it onto the staging section title.
- Merge it in an existing staging branch, by drag and dropping it onto the given staging branch.

You can also use the git merge command to merge your branches.

This will create a new staging build, which will duplicate the production database and make it run using a server updated with the latest changes of your branch. Once the database is ready, you can access it using the Connect button.
Or, from your computer:
- use the file browser of your choice to browse to your module folder ~/src/odoo-addons/my_module,
- then, open the file models/models.py using the editor of your choice, such as Atom, Sublime Text, PyCharm, vim, …

Then, after the description field

    description = fields.Text()

add a datetime field:

    start_datetime = fields.Datetime('Start time', default=lambda self: fields.Datetime.now())

Then, open the file views/views.xml. After

    <field name="value2"/>

add:

    <field name="start_datetime"/>

These changes alter the database structure by adding a column in a table, and modify a view stored in the database.

In order to be applied in existing databases, such as your production database, these changes require the module to be updated.

If you would like the update to be performed automatically by the Odoo.sh platform when you push your changes, increase your module version in its manifest.

Open the module manifest __manifest__.py. Replace

    'version': '0.1',

with

    'version': '0.2',

The platform will detect the change of version and trigger the update of the module upon the deployment of the new revision.

Browse to your Git folder.

From an Odoo.sh terminal:

    $ cd ~/src/user/

Or, from your computer terminal:

    $ cd ~/src/odoo-addons/

Then, stage your changes to be committed:

    $ git add my_module

Commit your changes:

    $ git commit -m "[ADD] my_module: add the start_datetime field to the model my_module.my_module"

Push your changes.

From an Odoo.sh terminal:

    $ git push https HEAD:feature-1

Or, from your computer terminal:

    $ git push

The platform will then create a new build for the branch feature-1.

Once you have tested your changes, you can merge them in the production branch, for instance by drag-and-dropping the branch onto the production branch in the Odoo.sh interface. As you increased the module version in the manifest, the platform will update the module automatically and your new field will be directly available. Otherwise, you can manually update the module from the apps list.

Use an external Python library

If you would like to use an external Python library which is not installed by default, you can define a requirements.txt file listing the external libraries your module depends on. The platform will use this file to automatically install the Python libraries your project needs.

This section explains the feature by using the Unidecode library in your module.

Create a file requirements.txt in the root folder of your repository.

From the Odoo.sh editor, create and open the file ~/src/user/requirements.txt. Or, from your computer, create and open the file ~/src/odoo-addons/requirements.txt.

Add:

    unidecode

Then use the library in your module, for instance to remove accents from characters in the name field of your model.

Open the file models/models.py. Before

    from odoo import models, fields, api

add:

    from unidecode import unidecode

After

    start_datetime = fields.Datetime('Start time', default=lambda self: fields.Datetime.now())

add:

    @api.model
    def create(self, values):
        if 'name' in values:
            values['name'] = unidecode(values['name'])
        return super(my_module, self).create(values)

    def write(self, values):
        if 'name' in values:
            values['name'] = unidecode(values['name'])
        return super(my_module, self).write(values)

Adding a Python dependency requires a module version increase for the platform to install it.
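Before bumping the version, it may help to see the pieces together. The sketch below shows roughly what the relevant part of models/models.py would look like after both changes, assuming the scaffolded model class is named my_module as in the generated template (other scaffolded fields and methods are trimmed for brevity):

    # Sketch of models/models.py after the changes above. Field names other
    # than the ones discussed in the text are assumptions based on the
    # scaffold template and trimmed for brevity.
    from unidecode import unidecode

    from odoo import models, fields, api


    class my_module(models.Model):
        _name = 'my_module.my_module'

        name = fields.Char()
        value = fields.Integer()
        value2 = fields.Float()  # the scaffolded compute method is omitted here
        description = fields.Text()
        # New field added in this chapter:
        start_datetime = fields.Datetime(
            'Start time', default=lambda self: fields.Datetime.now())

        @api.model
        def create(self, values):
            # Strip accents from the name before storing it.
            if 'name' in values:
                values['name'] = unidecode(values['name'])
            return super(my_module, self).create(values)

        def write(self, values):
            if 'name' in values:
                values['name'] = unidecode(values['name'])
            return super(my_module, self).write(values)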
Edit the module manifest __manifest__.py. Replace

    'version': '0.2',

with

    'version': '0.3',

Stage and commit your changes:

    $ git add requirements.txt
    $ git add my_module
    $ git commit -m "[IMP] my_module: automatically remove special chars in my_module.my_module name field"

Then, push your changes.

In an Odoo.sh terminal:

    $ git push https HEAD:feature-1

In your computer terminal:

    $ git push
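As a final aside, if you want a quick sanity check that the new dependency behaves as expected, you could run a line like the following in a Python shell where the requirements have been installed:

    # Quick local check of the unidecode behaviour used above.
    from unidecode import unidecode

    print(unidecode(u'Crème brûlée'))  # prints: Creme brulee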
https://www.odoo.com/documentation/user/14.0/zh_CN/odoo_sh/getting_started/first_module.html
I created a personal website while on a business trip back in July 2019. It was thrown together in a couple of days using plain HTML and CSS and a pretty okay visual design. Now that I'm in the job market again and finally looking to jump into development professionally, I wanted to remake my portfolio website with a little more pizazz.

I had a few requirements for this:
- I would start with an MVP and build upon it
- It had to be made in code, not with a website or blog builder
- It must be modular, with the ability to add new projects with as little code as possible
- The website itself should contain a simple list of my projects

MVP

With my requirements set, I took to creating an MVP product. Since the website would be a list of my projects, the MVP was also a simple list of my projects publicly available online. I used Airtable for this. Check out the MVP here.

One of the great things about Airtable is how it automatically generates unique API documentation for every sheet and view in the base. This was the perfect springboard into the modular concept for the site, in which I wouldn't need any code to add new portfolio entries.

React web app

I enjoy coding in React. I find the modular nature of components to be intuitive. I used React previously for Smashesque.com and had a good time, so I went with it again. Bootstrap is my framework of choice for throwing together pretty sites, so I chose to use it too.

Modular lists using Airtable

With the help of Tania Rascia's article on Using Context API in React (Hooks and Classes), I used Axios and the Airtable API to grab my view of choice and all the rows, fields and content therein from my MVP Airtable. My implementation is a little messy, but it worked, so no problem!

I started with EntryContexts.js, which performs the API call and creates a context state containing the spreadsheet object.

    import React, { Component } from 'react'
    import axios from 'axios'

    export const EntryContext = React.createContext()

    class EntryContextProvider extends Component {
      state = {
        entries: []
      }

      componentDidMount() {
        const fetchData = () => {
          axios
            .get(' Portfolio%20Entries?api_key=[MY_API_KEY]')
            .then(({ data }) => {
              this.setState({ entries: data.records })
            })
            .catch(console.log)
        }
        fetchData();
      }

      render() {
        return (
          <EntryContext.Provider value={{ ...this.state }}>
            {this.props.children}
          </EntryContext.Provider>
        )
      }
    }

    export default EntryContextProvider

Next I created a component called EntryList.js, which maps the EntryContextProvider component's state data into some simple HTML elements:

    import React from 'react'

    const ListEntry = props => {
      const EnEntry = props.entryData.map((entry, key) => {
        return (
          <div>
            <h3>{entry.fields.title}</h3>
            <p>{entry.fields.notes}</p>
            <p><a href={entry.fields.link}>Link</a></p>
          </div>
        )
      })
      return <div>{EnEntry}</div>
    }

    export default ListEntry

Finally, I created a page called Entries.js, which ties the EntryContextProvider and ListEntry components together and displays them on the page in simple React fashion. In this case, it is displayed as a list of portfolio entries on the home page of the website.
    import React, { Component } from 'react'
    import { EntryContext } from '../contexts/EntryContext'
    import ListEntry from '../components/EntryList'

    class Entries extends Component {
      render() {
        return (
          <EntryContext.Consumer>{(context) => {
            const { entries } = context
            return (
              <ListEntry entryData={entries} />
            )
          }}
          </EntryContext.Consumer>
        )
      }
    }

    export default Entries

In App.js, I wrapped my site in the EntryContextProvider component, which ensures that every page has access to the Airtable context.

    <EntryContextProvider>
      <Switch>
        <Route exact path="/" component={Entries} />
      </Switch>
    </EntryContextProvider>

Finally, I had the results I wanted! A simple list of all portfolio entries that were in my Airtable spreadsheet:

Aesthetic challenges

Many developers revel in minimal websites with lists of achievements and projects. A white colour scheme and emoji are both very popular. I enjoy being a bit contrarian and am a total 90s kid, so I took inspiration from the new SEGA MegaDrive Mini website and tried to match its look.

Unfortunately, there's a lot of history, imagery and the theme of a retro console that helps bring the 90s Spaceship look together. Without these things (and a lack of artistic talent at my disposal) the results were less than inspiring.

I realised that a dark theme for my portfolio was somewhat uninviting and less friendly than I wanted it to be, so I ended up going with a light theme. I wanted to keep some semblance of character, so I kept a scrolling background grid and gave the primary container a "sheet of paper" look.

At this point I decided to add images for each project and an emoji to identify what kind of project each is, again all contained in the spreadsheet and called with the Airtable API. I hope the emoji are intuitive to anyone viewing the portfolio, but the verdict is still out on that.

Once everything was styled, I was extremely happy with the outcome:

Final Touches

Since my website was made from scratch, I considered it an addition to my portfolio. However, I didn't want it to be added to the list with a link to itself. Therefore I added a ❔ icon in the upper-left which triggers a popover that gives more information on the site. This article will be added onto it, too.

Finally, there was a site-breaking bug to be squashed. An empty field in the spreadsheet caused the entire Airtable context to fail, resulting in a blank web page. I added some very rudimentary validation to resolve this, but I didn't over-think it too much since the Airtable should never have empty fields if I'm managing it. At the very least, correct entries load as they should, with a simple inline error if there are any problems with a field.

And that's about it for my V1 portfolio website! To add new projects I just add a row to the sheet, avoiding any code at all. Let's look at my requirements from the beginning of the project:
- I would start with an MVP and build upon it ✔
- It had to be made in code, not with a website or blog builder ✔
- It must be modular, with the ability to add new projects with as little code as possible ✔
- The website itself should contain a simple list of my projects ✔

As you can see, I hit all four of my requirements! It was a great journey and an interesting project. I learned the Airtable API, the importance of validation and plenty of design quirks. I'm very happy with the end result!

What's next?

I enjoy the site as it is and will most likely keep it simple for now.
I may use more spreadsheets to add additional list-based sections to the site: articles, testimonials, cat photos... whatever I want to add, I can do so with very little code. I just clone the Entries, EntryContextProvider and ListEntry components, replace the Airtable API link and make any styling changes I want to.

Airtable is not ideal for, say, entire blog posts, but I'm actually curious about whether it could be done. Imagine an entire site with an Airtable backend? It's possible, and perhaps I'll dabble in that idea in the future. For now, I'm happy to mark this V1 project complete!

BONUS

I just added a new field to the Airtable named "order" and a new code block. With this new snippet, I can adjust the order in which the list entries appear by adding an order value in Airtable!

    const { entries } = context
    let sortedEntries = entries.sort(
      function(a, b) { return a.fields.order - b.fields.order })

I'm currently looking for work! Drop me an email at sgyll@protonmail.com if you'd like to chat.

Discussion (3)

Added the ## BONUS section 🎉

Hi, I liked your way of approaching the work. Give it a check on mobile screens too; it goes off the screen a bit.

Thank you! Mobile optimisation coming next :)
https://dev.to/koabrook/mvp-to-v1-creating-my-portfolio-website-with-react-and-the-airtable-api-3fac
Since the introduction of the asset pipeline, managing JavaScript assets in a typical Rails application is pretty straightforward. You figure out some structure for organizing the files under app/assets/javascripts/, concatenate them together with some directives in application.js, and that file is most likely included in a layout and you're on your way. But what about those pesky page-specific initializer scripts?

A popular approach is to use a strategy like this. The gist of this approach is to organize your page initializers in an object literal by controller name and action, something like

    initializers:
      foos:
        index: ->
        show: ->

The document body is decorated with the current controller name and action. A document-ready script extracts the controller and action, looks it up in the initializers object and executes the associated initializer. Generally the initializer scripts are placed somewhere under app/assets/javascripts and included in the application.js bundle.

This approach is perfectly fine, but a few minor things bug me:

An approach that has worked for me looks something like this:

I like this approach for a few reasons:

The downsides:

You probably noticed the code at the bottom of the application.js file. This code doesn't specifically have anything to do with the loading strategy outlined above, but it does complement it. It creates a namespace for your JavaScript, named after your Rails app. Within this namespace it creates a page object. This object provides a few things:

This is obviously just one of many approaches out there. What other approaches have worked for you?

Interested in a Career at Carbon Five? Check out our job openings.
https://blog.carbonfive.com/a-strategy-for-loading-page-specific-javascript/
note Juerd

> I'll point out that moving from Perl to PHP is *very* easy.

Is it really? Perhaps you are right for simple web-based stuff like a mail form, a simple database-driven site or just templating. However, real programming isn't translated to PHP so easily.

First of all, what are you going to do with namespaces? PHP still does not have namespaces.

Then, what will you do with closures? PHP has no closures. Heck, it doesn't even have anonymous functions. That's another thing: how will you be rewriting that hash of coderefs? A hash of strings that are eval'ed at runtime?

And what about all those objects that aren't simple hashes?

But let's assume you didn't use any of these slightly more advanced programming techniques than the average PHP "programmer" can handle. But you did use modules. You do use modules, don't you?

PHP is a web programming language, so it must have a good HTML parser ready, right? One is available, but it cannot be called good. It cannot even parse processing instructions like, ehm, <?php ... ?> itself.

Another common task in web programming is sending HTML mail with a few inline images. So what alternative for MIME::Lite do you have? The PHP-ish solution is to build the message manually. Good luck, and have fun.

Enough with the modules. I think I've proven my point that CPAN makes Perl strong. Now let's discuss the core. In fact, let's focus on something extremely elementary in programming: arrays!

PHP's "arrays" are hashes. It does not have arrays in the sense that most languages have them. You can't just translate $foo[4] = 4; $foo[2] = 2; foreach $element (@foo) { print $element } to $foo[4] = 4; $foo[2] = 2; foreach ($foo as $element) { print $element }. The Perl version prints 24, PHP insists on 42. Yes, there is ksort(), but that isn't something you can guess. It requires very in-depth knowledge of PHP. And that's the one thing PHP's documentation tries to avoid :)

Also, don't think $foo = bar() || $baz does the same in PHP. In PHP, you end up with true or false. So you must write it in two separate expressions.

Exactly what makes you think and even say moving from Perl to PHP is easy? It's very, very hard to un-learn concise programming and go back to medieval programming times. And converting existing code is even harder.
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=422100
fmtmsg - print formatted error messages

Synopsis

    #include <fmtmsg.h>

    int fmtmsg(long classification, const char *label, int severity,
               const char *text, const char *action, const char *tag);

Description

The classification argument is the sum of values describing 4 types of information. The first value defines the output channel.

The severity argument can take one of the following values: MM_NOSEV (0, no severity is printed), MM_HALT (1, "HALT"), MM_ERROR (2, "ERROR"), MM_WARNING (3, "WARNING"), MM_INFO (4, "INFO"). The numeric values are between 0 and 4. Using addseverity(3) or the environment variable SEV_LEVEL you can add more levels and strings to print.

Return Value

The function can return 4 values: MM_OK (everything went smoothly), MM_NOTOK (a complete failure), MM_NOMSG (an error writing to stderr), and MM_NOCON (an error writing to the console).

Environment

The environment variables MSGVERB and SEV_LEVEL influence which parts of the message are printed and which severity levels are available.

Versions

fmtmsg() is provided in glibc since version 2.1.

Notes

The functions fmtmsg() and addseverity(3), and the environment variables MSGVERB and SEV_LEVEL, come from System V. The function fmtmsg() and the environment variable MSGVERB are described in POSIX.1-2001. System V and UnixWare man pages tell us that these functions have been replaced by "pfmt() and addsev()" or by "pfmt(), vpfmt(), lfmt(), and vlfmt()", and will be removed.

Example

A typical call, such as the one used in util-linux mount, prints:

    util-linux:mount: ERROR: unknown mount option
    TO FIX: See mount(8).  util-linux:mount:017

After

    MSGVERB=text:action; export MSGVERB

the output becomes:

    unknown mount option
    TO FIX: See mount(8).

See Also

addseverity(3), perror(3)

Colophon

This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.sgvulcan.com/fmtmsg.3.php
How to specify multiple return types using type-hints

I have a function in Python that can either return a bool or a list. Is there a way to specify the return types using type hints? For example, is this the correct way to do it?

    def foo(id) -> list or bool:
        ...

Answer: From the documentation:

    class typing.Union
        Union type; Union[X, Y] means either X or Y.

Hence the proper way to represent more than one return data type is:

    from typing import Union

    def foo(id) -> Union[list, bool]:
        ...

Annotations can also be inspected at runtime:

    >>> def foo(a: str) -> list:
    ...     return [a]
    ...
    >>> foo.__annotations__
    {'return': <class 'list'>, 'a': <class 'str'>}

Please go through PEP 483 for more about type hints. Also see "What are type hints in Python 3.5?" Kindly note that this is available only for Python 3.5 and upwards. This is mentioned clearly in PEP 484.

From: stackoverflow.com/q/33945261
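A supplementary note, not part of the original answer: on Python 3.10 and later, PEP 604 allows the same union to be written with the | operator, so no import is needed. A minimal sketch:

    # Python 3.10+ alternative using PEP 604 union syntax (X | Y);
    # equivalent to Union[list, bool] from the answer above.
    def foo(id) -> list | bool:
        if id:
            return [id]
        return False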
https://python-decompiler.com/article/2015-11/how-to-specify-multiple-return-types-using-type-hints
    import java.util.Scanner;

    public class IfTest0 {
        public static void main(String[] args) {
            Scanner scan = new Scanner(System.in);

            // Prompt hits
            System.out.print("Enter number of hits > ");
            int hitAmount = scan.nextInt();
            System.out.println("Number of hits is " + hitAmount);

            // Prompt at bats
            System.out.print("Enter number of at bats > ");
            int atBats = scan.nextInt();
            System.out.println("Number of at bats is " + atBats);

            int Average;
            Average = (hitAmount / atBats);

            if (Average > 0.300) {
                System.out.println("Eligible for All-Stars ");
            } else {
                System.out.println("Not eligible for All-Stars ");
            }
        }
    }

So I'm trying to use an if/else statement to state that when the batting average is greater than .300 the player is eligible for All-Stars. What happens is that when I input the hits and at bats, it always says the player is not eligible. But when the average equals one, the player is eligible, which I find very strange, and I can't figure out what is wrong with my code.

Edit: I think I figured it out. I was using / to divide two ints, which gives an integer result of 0 whenever hits are fewer than at-bats; I just need to figure out how to divide properly now. (Declaring the average as a double and casting one operand, e.g. (double) hitAmount / atBats, would give the fractional result the comparison expects.)
http://www.javaprogrammingforums.com/whats-wrong-my-code/10050-very-new-java-basic-question.html
Hi, I need guidance for this program; it seems I don't understand the basic fundamentals of creating a histogram with C++. I'm not asking for spoonfeeding or anything, I just need to know what I'm doing wrong and whether there is anything I can include to improve it. This program needs to print out a histogram of students and their marks. It should allow a user to key in a total of 20 marks (20 students), and once these marks have been keyed in (1-100 marks) it should print a histogram something like this:

    0-29   *****
    30-39  *****
    40-69  *****
    70-100 *****

This is what I've written so far, and it doesn't seem so good.

    #include <iostream>
    using namespace std;

    int main()
    {
        int i, j;
        int count;
        int Input, FirstInterval, SecondInterval, ThirdInterval;

        cout << "How many marks did the student recieve?" << endl;
        cin >> Input;

        if ( Input < 29 )
            FirstInterval++;
        for(i = 0; i < FirstInterval ; i++)count;
        {
            cout <<"*" << endl;
        }

        if ( Input < 39 )
            SecondInterval++;
        for(i = 0; i < SecondInterval ; i++)count;
        {
            cout <<"*" << endl;
        }

        if ( Input < 69 )
            ThirdInterval++;
        for(i = 0; i < ThirdInterval ; i++)count;
        {
            cout <<"*" << endl;
        }

        return 0;
    }

What my program does is: it lets me key in a number, but whatever number I key in it prints three stars below it.

    28
    *
    *
    *
    [Please press any button to continue]

That's what comes up after I run my program. Any assistance is appreciated; thanks for your time.
https://www.daniweb.com/programming/software-development/threads/338310/need-help-with-histogram
The Button widget is a standard Tkinter widget used to implement various kinds of buttons. Buttons can contain text or images, and you can associate a Python function or method with each button; when the button is pressed, Tkinter automatically calls that function or method.

When to use the Button Widget

Use the Button widget when you want an action to be triggered when the user clicks the button.

Patterns

Plain buttons are pretty straightforward to use. All you have to do is to specify the button contents (text, bitmap, or image) and what function or method to call when the button is pressed:

    from Tkinter import *

    master = Tk()

    def callback():
        print "click!"

    b = Button(master, text="OK", command=callback)
    b.pack()

    mainloop()

A button without a callback is pretty useless; it simply doesn't do anything when you press the button. You might wish to use such buttons anyway when developing an application. In that case, it is probably a good idea to disable the button to avoid confusing your beta testers:

    b = Button(master, text="Help", state=DISABLED)

If you don't specify a size, the button is made just large enough to hold its contents. You can use the padx and pady options to add some extra space between the contents and the button border. You can also use the height and width options to explicitly set the size. If you display text in the button, these options define the size of the button in text units. If you display bitmaps or images instead, they define the size in pixels (or other screen units). You can specify the size in pixels even for text buttons, but that requires some magic. Here's one way to do it (there are others):

    f = Frame(master, height=32, width=32)
    f.pack_propagate(0)  # don't shrink
    f.pack()

    b = Button(f, text="Sure!")
    b.pack(fill=BOTH, expand=1)

Buttons can display multiple lines of text (but only in one font). You can use newlines, or use the wraplength option to make the button wrap text by itself. When wrapping text, use the anchor, justify, and possibly padx options to make things look exactly as you wish. An example:

    b = Button(master, text=longtext, anchor=W, justify=LEFT, padx=2)

To make an ordinary button look like it's held down, for example if you wish to implement a toolbox of some kind, you can simply change the relief from RAISED to SUNKEN:

    b.config(relief=SUNKEN)

You might wish to change the background as well. Note that a possibly better solution is to use a Checkbutton or Radiobutton with the indicatoron option set to false:

    b = Checkbutton(master, image=bold, variable=var, indicatoron=0)

In earlier versions of Tkinter, the image option overrides the text option. If you specify both, only the image is displayed. In later versions, you can use the compound option to change this behavior. To display text on top of an image, set compound to CENTER:

    b = Button(master, text="Click me", image=pattern, compound=CENTER)

To display an icon along with the text, set the option to one of LEFT, RIGHT, TOP, or BOTTOM:

    # put the icon to the left of the text label
    b = Button(compound=LEFT, image=icon, text="Action")

    # put the icon on top of the text
    b = Button(compound=TOP, image=icon, text="Quit")

Reference

Button(master=None, **options) (class)
    A command button.

    command=
        Called when the button is pressed. If this option is not used, nothing will happen when the user presses the button. (command/Command)

    default=
        If set, the button is a default button. Tkinter will indicate this by drawing a platform-specific indicator (usually an extra border). The default is DISABLED (no default behavior). (default/Default)

    justify=
        Defines how to align multiple lines of text. Use LEFT, RIGHT, or CENTER. Default is CENTER. (justify/Justify)

    relief=
        Defines the border decoration. Usually, the button is SUNKEN when pressed, and RAISED otherwise. Other possible values are GROOVE, RIDGE, and FLAT. Default is RAISED.
        (relief/Relief)

    repeatdelay=
        (repeatDelay/RepeatDelay)

    repeatinterval=
        (repeatInterval/RepeatInterval)

    flash()
        Flash the button. This method redraws the button several times, alternating between active and normal appearance.

    invoke()
        Call the command associated with the button.
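To tie the patterns and reference entries above together, here is a small self-contained sketch. It is written against the Python 2-era Tkinter import used throughout this page (on Python 3 the module is named tkinter), and combines command, config and invoke():

    # Small runnable sketch combining the patterns above (Python 2-style
    # Tkinter import to match the rest of the page).
    from Tkinter import *

    master = Tk()
    state = {"on": False}

    def toggle():
        # Make the button look "held down" once clicked, as described above.
        state["on"] = not state["on"]
        if state["on"]:
            b.config(relief=SUNKEN, text="On")
        else:
            b.config(relief=RAISED, text="Off")

    b = Button(master, text="Off", command=toggle)
    b.pack(padx=10, pady=10)

    b.invoke()  # call the command programmatically

    mainloop()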
http://www.effbot.org/tkinterbook/button.htm
Musachy Barroso closed WW-1888.
-------------------------------

    Resolution: Fixed
    Fix Version/s: 2.1.3
                   (was: Future)

This was partially fixed, see comments for details.

> Forms generate invalid xhtml using the xhtml template
> -----------------------------------------------------
>
> Key: WW-1888
> URL:
> Project: Struts 2
> Issue Type: Improvement
> Components: Plugin - Tags
> Affects Versions: 2.0.6
> Reporter: Niko Korhonen
> Assignee: Musachy Barroso
> Priority: Minor
> Fix For: 2.1.3
>
> Struts 2 forms with the XHTML template seem to have some XHTML compliance problems.
>
> The <s:submit> tag inside <s:form> produces invalid XHTML. The submit tag produces code like this:
>
>     <td colspan="2">
>         <div align="right"><input type="submit" id="SettingsSubmit_0" value="Submit"/></div>
>     </td>
>
> The problem is that the "align" attribute of the div tag is deprecated since HTML 4.01 and is not valid XHTML. This can be fixed by including the align attribute on the td tag instead, as such:
>
>     <td colspan="2" align="right">
>
> Using validation in forms produces invalid XHTML code. The required attribute "type" is not specified in the produced script tag:
>
>     <script src="/../struts/xhtml/validation.js"></script>
>
> This can be corrected by including the type attribute, as such:
>
>     <script src="/../struts/xhtml/validation.js" type="text/javascript"></script>
>
> The form tag generates this code:
>
>     <form namespace="/pages/webui" id="SettingsSubmit" name="SettingsSubmit" onsubmit="return validateForm_SettingsSubmit();" action="/../SettingsSubmit.jspa" method="POST">
>
> There are some problems here:
> 1. There is no "name" attribute in XHTML. This can be fixed by removing it completely and only using "id".
> 2. There is no "namespace" attribute in XHTML. I don't know how this can be fixed. I use an edited template that does not produce the "namespace" attribute at all.
> 3. "POST" should be written in lowercase, i.e. "post".
>
> All of these issues can be fixed by editing the XHTML template files. I can provide my own modifications if needed.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
http://mail-archives.us.apache.org/mod_mbox/struts-issues/200812.mbox/%3C716254221.1228171777074.JavaMail.jira@brutus%3E
I386_GET_MTRR(2)          System Calls Manual (i386)          I386_GET_MTRR(2)

NAME
     i386_get_mtrr, i386_set_mtrr -- access Memory Type Range Registers

LIBRARY
     i386 Architecture Library (libi386, -li386)

SYNOPSIS
     #include <sys/types.h>
     #include <machine/sysarch.h>
     #include <machine/mtrr.h>

     int i386_get_mtrr(struct mtrr *mtrrp, int *n);
     int i386_set_mtrr(struct mtrr *mtrrp, int *n);

DESCRIPTION
     These functions provide an interface to the MTRR registers found on
     686-class processors for controlling processor access to memory ranges.
     This is most useful for accessing devices such as video accelerators on
     pci(4) and agp(4) buses. For example, enabling write-combining allows
     bus-write transfers to be combined into a larger transfer before
     bursting over the bus. This can increase performance of write operations
     2.5 times or more.

     mtrrp is a pointer to one or more mtrr structures, as described below.
     The n argument is a pointer to an integer containing the number of
     structures pointed to by mtrrp. For i386_set_mtrr() the integer pointed
     to by n will be updated to reflect the actual number of MTRRs
     successfully set. For i386_get_mtrr() no more than n structures will be
     copied out, and the integer value pointed to by n will be updated to
     reflect the actual number of valid structures retrieved. A NULL argument
     to mtrrp will result in just the number of MTRRs available being
     returned in the integer pointed to by n.

     The argument mtrrp has the following structure:

          struct mtrr {
                  uint64_t base;
                  uint64_t len;
                  uint8_t type;
                  int flags;
                  pid_t owner;
          };

     The location of the mapping is described by its physical base address
     base and length len. Valid values for type are:

          MTRR_TYPE_UC    uncached memory
          MTRR_TYPE_WC    use write-combining
          MTRR_TYPE_WT    use write-through caching
          MTRR_TYPE_WP    write-protected memory
          MTRR_TYPE_WB    use write-back caching

     Valid values for flags are:

          MTRR_PRIVATE    own range, reset the MTRR when the current process
                          exits
          MTRR_FIXED      use fixed range MTRR
          MTRR_VALID      entry is valid

     The owner member is the PID of the user process which claims the
     mapping. It is only valid if MTRR_PRIVATE is set in flags. To
     clear/reset MTRRs, use a flags field without MTRR_VALID set.

RETURN VALUES
     Upon successful completion zero is returned; otherwise -1 is returned on
     failure, and the global variable errno is set to indicate the error. The
     integer value pointed to by n will contain the number of successfully
     processed mtrr structures in both cases.

ERRORS
     [ENOSYS]    The currently running kernel or CPU has no MTRR support.

     [EINVAL]    The currently running kernel has no MTRR support, or one of
                 the mtrr structures pointed to by mtrrp is invalid.

     [EBUSY]     No unused MTRRs are available.

HISTORY
     The i386_get_mtrr() and i386_set_mtrr() functions appeared in
     NetBSD 1.6.

NetBSD 6.1.5                  November 10, 2001                  NetBSD 6.1.5
http://modman.unixdev.net/?sektion=2&page=i386_set_mtrr&manpath=NetBSD-6.1.5
Pig Latin
June 2, 2009

We provide a single function, pig-latin, that works in both directions; strings that contain a hyphen are translated from Pig Latin to English, and strings without a hyphen are translated from English to Pig Latin.

    (define (pig-latin word)
      (let* ((vowels (string->list "aeiouAEIOU"))
             (ws (string->list word))
             (rs (reverse ws)))
        (if (member #\- ws)
            ; pig-latin to english
            (let* ((front (take-while (lambda (c) (not (char=? c #\-))) ws))
                   (back (drop (+ (length front) 1) ws)))
              (if (string=? (list->string back) "way")
                  (list->string front)
                  (list->string (append (take (- (length back) 2) back) front))))
            ; english to pig-latin
            (if (member (car ws) vowels)
                (string-append word "-way")
                (let ((init-cons (take-while (lambda (c) (not (member c vowels))) ws)))
                  (list->string (append (drop (length init-cons) ws)
                                        (list #\-)
                                        init-cons
                                        (list #\a #\y))))))))

Here are some examples, in both directions; notice the translation of art-way, which resolves the ambiguity of art and wart in favor of art.

    > (map pig-latin '("art" "eagle" "start" "door" "spray" "prays" "wart"))
    ("art-way" "eagle-way" "art-stay" "oor-day" "ay-spray" "ays-pray" "art-way")
    > (map pig-latin '("art-way" "eagle-way" "art-stay" "oor-day" "ay-spray" "ays-pray" "art-way"))
    ("art" "eagle" "start" "door" "spray" "prays" "art")

Pig-latin uses take, drop and take-while from the Standard Prelude. You can run pig-latin at.

Comments:

[…] Praxis – Pig Latin By Remco Niemeijer. Today's Programming Praxis problem is about Pig Latin. Our target is 11 lines (the size of the provided […]

My Haskell solution (see for a version with comments):

A Python solution for the Pig-Latin-to-English direction also appeared in the comments:

    def p2eword(pigword):
        pigsplit = pigword.split('-')
        if pigsplit[1][0] == 'w':
            return pigsplit[0]
        else:
            return pigsplit[1][0] + pigsplit[0]

    def p2e(pigstr):
        return " ".join([p2eword(word) for word in pigstr.split(' ')])
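As a supplementary illustration (not one of the originally posted solutions), here is a Python sketch of the English-to-Pig-Latin direction described above, using the same rules: a word starting with a vowel gets "-way" appended, otherwise the leading consonant cluster moves to the end followed by "ay".

    # Supplementary Python sketch of the English -> Pig Latin rule
    # described in the exercise; not one of the original posted solutions.
    VOWELS = set("aeiouAEIOU")

    def pig_latin(word):
        if word[0] in VOWELS:
            return word + "-way"
        # Find the leading consonant cluster.
        i = 0
        while i < len(word) and word[i] not in VOWELS:
            i += 1
        return word[i:] + "-" + word[:i] + "ay"

    assert pig_latin("art") == "art-way"
    assert pig_latin("start") == "art-stay"
    assert pig_latin("door") == "oor-day"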
http://programmingpraxis.com/2009/06/02/pig-latin/2/
2017-03-18 19:30 GMT+01:00 Petr Jelinek <petr.jeli...@2ndquadrant.com>:

> On 16/03/17 17:15, David Steele wrote:
> > On 2/1/17 3:59 PM, Pavel Stehule wrote:
> >> Hi
> >>
> >> 2017-01-24 21:33 GMT+01:00 Pavel Stehule <pavel.steh...@gmail.com>:
> >>
> >>     Perhaps that's as simple as renaming all the existing _ns_*
> >>     functions to _block_ and then adding support for pragmas...
> >>
> >>     Since you're adding cursor_options to PLpgSQL_expr it should
> >>     probably be removed as an option to exec_*.
> >>
> >> I have to recheck it. Some cursor options come from dynamic cursor
> >> variables and are related to the dynamic query - not the query that
> >> creates the query string.
> >>
> >> hmm .. so the current state is better, due to using options like
> >> CURSOR_OPT_PARALLEL_OK:
> >>
> >>     if (expr->plan == NULL)
> >>         exec_prepare_plan(estate, expr, (parallelOK ?
> >>             CURSOR_OPT_PARALLEL_OK : 0) | expr->cursor_options);
> >>
> >> This option is not a permanent feature of the expression - and so I
> >> cannot remove the cursor_option argument from exec_*.
> >>
> >> I did minor cleaning - removed cursor_options from plpgsql_var
> >>
> >> + basic doc
> >
> > This patch still applies cleanly and compiles at cccbdde.
> >
> > Any reviewers want to have a look?
>
> I'll bite.
>
> I agree with Jim that it's not very nice to add yet another
> block/ns-like layer. I don't see why pragma could not be added to either
> PLpgSQL_stmt_block (yes, pragma can be for the whole function, but the
> function body is represented by PLpgSQL_stmt_block as well, so no issue
> there), or to the namespace code. In the namespace, since they are used
> for another thing, there would be a bit of unnecessary propagation, but
> it's 8 bytes per namespace, which does not seem all that much.
>
> My preference would be to add it to PLpgSQL_stmt_block (unless we plan
> to add the possibility of pragmas for other loops and other things), but
> I am not sure if the current block is easily (and in a fast way)
> accessible from all places where it's needed. Maybe the needed info
> could be pushed to estate from PLpgSQL_stmt_block during execution.

There is maybe a partial misunderstanding of pragma - it is a set of nested configurations used at compile time only. It can be used at execution time too - it changes nothing. A pragma doesn't build a persistent tree. It is a stack of configurations that allows fast access to the current configuration, and fast leaving of a configuration when the change goes out of scope.

I don't see any advantage in integrating pragma into the ns code or into stmt_block. But maybe I don't understand your idea.

I see another possibility in the code - nesting init_block_directives() into plpgsql_ns_push() and free_block_directives() into plpgsql_ns_pop().

Pavel

> --
> Petr Jelinek
> PostgreSQL Development, 24x7 Support, Training & Services
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg308669.html
RvsPython #1: Webscraping

Webscraping is a powerful tool available for efficient data collection. There are ways to do it in both R and Python. I've built the same scraper in R and Python, which gathers information about all the White House briefings available on (don't worry guys, it's legal). This is based off of what I learned from FreeCodeCamp about webscraping in Python (here's the link: ). This blog is about the approaches I naturally used with R's rvest package and Python's BeautifulSoup library.

Here are two versions of code which I use to scrape all the briefings. This webscraper extracts:
1) the date of the briefing,
2) the title of the briefing,
3) the URL of the briefing,
4) the issue type,
and puts them in a data frame.

The differences between the way I did this in Python vs R:

Python
(a) I grabbed the data using the xml
(b) Parsing the data was done with the html classes (and cleaned with a small amount of regex)
(c) I used for loops
(d) I had to import other libraries besides bs4

R
(a) I used a CSS selector to get the raw data.
(b) The data was parsed using good ol' regular expressions.
(c) I used sapply()
(d) I just used rvest and the base library.

This is a comparison between how I learned to webscrape in Python vs how I learned how to do it in R. Let's jump in and see which one is faster!

Python Version with BeautifulSoup

    # A simple webscraper providing a dataset of all White House briefings
    from bs4 import BeautifulSoup
    import requests
    import pandas as pd
    import re
    import lxml


    def get_whitehouse_breifings():
        # Generalize to all pages
        orig_link = requests.get("")
        orig_content = orig_link.content
        sp = BeautifulSoup(orig_content, 'lxml')
        pages = sp.find_all('a', {'class': 'page-numbers'})
        the_pages = []
        for pg in pages:
            the_pages.append(pg.get_text())
        # Now make set of links
        the_links = []
        for num in range(1, int(max(the_pages)) + 1):
            the_links.append('' + 'page/' + str(num) + '/')
        dat = pd.DataFrame()
        for link in the_links:
            link_content = requests.get(link)
            link_content = link_content.content
            sp = BeautifulSoup(link_content, 'lxml')
            h2_links = sp.find_all('h2')
            date_links = sp.find_all('p', {"class": "meta__date"})
            breif_links = sp.find_all('div', {"class": "briefing-statement__content"})
            title = []
            urls = []
            date = []
            breifing_type = []
            for i in h2_links:
                a_tag = i.find('a')
                urls.append(a_tag.attrs['href'])
                title.append(a_tag.get_text())
            for j in date_links:
                d_tag = j.find('time')
                date.append(d_tag.get_text())
            for k in breif_links:
                b_tag = k.find('p')
                b_tag = b_tag.get_text()
                b_tag = re.sub('\\t', '', b_tag)
                b_tag = re.sub('\\n', '', b_tag)
                breifing_type.append(b_tag)
            dt = pd.DataFrame(list(zip(date, title, urls, breifing_type)))
            dat = pd.concat([dat, dt])
        dat.rename(columns={"Date": date, "Title": title, "URL": urls, "Issue Type": breifing_type})
        return (dat)

Running the code, Python's time:

    import time

    start_time = time.time()
    pdt = get_whitehouse_breifings()
    # Time taken to run code
    print("--- %s seconds ---" % (time.time() - start_time))

    ## --- 162.8423991203308 seconds ---

R Version with rvest

    library(rvest)

    get_whitehouse_breifings <- function() {
      # Preliminary Functions
      pipeit <- function(url, code) {
        read_html(url) %>% html_nodes(code) %>% html_text()
      }
      pipelink <- function(url, code) {
        read_html(url) %>% html_nodes(code) %>% html_attr("href")
      }

      first_link <- ""

      # Get total number of pages
      pages <- pipeit(first_link, ".page-numbers")
      pages <- as.numeric(pages[length(pages)])

      # Get all links
      all_pages <- c()
      for (i in 1:pages) {
        all_pages[i] <- paste0(first_link, "page/", i, "/")
      }

      urls <- unname(sapply(all_pages, function(x) {
        pipelink(x, ".briefing-statement__title a")
      })) %>% unlist()

      breifing_content <- unname(sapply(all_pages, function(x) {
        pipeit(x, ".briefing-statement__content")
      })) %>% unlist()

      # Data Wrangling
      test <- unname(sapply(breifing_content, function(x) gsub("\\n|\\t", "_", x)))
      test <- unname(sapply(test, function(x) strsplit(x, "_")))
      test <- unname(sapply(test, function(x) x[x != ""]))

      breifing_type <- unname(sapply(test, function(x) x[1])) %>% unlist()
      title <- unname(sapply(test, function(x) x[2])) %>% unlist()
      dat <- unname(sapply(test, function(x) x[length(x)])) %>% unlist()

      dt <- data.frame("Date" = dat, "Title" = title, "URL" = urls, "Issue Type" = breifing_type)
      dt
    }

Running the code, R's time:

    ## user  system elapsed
    ## 16.77    4.22  415.95

Analysis and Conclusion:

On my machine Python was waaaaay faster than R. This was primarily because the function I wrote in R had to go over the website a second time to extract links. Could it be sped up if I wrote the code extracting text and links in one step? Very likely. But I would have to change the approach to be similar to how I did it in Python.

For me rvest seems to be great for "quick and dirty" code (point and click with a CSS selector, put it in a function, iterate across pages; repeat for the next field). BeautifulSoup seems like it's better for more methodical scraping. The approach is naturally more HTML-heavy.

Python requires one to reference the library every time a function is called from it, which, as a native R user, I find frustrating compared to just attaching the library to the script. For R, you have to play with the data structure (from lists to vectors) to get the data to be coerced to a dataframe. I didn't need to do any of this for Python.

I'm sure there's more to write about these libraries (and better ways to do this in both of these languages), but I'm happy that I am acquainted with them both! Let me know what you think!

P.S. This was uploaded with the RWordpress package. Check out my LinkedIn post on the topic here.
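One speed-related aside, not from the original post: in the Python version each requests.get() opens a fresh connection. Reusing a requests.Session, a standard feature of the requests library, would typically cut per-page overhead by keeping the connection alive. A minimal sketch, where the_links stands in for the page URLs built above:

    # Minimal sketch: reuse one HTTP connection across pages with
    # requests.Session. `the_links` stands in for the page URLs built above.
    import requests
    from bs4 import BeautifulSoup

    def fetch_pages(the_links):
        pages_html = []
        with requests.Session() as session:
            for link in the_links:
                resp = session.get(link)
                pages_html.append(BeautifulSoup(resp.content, 'lxml'))
        return pages_html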
https://python-bloggers.com/2020/07/rvspython-1-webscraping/
How to Send Emails With Python

Python is a highly versatile programming language. It has a diverse ecosystem of open-source libraries that make it easy to build common functionality in a project. Sending emails with Python is one example of such functionality.

Sending emails is extremely useful: it allows you to send bulk emails, send confirmation or verification codes to users, or even automate regular emailing tasks. This tutorial will teach you how to send emails with Python using SMTP (Simple Mail Transfer Protocol).

Table of Contents

You can skip to a specific section of this tutorial on how to send emails with Python using the table of contents below:
- Prerequisites
- Online and Offline Email Sending Options
- Secure SMTP Connection
- Sending Emails
- Other Email Libraries
- Bulk Emails
- Final Thoughts

Prerequisites

There are a few things you need before you can send emails with Python. Let's get the prerequisites out of the way before we proceed further:

- Python: I assume that you already have Python installed. If not, please install the latest stable version from here.
- A Gmail account: It's fine to use an existing Gmail account. However, in this tutorial you'll need to change a Gmail security setting. You'll also be using your username and password in plain text. I recommend you create a separate Gmail account for this tutorial because of this.

Note: It is possible to use your existing account to send emails with Python without changing your security settings. To do this, you need to interact with the Gmail API. That's a topic for another day.

Whether you create a new account or use an existing one, the first thing you'll need to do is turn ON "Less secure app access". Click here to go to the relevant page in your Gmail account settings. Ensure that you are logged in to the correct account if you are logged in to multiple accounts in the same browser.

Now that your Gmail account is set up properly, there are a couple of Python modules to know about:

- The ssl module: In this tutorial, you'll be using a secure SMTP connection, which relies on the ssl module. It is part of Python's standard library, so there is nothing to install.
- smtplib (part of the standard Python library): SMTPLib is included as part of the standard libraries of Python. This means you will not need to install it explicitly, but it's important to know what you'll be working with moving forward.

Online and Offline Email Sending Options

You can test the code in this tutorial using both localhost and Gmail. This allows you to simulate the process as well as to send emails using a real account. Let's explore both of these options one by one.

Offline Option - Local SMTP server

Setting up a local SMTP server is a good way to test the functionality of your code. You can send emails and print their content in an interactive shell. Enter the following command in the terminal to start a local SMTP debugging server:

    python -m smtpd -c DebuggingServer -n localhost:1025

If you want to use a port other than 1025, you will need administrator privileges.

The code in this tutorial is mainly targeted at real-world testing. If you want to test your code using a local SMTP debugging server, this configuration will make it clear how to adapt the other code to your requirements. Here's an example of code you could use to simulate the delivery of emails locally:

    import smtplib

    smtp_server = 'localhost'
    port = 1025
    sender = 'from@someemail.com'
    receiver = 'sendingto@somemail.com'
    message = 'Hi. This is a test.'
    with smtplib.SMTP(smtp_server, port) as server:
        server.sendmail(sender, receiver, message)

This will simulate sending an email through your local server.

Online Option - Generic email account (Gmail)

This is a great way to put your code to the test and see it actually deliver to an email inbox. We will see an example of this in the next section.

Secure SMTP Connection

In the previous section, we sent an email through a local SMTP server without proper security or encryption. When you send emails in the real world, this is not advisable. You need to make sure that your SMTP connection is secure.

There are two protocols that make an SMTP connection secure:
- SSL (Secure Socket Layer)
- TLS (Transport Layer Security)

TLS is an improved version of SSL. Although SSL is somewhat older, it is still widely used. There are also two ports (465, 587) that you will use to initiate the connection.

If port 587 is used, an unencrypted connection is established first and encrypted later. The connection is initiated using server = smtplib.SMTP and later encrypted using server.starttls. If port 465 is used, SSL encryption is initiated prior to any communication. The SMTP connection is initiated using server = smtplib.SMTP_SSL.

Whichever port is used, Gmail will encrypt the connection using TLS. Default and trusted CA certificates, and their validation, can be loaded by calling ssl.create_default_context() to create a secure SSL context.

Sending Emails

Plain Text Emails

We will be using port 465 in this guide; however, the code required for both ports is given below. Let's send an email with only a body and no subject. Be mindful to change the sender and receiver addresses, and the password, accordingly.

Port 465

    import smtplib, ssl

    smtp_server = 'smtp.gmail.com'
    port = 465
    sender = 'from@someemail.com'
    password = 'myPassword'
    receiver = 'nick@nickmccullum.com'
    message = 'Hi. This is port 465.'

    context = ssl.create_default_context()
    with smtplib.SMTP_SSL(smtp_server, port, context=context) as server:
        server.login(sender, password)
        server.sendmail(sender, receiver, message)

Port 587

    import smtplib, ssl

    smtp_server = 'smtp.gmail.com'
    port = 587
    sender = 'from@someemail.com'
    password = 'myPassword'
    receiver = 'nick@nickmccullum.com'
    message = 'Hi. This is port 587.'

    context = ssl.create_default_context()
    try:
        server = smtplib.SMTP(smtp_server, port)
        server.starttls(context=context)
        server.login(sender, password)
        server.sendmail(sender, receiver, message)
    except Exception as exep:
        print(exep)
    finally:
        server.quit()

Adding a Subject to the Email

We can add a subject by starting the message with 'Subject:' followed by two newline characters (\n). Change the message variable (as shown below) and try resending the email. Please note the triple quotes instead of single quotes:

    message = """Subject: Hi there

    This message is sent from Python."""

If you want to use single quotes instead, use the following code:

    message = 'Subject: Hi\n\nThis message is sent from Python.'

Styling the Email

In the last section of this tutorial, we sent emails with a subject line and a simple body. This is usually not enough to solve real-world email-related problems. You'll need to style your text, add hyperlinks, and include images. We'll use Python's built-in email package for this.

First, let's import our libraries and define the variables.

    import smtplib, ssl
    from email.mime.text import MIMEText
    from email.mime.multipart import MIMEMultipart

    smtp_server = 'smtp.gmail.com'
    port = 465
    sender = 'from@someemail.com'
    password = 'myPassword'
    receiver = 'sendingto@somemail.com'
import smtplib, ssl from email.mime.text import MIMEText from email.mime.multipart import MIMEMultipart smtp_server = 'smtp.gmail.com' port = 465 sender = 'from@someemail.com' password = 'myPassword' receiver = 'sendingto@somemail.com' The next thiing you'll need to do is define a MIMEMultipart object. You'll convert this into a string later and send this as the email message. message = MIMEMultipart('alternative') message['Subject'] = 'MIME Test' message['From'] = sender message['To'] = receiver Next you'll create plain-text and HTML versions of the email you want to send. email_plain = """\ Hey there, Take care Bye!""" email_html = """\ <html> <body> <p>Hey there,<br>Take care<br>Bye!</p> </body> </html> """ Next you'll transform these plain-text and HTML emails into MIMETEXT objects and attach them to the message. mimetext_plain = MIMEText(email_plain, "plain") mimetext_html = MIMEText(email_html, "html") message.attach(mimetext_plain) message.attach(mimetext_html) Now you can send the email! context = ssl.create_default_context() with smtplib.SMTP_SSL(smtp_server, port, context=context) as server: server.login(sender, password) server.sendmail(sender, receiver, message.as_string()) This code gets the jbo done. However, it requires us to write lots of boilerplate code. We can greatly reduce the number of lines required to send an email if we use the email_to library. It simplifies the entire email delivery and styling process by combining smtplib and MIMEText into one library. We'll try this approach now. First, install the pip install email-to The Python code to send an email with the import email_to smtp_server = 'smtp.gmail.com' port = 587 sender = 'from@someemail.com' password = 'myPassword' receiver = 'sendingto@somemail.com' server = email_to.EmailServer(smtp_server, port, sender, password) server.quick_email(receiver, 'Testing New Library', ['# My Title', 'My description'], style='h1 {color: red}') Notice the port number 587 instead of 465. This is what is used by the This library also supports building the body of your email line by line. Here's an example: import email_to smtp_server = 'smtp.gmail.com' port = 587 sender = 'from@someemail.com' password = 'myPassword' receiver = 'nick@nickmccullum.com' server = email_to.EmailServer(smtp_server, port, sender, password) message = server.message() message.add('# Here is the shopping list') message.add('- Milk') message.add('- Espresso') message.add('- Sugar') message.style = 'h1 { color: red}' message.send(receiver, 'My Shopping List') As you can see, you can use the message.add() method to add lines to your email body. Adding Attachments to the Email Sending attachments is an important part of sending emails. We need to encode the files to be sent via email since we are working with a stream of textual data. The easiest way to do it is to use encoders.encode_base64(). Let's see this principle in action. Note that while we'll be using an image in this tutorial, you can use similar logic to send other file types (like PDFs). With that said, Gmail does refuse certain file extensions like .exe. First, let's build our code to the extent where we've specified our variables definitions and instantiated the email body with a simple message. 
Adding Attachments to the Email

Sending attachments is an important part of sending emails. We need to encode the files to be sent via email, since we are working with a stream of textual data. The easiest way to do it is to use encoders.encode_base64(). Let's see this principle in action.

Note that while we'll be using an image in this tutorial, you can use similar logic to send other file types (like PDFs). With that said, Gmail does refuse certain file extensions like .exe.

First, let's build our code to the extent where we've specified our variable definitions and instantiated the email body with a simple message.

    import smtplib, ssl
    from email.mime.base import MIMEBase
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email import encoders

    smtp_server = 'smtp.gmail.com'
    port = 465
    sender = 'from@someemail.com'
    password = 'myPassword'
    receiver = 'sendingto@somemail.com'
    file_name = 'uploadThis.jpg'

    message = MIMEMultipart('alternative')
    message['Subject'] = 'Sending File'
    message['From'] = sender
    message['To'] = receiver
    message.attach(MIMEText('Sending an attachment', 'plain'))

Now let's open the file, encode it, and add a header.

    with open(file_name, 'rb') as attachment:
        file_part = MIMEBase('application', 'octet-stream')
        file_part.set_payload(attachment.read())

    encoders.encode_base64(file_part)
    file_part.add_header(
        'Content-Disposition',
        'attachment; filename=' + str(file_name)
    )

The attach method can be used to attach the attachment to our email:

    message.attach(file_part)

Now we're ready to send the email!

    context = ssl.create_default_context()
    with smtplib.SMTP_SSL(smtp_server, port, context=context) as server:
        server.login(sender, password)
        server.sendmail(sender, receiver, message.as_string())

Sending Multiple Emails

Sending multiple emails is another important real-world problem. If you know how to import a text file (preferably a CSV), you can loop through it to gather email addresses and use the principles from this tutorial to send multiple emails with Python. Let's see this principle in action.

For this tutorial, we will send a simple email with a personalized greeting that says "Hi Name".

First, create a CSV file with emails and any other content you want to add to the email. You can use MS Excel or any other spreadsheet software to do this. Open your spreadsheet software, create a table with two columns, 'Name' and 'Email', and save it as a CSV. This is how it will look in a text editor:

    name,email
    John,john@example.com
    Sherlock,sherlock@example.com
    Dave,dave@example.com

Next, you'll want to write all the common components of your code:

    import smtplib, ssl
    import csv

    smtp_server = 'smtp.gmail.com'
    port = 465
    sender = 'from@someemail.com'
    password = 'myPassword'
    context = ssl.create_default_context()

Now you'll want to loop through the CSV and read the names and emails:

    with open('myemails.csv') as f:
        reader = csv.reader(f)
        next(reader)
        for row in reader:

Within the loop, add the logic to actually send the emails:

    with open('myemails.csv') as f:
        reader = csv.reader(f)
        next(reader)
        for row in reader:
            message = 'Hi ' + row[0]
            with smtplib.SMTP_SSL(smtp_server, port, context=context) as server:
                server.login(sender, password)
                server.sendmail(sender, row[1], message)

Here is the full code:

    import smtplib, ssl
    import csv

    smtp_server = 'smtp.gmail.com'
    port = 465
    sender = 'from@someemail.com'
    password = 'myPassword'
    context = ssl.create_default_context()

    with smtplib.SMTP_SSL(smtp_server, port, context=context) as server:
        with open('myemails.csv') as f:
            reader = csv.reader(f)
            next(reader)
            for row in reader:
                message = 'Hi ' + row[0]
                server.login(sender, password)
                server.sendmail(sender, row[1], message)

Other Email Libraries

There are many other libraries that make it easy to send emails with Python. You've already learned about one of them. Here are a few others:

Bulk Emails

Bulk email services can be used to send a massive number of emails for different purposes, like marketing and promotional tasks. There are many bulk email service providers.
Although they are not free, some of them offer trial versions without you having to purchase their complete service. Here are a few bulk email providers, along with the usage that is available on their free plan:

- SendinBlue: 300 emails a day
- Pepipost: 100 emails a day
- Mailgun: 10,000 emails per month
- Mailjet: 6,000 lifetime emails
- SendGrid: 40,000 within the first month

Final Thoughts

Python is a great way to automate email-sending tasks, or even to send automatically generated emails to customers. The standard library module required to send an email is smtplib, but there are many other libraries that allow additional styling, adding images, and adding attachments. Emails you send through Python can be transmitted through protocols like SSL and TLS, ensuring that your email's content is encrypted.

Disclaimer: This article is purely for educational purposes. If anyone uses it with malicious intent (spamming), I will not be liable for any losses or damages in connection with the use of this tutorial. Please use this information for good!
https://nickmccullum.com/how-to-send-emails-with-python/
PyCharm 2.6 Early Access Preview

Thought that things were pretty quiet here during the summer? Not really! The PyCharm team has been hard at work on a new release of PyCharm, which (following the tradition of Python interpreter version numbers) will be called 2.6.

The big new feature in this version is diagrams support. PyCharm is now able to show you a visual overview of the structure of your code, and that includes class diagrams for any Python project, as well as model dependency diagrams for Django and App Engine projects.

Other than that, the team has been primarily focused on smaller changes that improve the quality of PyCharm's code analysis and completion. This includes:

- New code completion shortcut: press Ctrl-Space twice to complete any non-imported identifiers (modules, classes, functions or variables) anywhere in your code;
- Many fixes and improvements in Django code completion and inspections;
- Code insight fixes and changes to default inspection settings to reduce the number of false positives in PyCharm's inspection reports;
- New intention actions to assist in specifying types for parameters and variables;
- Autodetection of the test framework and docstring format used in the project.

Besides that, PyCharm has been updated to support the new language features in the current beta version of Python 3.3. Full support of Python 3.3, including the new virtual environments and namespace packages, is planned for the final release of PyCharm 2.6.

All of these features are available in the first EAP build, which is available for download now. The final release is still a few weeks away, and we hope that we will be able to sneak a few other goodies in before the release is done. As usual, your feedback is welcome on our new forum and in the issue tracker.

15 Responses to PyCharm 2.6 Early Access Preview

Jakub says: August 11, 2012
Would you please explain how to generate a diagram? Thanks!

Jakub says: August 11, 2012
…oops, I was still running 2.5.2. The yellow background for library files in the project view is a small change, but greatly improves usability.

Ronnie Kolehmainen says: August 13, 2012
The article states that the EAP build is available for download, but I can't find any downloads on the linked page.

Alan says: August 13, 2012
Ronnie, the download links are in the box labelled "Download PyCharm 2.6 EAP". The page heading should read "Welcome to the Early Access Program for JetBrains PyCharm". Try copy-pasting the link directly:

Ronnie Kolehmainen says: August 13, 2012
Yeah, the page is updated now, but it wasn't a few hours ago. Thanks anyway.

Rob van der Linde says: August 13, 2012
Nice, but I can't seem to figure out how to see a class diagram of my entire project. I have been waiting for this for a while and would really like to see a class diagram of the whole project, not just little pieces of a class diagram. I have been looking through the menus for something like Code -> View Class Diagram, but I haven't found anything like that yet.

Dmitry Jemerov says: August 14, 2012
There is no such feature at the moment; we inherited the diagrams feature from the Java IDE, where it's normally infeasible to put the entire project onto a class diagram. Please file a feature request at

Alex Besogonov says: August 13, 2012
Can you also produce an EAP Python plugin? It'll also be great if it could work with Leda.

Dmitry Jemerov says: August 14, 2012
Yes, it will be released soon.

AaronD says: August 31, 2012
Any update on the python plugin release date?
It looks like the python plugin for Leda is still not available in today's EAP release =( *sadpanda*

Diederik van der Boor says: August 14, 2012
Ohh… that diagram feature looks very nice! 🙂 I'm curious about the code intention features; I haven't been able to put them to use yet. Could you post an explanation of how I can trigger them? There are a few things I'd love to have:
– for loop expansion in Django templates
– typing "super " to get the whole super call
– automatic "*args, **kwargs" expansion somehow
– snippet support, I can't find them either. I'd love to have "from django.utils.translation import ugettext_lazy as _" in a snippet somewhere.

Dmitry Jemerov says: August 14, 2012
Diederik, the "snippets" feature is called "Live Templates" and has been available in PyCharm since version 1.0. There is a bundled live template for a 'for' loop in Django templates – just type "for" and press Tab. You can configure your own live templates under Settings | Live Templates. If you use the Override Methods feature (Code | Override Methods), the correct super call is generated for you automatically. For manually written method declarations, feel free to vote for
Not sure what you mean by "automatic expansion somehow".

Diederik van der Boor says: August 20, 2012
Dmitry, thanks for mentioning the Live Templates. I didn't recognize them as such, and the feature seems to cover many of the cases I mentioned, including the *args, **kwargs expansion (I now type _ + Tab to get it). I have the following live templates currently:
* = *args, **kwargs
_ = from django.utils.translation import ugettext_lazy as _
__init = an __init__ constructor with super call.

istlota says: September 2, 2012
I am wondering if, out of ignorance of how best to use the Diagram feature, I am missing the full scope of what is currently available. Right now, I have to go through a lot of steps just to get to a diagram of only one class:
a) Navigate in the Project window to a desired directory
b) Expand that directory to its modules
c) Highlight the desired module
d) Navigate in the Structure window to the desired class in that module
e) Control-click that class to pop up a menu
f) Navigate in that popup menu to Diagrams
g) Which then pops up another window where I navigate to Show Diagram
h) Which then pops up another window where I click Python Class Diagram
i) Which then, finally, causes the diagram of just that class to be displayed
It would be more useful for me to have some way, with few steps, to get to a diagram of the entire project rather than of just one class. Then, from there, there should be a navigation feature building upon what you can now do by dragging the Structure window viewport to drill into any block of the project diagram you wish. I can see how what is supported now with Diagrams could some day be expanded to encompass much of what is being done now by the Project, Structure, and Diagram windows. Grok?

Pingback from fbdjango, "How do you do modeling when developing Django? If you use a modeling tool" (September 8, 2015; translated from Korean): "[…] diagram support for Django went in, so I use it occasionally. Rather than for modeling, I use it more as an intermediate check […]"
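As an aside on the Live Templates exchange above, here is a hypothetical sketch of what the commenter's "__init" template could expand to inside a class Foo(Base); the real expansion depends entirely on how the template body is defined:

    # Hypothetical expansion of the "__init" live template mentioned in the
    # comments; Foo and Base are placeholder names, not from the post.
    def __init__(self, *args, **kwargs):
        super(Foo, self).__init__(*args, **kwargs)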
https://blog.jetbrains.com/pycharm/2012/08/pycharm-2-6-early-access-preview/
CC-MAIN-2021-17
refinedweb
1,162
65.35
Open Messaging Interface (O-MI), an Open Group Internet of Things (IoT) Standard – Messaging Objects

The XML schema "omi.xsd" in O-MI XSD Schema provides a formal specification for the O-MI. If there is any conflicting information between the chapters of this standard and the "omi" schema, then the information in the schema file is to be used. Documentation automatically generated from the schema file is included in O-MI and XML Schema.

The following units or formats SHALL be used in the O-MI to express the listed quantities. The xs namespace is defined in XML Schema Part 2: Datatypes Second Edition.

Attributes of omiEnvelope define generic information included in all O-MI requests and responses. They are defined in the omiEnvelope element, which is always the highest-level element of an O-MI message. The omiEnvelope CAN have only one of the following child elements: read, write, cancel, response.

(Table: Attributes of omiEnvelope)

Requests

O-MI requests SHALL be read, write, or cancel requests. O-MI requests MAY define the recipient O-MI nodes for the request. If so, the receiving O-MI node SHOULD send the request to these nodes for execution. If no nodes are defined, the recipient of the request is the only intended recipient. The receiving O-MI node SHOULD include information about which node(s) provided which of the results in the response.

O-MI requests CAN be one-time, or for repeated data updates (subscriptions). One-time requests are only answered once – see One-Time Read – while subscription requests are answered several times using a callback service provided by the requesting O-MI node – see Subscription Request.

Read Request

Read requests are used to request data or to set up subscription requests for data. An O-MI node SHALL answer each read request it receives with the requested data or with an error message. The responding O-MI node SHOULD take care of data persistency of requests. It SHOULD buffer a subscription request with a callback response mechanism until the request is completely fulfilled. The responding O-MI node SHALL NOT assume that the requesting O-MI node keeps track of placed requests. The responding O-MI node SHALL answer the request with an O-MI response or with a communication error code in case the request failed on communication connection level. The response message SHALL NOT contain any other data than requested data, or error information about the processing of the request.

(Tables: Attributes of Read Request; Child Elements of Read Request)

Write Request

Write requests may be used to write various data to different systems, from sensor values to information extracted from databases or other systems. An O-MI node SHALL answer each write request that it receives with a confirmation that the write was successful or with an error message, either immediately or to the provided callback address. The responding O-MI node SHOULD take care of data persistency of requests. The responding O-MI node SHOULD buffer a request with callback response mechanisms until the request can be completely fulfilled. An O-MI node SHOULD NOT respond to a write request as successful before the write can be guaranteed. (It is not always possible to guarantee the write was successful; in such a case the O-MI node SHOULD respond successfully when the request is communicated for writing successfully.) The responding O-MI node SHALL NOT assume that the requesting system keeps track of placed requests.
The responding O-MI node SHALL answer the request with an O-MI-compatible message or with an HTTP error code in case the request failed on communication connection level. If there are any subscriptions to information affected by the write request, then the responding O-MI node SHALL generate corresponding event(s) in order to reflect the value change.

(Tables: Attributes of Write Request; Child Elements of Write Request)

Cancel Request

An O-MI node SHALL answer each cancel request that it receives with a confirmation that the cancellation of a request was successful or with an error message. An O-MI node receiving a cancel request for an original request with a recipient list SHALL direct the cancel request to the recipients. A cancel request SHALL define the ID of the request(s) to cancel. An O-MI node receiving a cancel request SHALL answer it with an O-MI-compatible message or with an HTTP error code in case the request failed on communication connection level.

Response Element

Response content varies with the request type and usage of the response. This section defines the generic response structures. A response SHALL contain at least one result. Every result object SHALL contain a return object that indicates whether the request was successful or not using HTTP status codes; i.e., "200" if successful. If there are several result objects in a response, then they typically correspond to different callback requests.

Smallest Possible Response to a Successful Request:

    <?xml version="1.0" encoding="UTF-8"?>
    <omiEnvelope version="1.0" ttl="0">
      <response>
        <result>
          <return returnCode="200"></return>
        </result>
      </response>
    </omiEnvelope>

An O-MI node that receives a callback request SHALL send a response immediately as a return value to the request, and the response SHALL contain a request ID. If the request is received correctly, this response message SHOULD NOT contain anything else. An O-MI node that receives a callback or immediate request that is unsuccessful or that could not be received properly SHALL send to the requester an immediate response that contains error information.

All result data in a response object SHALL be in the msg element. The msgformat attribute SHOULD contain a text string that identifies the payload format used, such as "odf", "csv", "obix", or similar. A responding O-MI node SHALL NOT include data that was not requested in a response. If a request encountered problems during its execution or some data was not available, the responding O-MI node SHALL send a corresponding result that contains information about the problem. Response objects MAY contain both successful and unsuccessful result objects.

(Tables: Attributes of Result Element; Child Elements of Result Element; Attributes of Response Element)
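For orientation, here is a minimal sketch of what a one-time read request might look like under the structure described above (omiEnvelope with a single read child and a msg payload); the O-DF Objects payload and its id value are illustrative, not taken from the standard text:

    <?xml version="1.0" encoding="UTF-8"?>
    <omiEnvelope version="1.0" ttl="0">
      <read msgformat="odf">
        <msg>
          <!-- illustrative O-DF payload naming the object to read -->
          <Objects>
            <Object>
              <id>ExampleSensor</id>
            </Object>
          </Objects>
        </msg>
      </read>
    </omiEnvelope>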
http://www.opengroup.org/iot/omi/p3.htm
CC-MAIN-2019-13
refinedweb
1,027
52.49
I just posted an interview on Channel 9 with Yang Xiao, a tester on the VB IDE. In this interview Yang shows us the new XML Schema Explorer in Visual Studio 2008 Service Pack 1. This new window is invoked when you right-click on an XML literal element or namespace and select "Show in XML Schema Explorer" in Visual Basic programs. It's a nice way to visually display the structure of your schema sets, which makes you even more productive when working with XML in Visual Basic.

XML Schema Explorer in Visual Studio 2008 SP1

Enjoy!

Comments:

Hi Beth, it looks like a pretty good tool. The example setup had many more schemas than I have ever used at once. It might have been a better presentation if the example was a little more simplified. I have yet to install SP1; does this schema explorer replace the schema tool? Thanks, John.

Hi John, no, it doesn't replace the XML to Schema tool; it serves a different purpose: it just helps navigate the schemas you already have in the project. The XML to Schema tool actually is a wizard that infers the schema from XML data and adds it to your project. This tool is now built into Visual Studio SP1, so there's no need to download it. Regarding the sample, I think Yang wanted to show off all the features of the window and how it handles complex schemas. A simple example wouldn't have shown all the powerful features. I'll make a note to blog about it with a simple example in the future. HTH, -B

Trackback: Beth Massi and Yang Xiao did a great Channel 9 session that centered on XML Literals and the XML Schema

Now, the explorer is nice, but what tool is available to allow us to manipulate the results at design time? If you make a strongly-typed dataset, you can bind it at design time to a DataGridView and manipulate the look. Is there some way to do this with XML data that isn't cumbersome? Right now I use the XML to Schema tool and some workarounds to make the XML show up in the Data Sources tab, which then allows me to bind it to the DataGridView. Thanks.
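For readers who haven't used XML literals, here is a small hypothetical Visual Basic example of the kind of element you could right-click to invoke the explorer; the element names are made up, and the matching schema set would live in the project:

    ' Hypothetical VB XML literal; with a matching schema set in the project,
    ' right-clicking the <contact> element offers "Show in XML Schema Explorer".
    Dim contact = <contact>
                      <name>Beth</name>
                      <phone type="mobile">555-0100</phone>
                  </contact>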
https://blogs.msdn.microsoft.com/bethmassi/2008/08/15/channel-9-interview-xml-schema-explorer-in-visual-studio-2008-sp1/
CC-MAIN-2017-09
refinedweb
385
68.3
Dynamic URL's in Flask play an important role in the ability to create unique URL's that aren't hard-coded into our application. For example, let's say our application allows users to create an account and log into their profile; we'll need a way to dynamically generate a route for that specific user. Can you imagine hard-coding a unique URL for each and every user of your application?! Let's dive in and learn how we can generate dynamic URL's in Flask.

creating a dynamic route

First up, let's create a new route in our app. We're going to give it the URL /profile:

app/app/views.py

    @app.route("/profile")
    def profile():
        return render_template("public/profile.html")

We'll also create a new template for this route. Go ahead and create a file named profile.html in the templates/public directory (or whatever directory contains your HTML files). Let's just create a simple page for now:

app/app/templates/public/profile.html

    {% extends "public/templates/public_template.html" %}
    {% block title %}Profile{% endblock %}
    {% block main %}
    <div class="container">
      <div class="row">
        <div class="col">
          <h1>Profile</h1>
          <hr>
        </div>
      </div>
    </div>
    {% endblock %}

Tip - We're using the Bootstrap CSS library, but feel free to use your own or leave it out completely.

As it is, our new route isn't dynamic. We need to make some tweaks to the URL we've provided in our @app.route decorator. Go ahead and make the following changes to the @app.route URL:

app/app/views.py

    @app.route("/profile/<username>")

- We've added a trailing slash to /profile/
- We've provided a variable in the URL path and wrapped it in two opposing arrows <> as shown.

Essentially, we're expecting some sort of value to be passed into the URL after /profile/. We're then capturing that data as a variable called username. Just remember, to catch a value in the URL and store it as a variable, it must look <like_this> and follow a trailing slash.

Before we can work with the username data, we need to pass it into the route's function as an argument:

app/app/views.py

    def profile(username):

We now have access to the username variable and its data. For clarity, our route now looks like this:

app/app/views.py

    @app.route("/profile/<username>")
    def profile(username):
        return render_template("public/profile.html")

At this point, if you try to access the /profile route in your browser, Flask will throw an error because the profile function is expecting a value!

Tip - Trailing slashes matter in Flask. If a route is defined without a trailing slash and you request the URL with one, you'll get an error.

Capturing URL variables

Go to /profile/x in your browser. You'll see Flask returns our new profile.html page and displays /profile/x in the browser URL address bar. We can now work with data that comes into our app from the URL!

Let's create a dictionary containing a few usernames and some information about our users. We'll search the dictionary for the username variable and return some basic information about that user.
Let's create our users dictionary:

app/app/views.py

    users = {
        "mitsuhiko": {
            "name": "Armin Ronacher",
            "bio": "Creator of the Flask framework",
            "twitter_handle": "@mitsuhiko"
        },
        "gvanrossum": {
            "name": "Guido van Rossum",
            "bio": "Creator of the Python programming language",
            "twitter_handle": "@gvanrossum"
        },
        "elonmusk": {
            "name": "Elon Musk",
            "bio": "technology entrepreneur, investor, and engineer",
            "twitter_handle": "@elonmusk"
        }
    }

We'll add some logic to our profile route to look up the user and return their information:

app/app/views.py

    @app.route("/profile/<username>")
    def profile(username):
        user = None
        if username in users:
            user = users[username]
        return render_template("public/profile.html", username=username, user=user)

Lastly, let's refactor our profile.html to display our user information:

app/app/templates/public/profile.html

    {% extends "public/templates/public_template.html" %}
    {% block title %}Profile{% endblock %}
    {% block main %}
    <div class="container">
      <div class="row">
        <div class="col">
          <h1>Profile</h1>
          <hr>
          <div class="card">
            <div class="card-body">
              {% if user %}
              <h5 class="card-title">{{ username }}</h5>
              <hr>
              <p><strong>{{ user["name"] }}</strong></p>
              <p style="color: blue">{{ user["twitter_handle"] }}</p>
              <p class="text-muted">{{ user["bio"] }}</p>
              {% else %}
              <p>User {{ username }} not found</p>
              {% endif %}
            </div>
          </div>
        </div>
      </div>
    </div>
    {% endblock %}

Requesting a dynamic route

Save the files and head to /profile/mitsuhiko or any of the other usernames in your browser to see their profile! Also, try a name that's not in our dictionary to see how we've handled that with a simple {% if user %} statement in our template.

Dynamic URL's aren't just limited to one variable. Let's stack some more variables to capture in our URL string.

Multiple URL variables

In our first example, we created the profile route to expect only one variable in the URL. However, we can add as many as we like. Let's create a new route with multiple variables that just prints the variables and returns a simple string to the client:

app/app/views.py

    @app.route("/multiple/<foo>/<bar>/<baz>")
    def multiple(foo, bar, baz):
        print(f"foo is {foo}")
        print(f"bar is {bar}")
        print(f"baz is {baz}")
        return f"foo is {foo}, bar is {bar}, baz is {baz}"

Go to /multiple/foo/bar/baz in your browser; you'll see:

    foo is foo, bar is bar, baz is baz

As you can see, we have full access to the variables captured in the URL and passed into our function!

In the next part of this series, we'll be covering how to work with query strings in Flask.
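One thing the tutorial doesn't cover: Flask's built-in URL converters let you constrain and type-cast a variable directly in the rule. A small sketch (the route and names here are hypothetical, but int is a standard Flask converter):

    # <int:post_id> only matches integer URL segments and passes post_id
    # to the view as an int; non-integer values return a 404 instead.
    @app.route("/profile/<username>/posts/<int:post_id>")
    def user_post(username, post_id):
        return f"{username}'s post #{post_id}"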
https://pythonise.com/series/learning-flask/generating-dynamic-urls-with-flask
CC-MAIN-2021-17
refinedweb
950
62.58
Remove the following, if present:

    GNU_CONFIGURE= yes
    USE_LIBTOOL= yes
    USE_PKGLOCALEDIR= yes

Neither meson nor cmake (another increasingly popular alternative) uses libtool to make shared libraries.

`USE_PKGLOCALEDIR` is also actively harmful here.

You will need the following:

    .include "../../devel/meson/build.mk"

Tools
-----

Only the following tools are generally needed:

    USE_TOOLS+= pkg-config msgfmt xgettext

Nearly every meson package needs pkg-config, since it's the main way it resolves dependencies. This also means that dependencies whose buildlink isn't included won't be detected.

meson does not use intltool or any gettext tools other than xgettext and msgfmt. If your package uses `i18n.gettext` in a `meson.build` file it needs xgettext, and if it uses `i18n.merge_file` it needs msgfmt. It's easy to check this with grep.

Python
------

Every Meson project uses Python to build, but many don't install Python scripts and don't need it to run, so you should add the following:

    PYTHON_FOR_BUILD_ONLY= tool

If the build process needs a `python3`, `python2`, or `python` binary to execute scripts during the build, you should also include the following:

    .include "../../lang/python/tool.mk"

This will automatically create a python executable with the appropriate suffix (or lack of suffix) for the duration of the build.

pkgconfig overrides
-------------------

Libraries that use meson generally generate .pc files for use by pkg-config. This happens during the build, while previously they would have been pregenerated.

You should search for .pc files after the build is complete and set the appropriate flags:

    PKGCONFIG_OVERRIDE_STAGE= pre-install
    PKGCONFIG_OVERRIDE+= output/meson-private/glesv2.pc

This is necessary so that pkgsrc sets the correct runtime path for libraries.

Remove any old references to nonexistent .pc files from the old build system in the PKGCONFIG_OVERRIDE.

gettext issues on NetBSD
------------------------

NetBSD includes an (old, pre-GPLv3?) version of msgfmt. This doesn't have the new --desktop or --xml flags, so you'll get a build error if a package attempts to use these.

There's an easy workaround:

    .include "../../mk/bsd.prefs.mk"

    # msgfmt: unknown option -- desktop
    .if ${OPSYS} == "NetBSD"
    TOOLS_PLATFORM.msgfmt=
    .endif

Flags
-----

Flags of the following form:

    CONFIGURE_ARGS+= --enable-opengl

generally now take the following form:

    MESON_ARGS+= -Dopengl=true

Check the `meson_options.txt` file.
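When a meson option should stay user-selectable, the usual pkgsrc options framework can drive MESON_ARGS. A sketch, assuming a hypothetical package whose `meson_options.txt` declares `option('opengl', type: 'boolean', value: false)`:

    # options.mk (hypothetical): map a pkgsrc option onto MESON_ARGS
    PKG_OPTIONS_VAR=	PKG_OPTIONS.mypackage
    PKG_SUPPORTED_OPTIONS=	opengl

    .include "../../mk/bsd.options.mk"

    .if !empty(PKG_OPTIONS:Mopengl)
    MESON_ARGS+=	-Dopengl=true
    .else
    MESON_ARGS+=	-Dopengl=false
    .endif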
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/pkgsrc/how_to_convert_autotools_to_meson.mdwn?rev=1.5;content-type=text%2Fx-cvsweb-markup;sortby=author;f=h;only_with_tag=MAIN;ln=1
CC-MAIN-2022-33
refinedweb
456
64.61
GameFromScratch.com

Since the release of Phaser 3.0 earlier this year, the HTML5 game framework has seen a rapid succession of updates. Today Phaser 3.8.0 was released, this release focusing heavily on the plugin system, making it easier to acquire and ultimately use plugins in your game. This release also enables you to provide your own already created WebGL context when initializing Phaser. Of course, the release is also packed with other smaller fixes, features and improvements. Further details from the change log:

New Features

- You can pass in your own canvas and context elements when initializing Phaser.
- WebGLRenderer has a new property maxTextures, which is derived from gl.MAX_TEXTURE_IMAGE_UNITS; you can get it via the new method getMaxTextures().
- WebGLRenderer.config has a new property maxTextureSize, which is derived from gl.MAX_TEXTURE_SIZE; you can get it via the new method getMaxTextureSize().
- WebGLRenderer has a new property compression, which holds the browser / device compressed texture support gl extensions and is populated during init.
- Changes when calling generateFrameNames, and setRandomPosition, which will randomly position Game Objects anywhere within the defined area, or if no area is given, anywhere within the game size.

Updates

- Game.step now emits a prestep event, which some of the global systems hook into, like Sound and Input. You can use it to perform pre-step tasks, ideally from plugins.
- Game.step now emits a step event. This is emitted once per frame. You can hook into it from plugins or code that exists outside of a Scene.
- Game.step now emits a poststep event. This is the last chance you get to do things before the render process begins.
- Optimized TextureTintPipeline.drawBlitter so it skips bobs that have an alpha of zero and only calls setTexture2D if the bob sourceIndex has changed; previously it called it for every single bob.
- Game.context used to be undefined if running in WebGL. It is now set to the WebGLRenderingContext during WebGLRenderer.init. If you provided your own custom context, it is set to this instead.
- The Game onStepCallback has been removed. You can now listen for the new step events instead.
- Phaser.EventEmitter was incorrectly namespaced; it's now only available under Phaser.Events.EventEmitter (thanks Tigran)

Bug Fixes

- Calling getBounds on a nested container would fail. Fix #3624 (thanks @poasher)
- Calling a creator, such as GraphicsCreator, without passing in a config object would cause an error to be thrown. All Game Object creators now catch against this.

If you are interested in learning Phaser 3, be sure to check out our Getting Started video, also embedded below:

GameDev News

Phaser is a popular open source HTML5 2D game framework that just released version 3.3.0. Phaser has been on a rapid release schedule since Phaser 3 was released just last month. Highlights of this release include a new destroy event. Additionally, the documentation has seen a heavy focus, which will hopefully result in TypeScript definitions being available soon™. In addition to the above features there were several other smaller improvements and bug fixes. You can read the full change log here. If you are interested in getting started with Phaser, be sure to check out our recently released Getting Started with Phaser 3 video tutorial, also embedded below.
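To illustrate the new step events mentioned in the 3.8.0 change log above, here is a small sketch (not from the article) of hooking them from code that lives outside any Scene; the config values are arbitrary:

    // Assumes a standard Phaser 3.8 Game instance; 'prestep', 'step' and
    // 'poststep' fire once per frame on the Game's event emitter.
    var game = new Phaser.Game({ type: Phaser.AUTO, width: 800, height: 600 });

    game.events.on('step', function (time, delta) {
        // per-frame work outside any Scene, e.g. a plugin update
    });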
Another quick update for the recently released Phaser 3 game engine, this one bringing Phaser to version 3.2.1. Phaser is a popular and full featured 2D framework for developing HTML5 games. This release is almost entirely composed of bug fixes and quality of life improvements. Details of the release from the release notes:

Bug Fixes

- Fixed issue with Render Texture tinting. Fix #3336 (thanks @rexrainbow)
- Fixed Utils.String.Format (thanks @samme)
- The Matter Debug Layer wouldn't clear itself in canvas mode. Fix #3345 (thanks @samid737)
- TimerEvent.remove would dispatch the Timer event immediately based on the opposite of the method argument, making it behave the opposite of what was expected. It now only fires when requested (thanks @migiyubi)
- The TileSprite Canvas Renderer did not support rotation, scaling or flipping. Fix #3231 (thanks @TCatshoek)
- Fixed Group not removing children from the Scene when cleared with the removeFromScene argument set (thanks @iamchristopher)
- Fixed an error in the lights pipeline when no Light Manager has been defined (thanks @samme)
- The ForwardDiffuseLightPipeline now uses sys.lights instead of the Scene variable to avoid errors due to injection removal.
- Phaser.Display.Color.Interpolate would return NaN values because it was loading the wrong Linear function. Fix #3372 (thanks @samid737)
- RenderTexture.draw was only drawing the base frame of a Texture. Fix #3374 (thanks @samid737)
- TileSprite scaling differed between WebGL and Canvas. Fix #3338 (thanks @TCatshoek)
- Text.setFixedSize was incorrectly setting the text property instead of the parent property. Fix #3375 (thanks @rexrainbow)
- RenderTexture.clear on canvas was using the last transform state instead of clearing the whole texture.

Updates

- SceneManager.render will now render a Scene as long as it's in a LOADING state or higher. Before, it would only render RUNNING scenes, but this precluded those that were loading assets.
- A Scene can now be restarted by calling scene.start() and providing no arguments (thanks @migiyubi)
- The class GameObject has now been exposed, available via Phaser.GameObjects.GameObject (thanks @rexrainbow)
- A Camera following a Game Object will now take the zoom factor of the camera into consideration when scrolling. Fix #3353 (thanks @brandonvdongen)
- Calling setText on a BitmapText object will now recalculate its display origin values. Fix #3350 (thanks @migiyubi)
- You can now pass an object to Loader.atlas, like you can with images. Fix #3268 (thanks @TCatshoek)
- The onContextRestored callback won't be defined any more unless the WebGL Renderer is in use in the following objects: BitmapMask, Static Tilemap, TileSprite and Text. This should allow those objects to now work in HEADLESS mode. Fix #3368 (thanks @16patsle)
- The SetFrame method now has two optional arguments: updateSize and updateOrigin (both true by default), which will update the size and origin of the Game Object respectively. Fix #3339 (thanks @Jerenaux)

Phaser is available for download here. If you are interested in learning more about Phaser 3 development, be sure to check out our getting started video, available here and embedded below.
http://www.gamefromscratch.com/?tag=/Phaser
CC-MAIN-2018-39
refinedweb
1,059
57.67
Data::Downloader::Config

Command line:

    dado config init --filename=my_config_file.txt
    dado config update --filename=my_config_file_modified.txt

Module:

    use Data::Downloader;
    Data::Downloader::Config->init( filename => "my_config_file.txt" );
    Data::Downloader::Config->init( yaml => qq[...some yaml...] );

Configure Data::Downloader.

Data::Downloader uses sqlite to store both its configuration and data about the files which are downloaded. (For the location of this sqlite file, see Data::Downloader::DB.) The configuration describes URL patterns, file metadata, RSS feeds, and how to create trees of symbolic links to various subsets of the files that are downloaded.

DD::Config can also update the configuration by reading a new file and determining which changes have been made. Any changes will _only_ affect the configuration; they will not cause changes to any of the metadata that has been stored, or the location of any of the files on disk. Certain configuration changes may be invalid if they would cause the database to become inconsistent. In such cases, to force a configuration change, you may need to either remove the database file and start from scratch, or else use SQL commands to manually update the configuration within the database to reflect a re-organization of items on disk.

DD uses YAML to read files; please see that page for documentation about YAML. A configuration file is a collection of YAML "documents": a sequence of lines separated by lines containing only three dashes ("---"). Each of these documents represents a Data::Downloader::Repository. A repository is a collection of files stored in a common root directory. The first few fields of a repository are:

    ---
    name: my_repo

An arbitrary name for this repository. This name will reflect the character of this data, e.g. "images", "videos", "web_pages", "ozone_data". (required)

    storage_root: /path/to/root/storage

The root directory for the storage of files. (required)

    file_url_template: '<variable1>/<variable2>/<date_variable:%Y/%m/%d>'

This is a String::Template style string for downloading files. This is required if data will be downloaded directly from URLs using this template. The variables listed in the template will become required command-line arguments to dado, e.g.

    dado download file --variable1=foo --variable2=bar --date_variable='2001-03-04'

URLs may also come from RSS feeds (below), in which case the file_url_template is not relevant.

    cache_strategy: LRU

The strategy for cache expiration. Currently only LRU is supported. (required)

    cache_max_size: 1073741824

The approximate maximum size (in bytes) for the cache. The cache size is checked before downloading files (this may change to be less frequent). (required)

    disks:
      - root: disk1/
      - root: disk2/
      - root: disk3/

These are top-level subdirectories of "storage_root" in which to place files. In practice, these may be located on different devices. Currently new files will be placed in the directory whose device has the most free space (as determined by "df"). If two partitions have the same amount of free space, the new file will be placed on the one which has the most free space within that directory (i.e. the sum of DD's files is the smallest). If those are the same, a random one will be used.

    feeds:

If there are RSS feeds that describe the locations (and/or metadata) of the files, they may be listed in a "feeds" section. Each feed is a Data::Downloader::Feed.
The syntax is simplest if there is only one feed, but it is possible to specify multiple feeds (see EXAMPLES).

    name: georss

Each feed also has an arbitrary "name", used to identify it. The name should correspond to the source of the RSS feed.

    feed_template: '<var1>/<var2>/<var3>'

This is a String::Template string (or just a string if there are no variables) which describes the URL for the RSS feed. Variables in the template will become required command-line arguments to dado when refreshing the feed, e.g.

    dado feeds refresh --var1=foo --var2=bar --var3=baz

It is also possible to assign default values to some of the parameters in the template, in which case they will be optional. This happens like so:

    feed_parameters:
      - name: var1
        default_value: 'foo'
      - name: var2
        default_value: 'bar'

With the above defaults, var1 and var2 could be omitted from the command line:

    dado feeds refresh --var3=baz

An RSS feed contains various items in <item></item> tags. An Atom feed uses "entry" instead of "item". Within these tags, there may be information about the location of the files to be downloaded, as well as various pieces of metadata that should be stored (so that they may be used to construct symbolic links and search for files).

    file_source:
      filename_xpath: 'some_xpath_within_item'
      md5_xpath: 'another_one'
      url_path: 'yet_another_one'

These lines describe where to find the filename, md5 and URL of an individual file within the <item> (or <entry>) tags. E.g. for the example above, the full (document-level) xpath for the filename would be //item/some_xpath_within_item. Note that if an RSS feed contains tags with namespaces, then (per the xpath specification) all of the tags need namespaces. Data::Downloader assigns tags with no namespace to a namespace named "default". So, e.g., if the RSS feed contains <link> within an <item> (entry), but there are also tags like <datacasting:orbit>, then the xpath for <link> will be //default:item/default:link. And url_path, above, would be "default:link". See XML::LibXML::Node for a discussion of this.

    metadata_sources:
      - name: metadata_var1
        xpath: metadata_var1s_xpath_in_an_item
      - name: metadata_var2
        xpath: nother_xpath_in_an_item

These are the xpaths within an //item for pieces of metadata to be stored for each file. The above indicates that //item/metadata_var1s_xpath_in_an_item describes a piece of data that should be called "metadata_var1". Keep reading to see how to use these.

    linktrees:

This section (one per data source, not one per feed) describes a list of trees (each is a Data::Downloader::Linktree) of symbolic links to be maintained; the symlinks will point to data within the repository.

    - root: /some/path/where/these/symlinks/go
      condition: '{ metadata_var1 => "a value for this piece of data" }'
      path_template: some/subdir/that/uses/vars/<metadata_var1>/<metadata_var2>
    - root: /another/path/for/more/symlinks
      condition: '{ metadata_var2 => { ">=" => 42, "<=" => 99 } }'
      path_template: anothersubdir/<metadata_var2>

Each linktree has a "root" (an absolute path), a "condition" (an SQL::Template style clause for limiting which files get symlinks under this path; use "~" to get all files), and a "path_template" (a String::Template string for laying out the symlinks).

init

Inserts information about repository and feeds using a config file.

Parameters:

    filename  : the name of a config file
    yaml      : YAML content of the file (can be sent instead of a file)
    update_ok : allow updates, not just initialization

update

Update the config.

dump

Dump the config.
Parameters:

    format : the format (yaml, array)

Here's a sample configuration file:

    ---
    name: my_images
    storage_root: /some/where
    feeds: [
       { name : flickr,
         feed_template : '<tags>&lang=en-us&format=rss_200',
         file_source : {
             url_xpath : 'media:content/@url',
             filename_xpath : 'media:content/@url',
             filename_regex : '/([^/]*)$'
         },
         metadata_sources: [
            { name: 'date_taken', xpath: 'dc:date.Taken' },
            { name: 'tags',       xpath: 'media:category' }
         ]
       },
       { name : smugmug,
         feed_template : TODO,
         file_source : TODO,
         metadata_sources : TODO
       }
    ]
    linktrees :
      - root: /images
        condition: ~
        path_template: '<date_taken:%Y/%m/%d>'
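A quick sketch of using these methods from Perl (the method and parameter names follow the descriptions above; the filename is arbitrary):

    use strict;
    use warnings;
    use Data::Downloader;

    # Load (or re-load) the configuration; update_ok permits updating
    # an existing configuration rather than only initializing a new one.
    Data::Downloader::Config->init(
        filename  => "my_config_file.txt",
        update_ok => 1,
    );

    # Print the stored configuration as YAML.
    Data::Downloader::Config->dump(format => "yaml");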
http://search.cpan.org/~bduggan/Data-Downloader/lib/Data/Downloader/Config.pm
CC-MAIN-2016-26
refinedweb
1,157
53.71
Econ 141 Fall 2013
Slide Set 7: The Gains from Financial Globalization

Gains from Financial Globalization

First, we study factors that limit international borrowing and lending. Second, we see how a nation's ability to use international financial markets allows it to accomplish three different goals:

1. Consumption smoothing (keeping consumption steady as income fluctuates)
2. Efficient investment (borrowing to invest in productive capital)
3. Diversification of risk (trading of financial assets between countries)

Limits on Borrowing from Abroad: The Long-Run Budget Constraint

The ability to borrow in times of need and lend in times of prosperity has profound effects on a country's well-being. We first derive the long-run budget constraint (LRBC), the constraint that limits a country's foreign borrowing in the long run, using changes in an open economy's external wealth. The LRBC shows precisely how and why a country must, in the long run, "live within its means."

The Long-Run Budget Constraint

Suppose a household borrows $100,000 at 10% annually. There are two different ways the household can deal with its debt each year:

Case 1: The debt is serviced. The borrower pays the interest but never needs to repay any principal.

Case 2: The debt is not serviced. The borrower pays neither interest nor principal. The debt grows by 10% every year.

Case 2 is not sustainable. Sometimes called a rollover scheme, a pyramid scheme, or a Ponzi game, this case illustrates the limits on the use of borrowing. In the long run, lenders will simply not allow the debt to grow beyond a certain point. This requirement is the essence of the long-run budget constraint.

The Long-Run Budget Constraint for a Country

What we assume:

1. Prices are perfectly flexible. All analysis can be conducted in terms of real variables, so that monetary aspects of the economy can be ignored.
2. The country is a small open economy. The country cannot influence prices in world markets for goods and services.
3. All debt carries a real interest rate r*, the world real interest rate, which is constant. The country can lend or borrow an unlimited amount at this interest rate.
4. The country pays a real interest rate r* on its start-of-period debt liabilities L and is paid the same interest rate r* on its start-of-period debt assets A. Net interest income payments equal r*A minus r*L, or r*W, where W is external wealth (A − L) at the start of the period.
5. There are no unilateral transfers (NUT = 0), no capital transfers (KA = 0), and no capital gains on external wealth.

Under these assumptions, there are only two nonzero items in the current account: the trade balance and net factor income from abroad, r*W.

Calculating the Change in Wealth Each Period; Calculating Future Wealth

The change in external wealth each period is the trade balance plus interest earned on initial wealth, so wealth at the end of period N is

    W_N = (1 + r*) W_{N−1} + TB_N

The Budget Constraint in a Two-Period Example

Assume that all debts owed or owing must be paid off, and the country must end year 1 with zero external wealth. At the end of year 0,

    W_0 = (1 + r*) W_{−1} + TB_0

At the end of year 1, substituting, we get

    W_1 = (1 + r*)^2 W_{−1} + (1 + r*) TB_0 + TB_1

Setting W_1 = 0, the two-period budget constraint is

    −(1 + r*)^2 W_{−1} = (1 + r*) TB_0 + TB_1

A Two-Period Example

Let W_{−1} = −$100 million and r* = 10%. To pay off $110 million at the end of period 1, the country must have a present value of future trade balances of +$110 million.
The country could run a trade surplus of $110 million in period 0, or it could wait to pay off the debt until the end of period 1 and run a trade surplus of $121 million in period 1. Or it could use any other combination of trade balances in periods 0 and 1 that allows it to pay off the debt with the accumulated interest, so that external wealth at the end of period 1 is zero.

The Long-Run Budget Constraint in General

We let N go to infinity to get the equation of the LRBC:

    −(1 + r*) W_{−1} = TB_0 + TB_1/(1 + r*) + TB_2/(1 + r*)^2 + ⋯
                     = present value of all current and future trade balances

A debtor (surplus) country must have future trade balances that are offsetting and positive (negative) in present value terms.

A Long-Run Example: The Perpetual Loan

The following formula helps us compute PV(X) for any stream of constant payments X (beginning next period):

    PV(X) = X / r*

For example, the present value of a stream of payments on a perpetual loan, with X = 100 and r* = 0.05, equals 100/0.05 = 2,000.

Implications of the LRBC for GNE and GDP

The LRBC tells us that in the long run, a country's national expenditure (GNE) is limited by how much it produces (GDP). To see this, we use the equation for the LRBC and the fact that the trade balance equals GDP minus GNE, which gives

    (1 + r*) W_{−1} + present value of GDP = present value of GNE

The left side of this equation is the present value of the resources of the country in the long run: the present value of any inherited wealth plus the present value of present and future product. The right side is the present value of all present and future spending (C + I + G), measured by GNE.

The Limits on Borrowing

The long-run budget constraint says that in the long run, in present value terms, a country's expenditures (GNE) must equal its production (GDP) plus any initial wealth. The LRBC therefore shows quite precisely how an economy must live within its means in the long run.

The U.S. LRBC and Exorbitant Privilege

Since the 1980s, the United States has been a net debtor, W = A − L < 0. Negative external wealth would lead to a deficit on net factor income from abroad: r*W = r*(A − L) < 0. But we learned that U.S. net factor income from abroad has been positive for decades. How can this be? The only way a net debtor can earn positive net interest income is by receiving a higher rate of interest on its assets than it pays on its liabilities. In the 1960s, Charles de Gaulle complained about the United States' "exorbitant privilege" of being able to borrow cheaply while earning higher returns on U.S. external assets.

The U.S. LRBC: More Benefits

The U.S. has long received positive capital gains, KG, on its external wealth. These large capital gains on external assets and the smaller capital losses on external liabilities are not the result of price or exchange rate effects. They are gains that cannot be otherwise measured. As a result, some skeptics call these capital gains "statistical manna from heaven." This financial gain for the U.S. is a loss for the rest of the world. As a result, some economists describe the United States as more like a "venture capitalist to the world" than a "banker to the world."

Adding the 2% capital gain differential to the 1.5% interest differential, we end up with a U.S. total return differential (interest plus capital gains) of about 3.5% per year since the 1980s. In the same period, the total return differential was about zero in every other G7 country. Adding these additional effects to the budget constraint for the U.S. gives it extra resources beyond the simple LRBC above.

The United States borrows low and lends high. For most poorer countries, the opposite is true.
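As a quick worked check of the perpetual-loan arithmetic above (my own verification, not part of the slides):

$$PV = \sum_{n=1}^{\infty} \frac{100}{(1.05)^n} = \frac{100}{0.05} = 2{,}000$$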
Because of country risk, investors typically expect a risk premium before they will buy any assets issued by these countries, whether government debt, private equity, or FDI profits.

The Situation in Emerging Markets

(Figure: public debt and bond ratings.)

In a sudden stop, a borrower country sees its financial account surplus rapidly shrink.

(Figures: problems for emerging market borrowers.)

Reprise of the Long-Run Budget Constraint

Begin with the standard identity for the change in external wealth: the change in a country's external wealth equals the sum of the CA and KA plus capital gains on external wealth, KG:

    W_N − W_{N−1} = CA_N + KA_N + KG_N

Replacing the current account with its components (the trade balance plus net factor income r*W_{N−1}) leads to

    W_N − W_{N−1} = TB_N + r* W_{N−1} + KA_N + KG_N

Next, we can rearrange this to write external wealth at the end of year N as

    W_N = (1 + r*) W_{N−1} + TB_N + KA_N + KG_N

To keep the algebra simple, for the time being, we will assume that KA and KG are zero. We find a relationship between external wealth this year and future years' trade balances, TB, by repeatedly adding equations like this:

    W_N     = (1 + r*) W_{N−1} + TB_N
    W_{N−1} = (1 + r*) W_{N−2} + TB_{N−1}
    W_{N−2} = (1 + r*) W_{N−3} + TB_{N−2}

Adding these three identities (each multiplied by the appropriate power of 1 + r*) and cancelling terms that appear on both sides of the equation gives us

    W_N = (1 + r*)^3 W_{N−3} + (1 + r*)^2 TB_{N−2} + (1 + r*) TB_{N−1} + TB_N

Extrapolating back to the initial period leads to our equation, provided that W_N/(1 + r*)^N goes to zero as N grows. This means that external wealth does not grow as fast as (1 + r*)^N or faster. If it did, some country's debt would be growing explosively.

The LRBC is

    −(1 + r*) W_{−1} = TB_0 + TB_1/(1 + r*) + TB_2/(1 + r*)^2 + ⋯

If KG is not zero, TB is replaced by the sum of TB and KG. For the U.S., we will also need to include the return differential on external assets over liabilities.

Gains from Consumption Smoothing

We make some additional assumptions. These hold whether the economy is closed or open:
- GDP is denoted Q. It is produced using labor as the only input.
- Production of GDP may be subject to shocks; depending on the shock, the same amount of labor input may yield different amounts of output.
- We use the terms "household" and "country" interchangeably.
- Preferences of the country/household are such that it will choose a level of consumption C that is constant over time, or smooth. This level of smooth consumption must be consistent with the long-run budget constraint.

For now, we assume that consumption is the only source of demand. Both investment I and government spending G are zero. Under these assumptions, GNE equals personal consumption expenditures C. We start at time 0 and assume the country has zero initial wealth inherited from the past: W_{−1} is set equal to zero. We assume that the country is small, the rest of the world (ROW) is large, and the prevailing world real interest rate is constant at r*.

These assumptions give us a special case of the LRBC that requires the present value of current and future trade balances to equal zero (because initial wealth is zero):

    0 = TB_0 + TB_1/(1 + r*) + TB_2/(1 + r*)^2 + ⋯

or equivalently,

    present value of Q (output) = present value of C (consumption)

Closed versus Open Economy: Shocks

(Table 6-3: An open economy with temporary shocks. A trade deficit is run when output is temporarily low; consumption is smooth.)

(Table 6-2: A closed economy with temporary shocks. Output equals consumption; the trade balance is zero; consumption is volatile.)

Suppose, more generally, that output Q and consumption C are initially stable at some value with Q = C and external wealth of zero. The LRBC is satisfied. If output falls in year 0 by ΔQ, and then returns to its prior value for all future periods, then the present value of output decreases by ΔQ. To meet the LRBC, a closed economy lowers its consumption by the whole ΔQ in year 0.
An open economy can lower its consumption uniformly (every period) by a smaller amount, so that ΔC < ΔQ. A loan of ΔQ − ΔC in year 0 requires interest payments of r*(ΔQ − ΔC) in later years. If the subsequent trade surpluses of ΔC are to cover these interest payments, then we know that ΔC must be chosen so that:

    ΔC = r* (ΔQ − ΔC)

Rearranging to find ΔC:

    ΔC = ΔQ × r* / (1 + r*)

Smoothing Consumption when a Shock Is Permanent

With a permanent shock, output will be lower by ΔQ in all years, so the only way either a closed or open economy can satisfy the LRBC while keeping consumption smooth is to cut consumption by ΔC = ΔQ in all years. Comparing the results for a temporary shock and a permanent shock, we see an important point: consumers can smooth out temporary shocks (they have to adjust a bit, but the adjustment is far smaller than the shock itself), but they must adjust immediately and fully to permanent shocks.

Summary

Financial openness allows countries to "save for a rainy day." Without financial institutions to lend or borrow, you have to spend what you earn each period. Using financial transactions to smooth consumption fluctuations makes a household and/or country better off. In a closed economy, Q = C, so output fluctuations immediately generate consumption fluctuations. In an open economy, the desired smooth consumption path can be achieved by running a trade deficit during bad times and a trade surplus during good times. Deficits and surpluses can be used to finance emergency spending.

Consumption Smoothing and War

(Table: Year, Saving/GDP, Investment/GDP. Japan's gross saving and investment during the Russo-Japanese War.)

Consumption Volatility and Financial Openness

Does the evidence show that countries avoid consumption volatility by embracing financial globalization? The ratio of the volatility of a country's consumption to the volatility of its output should fall as more consumption smoothing is achieved. In our model of a small, open economy that can borrow or lend without limit, this ratio should fall to zero when the gains from financial globalization are realized. Since not all shocks are global, countries ought to be able to achieve some reduction in consumption volatility through external finance.

The lack of evidence suggests that some of the relatively high consumption volatility must be unrelated to financial openness. Consumption-smoothing gains in emerging markets require improving poor governance and weak institutions, developing their financial systems, and pursuing further financial liberalization.

Precautionary Saving, Reserves, and Sovereign Wealth Funds

Countries may engage in precautionary saving, whereby the government acquires a buffer of external assets, a "rainy day" fund. Precautionary saving is on the rise and takes two forms. The first is the accumulation of foreign reserves by central banks, which may be used to achieve certain goals, such as maintaining a fixed exchange rate, or as reserves that can be deployed during a sudden stop. The second form is called sovereign wealth funds, whereby state-owned asset management companies invest some of the government savings.

Gains from Efficient Investment

Openness delivers gains not only on the consumption side but also on the investment side by improving a country's ability to augment its capital stock and take advantage of new production opportunities.

Basic Model

Now assume that production requires both labor and capital. Capital is created over time through investment. The LRBC must be modified to include investment I as a component of GNE. We still assume that G is zero. With this change, the LRBC becomes (with zero initial wealth):

    0 = TB_0 + TB_1/(1 + r*) + TB_2/(1 + r*)^2 + ⋯
Capital is created over time through investment. The LRBC must be modified to include investment I as a component of GNE. We still assume that G is zero. With this change, the LRBC becomes: 43 Because the TB is output (Q) minus consumption (C), we can rewrite this last equation as Using this modified LRBC, we now study investment and consumption decisions in two cases: 1.A closed economy, in which external borrowing and lending are not possible, the trade balance is zero in all periods, and the LRBC is automatically satisfied. 2.An open economy, in which borrowing and lending are possible, the trade balance can be more or less than zero, and we must verify that the LRBC is satisfied. Gains from Efficient Investment 44 Q = 100, C = 100, I = 0, TB = 0, and W = 0. We assume that a shock in year 0 in the form of a new investment opportunity requires an expenditure of 16 units, and will pay off in future years by increasing the country’s output by 5 units in year 1 and all subsequent years (but not in year 0). Output would be 100 today and then 105 in every subsequent year. The present value of this stream of output is 100 plus 105/0.05 or 2,200, and the present value of consumption must equal 2,200 minus 16, or 2,184. Efficient Investment: A numerical example 45 46 Efficient Investment Suppose that a country starts with zero external wealth, constant output Q, consumption C equal to output, and investment I equal to zero. A new investment opportunity appears requiring ΔK units of output in year 0. This investment will generate an additional ΔQ units of output in year 1 and all later years (but not in year 0). The increase in the present value of output PV(Q) comes from extra output in every year but year 0, and the present value of these additions to output is: The change in the present value of investment PV(I) is simply ΔK. Investment will increase the present value of consumption if and only if ΔQ/r* ≥ ΔK. 47 The change in the present value of investment PV(I) is simply ΔK. Investment will increase the present value of consumption if and only if ΔQ/r* ≥ ΔK. Rearranging, Dividing by ΔK, investment is undertaken when Firms will take on investment projects as long as the marginal product of capital, or MPK, is at least as great as the real interest rate. Efficient Investment 48 In an open economy, firms borrow and repay to undertake investment that maximizes the present value of output. Households also borrow and lend to smooth consumption. An open economy should investment until its MPK equals the world real rate of interest. In a closed economy, resources invested are not consumed. More investment means less consumption. This creates a trade-off. Financial openness allows countries to increase investment and smooth consumption at the same time. Gains from Efficient Investment 49 Example: North Sea oil boom and Norwegian investment 50 If the world real interest rate is r* and a country has investment projects for which MPK exceeds r*, then the country should borrow to finance those projects. This implies that capital should flow to higher MPK countries from lower MPK countries. Since poor countries have low ratios of capital to labor, the MPK of capital should be higher in poorer countries than in rich (all else equal). 
Production Function Approach

To look at what determines a country's marginal product of capital, economists use a version of a production function that maps available capital per worker, k = K/L, and the prevailing level of productivity A to the level of output per worker, q = Q/L, where Q is GDP.

A Benchmark: Assume countries have the same level of productivity, A = 1. A country with less capital per worker will have a higher MPK, due to the two assumptions of diminishing marginal product and a common productivity level. Investment ought to be very profitable in poorer countries relative to richer countries. For example, investment should be higher as a share of GDP in Mexico than in the U.S. until the ratio of capital to labor is the same in both countries. Capital per worker should converge to the rich country's level; this trajectory is called convergence. If the world is characterized by convergence, countries can reach the level of capital per worker and output per worker of the rich country through investment and capital accumulation alone.

Mexico's output per worker is about 43% of that for the U.S. If the level of productivity A were the same, Mexico's lower output per worker would simply reflect lower capital per worker, and investment in Mexico should increase k and output per worker until they are the same as in the U.S.

Why Doesn't Capital Flow to Poor Countries?

This doesn't happen in reality. Poor and rich countries have different levels of productivity (different production functions), and so MPK may not be much higher in poor countries than it is in rich countries. Instead, output is proportional to A given the amount of capital per worker. Even if MPK is the same between the two countries, output per worker and capital per worker are lower in the country with a lower A.

(Figure: the poorer country (Mexico) is now at C and not at B. Investment increases k only until MPK is the same as in the U.S., at point D. Capital per worker k and output per worker q do not converge to the levels seen in the rich country.)

This is called the Lucas paradox, from Nobel laureate Robert Lucas's article "Why Doesn't Capital Flow from Rich to Poor Countries?" Lucas noted that if international financial markets are open to free movements of resources, then investment goods should flow from rich to poor countries, and investment should be very low in wealthy countries.

What if Countries Have Different Productivity Levels?

To see why capital may not flow to poor countries, we now suppose that A, the productivity level, is different in the United States and Mexico. Suppose that θ = 1/3 and output is produced in each country using the technologies

    q_US = A_US (k_US)^(1/3)        q_MEX = A_MEX (k_MEX)^(1/3)

The MPKs are related as

    MPK_MEX / MPK_US = (A_MEX / A_US) × (k_MEX / k_US)^(−2/3)
60 Comparing A and k as sources of output differences 61 62 Differences in A could reflect a country’s technical efficiency, construed narrowly as a function of its technology and management capabilities. Many economists believe that the level of A may primarily reflect a country’s social efficiency, construed broadly to include institutions, public policies, and cultural differences. Indeed, some evidence that, among poorer countries, more capital does tend to flow to the countries with better institutions. Comparing A and k as sources of output differences 63 Other factors may also explain the lack of convergence. The model makes no allowance for risk. Differences in MPK could be risk premiums that compensate for the risk of investing in an emerging market (e.g., risks of regulatory changes, tax changes, expropriation, and other political risks). Risk premiums can be substantial in practice: for example, the average real rate of interest for emerging market economies was over 6% for the decade before the global financial crisis (for Argentina, it was 25%, but for Mexico, around 3%). Comparing A and k as sources of output differences 64 Gains for Diversification of Risk Diversification can help smooth shocks by promoting risk sharing. With diversification, countries may be able to reduce the volatility of their incomes (and hence their consumption levels) without any net lending or borrowing. An example: Two countries, A and B, with outputs that fluctuate asymmetrically. Two possible “states of the world,” with equal probability of occurring. State 1 is a bad state for A and a good state for B; state 2 is good for A and bad for B. We assume that all output is consumed (there is no investment or government spending). Output is divided between labor income and capital income. 65 No diversification means that each country owns 100% of its capital. Output is the same as income. Suppose that in state 1, A’s output is 90, of which 54 units are payments to labor and 36 units are payments to capital In state 2, A’s output rises to 110, and factor payments rise to 66 for labor and 44 units for capital. The opposite is true in B: in state 1, B’s output is higher than it is in state 2. The variation of GNI about its mean of 100 is plus or minus 10 in each country. Because households prefer smooth consumption, this variation is undesirable. Gains for Diversification of Risk - example 66 Example without diversification 67 Diversification allows the two countries to partially smooth national income by holding portfolios of capital assets in each country. For example, each country could own half of the domestic capital stock, and half of the other country’s capital stock. Indeed, this is what standard portfolio theory says that investors should try to do. For our example, capital income is smoothed at 40 units for each country (same in each state 1 or 2). Total capital income is 80 in each state, so each country receives a constant share of 40. Because labor income is not shared, each country’s GNI still fluctuates around a mean of 60 by plus or minus 6. Gains for Diversification of Risk - example 68 Example with diversification 69 What happens if each country owns 100 percent of other country’s capital stock? In state 1 (bad for Country A), A’s labor income is 54. Capital income in country B belongs to country A and equals 44. GNI for Country A is now = 98. In state 2 (good for Country A), A’s labor income is 66 and its income from capital (in Country B) is 36. GNI for Country A equals 102. 
GNI fluctuates around a mean of 100 by plus or minus only 2 in each country. Gains from Diversification of Risk - example 70 Example with more diversification 71 Consider country A. In state 1 (bad for A, good for B), A's income, or GNI, exceeds A's output. The extra income is net factor income from abroad, which is the difference between the income earned on A's external assets and the income paid on A's external liabilities. With that net factor income, country A runs a negative trade balance, which means that A can consume more than it produces. Adding the trade balance of –4 to net factor income from abroad of +4 means that the current account is 0, and there is still no need for any net borrowing or lending. These flows are reversed in state 2 (good for A, bad for B). Gains from Diversification of Risk and the Balance of Payments 72 Capital income smoothing Each country's payments to capital are volatile. A portfolio of 100% of country A's capital or 100% of country B's capital has capital income that varies by plus or minus 4 (between 36 and 44). But a 50-50 mix of the two leaves the investor with a portfolio of minimum (in fact zero) volatility: it always pays 40. In general, there will be some common shocks, which are identical shocks experienced by both countries. In this case, there is no way to avoid the shock by portfolio diversification. But as long as some shocks are asymmetric, the two countries can take advantage of gains from the diversification of risk. 73 Return Correlations and Gains from Diversification The charts plot the volatility of capital income against the share of the portfolio devoted to foreign capital. The two countries are identical in size and experience shocks of similar amplitude. In panel (a), shocks are perfectly asymmetric (correlation = −1), so capital income in the two countries is perfectly negatively correlated. Risk can be eliminated by holding the world portfolio, and there are large gains from diversification. 74 Return Correlations and Gains from Diversification (continued) In panel (b), shocks are perfectly symmetric (correlation = +1), so capital income in the two countries is perfectly positively correlated. Risk cannot be reduced, and there are no gains from diversification. In panel (c), when both types of shock are present, the correlation is neither perfectly negative nor perfectly positive. Risk can be partially eliminated by holding the world portfolio, and there are still some gains from diversification. 75 Limits to Diversification: Capital versus Labor Income Labor income risk (and hence GDP risk) may not be diversifiable through the trading of claims to labor assets or GDP. In our example, however, capital and labor income in each country are perfectly correlated: shocks to production raise and lower the incomes of capital and labor simultaneously. This means that, as a risk-sharing device, trading claims to capital income can substitute for trading claims to labor income. 76 The Home Bias Puzzle In practice, we do not observe countries owning foreign-biased portfolios or even the world portfolio. Countries tend to own portfolios that suffer from a strong home bias, a tendency of investors to devote a disproportionate fraction of their wealth to assets from their own home country, when a more globally diversified portfolio might protect them better from risk.
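The three panels can be condensed into the standard two-asset portfolio variance formula, added here for reference (w is the share of the portfolio held in foreign capital, both income streams are assumed to have the same volatility σ, and ρ is their correlation):

$$\sigma_p^2 = \left[w^2 + (1-w)^2 + 2w(1-w)\rho\right]\sigma^2$$

With ρ = −1 this reduces to σ_p = |1 − 2w|σ, which reaches zero at w = 1/2 (panel a); with ρ = +1 it gives σ_p = σ for every w (panel b); intermediate values of ρ allow partial risk reduction (panel c). The home-bias figure below plots exactly this kind of risk profile against the foreign share.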
77 The Home Bias Puzzle Portfolio Diversification in the United States The figure shows the return (mean of monthly return) and risk (standard deviation of monthly return) for a hypothetical portfolio made up from a mix of a pure home U.S. portfolio (the S&P 500) and a pure foreign portfolio (the Morgan Stanley EAFE), using data from the period 1970 to 1996. 78 The Home Bias Puzzle 79 80 Gains from Diversification of Risk: Summary If countries could borrow and lend without limit or restriction in an efficient global capital market, they should be able to cope quite well with the array of possible shocks and so smooth consumption. In reality, as the evidence shows, countries are not able to borrow, and choose not to lend, so freely. In theory, if countries were able to pool their income streams and take shares from that common pool of income, all country-specific shocks would be averaged out, leaving countries exposed only to common global shocks to income, the sole remaining undiversifiable shocks that cannot be avoided. 81 Financial openness allows countries—like households—to follow the old adage "Don't put all your eggs in one basket." In practice, however, risk sharing through asset trade is limited. For one thing, the number of assets is limited. The market for claims to capital income is incomplete because not all capital assets are traded (e.g., many firms are privately held and are not listed on stock markets), and trade in labor assets is legally prohibited. Investors have also shown very little inclination to invest their wealth outside their own country, although that may be slowly changing in an environment of ongoing financial globalization. Gains from Diversification of Risk: Summary 82 The gains from financial globalization: summary Financial markets allow households to save and borrow to smooth consumption over shocks to their income, and firms to borrow in order to invest efficiently; they also permit investors to diversify their holdings across a wide range of assets. International financial markets should allow the same gains, subject to the long-run budget constraint, as countries face national income shocks, new investment opportunities, and country-specific risks. 83 These gains are elusive: In poorer countries, we do not see consumption-smoothing gains, and there is little scope for development based on external finance until productivity levels are improved. Financial globalization hasn't really been fully tried yet. Many emerging markets are still on the road to full financial liberalization, and large barriers remain. Institutional weaknesses in these countries may hinder the efficient operation of the mechanisms we have studied. Such weaknesses may be corrected by the stimulus to competition, transparency, accountability, and stability that financial openness may provide. The benefits of financial globalization are likely to be much smaller for these countries, and they must also be weighed against potential offsetting costs, such as the risk of crises. The gains from financial globalization: summary
http://slideplayer.com/slide/3459879/
CC-MAIN-2016-50
refinedweb
5,270
51.38
Silverlight 2 is a great idea, but there are some gaping holes. One of the things I missed was support for ordered and unordered lists - a counterpart for the HTML <ol> and <ul> tags. This library fills that gap. Add OrderedList elements to the toolbox The new controls are only visible when you have a .xaml file open. You won't see them while looking at an .aspx file. Add references to the library When you drag an OrderedList element from the toolbox to your page, the editor automatically adds the required namespace/assembly declaration to your page and adds a reference to OrderedList.dll to your project. However, if you simply type in the XAML, or copy and paste, be sure to go through these steps to add these things yourself: Add a namespace declaration to the UserControl definition at the top of the .xaml file, like this: <UserControl xmlns:my="clr-namespace:OrderedList;assembly=OrderedList" ...... > Simple list To add an ordered list on your page, simply add something like the following to the XAML: <my:OrderedList> <StackPanel> <my:ListItem> <TextBlock>First text item</TextBlock> </my:ListItem> <my:ListItem> <TextBlock>Second text item</TextBlock> </my:ListItem> <my:ListItem> <TextBlock>Third text item</TextBlock> </my:ListItem> </StackPanel> </my:OrderedList> The OrderedList element sets up the list, just as the <ol> and <ul> tags do in HTML. The ListItem element creates a single list item, just like <li> in HTML. OrderedList is derived from ContentControl, so it can hold only one element. As a result, you need to place the ListItem elements in a StackPanel or a Grid. Multiple elements per item ListItem too is derived from ContentControl, so if you have multiple elements within a list item, you need to place them within a StackPanel or Grid as well. Like so: <my:ListItem> <StackPanel> <TextBlock>First text item, first text block</TextBlock> <TextBlock>First text item, second text block</TextBlock> <TextBlock>First text item, third text block</TextBlock> </StackPanel> </my:ListItem> Nested Lists Finally, you can place any elements within a ListItem, including more OrderedList elements - to create nested lists. Putting it all together Below is a page with the completed example. You'll find it in the sources you downloaded, in project Example1. <UserControl x: <Grid x: <my:OrderedList> <StackPanel> <my:ListItem> <StackPanel> <TextBlock>Second text item, first text block</TextBlock> <TextBlock>Second text item, second text block</TextBlock> <TextBlock>Second text item, third text block</TextBlock> </StackPanel> </my:ListItem> <my:ListItem> <TextBlock>Third text item</TextBlock> </my:ListItem> </StackPanel> </my:OrderedList> </Grid> </UserControl> You can use an OrderedList to number the items of an ItemsControl, such as a ListBox, like so: <my:OrderedList> <ListBox x: <ListBox.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal" > <my:ListNumber/> <TextBlock Text="{Binding Title}" Width="150" Margin="0,0,5,0" /> <TextBlock Text="{Binding ISBN}" Width="120" Margin="0,0,5,0" /> <TextBlock Text="{Binding PublishDate}" Width="200" Margin="0,0,5,0" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> </my:OrderedList> Note that the ListBox sits within the OrderedList, and you place a ListNumber at the point where you want to show the item number. The difference between a ListNumber and a ListItem is that a ListNumber represents just the number itself, while a ListItem takes care of indenting the elements that come after the number.
Have a look at the generic.xaml file in the OrderedList project, and you'll see that ListItem actually uses ListNumber to show the actual number. The project Example2 in the solution you downloaded has a working example of all this. In addition to decimal numbers you can use upper- and lowercase letters and upper- and lowercase roman numerals. You specify the number type via the NumberType property of OrderedList. For example, to use lowercase roman numerals, you'd write: <my:OrderedList ..... </my:OrderedList> To make it easy to have bullets instead of numbers, the NumberType property also lets you specify a bullet type. That gives you the counterpart of the HTML <ul> tag. For example, to use a black square as your bullet, you'd write: <my:OrderedList ..... </my:OrderedList> More bullets If the bullet you want isn't listed above, you can use any Unicode character you want by using the NumberFormat property. For example, to use a smiley as your bullet, you'd write: <my:OrderedList ..... </my:OrderedList> You'll find lots of interesting characters in the Unicode Geometric Shapes Block and the Miscellaneous Symbols Block. Using an image as a bullet Say you have an image greentick.png in a directory images in your Silverlight Application, with a height of 20 pixels and a width of 20 pixels. Here is how you would use that image as your bullet: <my:OrderedList ..... </my:OrderedList> Note that you need to set NumberType to Image. For this to work, your image needs to be present in the proper directory while the page with your Silverlight is being displayed. To ensure this happens: Also, Silverlight doesn't seem to support .gif files, so you can only use .jpg and .png files. The Example3 project has working examples of all the bullets. Suppressing the bullet If for some reason you don't want a bullet (or number) at all, you'd write: <my:OrderedList ..... </my:OrderedList> This however also suppresses the indent. If you do want the indent (but no bullet or number), just use an empty string as your bullet: <my:OrderedList ..... </my:OrderedList> With nested lists, sometimes there is a need to show the numbers of the "parent" items in the "child" items, like this: To achieve this effect, use the ShowParentNumbers property on the nested list, like this: <my:OrderedList ..... </my:OrderedList> A working example is in the Example4 project. You can have as many nesting levels as you want. When using numbers, you don't have to start at 1 and increment by 1. If you wanted to start at 10 and increment by 5, simply write: <my:OrderedList ..... </my:OrderedList> Suppose you want to embellish the item numbers with some text, like this: 1) ..... 2) ..... 3) ..... or even this: Step 1: ..... Step 2: ..... Step 3: ..... This is easy to achieve with the NumberFormat property: <my:OrderedList ..... </my:OrderedList> {0} is the placeholder for the actual number or bullet. {} at the start tells XAML that the {0} should be taken literally, rather than as a Markup Extension. If you want just fixed text (without a number), leave out both the {0} and the {}. The position of the number or bullet, and the indentation of the text, are determined by these properties: If you don't set any of these properties, you get a default NumberWidth that's big enough for a few digits, a "right" NumberTextAlignment and a small NumberPostdent so the text doesn't hug the number. In most cases, this is what you want. Sometimes you'll want to set the width yourself, for example to make it big enough for fixed text: <my:OrderedList ..... 
</my:OrderedList> Example5 has a working example of this. Spacing for image bullets Things are slightly different when using image bullets. Here, the property ImageWidth is used instead of NumberWidth, and NumberTextAlignment is ignored: How about red bullets? Or really big underlined numbers? If you have a look at the style for ListNumber towards the end of the generic.xaml file in the OrderedList project, you'll see that the number or bullet sits in a TextBlock element. You can set the most important properties of the TextBlocks holding the numbers or bullets via properties of the OrderedList element. The property names are the same as those for the corresponding TextBlock properties, except they have "Number" prepended. So you can write something like this: <my:OrderedList ..... </my:OrderedList> You can use these properties: The Test project within the solution shows the use of some of these properties. 9 June 2008: First release, for Silverlight 2 Beta 1 11 June 2008: Converted to Silverlight 2 Beta 2
http://www.codeproject.com/KB/silverlight/OrderedList_Silverlight.aspx
crawl-002
refinedweb
1,326
62.27
How to Get Active Windows with C# Hello everyone, in this article we are going to make an example application in C# that gets the title of the currently active window and records it over time. Let's get started. Firstly we need to add the below library to our namespace imports: using System.Runtime.InteropServices; Now define GetForegroundWindow() and GetWindowText() from user32.dll inside the class from which they will be called. [DllImport("user32.dll")] static extern IntPtr GetForegroundWindow(); [DllImport("user32.dll")] static extern int GetWindowText(IntPtr hwnd, StringBuilder ss, int count); The below method gets the title of the active window. It first gets a handle to the active window and assigns it to a pointer variable; then, via this pointer, it gets the title text of the active window and returns it as a string. private string ActiveWindowTitle() { //Create the variable const int nChar = 256; StringBuilder ss = new StringBuilder(nChar); //Run GetForegroundWindow and get the active window's handle, //assigning it to the handle pointer variable IntPtr handle = IntPtr.Zero; handle = GetForegroundWindow(); if (GetWindowText(handle, ss, nChar) > 0) return ss.ToString(); else return ""; } Now I will watch the active window over time. To perform this I have created a Timer, and in the Tick event of this timer I record the active window titles inside a ListBox with their time values. public Form1() { InitializeComponent(); this.TopMost = true; Timer tmr = new Timer(); tmr.Interval = 1000; tmr.Tick += Tmr_Tick; tmr.Start(); } private void Tmr_Tick(object sender, EventArgs e) { //get title of active window string title = ActiveWindowTitle(); //check that it is not empty and add it to the list if so if (title != "") { lbActiveWindows.Items.Add( DateTime.Now.ToString("hh:mm:ss") + " - " + title); } } Run the program and switch between windows. You will then see the windows you switched to listed inside the ListBox. That is all in this article. You can reach the example project on GitHub via the link below: Burak Hamdi TUFAN Hi sir, I want to get all user actions in a window, such as opening a folder, clicking on an application ... How do I do it? 2021/06/20 11:08:35 This idea is similar to a keylogger and I think you can perform it with the Windows API. But probably, an antivirus program will prevent your application from reading the user activities. 2021/06/28 08:28:22
https://thecodeprogram.com/how-to-get-active-windows-with-c-
CC-MAIN-2021-31
refinedweb
382
57.98
The source code can be downloaded from here. Global artifacts are usually tentative things. Languages and tools have different methods to limit artifact visibility. Think about public and private variables, for example. BizTalk usually limits artifact visibility by assembly (project) boundaries; for example, the port types and the correlation set types. BizTalk applications were introduced as containers for artifacts and they naturally limit artifact visibility. Artifacts are not visible outside of an application by default. But sometimes in BizTalk the artifact visibility can be global. We place the artifacts in different assemblies and it doesn't limit the global visibility. We place the artifacts in different BizTalk applications and it doesn't limit the global visibility. I am talking about schemas. In my previous post BizTalk: Internals: namespaces I've shown that schemas have an additional parameter, the Xml [target] namespace. Why is it so important? BizTalk receives messages on the Receive Locations. Theoretically messages are just byte arrays. But BizTalk frequently has to do additional tasks, like changing the message format from one system's format to another. In this case a message should be received in a well-known format. Without knowing the message format, BizTalk can perform neither message validation nor message transformation. (The BizTalk message internal format is Xml or a byte array. Here I am talking about the messages in the Xml format.) So the Xml messages are received by the Receive port as byte arrays and should be parsed into Xml format. The XMLReceive pipeline makes this transformation. The first thing XMLReceive does is search for an Xml namespace and a root node. These two parameters make up a MessageType parameter, which is promoted as a context parameter. As you can see, BizTalk uses the MessageType to find the schema for this message. This Xml schema is used to parse the whole message into Xml format. So the first step is to find an Xml namespace and a root node inside the byte array regardless of anything else; the second step is to find the Xml schema inside the schema repository; then this schema is used to parse the whole message into Xml format. Now we are discussing the second step, how BizTalk searches for the right schema. BizTalk searches through the whole schema list regardless of the application boundaries. Each schema belongs to one of the BizTalk applications, but BizTalk ignores this. The global schema list is stored inside the BizTalk management database. When we deploy a project with a schema, the schema is registered inside this database. When an inbound message is received and processed by the XMLReceive pipeline, BizTalk extracts a MessageType (a namespace + a root node) and searches for this MessageType in the management database. An error is raised if the MessageType is not found. An error is also raised if more than one schema with this MessageType is found. An important note: the schema uniqueness rule is not verified at deployment time. I have created several samples to show how it works. These samples are bare-bone projects, focused on the "schema uniqueness" rule. The application includes two projects. Each project consists of one schema. Both schemas have the same structure and the same Xml namespaces. The only difference is the .NET namespaces. Each schema is placed in a different assembly.
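To make the MessageType concrete, here is a hypothetical inbound message (the namespace and root node are invented for illustration; they are not taken from the sample projects):

<ns0:TestRoot xmlns:ns0="http://samples.test/schemas">
  <Data>1</Data>
</ns0:TestRoot>

For this message the XMLReceive pipeline would promote MessageType = "http://samples.test/schemas#TestRoot". Because both sample schemas declare the same target namespace and root node, both of them match any message with that MessageType.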
I have created one receive port with one File receive location and the XMLReceive pipeline. A send port works as a subscriber to this receive port. Nothing else is in this application: two identical schemas in different assemblies, plus the receive and send ports. I dropped a test Xml file in the receive folder. What do you think happened? You are right, I got an error and two related events, 5753 and 5719. It is the famous "Cannot locate document specification because multiple schemas matched the message type ###" error. :) Conclusion: schemas of the received messages should be unique across all BizTalk applications. Now the two identical projects are in different BizTalk applications. (As you can see I have broken the naming convention, which requires placing the BizTalk applications in different Visual Studio solutions. In real life there would be two Visual Studio solutions. Sorry for that.) The ports are the same as they are in the first sample. When I dropped a test file I got the same error as in the first sample. So the schemas are now placed in different applications, but that doesn't change the schema visibility. The schema from one application is visible from another application. Conclusion: schemas of the received messages should be unique even across different BizTalk applications. In other words, they should be unique across the whole BizTalk Server Group. Now let's ask a question. Does the "schema uniqueness" rule also apply to imported schemas? Here is a real-life example. We are consuming WCF web-service metadata, and it creates a schema set where we can see the Microsoft Serialization schema. It is the same schema in almost each and every WCF service. Do we have to take care of this schema if we consume several WCF services? Do we have to "fight" this schema, extract it into a Shared project, etc.? This sample is the same as the previous sample with two additional schemas: Importing.xsd and Imported.xsd. The two Importing schemas in the two applications are different (they have different Xml namespaces), but the two Imported schemas are identical. The ports are the same as they were in the first sample. When I dropped a test file there was no error. So the "root" schema namespaces, the namespaces which define the MessageType, were different, but the namespaces of the imported schemas were the same, and still the XMLReceive pipeline could successfully recognize the MessageType of the message. Conclusion: the "schema uniqueness" rule works only for the schema which defines the MessageType of the received message and doesn't work for the imported schemas. Notes: Is there a good reason for this rule? It looks like the reason for this "schema uniqueness" rule is simple. The BizTalk application was introduced only in BizTalk 2006. At that time Microsoft decided not to change the XMLReceive and XMLTransmit pipelines to embrace the new application concept, and since then this feature has not been on the priority list in BizTalk development. I would prefer it if the schema visibility were limited to the BizTalk application. It would extremely simplify the schema and service versioning. Let's take an example where our partner added a new version of a service. The service schemas themselves are different in the two versions (several nodes were removed, several nodes were added), but the schema namespaces and roots are the same in both services. The new and old services work side-by-side and we have to consume both service versions.
If we could limit the schema visibility to the application boundaries, creating the BizTalk applications for consuming both service versions would be simple. Now it is cumbersome. I am sure each experienced BizTalk developer can remember several real-life examples where this "schema uniqueness" rule turned a several-hour project into one that took several days instead.
http://geekswithblogs.net/LeonidGaneline/archive/2012/12/24/biztalk-internals-schema-uniqueness-rule.aspx
CC-MAIN-2019-43
refinedweb
1,208
66.64
I'm doing some research: what do you want the most from your IDE, what features do you most desire, what do you hate the most, what features do you envy the most in other (say Java) IDEs? Let me know your thoughts. If only my IDE (VS.NET) could save my custom window layouts. That way, I would not face window re-org micro-tedium every time I switched between single and dual monitors on VMWare workstation (I spread my VS.NET windows across both monitors). Pingback from Dew Drop - December 2, 2008 | Alvin Ashcraft's Morning Dew If only it could list all implementors of an interface, or abstract or overridable member. If only my IDE (VS2008) could allow me to continuously type and not freeze every time (every 10-30s or so) it needs to do some background parsing or god knows what (aspx editing). If I could only have the ability to wire up events (with cs stubs) from aspx to cs without switching to design view (which breaks a lot with "fancy" code that runs perfectly fine at runtime). And, if only my IDE could somehow check my aspx for compilation errors the same way it checks cs files, ooo, if only... P.S. In VS10 you guys better figure out a way to disable clear-type / anti-aliasing in the editor or you'll see a wrath of angry programmers that will make Vista's bad rep small by comparison. One big thing I'd like in Visual Studio is *true* multi-monitor support. This could be accomplished with some simple changes. At least on the surface -- I have no idea how difficult it would be to implement. Although simple, the changes would allow for some interesting things when you have multiple monitors. Here they are: 1) Make the dockable tool windows in the IDE first-class windows in the operating system when they're un-docked. I.e. if they were derived from System.Windows.Forms.Form (I have no idea if they actually are), their FormBorderStyle would be Sizable rather than SizableToolWindow when they were un-docked. This would allow maximization of the tool window in either/any monitor rather than require the user to size the window manually. 2) Allow all items in the tabbed documents bar (including code documents and designers) to be dockable tool windows with a full set of toolbars. When I want to have more than one document visible at any time, I have to go into split mode by creating a horizontal or vertical tab group. If I want those documents on separate monitors (perhaps I want the designer on the left with the toolbox and properties windows docked near it, with the code-behind on the right with the System Explorer docked near it) I have to de-maximize the main IDE window, manually expand it across both monitors, create a vertical tab group, then monkey around with the location of the split until it's almost (but not quite) right at the border between monitors. If I go into debug mode new docked tool windows are added, which means I have to manually resize to the way I want (and it's never exactly the same as the way it was when I was coding), then deal with the switch when debugging is done. It's a pain. But, by making tool windows first-class windows and allowing code and designer tabbed documents to be tool windows (which are also now first-class windows) I can utilize the maximize feature on separate monitors, allowing me to have a more consistent development environment. It's amazing how useful that maximize feature is.
What I use a lot in Eclipse that I was missing in VS.NET (2002 to 2005, don't know for 2008): - Ctrl+Click on a class/method/field to jump to its definition - search where a class/method/field is referenced/called - show the inheritance hierarchy *rooted* at the current class (i.e. show interface implementors and/or derived classes); in Eclipse, this is as easy as pressing F4 when the caret/cursor is in a class/method name. The hierarchy tree then shows the methods defined for a class when you select it, with the method override (in case you launched it from a method) selected (so you just have to double-click on it to jump to it, without first looking for it by its name) I also love refactoring tools, but VS.NET 2008 seems to have pretty good ones (maybe even better than Eclipse) I hate the way Intellisense currently works. To take the most recent example, the other day I was trying to figure out how to use the ReportViewer control in VS2008. And I was trying to figure out how to set the DataSource correctly. But when I typed the ".", Intellisense inundated me with far too much irrelevant data. In this case, I don't *care* about the fact that it derives from Control, and thus has a Left property, and a Right property, and a Click event, and so on, and so on, ad nauseam. I wanted just the properties of *that* class, without the base classes cluttering up things. That way I wouldn't have to sift through the inheritance detritus obscuring what I was trying to find. So what I'd like is a drill-down capability in Intellisense. (Or maybe, as an option, show just the properties, etc. of the current object, with a drill-up capability.) And since this is a wish list, keep track of (and, of course, display) the current level of the inheritance hierarchy I'm interested in, on a control-by-control basis. And persist this as part of the project settings. Well, wish me luck. I've given this feedback before, and it's been ignored every time. Maybe this time... @Andy If you are using VS, ReSharper will do that and more. It should not crash. It should be fast. An easy way to close all open files. Integrated GhostDoc-like tools and Sandcastle / CodeDoc-like help file generation. Explorer-like folder project -- I just want a folder full of random folders and files (text documents, word documents, etc.) accessible from within a solution (in a separate project). I don't want to have to refresh the folders and include the files myself. On that note, it would be nice (but definitely not necessary) if it could use IFilters for searching those projects and ideally open the documents within the IDE like Outlook does when you view an attachment. Also, it would be nice to have remote desktop connections open within the IDE (with rdp connection files in the solution somewhere). If only i could bind commands to mouse buttons (with ctrl/alt/shift). ok... here we go... dump the navigational and hierarchy drop-downs at the top of the editor windows and utilize the class view more. let the class view know what class i am working on and allow me to determine when it should be loaded or not. the code definition window is too touchy. as soon as i click on a single word the code definition window loads that class in (even if it's the same class i am working on). i love the concept behind the code definition window but is there any way to be a little smarter about it? give me a "dumpster" meaning give me something like the tokenizer but beefier. i should be able to add code samples, common references, blocks of code, etc... 
to a general or common bucket that is managed by visual studio (not solution items) it would be great if i could incorporate the snip-its functionality to pull them in. when i import a namespace... attempt to add a reference to that dll for me. if you don't know that namespace or it isn't a common namespace... ask me where to find it. that should shave time off of the add references. speaking of adding references... when i right click to add a reference... produce a list of the current projects in my solution to select from. in other words... get a little better about adding references. it seems to take forever. i love the way eclipse has perspectives... vs has attempted to do the same but there is just something off. it seems like it is not as robust. give me a little more refactoring tools. the only ones that stand out for me right now are generating a file from the selected (or highlighted) class and updating an interface from the class. i don't want the resharper functionality because i think they are doing a great job and it shouldn't be duplicated by the vs team but give me a little more functionality more to come as i think of them... :) thanks for listening Real snippet functionality. 1) Expose the ability to write our own Functions to be used in snippets. I want things like MethodName() that would find the enclosing method and place its name into the snippet. 2) Check out Eclipse's way of allowing one to search for an option throughout the options dialog. Instead of having to remember Project and Solutions - General - Track Active Item in Solution Explorer, you can just start typing and it searches for matching options. I wish they would integrate sql management studio in VS. Instead of only doing 1/4 of the things you can do in ssms, do it all in vs.
http://weblogs.asp.net/astopford/archive/2008/12/02/if-only-my-ide-could-do.aspx
crawl-002
refinedweb
1,598
70.73
Unofficial BSP v30 File Spec Table of contents - Introduction - Header - LUMP_ENTITIES - LUMP_PLANES - LUMP_TEXTURES - LUMP_VERTICES - LUMP_VISIBILITY - LUMP_NODES - LUMP_TEXINFO - LUMP_FACES - LUMP_LIGHTING - LUMP_CLIPNODES - LUMP_LEAVES - LUMP_MARKSURFACES - LUMP_EDGES - LUMP_SURFEDGES - LUMP_MODELS Introduction The following file specification concerns the BSP file format version 30, as it has been designed by the game developer Valve and used in their famous GoldSrc Engine. The file extension is ".bsp". Important: The following information does NOT rely on any officially published file specification from Valve Corporation. The file format is still in use by the proprietary Half-Life Engine (the better-known name of the GoldSrc Engine), implying that there is no public source code of either the file loader or the renderer. The following specification has been put together based on information from the open source project Black Engine as well as the compilers included in the Half-Life SDK, which also contains their source. This file spec uses constructs from the C programming language to describe the different data structures used in the BSP file format. Architecture-dependent datatypes like integers are replaced by exact-width integer types of the C99 standard in the stdint.h header file, to provide more flexibility when using x64 platforms. Basic knowledge about Binary Space Partitioning is recommended. There is a common struct used to represent a point in 3-dimensional space, which is used throughout the file spec and the code of the hlbsp project. #include <stdint.h> typedef struct _VECTOR3D { float x, y, z; } VECTOR3D; Header Like almost every file format, a BSP file starts with a specific file header, which is constructed as follows: #define LUMP_ENTITIES 0 #define LUMP_PLANES 1 #define LUMP_TEXTURES 2 #define LUMP_VERTICES 3 #define LUMP_VISIBILITY 4 #define LUMP_NODES 5 #define LUMP_TEXINFO 6 #define LUMP_FACES 7 #define LUMP_LIGHTING 8 #define LUMP_CLIPNODES 9 #define LUMP_LEAVES 10 #define LUMP_MARKSURFACES 11 #define LUMP_EDGES 12 #define LUMP_SURFEDGES 13 #define LUMP_MODELS 14 #define HEADER_LUMPS 15 typedef struct _BSPHEADER { int32_t nVersion; // Must be 30 for a valid HL BSP file BSPLUMP lump[HEADER_LUMPS]; // Stores the directory of lumps } BSPHEADER; The file header begins with a 32bit integer containing the file version of the BSP file (the magic number). This should be 30 for a valid BSP file used by the Half-Life Engine. Subsequently, there is an array of entries for the so-called lumps. A lump is more or less a section of the file containing a specific type of data. The lump entries in the file header address these lumps, accessed by the 15 predefined indexes. A lump entry struct is defined as follows: typedef struct _BSPLUMP { int32_t nOffset; // File offset to data int32_t nLength; // Length of data } BSPLUMP; Every lump entry states the beginning of its lump as an offset relative to the beginning of the file. Additionally, the lump entry also gives the length of the addressed lump in bytes. The Half-Life BSP compilers also define several constants for the maximum size of each lump, as they use static, global arrays to hold the data. The hlbsp project uses malloc() to allocate the required memory for each lump depending on its actual size.
#define MAX_MAP_HULLS 4 #define MAX_MAP_MODELS 400 #define MAX_MAP_BRUSHES 4096 #define MAX_MAP_ENTITIES 1024 #define MAX_MAP_ENTSTRING (128*1024) #define MAX_MAP_PLANES 32767 #define MAX_MAP_NODES 32767 #define MAX_MAP_CLIPNODES 32767 #define MAX_MAP_LEAFS 8192 #define MAX_MAP_VERTS 65535 #define MAX_MAP_FACES 65535 #define MAX_MAP_MARKSURFACES 65535 #define MAX_MAP_TEXINFO 8192 #define MAX_MAP_EDGES 256000 #define MAX_MAP_SURFEDGES 512000 #define MAX_MAP_TEXTURES 512 #define MAX_MAP_MIPTEX 0x200000 #define MAX_MAP_LIGHTING 0x200000 #define MAX_MAP_VISIBILITY 0x200000 #define MAX_MAP_PORTALS 65536 The following sections will focus on every lump of the BSP file. The Entity Lump (LUMP_ENTITIES) The entity lump is basically a pure ASCII text section. It consists of the string representations of all entities, which are copied directly from the input file to the output BSP file by the compiler. An entity might look like this: { "origin" "0 0 -64" "angles" "0 0 0" "classname" "info_player_start" } Every entity begins and ends with curly brackets. In between are the attributes of the entity, one in each line, which are pairs of strings enclosed by quotes. The first string is the name of the attribute (the key), the second one its value. The attribute "classname" is mandatory for every entity, specifying its type and therefore how it is interpreted by the engine. The map compilers also define two constants for the maximum length of key and value: #define MAX_KEY 32 #define MAX_VALUE 1024 The Planes Lump (LUMP_PLANES) This lump is a simple array of binary data structures: #define PLANE_X 0 // Plane is perpendicular to given axis #define PLANE_Y 1 #define PLANE_Z 2 #define PLANE_ANYX 3 // Non-axial plane is snapped to the nearest #define PLANE_ANYY 4 #define PLANE_ANYZ 5 typedef struct _BSPPLANE { VECTOR3D vNormal; // The planes normal vector float fDist; // Plane equation is: vNormal * X = fDist int32_t nType; // Plane type, see #defines } BSPPLANE; Each of these structures defines a plane in 3-dimensional space by using the Hesse normal form: normal * point - distance = 0 where vNormal is the normalized normal vector of the plane and fDist is the distance of the plane to the origin of the coordinate system. Additionally, the structure also saves an integer describing the orientation of the plane in space. If nType equals PLANE_X, then the normal of the plane will be parallel to the x axis, meaning the plane is perpendicular to the x axis. If nType equals PLANE_ANYX, then the plane's normal is nearer to the x axis than to any other axis. This information is used by the renderer to speed up some computations. The Texture Lump (LUMP_TEXTURES) The texture lump is somewhat more complex than the other lumps, because it is possible to save textures directly within the BSP file instead of storing them in external WAD files. This lump also starts with a small header: typedef struct _BSPTEXTUREHEADER { uint32_t nMipTextures; // Number of BSPMIPTEX structures } BSPTEXTUREHEADER; The header only consists of an unsigned 32bit integer indicating the number of stored or referenced textures in the texture lump. After the header follows an array of 32bit offsets pointing to the beginnings of the separate textures. typedef int32_t BSPMIPTEXOFFSET; Every offset gives the distance in bytes from the beginning of the texture lump to the beginning of one of the BSPMIPTEX structures, which are equal in count to the value given in the texture header. Each of these structs describes a texture:
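Based on the field descriptions in the next paragraph (and the widely mirrored GoldSrc headers), the BSPMIPTEX structure presumably looks like this:

#define MAXTEXTURENAME 16
#define MIPLEVELS 4
typedef struct _BSPMIPTEX
{
    char szName[MAXTEXTURENAME];  // Name of texture, null-terminated
    uint32_t nWidth, nHeight;     // Extents of the texture in pixels
    uint32_t nOffsets[MIPLEVELS]; // Offsets to the 4 mipmap levels, or 0 if stored externally
} BSPMIPTEX;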
The name of the texture is a string and may be 16 characters long (including the null-character at the end; char equals an 8bit signed integer). The name of the texture is needed if the texture has to be found and loaded from an external WAD file. Furthermore, the struct contains the width and height of the texture. The 4 offsets at the end can either be zero, if the texture is stored in an external WAD file, or point to the beginnings of the binary texture data within the texture lump, relative to the beginning of its BSPMIPTEX struct. The Vertices Lump (LUMP_VERTICES) This lump simply consists of all vertices of the BSP tree. They are stored as a primitive array of triples of floats. typedef VECTOR3D BSPVERTEX; Each of these triples, obviously, represents a point in 3-dimensional space by giving its three coordinates. The VIS Lump (LUMP_VISIBILITY) The VIS lump contains data which is not part of the actual BSP tree, but offers a way to boost the speed of the renderer significantly. Especially complex maps profit from the use of this data. This lump contains the so-called Potentially Visible Sets (PVS), also called VIS lists, one for each leaf of the tree that the user can enter (such leaves are often referred to as VisLeaves). The visibility lists are stored as sequences of bitfields, which are run-length encoded. Important: The generation of the VIS data is a very time consuming process (several hours) and is done by a separate compiler. It can therefore be skipped when compiling the map, resulting in BSP files with no VIS data at all! The Nodes Lump (LUMP_NODES) This lump is simple again and contains an array of binary structures, the nodes, which are a major part of the BSP tree. typedef struct _BSPNODE { uint32_t iPlane; // Index into Planes lump int16_t iChildren[2]; // If > 0, then indices into Nodes // otherwise bitwise inverse indices into Leafs int16_t nMins[3], nMaxs[3]; // Defines bounding box uint16_t firstFace, nFaces; // Index and count into Faces } BSPNODE; Every BSPNODE structure represents a node in the BSP tree, and every node corresponds more or less to a division step of the BSP algorithm. Therefore, each node has an index (iPlane) referring to a plane in the plane lump which divides the node into its two child nodes. The child nodes are also stored as indexes. Contrary to the plane index, the node index for the child is signed. If the index is larger than 0, the index indicates a child node. If it is equal to or smaller than zero (no valid array index), the bitwise inverse of the value gives an index into the leaves lump. Additionally, two points (nMins, nMaxs) span the bounding box (AABB, axis-aligned bounding box) delimiting the space of the node. Finally, firstFace indexes into the face lump and specifies the first of nFaces surfaces contained in this node. The Texinfo Lump (LUMP_TEXINFO) The texinfo lump contains information about how textures are applied to surfaces. The lump itself is an array of binary data structures. typedef struct _BSPTEXTUREINFO { VECTOR3D vS; float fSShift; // Texture shift in s direction VECTOR3D vT; float fTShift; // Texture shift in t direction uint32_t iMiptex; // Index into textures array uint32_t nFlags; // Texture flags, seem to always be 0 } BSPTEXTUREINFO; This struct is mainly responsible for the calculation of the texture coordinates (vS, fSShift, vT, fTShift). These values determine the position of the texture on the surface.
The iMiptex integer refers to the textures in the texture lump and would be the index in an array of BSPMIPTEX structs. Finally, there are 4 bytes used for flags. Somehow they seem to always be 0. The Faces Lump (LUMP_FACES) The face lump contains the surfaces of the scene. Once again an array of structs: typedef struct _BSPFACE { uint16_t iPlane; // Plane the face is parallel to uint16_t nPlaneSide; // Set if different normals orientation uint32_t iFirstEdge; // Index of the first surfedge uint16_t nEdges; // Number of consecutive surfedges uint16_t iTextureInfo; // Index of the texture info structure uint8_t nStyles[4]; // Specify lighting styles uint32_t nLightmapOffset; // Offsets into the raw lightmap data } BSPFACE; The first number of this data structure is an index into the planes lump giving a plane which is parallel to this face (meaning they share the same normal). The second value may be seen as a boolean. If nPlaneSide equals 0, then the normal vector of this face equals the one of the parallel plane exactly. Otherwise, the normal of the plane has to be multiplied by -1 to point into the right direction. Afterwards we have an index into the surfedges lump, as well as the count of consecutive surfedges from that position. Furthermore, there is an index into the texture info lump, which is used to find the BSPTEXINFO structure needed to calculate the texture coordinates for this face. Afterwards, there are four bytes giving some lighting information (partly used by the renderer to hide sky surfaces). Finally, we have an offset in bytes giving the beginning of the binary lightmap data of this face in the lighting lump. The Lightmap Lump (LUMP_LIGHTING) This is one of the largest lumps in the BSP file. The lightmap lump stores all lightmaps used in the entire map. The lightmaps are arrays of triples of bytes (3-channel color, RGB) and stored continuously. The Clipnodes Lump (LUMP_CLIPNODES) This lump contains the so-called clipnodes, which build a second BSP tree used only for collision detection. typedef struct _BSPCLIPNODE { int32_t iPlane; // Index into planes int16_t iChildren[2]; // negative numbers are contents } BSPCLIPNODE; This structure is a reduced form of the BSPNODE struct from the nodes lump. Also, the BSP tree built by the clipnodes is simpler than the one described by the BSPNODEs, to accelerate collision calculations. The Leaves Lump (LUMP_LEAVES) The leaves lump contains the leaves of the BSP tree. Another array of binary structs: #define CONTENTS_EMPTY -1 #define CONTENTS_SOLID -2 #define CONTENTS_WATER -3 #define CONTENTS_SLIME -4 #define CONTENTS_LAVA -5 #define CONTENTS_SKY -6 #define CONTENTS_ORIGIN -7 #define CONTENTS_CLIP -8 #define CONTENTS_CURRENT_0 -9 #define CONTENTS_CURRENT_90 -10 #define CONTENTS_CURRENT_180 -11 #define CONTENTS_CURRENT_270 -12 #define CONTENTS_CURRENT_UP -13 #define CONTENTS_CURRENT_DOWN -14 #define CONTENTS_TRANSLUCENT -15 typedef struct _BSPLEAF { int32_t nContents; // Contents enumeration int32_t nVisOffset; // Offset into the visibility lump int16_t nMins[3], nMaxs[3]; // Defines bounding box uint16_t iFirstMarkSurface, nMarkSurfaces; // Index and count into marksurfaces array uint8_t nAmbientLevels[4]; // Ambient sound levels } BSPLEAF; The first entry of this struct is the type of the content of this leaf. It can be one of the predefined values, found in the compiler source codes, and is of little relevance to the actual rendering process. All the more important is the next integer containing the offset into the vis lump.
It defines the start of the raw PVS data for this leaf. If this value equals -1, no VIS lists are available for this leaf, usually because the map has been built without the VIS compiler. The next two 16bit integer triples span the bounding box of this leaf. Furthermore, the struct contains an index pointing into the array of marksurfaces loaded from the marksurfaces lump, as well as the number of consecutive marksurfaces belonging to this leaf. The marksurfaces are looped through during the rendering process and point to the actual faces. The final 4 bytes somehow specify the volume of the ambient sounds. The Marksurfaces Lump (LUMP_MARKSURFACES) The marksurfaces lump is a simple array of short integers. typedef uint16_t BSPMARKSURFACE; This lump is a simple table for redirecting the marksurface indexes in the leaves to the actual face indexes. A leaf inserts its marksurface indexes into this array and gets the associated faces contained within this leaf. The Edges Lump (LUMP_EDGES) The edges are defined as an array of structs: typedef struct _BSPEDGE { uint16_t iVertex[2]; // Indices into vertex array } BSPEDGE; The edges delimit the face and further refer to the vertices of the face. Each edge points to the start and end vertex of the edge. The Surfedges Lump (LUMP_SURFEDGES) Another array of integers. typedef int32_t BSPSURFEDGE; This lump represents pretty much the same mechanism as the marksurfaces. A face can insert its surfedge indexes into this array to get the corresponding edges delimiting the face and further pointing to the vertices, which are required for rendering. The index can be positive or negative. If the value of the surfedge is positive, the first vertex of the edge is used as vertex for rendering the face; otherwise, the value is multiplied by -1 and the second vertex of the indexed edge is used. The Models Lump (LUMP_MODELS) Array of structs: #define MAX_MAP_HULLS 4 typedef struct _BSPMODEL { float nMins[3], nMaxs[3]; // Defines bounding box VECTOR3D vOrigin; // Coordinates to move the // coordinate system int32_t iHeadnodes[MAX_MAP_HULLS]; // Index into nodes array int32_t nVisLeafs; // ??? int32_t iFirstFace, nFaces; // Index and count into faces } BSPMODEL; A model is kind of a mini BSP tree. Its size is determined by the bounding box spanned by the first two members of this struct. The major difference between a model and the BSP tree holding the scene is that the models use a local coordinate system for their vertices and just state its origin in world coordinates. During rendering, the coordinate system is translated to the origin of the model (glTranslate()) and moved back after the model's BSP tree has been traversed. Furthermore, there are 4 indexes into node arrays. The first one has proved to index the root node of the mini BSP tree used for rendering. The other three indexes could probably be used for collision detection, meaning they point into the clipnodes, but I am not sure about this. The meaning of the next value is also somehow unclear to me. Finally, there are direct indexes into the faces array, not taking the redirection via the marksurfaces.
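To tie the lumps together, here is a minimal loading sketch. It is an illustration rather than code from the hlbsp project, it assumes the structs and LUMP_* constants defined above, and error handling is reduced to a bare minimum:

#include <stdio.h>
#include <stdlib.h>

/* Read one lump into a freshly allocated buffer, as hlbsp does with malloc(). */
static void* loadLump(FILE* f, const BSPHEADER* header, int lumpIndex)
{
    const BSPLUMP* lump = &header->lump[lumpIndex];
    void* data = malloc(lump->nLength);
    fseek(f, lump->nOffset, SEEK_SET);
    fread(data, lump->nLength, 1, f);
    return data;
}

/* Resolve the i-th vertex of a face, honoring the surfedge sign convention
 * described above: a positive surfedge selects the first vertex of the
 * indexed edge, a negative one selects the second vertex of edge -surfedge. */
static VECTOR3D faceVertex(const BSPFACE* face, uint32_t i,
                           const BSPSURFEDGE* surfedges, const BSPEDGE* edges,
                           const BSPVERTEX* vertices)
{
    BSPSURFEDGE se = surfedges[face->iFirstEdge + i];
    if (se >= 0)
        return vertices[edges[se].iVertex[0]];
    else
        return vertices[edges[-se].iVertex[1]];
}

int main(int argc, char** argv)
{
    FILE* f = fopen(argv[1], "rb");
    if (!f)
        return 1;
    BSPHEADER header;
    fread(&header, sizeof(header), 1, f);
    if (header.nVersion != 30)
        return 1; /* Not a valid HL BSP file */
    BSPVERTEX*   vertices  = loadLump(f, &header, LUMP_VERTICES);
    BSPEDGE*     edges     = loadLump(f, &header, LUMP_EDGES);
    BSPSURFEDGE* surfedges = loadLump(f, &header, LUMP_SURFEDGES);
    BSPFACE*     faces     = loadLump(f, &header, LUMP_FACES);
    /* ... traverse nodes/leaves and hand faceVertex() results to the renderer ... */
    fclose(f);
    return 0;
}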
http://hlbsp.sourceforge.net/index.php?content=bspdef
CC-MAIN-2018-22
refinedweb
2,701
55.07
Compact Attribute Declaration In Java Attributes in a class don't have to be declared in the verbose way: public class VerboseClass { private String firstName; private String lastName; private String description; } Attributes of the same type and visibility can be declared in a more compact manner: public class CompactClass { private String firstName,lastName,description; } This works in most cases - except when you want to annotate each attribute in a different way.... Even more compact: public class CompactClass {private String firstName,lastName,description;} What is the argument? Posted by Stefan Bley on April 09, 2010 at 03:01 PM CEST # @Stefan, There is no argument. In my C++ days, I used the compact style of declaration. In Java I tend to use the verbose one. For DTOs (if you really need them), the compact style would be more appropriate - they are bloated anyway... Posted by Adam Bien on April 09, 2010 at 03:47 PM CEST # Hi Adam, In my opinion the bloated stuff is not the attribute declaration, it's the JavaBean-style getter and setter declaration. I hope this will change to something like in C# in a next Java release. See my blog: Kind Regards, Simon Posted by Simon Martinelli on April 09, 2010 at 04:00 PM CEST # @Simon, you are very right, see: (the post is 4 years older than yours :-)). I only noticed that I tend not to use the compact declaration - and wrote the post to remember the other option. The compact style is not used in Java a lot - which is interesting... thanks, as always, for your comment!, adam Posted by Adam Bien on April 09, 2010 at 04:27 PM CEST # Why not address something easier to solve like the one true placement of braces or how to achieve peace in the middle east? So, Reductio ad absurdum: public class C{private String f,l,d;} See how much "bloat" I've saved! What is achieved? It's harder to maintain and grok in the first place. Are those your goals? Posted by Gene De Lisa on April 09, 2010 at 04:31 PM CEST # This compact style of declaration is widely known as bad style, isn't it? But lately I'm getting to like bad style in some scenarios. DTOs is one case. What about making the attributes even public to get access through simple dot notation, like verboseObject.firstName? This would also make the clients more compact. Posted by Nick Wiedenbrück on April 09, 2010 at 04:36 PM CEST # @Nick, why is it bad practice? Providing the same information (private String) is just redundant. The verbose style, however, is more readable. Btw. I would have nothing against public attributes in DTOs. It should be a conscious and consequent decision... thanks for your feedback!, adam Posted by Adam Bien on April 09, 2010 at 04:44 PM CEST # Could be worthwhile to mention the immutable, with public fields, value object as well here, as the getter/setter pattern annoyances are mentioned. Immutable is nice too. public class CompactClass { public final String firstName; ... public CompactClass(String firstName, ...) { this.firstName = firstName; ... } } Posted by Mattias Bergander on April 09, 2010 at 04:45 PM CEST # @Gene, I only removed the redundant information - you removed actually interesting information :-). Why keep redundant information in the source? One reason would be readability - but that is a matter of taste. thanks!, adam Posted by Adam Bien on April 09, 2010 at 04:47 PM CEST # @Matthias, good stuff - will write a dedicated post about that. 
Thanks for your suggestion!, adam Posted by Adam Bien on April 09, 2010 at 04:56 PM CEST # For a compact style to work, javadoc needs to be fixed. As it stands now, if I do the declaration: /** debug levels */ int ERROR=0, INFO=2, DEBUG=4; javadoc will generate a description for "int ERROR=0" and all others will be empty. Posted by Igor on April 09, 2010 at 08:46 PM CEST # And I completely agree that the setters/getters bloat needs to be fixed, but I do not think it should go the C# way. Much more efficient would be a declaration like this: @property private String name; for which the compiler would automatically generate a public get/set duple. For the quite rare cases when read-only or write-only or special get/set is required, people would just use the current way things are done. Posted by Igor on April 09, 2010 at 08:55 PM CEST # @Igor: You described an enum, which can perfectly be described with JavaDoc ;-) Posted by Marcus on April 10, 2010 at 12:52 AM CEST # Declaring the fields on multiple lines in multiple declarations can also aid version control and merging - fewer collisions on the same line? Posted by James Barrow on April 11, 2010 at 11:47 PM CEST # Code convention is dead? "One declaration per line is recommended since it encourages commenting" Posted by Joel Lobo on April 19, 2010 at 10:34 PM CEST # @Joel, right. I also prefer the verbose syntax - and forgot about the compact one :-). Regarding JavaDoc - I would only comment what really provides added value - otherwise it could even be a smell: thanks for your feedback!, adam Posted by Adam Bien on April 22, 2010 at 02:41 PM CEST # @Matthias, I did some experimentation with this, but be careful: You will start your development using: x = myObject.publicFinalField; Then you may want to serialize myObject to XML using JAXB for example. And then you realize that JAXB is not compatible with final fields ... You go back to setters and getters and you have to change your code all around. A great thing about Scala is that it would not be a problem, since they have Uniform Access: Bruno Posted by Bruno on August 26, 2010 at 01:07 PM CEST #
https://adambien.blog/roller/abien/entry/compact_attribute_declaration_in_java
CC-MAIN-2020-29
refinedweb
969
70.33
The javax.xml.xpath, javax.xml.validation, javax.xml.datatype, and javax.xml.namespace components of JAXP: these packages contain the APIs that give applications a consistent way to obtain instances of XML processing implementations. The javax.xml.xpath package supports the standard XPath API. The information in this section pertains to the Xerces technology. The latest updates can be found at. For information on known bugs and recent fixes in the latest Apache version, see. This section discusses known schema processing bugs, limitations, and implementation-dependent operations. This section covers known issues that arise when migrating from earlier versions of JAXP. JAXP 1.2 is built into JWSDP and J2EE 1.4. These sections of the Compatibility Guide cover the relevant migration issues: JAXP 1.1 is built into J2EE 1.3 and J2SE 1.4. For differences in functionality from JAXP 1.1, see the JAXP Compatibility Guide. This section contains implementation notes for DOM Level 3 Core and DOM Level 3 Load and Save. Not implemented: Supported parameters: Not supported: The XSLTC transformer generates a transformation engine, or translet, from an XSL stylesheet. This approach separates the interpretation of stylesheet instructions from their runtime application to XML data. XSLTC works by compiling a stylesheet into Java byte code (translets), which can then be used to perform XSLT transformations. This approach greatly improves the performance of XSLT transformations where a given stylesheet is compiled once and used many times. It also generates an extremely lightweight translet, because only the XSLT instructions that are actually used by the stylesheet are included. Note: XSLT is supported by the JAXP transform package. See javax.xml.transform for details on accessing basic XSLT functionality in an implementation-independent manner. A problem can occur when using a custom class loader with a transformation factory. Transformation factories in JAXP always prefer the use of the "context class loader" to the use of the "system class loader". Thus, if an application uses a custom class loader, it may need to set the custom class loader as the context class loader for the transformation factory to use it; otherwise, a FactoryConfigurationError is thrown. Setting a custom class loader on the current thread can be done as follows:
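A minimal illustration of that idiom (a sketch, since the exact snippet is not reproduced in this copy of the notes):

// Make the custom class loader the context class loader, so that
// TransformerFactory.newInstance() can locate the implementation.
ClassLoader myLoader = ...; // the application's custom class loader
Thread.currentThread().setContextClassLoader(myLoader);
javax.xml.transform.TransformerFactory factory =
        javax.xml.transform.TransformerFactory.newInstance();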
http://java.sun.com/j2se/1.5.0/docs/guide/xml/jaxp/ReleaseNotes_150.html
crawl-002
refinedweb
366
50.02
Revise the coverage mapping format to reduce binary size by: This shrinks the size of llc's coverage segment by 82% (334MB -> 62MB) and speeds up end-to-end single-threaded report generation by 10%. For reference the compressed name data in llc is 81MB (__llvm_prf_names). Rationale for changes to the format: See CoverageMappingFormat.rst for the details on what exactly has changed. Fixes PR34533 [2], hopefully. [1] [2]

I'd probably skip all these & let it be a plain struct (& use {} init to construct it, as good as the ctor provided I think). Declaring things from other files without a common header's not ideal. (makes it easy to violate ODR - accidentally change one declaration without the other, etc)

Declare DoInstrProfNameCompression in a shared header, address remaining feedback and rebase.

I remember the direction from the BoF back in 2017. I'm glad you were able to find the time to work on this and that it ultimately worked out. Thanks so much! I thought maybe you would need to do more to find .lcovfun on Windows, but I saw this comment:

/* Do *NOT* merge .lprfn and .lcovmap into .rdata. llvm-cov must be able to find
 * them after the fact. */

So, I guess it will work as is.

Would it help review to split the NFC portion of the change in the coverage::accessors namespace into its own patch?

Gentle ping. I'm not in a rush to land this, but it'd help to have some feedback about whether/how this patch should be split up.

I'm not listed as a reviewer, but it looks good to me. I think this needs to be namespaced somehow, something like __covrec_0afeec...u. The meaning is more obvious, and there is an (admittedly small) risk of colliding with other random 64-bit hex symbols without a double-underscore prefix. I guess we really do want llvm.used, we really want these records to be retained in the binary by the linker even though nothing references them. I guess 6 was just stale/wrong. This section of the document isn't word wrapped to 80 cols, but the rest is. Do you need to highlight "dummy" records? I thought all function records are linkonce_odr, so that any duplicate records can be deduplicated, not just the dummy ones for unused inline functions. Looks good as far as I can tell. I think it can be simplified by asserting that zlib::compress doesn't fail, though. missing 'the' before "function name's"? s/consider/considered/ It seems the only way this can fail is if zlib::compress fails, which should not be possible except if it runs out of memory, which we generally don't diagnose. I think we could assert that zlib::compress succeeds, simplify the interface of this function, and remove the need for the new err_code_coverage_mapping_failed diag, as well as the compression_failed enumerator.

Address review feedback.

Yes, this ensures that the records can be read back when generating a report. The record names can be stripped out post-link to reduce bloat in the linkedit section. I think it's helpful to highlight "dummy" records, as these motivate the linkonce_odr idea. I'll point out that all function records are linkonce_odr. Thanks, sgtm. llvm::report_bad_alloc_error? I think it looks good, but I'd wait for @hans too. BTW, after this we should remove -limited-coverage-experimental:

While working on CodeView, I think we came up with a good code pattern that could be used here to make this easier to read and less error prone. Check out llvm::codeview::consume: Ignore the use of BinaryStreamReader, that exists to abstract over discontiguous inputs, and isn't relevant to this code here.
You could make a helper that looks like this and use it in various places to avoid all the reinterpret_casts:

template <typename T>
bool consumeObject(StringRef &Buf, const T *&Obj) {
  if (Buf.size() < sizeof(T))
    return false;
  Obj = reinterpret_cast<const T *>(Buf.data());
  Buf = Buf.drop_front(sizeof(T));
  return true;
}

And then instead of dealing in pointers and pointer arithmetic, you slice up StringRef buffers into sub buffers. It becomes much harder to create array OOB bugs. This is just a suggestion for future work, it seems out of scope for this patch.

I think the shorter pattern for this is cantFail(zlib::compress(...)). lgtm

Thanks for the suggestion, I'll keep this in mind for future refactors. Both sound reasonable, I'll pick the shorter spelling.

cantFail and report_bad_alloc_error have substantially different semantics. In particular, for non-Asserts builds, cantFail is a no-op, which doesn't seem appropriate for an API that fails on out-of-memory.

I reverted this due to failures on Windows that I did not encounter in local testing. I suspect that there's an error in the coverage parsing logic, as the same binary coverage data parses successfully locally, but not on some armv7 bots. I see, I'll revise this to use report_bad_alloc_error in the next update.

Sorry for the delay here. An update: I tested this patch with a sanitized build on a Linux install, but could not reproduce the "malformed coverage data" errors seen on the Windows bots. At this point, it seems likely that there is a bug in CoverageMappingReader (this would explain why the .covmapping testing bundles fail to parse), and it may be that there is a separate issue on the producer side (this could explain why the 'instrprof-merging.cpp' test from check-profile failed). I think I need a Windows set up to debug further, and will try to find one. Meanwhile, if anyone has the bandwidth to attempt to reproduce on Windows, and to share a backtrace from where the CoverageMapError constructor is called, I would greatly appreciate it.

I have a Windows build directory and am motivated to debug this. I'll try to do it tomorrow, but I have a couple of other deadlines so I can't make a very firm promise. I haven't forgotten about this, even though it's been two months. Hopefully I get to it soon.

Rebase. Sorry this hasn't seen any update: at this point in our release cycle I've had to put this on the back burner.

Compiling with -fdump-record-layouts revealed the problem:

*** Dumping AST Record Layout
   0 | struct llvm::coverage::CovMapFunctionRecordV3
   0 |   struct llvm::coverage::accessors::FuncHashAndDataSize<struct llvm::coverage::CovMapFunctionRecordV3> (base) (empty)
   1 |   struct llvm::coverage::accessors::HashedNameRef<struct llvm::coverage::CovMapFunctionRecordV3> (base) (empty)
   1 |   const int64_t NameRef
   9 |   const uint32_t DataSize
  13 |   const uint64_t FuncHash
  21 |   const uint64_t FilenamesRef
  29 |   const char CoverageMapping
     | [sizeof=30, align=1,
     |  nvsize=30, nvalign=1]

Everything is off-by-one because the empty bases are not zero sized. The MSVC record layout algorithm is just different in this area. =/ So, I think this patch would be fine if you refactor it to avoid the accessor classes. I took a stab at it, but it's not straightforward.

Another option is to chain the accessor classes:

template <class T, class Base = void> struct MyAccessor1 : Base { /* ... */ };
template <class T> struct MyAccessor1<T, void> { /* ... */ };
template <class T, class Base = void> struct MyAccessor2 : Base { /* ... */ };
template <class T> struct MyAccessor2<T, void> { /* ... */ };

Then you can:

struct MyFormat : MyAccessor1<MyFormat, MyAccessor2<MyFormat>> { /* ... */ };

In D69471#1884043, @dexonsmith wrote:

Yes, we can use it. Clang supports it too. However, you might want this class to follow the rules for a standard layout type: " ... has .... at most one base class with non-static data members, or has no base classes with non-static data members." So the use of multiple inheritance makes a type non-standard layout. Never mind that MSVC accepts the following program:

#include <type_traits>
struct EmptyA {};
struct EmptyB {};
struct Foo : EmptyA, EmptyB { int x, y; };
static_assert(std::is_standard_layout<Foo>::value, "asdf");

Why not use free function templates like these?

template <typename T> uint64_t getFoo(T *p) { return p->Foo; }
template <typename T> uint64_t getBar(T *p) { return p->Bar; }

These accessors don't seem like they have to be methods. It actually makes them a bit more awkward to use:

uint32_t DataSize = CFR->template getDataSize<Endian>();

@rnk Thanks for chasing this down. I'll update the function record structs to use free functions instead of multiple inheritance. I don't plan on getting rid of the awkward method calls at this point. The coverage reader is still templated by CovMapFunctionRecordX via CovMapTraits, so we'd need to untangle that first.

Get rid of multiple inheritance in the coverage::accessors namespace. This is ninja check-{llvm, clang, profile} clean on macOS. I'll wait on this until next week and re-land it unless any objections / new issues pop up.

Sounds like a plan, thanks!
https://reviews.llvm.org/D69471?id=232172
CC-MAIN-2020-50
refinedweb
1,455
64.81
Thanks, Mattia

~Sean

Unless the insertion order is important, you could use a set -- where a second insertion will have no effect.

>>> s = set()
>>> s.add(1)
>>> s.add(2)
>>> s.add(1)
>>> print s
set([1, 2])

Gary Herron

How about using a set instead?

>>> a = {1, 2, 3}
>>> a
{1, 2, 3}
>>> a |= {4}
>>> a
{1, 2, 3, 4}
>>> a |= {4}
>>> a
{1, 2, 3, 4}
>>> _

Cheers, - Alf

Ok, so you all suggest to use a set. Now the second question, more interesting. Why can't I insert a list into a set? I mean, I have a function that returns a list. I call this function several times and maybe the list returned is the same as another one already returned. I usually put all these lists into another list. How can I assure that my list contains only unique lists? Using a set doesn't work (i.e. the python interpreter tells me: TypeError: unhashable type: 'list')...

Sets can contain *only* hashable objects, but lists are not hashable (since they are mutable). Perhaps, you could convert your list to a tuple first -- tuples *are* hashable.

>>> s = set()
>>> l = [1,2,3]
>>> s.add(l)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
>>> s.add(tuple(l))
>>> s
set([(1, 2, 3)])

> Using a set doesn't work (i.e. the python interpreter tells me:
> TypeError: unhashable type: 'list')...

Convert the lists to tuples before adding them. Tuples are hashable.

/W -- INVALID? DE!

You could also define a custom object that manages a custom ordered set:

class unique_set(object):
    def __init__(self, list):
        self.list = list
    def __add__(self, val):
        if val not in self.list:
            self.list.append(val)
            return True
        return False

>>> unique_list = unique_set(['a','b','c'])
>>> unique_list.list
['a', 'b', 'c']
>>> unique_list + 'd'
True
>>> unique_list + 'e'
True
>>> unique_list + 'd'
False
>>> unique_list.list
['a', 'b', 'c', 'd', 'e']
>>>

I've used this on a few projects, it makes for wonderfully clean code, because you can look at your program as an overview without all the arithmetic behind it. hope it helps

> You could also define a custom object that manages a custom ordered set [...]
> I've used this on a few projects, it makes for wonderfully clean code,
> because you can look at your program as an overview without all the
> arithmetic behind it.

Which is all very good, but beware that your "ordered set" implementation is O(N) for insertions, which may be slow once the set grows large enough. Also, I'm not sure I like your abuse of the + operator to modify the object in place and return a flag. It is an API not shared by (as far as I can see) any other data type in Python.

-- Steven

Could probably just abuse an odict as cleanly. The other option that leaps to mind is to use a bloom filter and a list.

Geremy Condra

I agree it could be more consistent with other object APIs, I also think that if every API has to conform to every other API nothing will ever get done. Here's a slightly more familiar version, it returns the value added or None to conform with other APIs.

class unique_set(object):
    def __init__(self, list):
        self.list = list
    def __add__(self, val):
        if val not in self.list:
            self.list.append(val)
            return val
        return None

>>> unique_list = unique_set(['a','b','c'])
>>> unique_list.list
['a', 'b', 'c']
>>> unique_list + 'd'
'd'

then a test operand to verify if a value was inserted could be

if unique_list + 'd':
    # done some stuff

but it could also be used in cases like this

unique_added = unique_list + 'd' or 'not added'

this makes the set type hashable.

class Set(set):
    __hash__ = lambda self: id(self)

s = Set()
s.add(1)
s.add(2)
s.add(1)
print s
set([1, 2])
d = {}
d[s] = 'voila'
print d
{Set([1,2]): 'voila'}
print d[s]
'voila'

although it's not what you've asked about. it's interesting to make a set hashable using __hash__.

Or you could just use frozenset and get the correct semantics:

Cheers, Chris --

but if every object has its own distinct API, we will never finish reading the docs (assuming they do exist for each object).

> this makes the set type hashable.
>
> class Set(set):
>     __hash__ = lambda self: id(self)

That's a *seriously* broken hash function.

>>> d = { Set(key): 1 }
>>> d
{Set(['i', 'a', 'l', 'o', 'v']): 1}
>>> d[ Set(key) ]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: Set(['i', 'a', 'l', 'o', 'v'])

-- Steven

of course it is broken as long as it uses its instance id. i added this to notify that unhashable can become hashable by implementing __hash__ inside the class, which is probably set to None by default.

Ok, nice example, but I believe that using id() as the hash function can lead to unexpected collisions. For dict and set to work correctly, the hash function must conform to the contract that: if A == B then hash(A) == hash(B). If the id() of two objects differ but their contents are equal (i.e. they are two equivalent, but distinct objects), they should have the same hash. If id() is used for the hash of an arbitrary object, the contract will be broken unless you define A == B in terms of id(A) == id(B).

No, you have that backwards. Using id() as the hash function means that equal keys will hash unequal -- rather than unexpected collisions, it leads to unexpected failure-to-collide-when-it-should-collide. And it isn't a "nice example", it is a terrible example. Firstly, the example fails to behave correctly. It simply doesn't work as advertised. Secondly, and much worse, it encourages people to do something dangerous without thinking about the consequences. If it is so easy to hash mutable objects, why don't built-in lists and sets have a __hash__ method? Do you think that the Python developers merely forgot? No, there is good reason why mutable items shouldn't be used as keys in dicts or in sets, and this example simply papers over the reasons why and gives the impression that using mutable objects as keys is easy and safe when it is neither. Using mutable objects as keys or set elements leads to surprising, unexpected behaviour and hard-to-find bugs. Consider the following set with lists as elements:

L = [1, 2]
s = Set()  # set that allows mutable elements
s.add(L)
s.add([1, 2, 3])

So far so good. But what should happen now?

L.append(3)

The set now has two equal elements, which breaks the set invariant that it has no duplicates. Putting the problem of duplicates aside, there is another problem:

L = [1, 2]
s = Set([L])
L.append(3)

There are two possibilities: either the hash function of L changes when the object changes, or it doesn't. Suppose it changes. Then the hash of L after the append will be different from the hash of L before the append, and so membership testing (L in s) will fail. Okay, suppose we somehow arrange matters so that the hash of the object doesn't change as the object mutates. This will solve the problem above: we can mutate L as often as we like, and L in s will continue to work correctly. But now the hash of an object doesn't just depend on its value, but on its history. That means that two objects which are identical can hash differently, and we've already seen this is a problem. There is one final approach which could work: we give the object a constant hash function, so that all objects of that type hash identically. This means that the performance of searches and lookups in sets and dicts will fall to that of lists. There is no point in paying all the extra overhead of a dict to get behaviour as slow, or slower, than a list. In other words, no matter what you do, using mutable objects as keys or set elements leads to serious problems that need to be dealt with. It simply isn't true that all you need to do to make mutable objects usable in dicts or sets is to add a hash function. That part is trivial.

-- Steven

I agree with you, and in fact I'm inserting tuples in my set. All the workarounds to use somehow-mutable objects are poor attempts to solve in a quick-and-dirty way a difficult problem like hashing. But I think that during the discussion we have slowly forgotten the main topic of my question, which was to insert unique objects into a container. Hashes are good to retrieve items in constant time, and when we are dealing with collisions we have to provide some solutions, like chaining or open addressing. Note also that in hashing, collisions happen, and the hash function is used to retrieve items, not to insert unique items.

>>> s1 = Set([1])
>>> s2 = Set([1])
>>> s1 == s2
True
>>> d = {}
>>> d[s1] = 3
>>> d[s2] = 5
>>> d
{Set([1]): 3, Set([1]): 5}
>>>

Equality is kept for comparison, but what is it worth to hash them by id? On the original problem: You could turn your lists into tuples. This would be clean and correct.

ciao - chris

-- Christian Tismer :^)
https://groups.google.com/g/comp.lang.python/c/gEkbm8L0VUc
CC-MAIN-2022-05
refinedweb
1,573
71.65
Set the value of the specified context property of type char.

#include <screen/screen.h>

int screen_set_context_property_cv(screen_context_t ctx, int pname, int len, const char *param)

ctx: The handle of the context whose property is being set. param: A pointer to an array of type char with a maximum length of len.

Function Type: Delayed Execution

This function sets the value of a context property from a user-provided buffer. No more than len bytes will be read from param.

Returns: 0 if the command to set the new property value(s) was queued, or -1 if an error occurred (errno is set; refer to /usr/include/errno.h for more details).
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.screen/topic/screen_set_context_property_cv.html
CC-MAIN-2018-43
refinedweb
102
64
I just had an interesting conversation with Kathryn Kleiman, who's involved in technology law and co-founded the "Domain Name Rights Coalition" in addition to a bunch of other cool stuff. The main area we don't see eye-to-eye on is the need for new top-level domains. As I understand her argument, we need to break the ".com" monopoly up and give everybody a chance to have a "short" name. In the current system, "mcdonalds.com" goes to the company with the golden arches and everybody else loses out. I understand the concern, I just don't see how TLDs solve the problem. Suppose I get "mcdonalds.consulting" in a brand new TLD. Even if McDonalds doesn't successfully sue me to take it away, in what sense am I better off than registering "mcdonalds-consulting.com"? If my web browser tries ".com" by default, then the difference is just a period instead of a dash. Introducing new TLDs is not going to deprecate the worth of ".com" as long as it is seen as the default. The ugly truth, as I see it, is that there really is only one namespace. Somebody is going to win and get the preferred name as long as it is available; chopping up the bits differently just rearranges where the money goes. All we can really do is ensure there are still (less-desirable) alternatives for everybody, and we don't need TLDs to do so. (One idea might be to auction off names every couple of years --- no automatic renewals. But that needs more thought.) It's also worth pointing out that even with 1000 TLDs, the cost of McDonald's registering in all of them is minuscule. The only solution seems to be setting up yet more rules for what you're allowed to register in each TLD, but I don't see that as a winning solution.
http://www.advogato.org/person/Grit/diary/5.html
CC-MAIN-2016-30
refinedweb
321
73.68
Im trying to create a program that has 2 options, view and compute. Right now im trying to figure how to turn my array where i will input my values into a function, so i can go in and out several times to store the values. I also want view to be a function where i can view the values several times. i've managed to get the computing part to work in my main, now i need to turn it into a function. Secondly how do i create a second function to view it? My code is a bit of a mess, please bear with me.

#include <stdio.h>

int printArray();

int main(void)
{
    char option;
    printf("Press V to view and C to input"); // My main menu
    scanf("%c", &option); // here it should store the input

    int arrayTable[10] = {0}; // turn this into function
    int iValue = 0;
    int i = 0;
    while (i < 10) {
        printf("Enter Measurement #%i (or 0): ", i+1);
        scanf("%d", &iValue); // input iValue
        if (!iValue) // if iValue is zero then exit loop without affecting array with this value
            break;
        else {
            arrayTable[i] = iValue; // if the value is non-zero store it in array and continue
            i++;
        }
    }
    printf("Your input: %d", printArray); // if i select V for view
    return 0;
}

int printArray() // here i should be the print function
{
    int i;
    for (i = 0; i < j; i++) {
        printf("%d", );
        p++;
    }
}

Alex, continuing from your last comment, to display a menu that will allow you to add values to your array, delete values from your array and view the array (along with the max, min and average of the values), you can do something similar to the following. Note: the command line isn't a windowed user interface, so your menu operations are more like a printed receipt of your transactions with it. (you can do nice text windows and stationary menus, but that generally requires a text library, such as ncurses, which is well beyond the scope of your question)

As explained in the comment, your basic approach is simply to create a loop that repeats continually. It will display your menu and allow you to enter your selection from a list, e.g.:

 ======== Program Menu =========

 V) View the Array.
 I) Insert New Value.
 D) Delete Existing Value.
 N) Display Minimum Value.
 X) Display Maximum Value.
 A) Display Average of Values.
 S) Display Sum of Values.

 Q) Quit.

 Selection:

After the user enters the selection, to make the comparison easier, the user's input is converted to lower-case. Also note that the input is read as a string using fgets (a line-oriented input function) which makes taking user input much easier than having to worry about whether the '\n' remains in the input buffer (stdin) just waiting to cause problems for your next input. (you can use the scanf family of functions, but YOU are the one responsible for accounting for each character entered by the user, and emptying the input buffer) Reading input with fgets will read up to and including the '\n', so there is no chance of the '\n' being left unread in stdin. Also note that fgets will read a string of characters, where you are only interested in the first. That is easily handled simply by referencing the first character in the buffer. (e.g. if you are reading user-input into a buffer called buf, you can simply use buf[0] to access the first character, or simply *buf for that matter) After the user input is read, the first character is passed to a switch statement, where each case of the statement is compared against the first character.

If the character matches a case, then the actions associated with that case are taken, ending with the break (you can read about fall-through processing if the break is omitted on your own). As mentioned in the comment, if you simply need to break out of one loop, then break is all you need. However here, your switch statement is inside the loop. A single break will only get you out of the switch but not the outside for loop. To break out of nested loops, use the goto statement. (you could also add more variables and set some type of exit flag, but why? This is what the goto was meant to do. there are some cases where a flag is equally good as well) In each case within the switch, you are free to call whatever code is needed to handle that menu selection. You will note we simply call short helper functions from within the switch to print the array, insert values, remove values, etc. You can put all the code in the switch if you like, it just rapidly becomes unreadable. Putting that altogether, you can do something like the following:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <ctype.h>

enum { MAXN = 64 }; /* constant - max numbers of vals & chars */

void showmenu ();
void prnarray (int *a, int n);
int addvalue (int *a, int n, int newval);
int delvalue (int *a, int n, int index);
int minvalue (int *a, int n);
int maxvalue (int *a, int n);
int sumvalues (int *a, int n);

int main (void) {

    int vals[MAXN] = { 21, 18, 32, 3, 9, 6, 16 }, /* the array */
        n = 7;

    for (;;) { /* loop until user quits or cancels, e.g. ctrl+d */
        showmenu(); /* show the menu */
        char buf[MAXN] = "";
        fgets (buf, MAXN, stdin); /* read user input */
        /* convert to lower-case for comparison of all entries */
        switch (tolower (buf[0])) { /* 1st char is entry */
            case 'v' : prnarray (vals, n);
                break;
            case 'i' : printf ("\n enter the new value: ");
                if (fgets (buf, MAXN, stdin) && isdigit (buf[0])) {
                    n = addvalue (vals, n, atoi (buf));
                }
                break;
            case 'd' : printf ("\n enter the index to delete: ");
                if (fgets (buf, MAXN, stdin) && isdigit (buf[0])) {
                    n = delvalue (vals, n, atoi (buf));
                }
                break;
            case 'n' : printf ("\n Mininum of '%d' values is : %d\n", n,
                                minvalue (vals, n));
                break;
            case 'x' : printf ("\n Maxinum of '%d' values is : %d\n", n,
                                maxvalue (vals, n));
                break;
            case 'a' : printf ("\n Average of '%d' values is : %.2lf\n", n,
                                (double)sumvalues (vals, n)/n);
                break;
            case 's' : printf ("\n Sum of '%d' values is : %d\n", n,
                                sumvalues (vals, n));
                break;
            case 'q' : printf (" that's all folks...\n");
                goto done;
            default :
                if (!buf[0]) { /* check for manual EOF */
                    putchar ('\n'); /* tidy up */
                    goto done;
                }
                fprintf (stderr, "error: invalid selection.\n");
        }
    }
    done:; /* goto label - breaking 'for' and 'switch' */

    return 0;
}

void showmenu ()
{
    fprintf (stderr, "\n ======== Program Menu =========\n\n"
                     " V) View the Array.\n"
                     " I) Insert New Value.\n"
                     " D) Delete Existing Value.\n"
                     " N) Display Minimum Value.\n"
                     " X) Display Maximum Value.\n"
                     " A) Display Average of Values.\n"
                     " S) Display Sum of Values.\n"
                     "\n"
                     " Q) Quit.\n"
                     "\n"
                     " Selection: ");
}

void prnarray (int *a, int n)
{
    int i;
    printf ("\n there are '%d' values in the array:\n\n", n);
    for (i = 0; i < n; i++)
        printf (" array[%2d] : %d\n", i, a[i]);
}

int addvalue (int *a, int n, int newval)
{
    if (n == MAXN) {
        fprintf (stderr, "error: all '%d' values filled.\n", n);
        return n;
    }
    a[n++] = newval;
    return n;
}

int delvalue (int *a, int n, int index)
{
    if (index < 0 || index > n - 1) {
        fprintf (stderr, "error: index out of range.\n");
        return n;
    }
    int i;
    for (i = index + 1; i < n; i++)
        a[i-1] = a[i];
    a[i-1] = 0;
    return --n;
}

int minvalue (int *a, int n)
{
    int i, min = INT_MAX;
    for (i = 0; i < n; i++)
        if (a[i] < min)
            min = a[i];
    return min;
}

int maxvalue (int *a, int n)
{
    int i, max = INT_MIN;
    for (i = 0; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}

int sumvalues (int *a, int n)
{
    int i, sum = 0;
    for (i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

(note: there are always additional validation checks you can add to test whether you have read all the input the user provided, etc. But given the crux of your question I'll leave that learning to you)

Example Use/Output

$ ./bin/menusimple

 ======== Program Menu =========

 V) View the Array.
 I) Insert New Value.
 D) Delete Existing Value.
 N) Display Minimum Value.
 X) Display Maximum Value.
 A) Display Average of Values.
 S) Display Sum of Values.

 Q) Quit.

 Selection: v

 there are '7' values in the array:

 array[ 0] : 21
 array[ 1] : 18
 array[ 2] : 32
 array[ 3] : 3
 array[ 4] : 9
 array[ 5] : 6
 array[ 6] : 16

 ======== Program Menu =========

 V) View the Array.
 I) Insert New Value.
 D) Delete Existing Value.
 N) Display Minimum Value.
 X) Display Maximum Value.
 A) Display Average of Values.
 S) Display Sum of Values.

 Q) Quit.

 Selection: i

 enter the new value: 77

 ======== Program Menu =========

 V) View the Array.
 I) Insert New Value.
 D) Delete Existing Value.
 N) Display Minimum Value.
 X) Display Maximum Value.
 A) Display Average of Values.
 S) Display Sum of Values.

 Q) Quit.

 Selection: v

 there are '8' values in the array:

 array[ 0] : 21
 array[ 1] : 18
 array[ 2] : 32
 array[ 3] : 3
 array[ 4] : 9
 array[ 5] : 6
 array[ 6] : 16
 array[ 7] : 77

Look things over and let me know if you have any questions. Also, as you are just learning, make sure you are compiling your code with compiler warnings enabled and that you don't consider your code reliable until it compiles without warning. That means you should be compiling with at least the -Wall -Wextra flags set. If you are using gcc and the command line, then it would be:

gcc -Wall -Wextra -O2 -o simplemenu simplemenu.c

to compile the code in simplemenu.c into an executable named simplemenu with the -O2 optimizations applied. If you are really wanting to eliminate all warnings, add -pedantic as well. For Code::Blocks or another IDE, look through the compiler menu options, they all provide a place to input all of the options you would like. Good luck with your code.
https://codedump.io/share/pFigp9ZqWNfw/1/turning-my-array-into-a-function-and-making-2nd-function-to-print-it
CC-MAIN-2017-09
refinedweb
1,664
69.72
Everyone... Post on how much you love Java!!! This is the place to be. JAVA JAVA JAVA JAVA JAVA!!!!

Java is awesomly. Is that even a word?:confused:

I'm cool ----> :cool:

None of the above. It's sort of okay

I have a job... Working at K-Mart is teh best job anyone could Have!!!!!!!! Go Java!!! Kmart all the way! Getting paid for doing nothing.

AAAAAHHHHHH!!!!! Another useless thread!!! Someone help!!!!! *looks over shoulder awaiting a mod*

I don't hate or like Java.... but I do hate pointless threads like this one. We've had enough of these conversations over the past few weeks, if you really want to know peoples opinions, do a search for an old thread. Or better still, go post on a Java board.

i dont like java for a few unselfish and selfish reasons. 1. i SERIOUSLY dont like programming in a language named after a drink. as it makes me thirst and i never get anything done, AND i always get a weird uncomfortable feeling (its strange) 2. its too much like a little edited version of C++ with some of its own functions and putting things together with a . ! 3. its way too friggin simplified. it takes only a week or two to learn basically the entire language and its too simple. and makes some good games, like the ski game on motionplayground. but C++ made Half life, and thats why im sticking with C++ 4. barely any challenge. to make a menu its just this (see the code below) and in c++ its much more complicated. menu items... TOO EASY!

Code:

import java.awt.*;
import java.applet.*;

public class frames extends Applet {
    public void init() {
        Frame f = new Frame("Simple Window");
        f.add("Center", new Label("Look at the custom smilies menu for referance!", Label.CENTER));
        f.setSize(this.getSize().width, this.getSize().height);
        f.setLocation(320, 240);
        MenuBar menu = new MenuBar();
        this.makeFileMenu(menu);
        this.makeEditMenu(menu);
        this.makeOtherMenu(menu);
        f.setMenuBar(menu);
        f.show();
    }
    private void makeEditMenu(MenuBar mb) {
        Menu editMenu = new Menu("Edit");
        editMenu.add("Undo");
        editMenu.addSeparator();
        editMenu.add("Cut");
        editMenu.add("Copy");
        editMenu.add("Paste");
        editMenu.add("Clear");
        mb.add(editMenu);
    }
    private void makeFileMenu(MenuBar mb) {
        Menu fileMenu = new Menu("File");
        fileMenu.add("New");
        fileMenu.add("Open...");
        fileMenu.addSeparator();
        fileMenu.add("Close");
        fileMenu.add("Save");
        fileMenu.add("Save As...");
        fileMenu.addSeparator();
        fileMenu.add("Page Setup...");
        fileMenu.add("Print");
        fileMenu.addSeparator();
        fileMenu.add("Quit");
        mb.add(fileMenu);
    }
    private void makeOtherMenu(MenuBar mb) {
        Menu smily = new Menu("Smilies");
        smily.add(":) Happy");
        smily.add(":( Sad");
        smily.add(";) Wink");
        smily.add(";( Evil Wink");
        smily.add(">:( Mad");
        smily.add(">:) Evil");
        smily.add("-_-' Sorry/Awwww Crap!");
        smily.add("O-0 Glasses");
        smily.add("0o0 OMG!");
        smily.add("^-^ Happy2");
        smily.add("^_0 Wink2");
        smily.add(":-D Big Grin");
        smily.add(":---- Long face");
        smily.add(">:-D Twisted");
        mb.add(smily);
    }
}

(this was compiled and proved to run on MSVJ++ 5.) who names their company "sun" anyways? those are my reasons. good or not. i dont really like java. but im forced to learn it in InfoTech 11. *edit* sorry, hehehe, wrong code, the old one was for animating, and it never worked

i hate java with a deep fiery passion, why do i want to program in a language that was originally designed to control a toaster?

Quote: why do i want to program in a language that was originally designed to control a toaster?

This true? :p

Yes, Java was originally designed for small appliances. Check out the history of Java and java.sun.com

thats true too! java was once a language that was going to be used in microchips and used for toasters and other things. but the sad truth was that it was too unstable.

java sucks ass and theres no way that they can turn around now without those old fogies at Sun industries going out of business for the rest of their lives. they tried to make it good. but they goofed. idiots at sun...hehehe. why the **** would you make the language harder as you improve it! this.hide(); is now implemented and recommended to be this: this.setVisible(false); they made it so you hafta type more! at least the people who made HTML had some sense. they made the tags OH so much shorter!

Java > *

The name java is a ploy. Sun wanted to make the "hot" new programming language that would be the "next big thing." Basically, it's a watered down version of c++, but because of the good marketing techniques Sun used to sell Java, it's become huge. A lot of colleges are switching over to Java, because it's getting tough to teach something as complex as c++ in such a short period of time. Harvard is even trying to make the switch to java, because students are having a tough time with c++, and too many are failing/going below expectations. Java has a lot of benefits, like virtual machines to compile on multiple o.s.'. Personally I think Java/C# are going to continue to grow in the future. It's still not fast enough (or should I say computers aren't fast enough), nor is it powerful enough to take over c++, and never will. Overall, Java's pretty damn good. I'm hoping to get a Java job in the future!

read my title <------- now u know my opinion
http://cboard.cprogramming.com/brief-history-cprogramming-com/29513-how-cool-java-printable-thread.html
CC-MAIN-2014-15
refinedweb
932
69.99
We are about to switch to a new forum software. Until then we have removed the registration on this forum. I have this problem, of which my ButtonOpenClose.java cannot get reference to CENTER and GROUP. I have tried to implements PConstants but it does not make my sketch working. What should I do? Answers Use import staticfor the class processing.core.PConstants: :ar! import static processing.core.PConstants.*; Also using implements processing.core.PConstantsworks as well! Does not working. I am using Processing 2.2.1 by the way. Try import processing.core.PConstants; Hei quark, your solution was in my first screenshot. It says that The function shapeMode(int) does not exist. You have PContantsit should be PConstants, you are missing an 's' and you don't need the static keyword so again it should be import processing.core.PConstants; Now that I've taken more time to pay attention, your problem got nothing to do w/ PConstants! =P~ You're trying to access sketch's "canvas" w/o its PApplet's reference! X_X Any classes accessing the "canvas" needs to request its owner's reference somehow. Usually right at their constructors and storing it in some field afterwards: "ForeignJavaClassNeedsCanvas.java" tab file: Testing "some.pde" tab file: If it still doesn't work then replace ALL 3 imports with import processing.core.*; which will make all classes and interfaces in processing.coreavailable to you. It is working! Thanks quark and GoToLoop for helping ^:)^ . I have been using Processing for a year and never experience this. Am I missing something? And why suddenly I need to put suffix f (example. 0.01f) in every float value in ForeignJavaClassNeedsCanvas.java but not int the .pde file? .or an ein them w/ an f. .5becomes .5f, 3.becomes 3.f, 1e3becomes 1e3f, etc. public. #becomes 0xff. colorbecomes int. Thanks man really helpful. One last question, why P2D is slower than the default renderer? When I run the P2D sketch, it goes to grey blank screen before showing the sktech. Is there anyway I can make my P2D sketch run faster? Java has 2 floating point number types floatand double In Java the value 5.0 is treated as a doubleto make it a floatwe need to add the suffix i.e. 5.0f In .pde tabs the Processing pre-processor adds the 'f' suffix for you. So the statement float x = 5.0; is OK in a ,pde tab but not OK in a .java tab Unfortunately the error message is not related to PConstants but everything else you write is about PConstants no wonder GoToLoop and I started down the wrong track. Worth reading, somewhat related to your issue too, but on Eclipse IDE: O:-) There is another thing why I think I never have this problem before. The class here is a .java file. If I want to make a Processing pre - compiled object, I can just make my class in .pde. Hence, I do not need to worry about importing Processing internal libraries, adding suffix f in float, and refering PApplet as class argument.
https://forum.processing.org/two/discussion/13034/
CC-MAIN-2021-21
refinedweb
517
69.79
Introduction: MPU 6050 Gyro, Accelerometer Communication With Arduino (Atmega328p)

The MPU6050 IMU has both a 3-axis accelerometer and a 3-axis gyroscope integrated on a single chip. The gyroscope measures rotational velocity, or the rate of change of the angular position over time, along the X, Y and Z axes. The outputs of the gyroscope are in degrees per second, so in order to get the angular position we just need to integrate the angular velocity. On the other hand, the MPU6050 accelerometer measures acceleration by measuring gravitational acceleration along the 3 axes, and using some trigonometry we can calculate the angle at which the sensor is positioned. So, if we fuse, or combine, the accelerometer and gyroscope data we can get very accurate information about the sensor orientation.

Step 1:

Step 2: Calculations.

For example, suppose after 2's complement we get an accelerometer X axis raw value of +15454. Then Ax = +15454/16384 = 0.94 g.

So we know we are running at a sensitivity of +/-2G and +/-250deg/s, but how do our values correspond to those accelerations/angles? These are both straight line graphs and we can work out from them that for 1G we will read 16384 and for 1 degree/sec we will read 131.07 (although the .07 will get ignored due to binary). These values were just worked out by drawing the straight line graph with 2G at 32767 and -2G at -32768, and 250/-250 at the same values. So now we know our sensitivity values (16384 and 131.07), we just need to subtract the offsets from our values and then divide by the sensitivity. This will work fine for the X and Y values, but as the Z was recorded at 1G and not 0, we will need to subtract 1G (16384) before we divide by our sensitivity.

Step 3: MPU6050-Atmega328p Connections - Just connect everything as given in the diagram. The connections are given as follows:

MPU6050 <======> Arduino Nano
VCC <======> 5v out pin
GND <======> Ground pin
SDA <======> A4 pin // serial data
SCL <======> A5 pin // serial clock

Pitch and Roll Calculation: Roll is the rotation around the x-axis and pitch is the rotation along the y-axis. The result is in radians. (convert to degrees by multiplying by 180 and dividing by pi)

Step 4: Codes and Explanations

#include <Wire.h>

//Acceleration data correction
AcXcal = -950;
AcYcal = -300;
AcZcal = 0;

//Temperature correction
tcal = -1600;

//Gyro correction
GyXcal = 480;
GyYcal = 170;
GyZcal = 210;

//read accelerometer data (the reads follow the same pattern as the
//temperature read below; the original listing is incomplete here)
AcX = Wire.read()<<8|Wire.read(); // 0x3B (ACCEL_XOUT_H) 0x3C (ACCEL_XOUT_L)
AcY = Wire.read()<<8|Wire.read(); // 0x3D (ACCEL_YOUT_H) 0x3E (ACCEL_YOUT_L)
AcZ = Wire.read()<<8|Wire.read(); // 0x3F (ACCEL_ZOUT_H) 0x40 (ACCEL_ZOUT_L)

//read temperature data
Tmp = Wire.read()<<8|Wire.read(); // 0x41 (TEMP_OUT_H) 0x42 (TEMP_OUT_L)

//read gyroscope data
GyX = Wire.read()<<8|Wire.read(); // 0x43 (GYRO_XOUT_H) 0x44 (GYRO_XOUT_L)
GyY = Wire.read()<<8|Wire.read(); // 0x45 (GYRO_YOUT_H) 0x46 (GYRO_YOUT_L)
GyZ = Wire.read()<<8|Wire.read(); // 0x47 (GYRO_ZOUT_H) 0x48 (GYRO_ZOUT_L)

//temperature calculation
tx = Tmp + tcal;
t = tx/340 + 36.53; //equation for temperature in degrees C from datasheet
tf = (t * 9/5) + 32; //fahrenheit

//pitch and roll from the accelerometer, in radians (the original
//listing is incomplete here; these are the standard formulas)
pitch = atan(AcX / sqrt((AcY * AcY) + (AcZ * AcZ)));
roll = atan(AcY / sqrt((AcX * AcX) + (AcZ * AcZ)));

//converting radians into degrees
pitch = pitch * (180.0/3.14);
roll = roll * (180.0/3.14);

-----------------------------------------------------------
Results:
-----------------------------------------------------------

Angle: Pitch = 88.89 Roll = -0.47 Accelerometer: X = 15974 Y = -440 Z = 312 Temperature in celsius = 29.38 fahrenheit = 84.88 Gyroscope: X = -111 Y = 341 Z = 211
Angle: Pitch = 89.41 Roll = -0.27 Accelerometer: X = 16102 Y = -380 Z = 172 Temperature in celsius = 29.42 fahrenheit = 84.96 Gyroscope: X = -115 Y = 373 Z = 228
Angle: Pitch = 89.28 Roll = -0.34 Accelerometer: X = 16058 Y = -400 Z = 204 Temperature in celsius = 29.42 fahrenheit = 84.96 Gyroscope: X = -98 Y = 354 Z = 224
Angle: Pitch = 88.83 Roll = -0.54 Accelerometer: X = 15978 Y = -460 Z = 320 Temperature in celsius = 29.33 fahrenheit = 84.79 Gyroscope: X = -124 Y = 376 Z = 207
Angle: Pitch = 89.21 Roll = -0.31 Accelerometer: X = 15978 Y = -392 Z = 228 Temperature in celsius = 29.42 fahrenheit = 84.96 Gyroscope: X = -121 Y = 364 Z = 189
Angle: Pitch = 89.00 Roll = -0.56 Accelerometer: X = 15890 Y = -464 Z = 260 Temperature in celsius = 29.38 fahrenheit = 84.88 Gyroscope: X = -111 Y = 361 Z = 221
Angle: Pitch = 88.67 Roll = -0.65 Accelerometer: X = 16018 Y = -492 Z = 360 Temperature in celsius = 29.38 fahrenheit = 84.88 Gyroscope: X = -130 Y = 340 Z = 216
Angle: Pitch = 88.53 Roll = -0.43 Accelerometer: X = 16110 Y = -428 Z = 432 Temperature in celsius = 29.42 fahrenheit = 84.96 Gyroscope: X = -92 Y = 380 Z = 217
Angle: Pitch = 88.85 Roll = -0.60 Accelerometer: X = 15930 Y = -476 Z = 304 Temperature in celsius = 29.47 fahrenheit = 85.05 Gyroscope: X = -102 Y = 374 Z = 219
Angle: Pitch = 88.87 Roll = -0.24 Accelerometer: X = 16222 Y = -372 Z = 344 Temperature in celsius = 29.52 fahrenheit = 85.13 Gyroscope: X = -96 Y = 351 Z = 226
Angle: Pitch = 89.05 Roll = -0.26 Accelerometer: X = 15970 Y = -376 Z = 284 Temperature in celsius = 29.38 fahrenheit = 84.88 Gyroscope: X = -105 Y = 367 Z = 201
Angle: Pitch = 89.13 Roll = -0.62 Accelerometer: X = 16034 Y = -484 Z = 200 Temperature in celsius = 29.52 fahrenheit = 85.13 Gyroscope: X = -110 Y = 391 Z = 207
Angle: Pitch = 88.98 Roll = -0.51 Accelerometer: X = 16178 Y = -452 Z = 280 Temperature in celsius = 29.47 fahrenheit = 85.05 Gyroscope: X = -117 Y = 379 Z = 221
Angle: Pitch = 89.27 Roll = -0.43 Accelerometer: X = 16066 Y = -428 Z = 192 Temperature in celsius = 29.42 fahrenheit = 84.96 Gyroscope: X = -101 Y = 359 Z = 208
Angle: Pitch = 89.31 Roll = -0.19 Accelerometer: X = 16150 Y = -356 Z = 212 Temperature in celsius = 29.52 fahrenheit = 85.13 Gyroscope: X = -115 Y = 361 Z = 189
Angle: Pitch = 88.76 Roll = -0.51 Accelerometer: X = 16026 Y = -452 Z = 348 Temperature in celsius = 29.42 fahrenheit = 84.96 Gyroscope: X = -139 Y = 368 Z = 192
Angle: Pitch = 88.57 Roll = -0.69 Accelerometer: X = 16086 Y = -504 Z = 388 Temperature in celsius = 29.33 fahrenheit = 84.79 Gyroscope: X = -118 Y = 352 Z = 214

Step 5: Understanding Tilt Angle

Accelerometer

The earth's gravity is a constant acceleration where the force is always pointing down to the centre of the Earth. When the accelerometer is parallel with the gravity, the measured acceleration will be 1G; when the accelerometer is perpendicular with the gravity, it will measure 0G. The tilt angle can be calculated from the measured acceleration by using this equation:

θ = sin⁻¹(Measured Acceleration / Gravity Acceleration)

Gyro

A gyro (a.k.a. rate sensor) is used to measure the angular velocity (ω). In order to get the tilt angle of a robot, we need to integrate the data from the gyro as shown in the equation below:

ω = dθ / dt, θ = ∫ ω dt

Gyro and Accelerometer Sensor Fusion

After studying the characteristics of both gyro and accelerometer, we know that they have their own strengths and weaknesses. The calculated tilt angle from the accelerometer data has a slow response time, while the integrated tilt angle from the gyro data is subject to drift over a period of time. In other words, we can say that the accelerometer data is useful for the long term while the gyro data is useful for the short term.
https://www.instructables.com/id/Accelerometer-MPU-6050-Communication-With-AVR-MCU/
CC-MAIN-2020-24
refinedweb
1,216
68.87
Error using "--generate" option to system-config-kickstart Bug Description Not sure if this is one error or several. Apologies if it should have been several! Wanting to save an existing configuration I ran "system- --generate myserver-ks.cfg" and got the following error: Traceback (most recent call last): File "/usr/share/ in ? useCliMode( File "/usr/share/ in useCliMode import profileSystem File "/usr/share/ import language_backend ImportError: No module named language_backend Looks like there is a dependency on system- to be available to Ubuntu. Simple hack, commented out the calls to language_backend: --- profileSystem.py 2005-04-09 17:15:36.499703768 +0100 +++ profileSystem. @@ -19,14 +19,14 @@ import string import sys import os -# sys.path. -# import language_backend +sys.path. +import language_backend import mouse class ProfileSystem: def __init__(self, kickstartData): - # self.languageBa + self.languageBa self.mouse = mouse() @@ -42,7 +42,7 @@ def getLang(self): - # default, langs = self.languageBa + default, langs = self.languageBa Re-running the generate command I get a different error: Traceback (most recent call last): File "/usr/share/ in ? useCliMode( File "/usr/share/ in useCliMode profileSystem = profileSystem. File "/usr/share/ self.mouse = mouse() TypeError: 'module' object is not callable I am using version 2.5.20-0ubuntu10 of system- version 2.4.1. I get the same results, BUT: I don't really understand why you think --generate is a valid option. man system- As I understand it, you just have to use Save file in File menu to select the name of the configuration file. Now, it is possible that there is really undocumented option. Is it, or have you some documentation showing this option? I also get this language_backend error when executing system- According to system- --generate <filename> Generate a kickstart file from the current machine and write" The GUI does not seem to generate anything from the current configuration so this option is quite useful, if not essential. "I also hit this bug -- it makes the whole package more or less unusable"... sounds Confirmed to me. is this fixed on feisty? something new to this bug? has anyone a functionally kickstart.cfg for me??? Sorry for the delay in getting this fixed. I've just uploaded a fix to Intrepid. This bug was fixed in the package system- --------------- system- * Fix --generate option for Ubuntu (LP: #15156). - Read language information from /etc/default/locale and / - Read keyboard information from /etc/default/ - Don't read mouse configuration from anywhere; we autodetect in nearly all cases anyway. - Read timezone information from /etc/timezone. - Deal with the root password being disabled. -- Colin Watson <email address hidden> Tue, 16 Sep 2008 15:41:53 +0100 I also hit this bug -- it makes the whole package more or less unusable. conf-kickstart: 2.5.20-0ubuntu14 Version of system- Depends: python (<< 2.5), python (>= 2.4), python-gtk2, python-glade2, console-data, hwdata, iso-codes, localechooser-data, python-apt
https://bugs.launchpad.net/ubuntu/+source/system-config-kickstart/+bug/15156
CC-MAIN-2015-18
refinedweb
477
60.61
I have a class that opens a file in the constructor and writes to the file during a call to a method. Where shall I close the file? The class implements a general interface that has only one method, and I shall not add public methods.

public interface I {
    void makeChange();
}

public class C implements I {
    FileWriter f;
    PrintWriter p;

    public C() throws IOException {
        f = new FileWriter("log.txt");
        p = new PrintWriter(f);
    }

    @Override
    public void makeChange() {
        p.println("something");
        p.flush();
    }
}

public static void main(String args[]) throws IOException {
    I c1 = new C();
    c1.makeChange();
    I c2 = new D(); // D is an implementation of I that doesn't handle a file
    c2.makeChange();
}

A design that prevents you from adding a public method endChanges() is inherently wrong and prevents you from doing this the right way. If this is some kind of homework or a quiz (or worse still, a quiz-homework), the "correct" answer is probably to use finalize(). But as you have noticed yourself, this method may not even run (e.g. if the JVM stops before it needs to free the object). In that simple case, it's OK because it will close the file at that same time, but generally the use of finalize() is discouraged for exactly the reason you've run into. The correct approach is for the class to implement the Closeable interface and provide a close() method for the user to invoke when done. Since you want to invoke multiple calls on the file between opening and closing it, this is the standard approach. (Edited to add): If you have multiple implementations and some of them need any cleanup logic and others do not, it is absolutely OK for the interface to have a cleanup (close) method that the user must call but that some implementations leave empty. If you need to be absolutely sure the user does not forget to close the file, you could instead create a doWithFile() method that accepts a callback to perform on the file, opens the file, performs the callback, then closes the file.
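A minimal sketch of the Closeable approach (the interface and class names mirror the question; the details are an illustration, not code from the answer):

import java.io.Closeable;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class C implements I, Closeable {
    private final PrintWriter p;

    public C() throws IOException {
        p = new PrintWriter(new FileWriter("log.txt"));
    }

    @Override
    public void makeChange() {
        p.println("something");
        p.flush();
    }

    @Override
    public void close() {
        p.close(); // closes the underlying FileWriter as well
    }
}

// Usage - try-with-resources guarantees the file is closed:
// try (C c = new C()) {
//     c.makeChange();
// }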
https://codedump.io/share/ZZcW5Ihoj0CO/1/java-how-to-ensure-a-file-gets-closed
CC-MAIN-2018-05
refinedweb
340
67.69
I was trying to see if moving my blog (based on Hugo) to Next.js was a good move (it wasn't) and I found a problem. I had to change the path of each of my images because Hugo allows a post to be in the same folder as the markdown file, while Next.js does not. But I didn't want to disrupt my workflow. It's so nice that I can get my images in the same folder as the markdown file. It's very easy for authoring and maintenance. So I got this plan: at build time I'd go through all the posts, gather the images, and create a folder for each of the posts in /public/images. I had to change each image path in the markdown, and that was the easiest thing. Then I had to run a post-build command by creating a postbuild.mjs file whose job was to go through my content/posts folder, and copy each image into public/images:

import fs from 'fs'
import path from 'path'
import fsExtra from 'fs-extra'

const source = './content/posts'
const destination = './public/images'

const posts = fs.readdirSync(source)

fsExtra.emptyDirSync(destination)

fs.mkdir(destination, () => {
  posts.map((slug) => {
    if (slug === '.DS_Store') return
    fs.mkdir(`${destination}/${slug}`, () => {
      fs.readdirSync(`${source}/${slug}`)
        .filter((item) =>
          ['.png', '.jpg', '.jpeg', '.gif'].includes(path.extname(item))
        )
        .map((item) => {
          fs.copyFile(
            `${source}/${slug}/${item}`,
            `${destination}/${slug}/${item.replace(/ /g, '-')}`,
            () => {
              console.log(`${destination}/${slug}/${item}`)
            }
          )
        })
    })
  })
})

Then add a postbuild entry in your package.json file scripts:

{
  ...
  "scripts": {
    "build": "next build",
    "postbuild": "node ./postbuild.mjs",
    "dev": "next dev",
    "start": "next start",
    ...
  }
}

Using the same technique you could create a prebuild entry that's run before the build. By the way this is not unique to Next.js, it's a feature of npm. See.

Download my free Next.js Handbook!
https://flaviocopes.com/nextjs-run-script-build-time/
CC-MAIN-2022-27
refinedweb
309
69.48
Just starting the discussion page. Don't forget to sign your entries with your wiki name and to split entries.

The initial discussion started on the freedesktop.org mailing list. Shaun's first post contains a great summary of the goals of a common documentation standard. See also my follow-up summarizing what we agreed on.

There's a number of possibilities for how we find help files. This is all about coming up with a mapping from URIs to file names. The first question is what we use for URIs. There're basically two possibilities here. The first is to use https URLs that resolve to actual files on the internet. We then create some sort of local mapping to find those files on the local box. Let's call this local cache mapping. The second possibility is to have a specific URI domain that things map to. For example, we could map help:gnumeric/gnumeric.xml to $XDG_DATA_DIRS/help/C/gnumeric/gnumeric.xml (using the XDG base directory spec.) A variation on this is what KDE uses.

There are a number of issues that have to be dealt with in both cases, as well as issues specific to one method or the other. The first issue I think of is the fact that a single docbook file often maps to multiple pages of data. This is the case in how help is displayed in both Gnome and KDE. I'm not sure how yelp solves this problem, but khelpcenter, if it can't find the given url, searches the directory given for an index.docbook file and uses that to generate the required html file. It saves this information to a cache file and just loads from that cache file if it exists. This solution works quite well.

I've thought of three possibilities for local cache mapping. Let's call them methods 1, 2, and 3. Methods 1 and 2 have in common that they're an arbitrary mapping from URL to filename. Method 1 is to have a database mapping the information and method 2 is to have individual files (probably .desktop file format) each one describing a single document. With method 2, you can have a database with a cache of all the information, but it's not the canonical source. For packaging purposes, method 2 is very clearly preferred to method 1. It's much easier for a distro to install an extra file than to modify a file that's already there. This is the reason, for example, that the menu spec allows you to search a directory and add any menu files found to the current menu. Method 3 is to have a specified mapping from URL to file name. For example, might map to /usr/share/docs/ with a backup to look for present.docbook or something like that.

Of these 3, my favorite is method 2. However, I don't actually like any of them. I don't want a local cache mapping. First, it has the conceptual problem that a URL is variable whereas the docs on a particular distro are not. Or at the very least, they're variable in different dimensions. For example, if a distro makes changes to a program that lead to changes in documentation, that particular version of the doc is different from the one at the "canonical" source. Theoretically, you could put the docs in a different place, but there could be links all over the place to those docs. For instance, if docs for another program refer to docs for this program, when the user clicks on that link, you want them to get the docs for the version of the program that they will run, not the docs for some theoretical upstream version. Whereas online docs change in a different dimension, that of time.

Say someone is using the upstream version of a program. They want the docs for their version, not the ones for the latest version. However, the link is to the online version which is going to be the latest version. Yes, it's going to try to map to the local copy, but which is the correct version of the document? The reason using online mappings like this works for DTDs is that they're defining something which isn't nearly as variable. Yes, the language described by the DTD changes over time, but you'll also note that the URL changes when the DTD does, and the documents described by the old DTD still point to the old URL as they should. That makes sense because DTDs are about describing the type of the current document. But that's not what links in help docs are for. They're for referring to other help, which should change, even if the referencing document doesn't. Having versioned URLs for the help means a huge maintenance pain keeping the links up to date.

Local cache mapping is also a problem if a project doesn't have a web space of their own. Or say that the web space is small and won't handle high bandwidth. No matter whether we put the cache in or not, people will sometimes link to it. Or say that the web space is some awful URL. All in all, it seems like an odd requirement to have a web space to be able to link to your docs. Finally, perhaps we can come up with some sort of database of information about where the canonical docs are kept and include that in the docs themselves. Perhaps in some metadata. That way we make the canonical version an optional thing instead of a requirement that doesn't really make sense anyway.

I like the idea of having a help: URI namespace. We should really use the XDG data dirs standard for this. We should map help: URIs into those directories the same way. As far as pages inside a file, I'm fairly happy with the behavior that KDE has. One of the advantages of using XDG_DATA_DIRS is that we get the ability for user installed apps to have help very easily. The user can also do things like put a file in $XDG_DATA_HOME/help/common/default.css to change their stylesheet for accessibility or style reasons.

The next problem we need to solve is the table of contents of help (TOC). KDE's solution is to use the menu system and just put a reference to the help in the desktop files. This has the advantage that the help is arranged in the same way as the menus, so if the user knows how to find the app, they can find the help for it. KDE has the menu system as a subdirectory and keeps some other docs in a different hierarchy, but this is overly complicated. Instead, the help TOC should be a .menu file of its own with the main menu file as a subdirectory. One of the things that can be done is to match other elements of the menu system in the TOC. For instance, if a DE adds recently used apps to the menus, those apps can have a special section in the help TOC. Gnome uses scrollkeeper. One thing to note about scrollkeeper is that it requires keeping a database which takes up a bit of time every time we install. This also allows for the categories of help to be completely distinct from the applications themselves. This is a problem, since repeating the structure of the menus here means that the user only has to learn one location for a particular app. It would be possible to do the same thing with scrollkeeper, but it would add a whole lot of maintenance work to keep the two structures in sync. The one thing I'm not sure how to handle is an app with multiple help files, or even an extendable set of help files.
For multiple help files, I suppose you could just list them all in the desktop file, but an extendable set of help files creates problems. The example I'm thinking of here is the control center. In Novell's Gnome, the control center is a single app, but the docs for it are split up into a bunch of bits. Does anyone have any suggestions for this problem?

The next thing I want to discuss is links to outside materials. I assume there's already a way to link to web pages, for instance. The main thing I'm curious about is the concept of launching applications and performing actions in them. For instance, the help manual for a particular app might be able to pull up that app or open a specific dialog in it. I think the right way to do this is the simple way: just support a URL scheme that lets you execute code, perhaps shell: or something like that. This is simple and should work. One thing that's important is the security issue here: if you're going to support this link type, you need to make sure that it comes from local documentation, and not a website. An alternative is to have a predefined set of actions that are possible, say a specific directory full of desktop files for help docs to execute. The help doc just specifies the name of the desktop file. This way, even if a website does include such a link, all that happens is one of the predefined actions. (Note that the link shouldn't be allowed to pass arguments to the desktop file, for security reasons. I think we need a separate desktop file for each possible thing the help doc might want to do.)

I've started writing up all the ideas here in the parent document as a specification.

I went in and boldly edited the parent document, recommending simple support for man and info. There are LOTS of documents in these formats; if we support them directly, then we have an integrated system for handling them. Forcing everyone to store man pages as HTML files makes no sense in many circumstances; it's best to do the translation on the fly, in which case a "man" convention helps.
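To make the help: mapping proposed above concrete, here is a rough sketch in C of the lookup it implies. This is only an illustration of the idea, not anything from the spec draft: the function name is made up, the fallback path list is the XDG base directory default, and language fallback (e.g. de_DE, then de, then C) is left out.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Resolve e.g. ("gnumeric/gnumeric.xml", "C") against each entry of
 * $XDG_DATA_DIRS, looking for help/<lang>/<doc>. Returns 1 and fills
 * 'out' with the first readable match, 0 if nothing was found. */
static int resolve_help(const char *doc, const char *lang,
                        char *out, size_t len)
{
    const char *dirs = getenv("XDG_DATA_DIRS");
    if (dirs == NULL || *dirs == '\0')
        dirs = "/usr/local/share:/usr/share";  /* spec default */

    char *copy = strdup(dirs);
    for (char *d = strtok(copy, ":"); d != NULL; d = strtok(NULL, ":")) {
        snprintf(out, len, "%s/help/%s/%s", d, lang, doc);
        if (access(out, R_OK) == 0) {
            free(copy);
            return 1;
        }
    }
    free(copy);
    return 0;
}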
http://freedesktop.org/wiki/Specifications/help-system/Discussion/
10 Tips for Migrating VBA to Visual Basic Using Visual Studio Tools for Office Second Edition (Part 2 of 2)

Summary: Read 10 tips about how to migrate Microsoft Visual Basic for Applications code to Microsoft Visual Basic code by using Microsoft Visual Studio 2005 Tools for the Microsoft Office System, Second Edition. (16 printed pages)

Contents:
Tip #6: Know When to Keep Using VBA Constructs
Tip #7: Learn the .NET Framework
Tip #8: Use the .NET Framework to Write Readable and Consistent Code
Tip #9: Learning New Ways to Work with Data
Tip #10: Learn About Security Policies

Read part one: 10 Tips for Migrating VBA to Visual Basic Using Visual Studio Tools for Office Second Edition (Part 1 of 2)

Tip #6: Know When to Keep Using VBA Constructs

After you learn the Microsoft .NET Framework, you may want to convert all of your Microsoft Visual Basic for Applications (VBA) code so that it uses .NET Framework methods. You might think that the .NET Framework methods are faster than the equivalent VBA code. That is not always true: in some cases there is no significant difference, and it may be better from a development standpoint to continue to use the technique you know best rather than changing just for the sake of change. In some cases, the VBA code is more efficient than the .NET Framework equivalent. In a few cases, VBA provides functionality that the .NET Framework does not have.

The .NET Framework provides a base set of classes that many programming languages can use. That is why it is easy, in general, to convert Microsoft Visual Basic code to Microsoft Visual C# code and back: whether you use Visual Basic or C#, the code you write uses .NET Framework classes, and those classes are the same across different languages. Each language provides its own extensions, and Visual Basic is no exception. To keep Visual Basic backward compatible with VBA, it includes methods (such as the Len method and the Mid method) that C# does not have. Some of these VBA methods perform better than their .NET Framework equivalents, because of the way they are implemented in Visual Basic. For more information, see the article Visual Basic .NET Internals.

An Example: Date-related Functions

As a VBA developer migrating to the .NET Framework, you have the best of both worlds. Because the Visual Basic language includes all of the date-handling functions that VBA has, you can continue to use familiar techniques and combine them with APIs such as System.DateTime and System.TimeSpan that the .NET Framework provides. For example, you can use either the VBA Month function or the .NET Framework member System.DateTime.Month, because both return the same value, given the same input. In VBA, you can use the DateSerial function to generate a date/time value from integers representing a specific year, month, and day; in Visual Basic you can get the same result by passing that information to the constructor of the DateTime structure (both approaches are sketched below).

Some VBA method names conflict with .NET Framework classes. If you need to disambiguate the VBA references, add the full class name; this works because you can access VBA functionality through Visual Basic classes. For example, code that uses one of these conflicting member names does not compile until you fix it by explicitly referencing the DateAndTime class, which is the Visual Basic class that hosts the VBA date functions.

If you find that some date calculations are easier in VBA, you can save some time by not converting your code.
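A minimal sketch of the two equivalent approaches just described; the date values here are illustrative only, not from the original samples:

Imports System.Diagnostics

Module DateSketch
    Sub Demo()
        ' VBA-style, using the compatibility function Visual Basic retains.
        Dim vbaDate As Date = DateSerial(2007, 10, 24)

        ' .NET-style, passing the same year, month, and day to the
        ' DateTime constructor; both produce the same date value.
        Dim netDate As New DateTime(2007, 10, 24)

        Debug.WriteLine(vbaDate = netDate) ' Prints True
    End Sub
End Module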
However, if you need to convert your code to C# in the future, you should replace VBA method calls with the methods provided by the .NET Framework.

Take the time to compare the Visual Basic DateAndTime class to the DateTime and TimeSpan structures in the .NET Framework. The DateAndTime class may be more familiar, but the .NET Framework versions have more features. If you do not want to learn the .NET Framework versions now, continue to use the VBA techniques that are available in Visual Basic. Note the following differences:

- Explicit methods to add time intervals, like AddDays, AddHours, AddMilliseconds, AddMinutes, AddMonths, AddSeconds, AddTicks, and AddYears. Much of this functionality is available in VBA through the DateAdd method, but not all of it.
- The DaysInMonth method, which returns the number of days in a specified month and year. Calculating this value in VBA requires more effort.
- The IsDaylightSavingTime method, which indicates whether the specified date is within the daylight saving time range for the current time zone.
- The IsLeapYear method, which returns a Boolean value indicating whether the specified year is a leap year.

The .NET Framework has more date and time features than VBA, and it is worth investigating what is available. Some tasks may be easier using VBA, while other tasks may only be possible using the methods provided by the .NET Framework. For example, the .NET Framework has the System.TimeSpan type, which you can use to work with elapsed times. If you need a data structure that can store information about a length of time, use System.TimeSpan; VBA has no similar functionality.

Late Binding and Visual Basic

Visual Basic provides a feature that C# does not: late binding. This feature allows you to declare a variable as Object, and then treat the variable as if it were of a specific class. To use this feature, you must turn Option Strict off for the code file (or for the entire project, but that is not recommended). Late binding means that the Visual Basic compiler does not determine the type of the variable at compile time, as it normally would; the type is not determined until run time. This can cause run-time errors if your code is incorrect, but the compiler does not complain at compile time.

In its simplest form, late binding allows you to write code like the sketch at the end of this tip: even though a variable is defined as the Object type, the code can retrieve the value of its Length property, treating the variable as if it were a String. The compiler does not complain, and the code runs fine.

This technique is useful when you are programming with the Microsoft Office Word 2007 object model. Specifically, the Dialog object in Word changes its shape depending on which specific dialog box you use. Different dialog boxes provide different properties, and the compiler cannot determine which of the dialog boxes you write code for. By using late binding, you can effectively indicate to the compiler, "Look, I know you can't tell what I want until run time. Trust me. I know what I'm doing." The only problem is that you really do need to know what you are doing.

The alternative to using late binding (and the only option for C# developers) is to use a technique called reflection, by means of the classes and members in the System.Reflection namespace. This set of classes enables you to retrieve information about objects at run time, and to call methods and retrieve properties dynamically, based on that information.
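Here is the late-binding sketch referred to above. It is a minimal illustration rather than the article's original listing, and the string value is arbitrary:

Option Strict Off

Imports System.Diagnostics

Module LateBindingSketch
    Sub Demo()
        ' x is declared as Object, so the compiler knows nothing
        ' about a Length member at compile time.
        Dim x As Object = "late bound"

        ' With Option Strict Off, this member lookup is deferred to
        ' run time, where it resolves against String.Length.
        Debug.WriteLine(x.Length)   ' Prints 10
    End Sub
End Module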
To read more about Office Word dialog boxes and reflection, see Understanding the Word Object Model from a Visual Studio 2005 Developer's Perspective.

Tip #7: Learn the .NET Framework

If you want to take advantage of what the .NET Framework has to offer, you need to learn new ways to do old things. Fortunately, many of the new techniques that you learn require less programming. Do not be discouraged: even though the .NET Framework is large, learning it is worth the effort.

Everything in the .NET Framework is a class or a member of a class. The classes within the framework are grouped into logical groupings called namespaces, which are organized in a hierarchy. The System namespace contains the most basic and important classes, such as Object, Int32, and String. Most of the .NET Framework classes live in namespaces below the System namespace, such as System.Data, System.IO, and System.Windows.Forms. Each namespace contains a collection of classes, all with unique names. That is, within each namespace, no two classes can have the same name; across namespaces, however, two classes can share a name. (You can use the full namespace name to disambiguate references when necessary.)

The best place to learn about the .NET Framework is the reference documentation, which is available in Microsoft Visual Studio. To view the documentation from Visual Studio, on the Help menu, click Contents. On the left of Microsoft Document Explorer, you see the contents, as shown in Figure 1.

Figure 1. Contents of the .NET Framework documentation

If you work on a project that requires localization, you can scroll through the set of classes to find the System.Globalization namespace. Double-click it to display the documentation shown in Figure 2. This documentation is crucial to your understanding of the many classes in the .NET Framework.

Figure 2. Use the .NET Framework documentation to investigate classes and their members

Importing a Namespace

To resolve references to specific classes, the Visual Basic compiler needs to find the class. For example, suppose you want to use the DateTimeFormatInfo class in the System.Globalization namespace. If you try to use the class name by itself, you see a compiler error, indicated by a wavy underline, as shown in Figure 3.

Figure 3. You need to specify the namespace of the DateTimeFormatInfo class

You can use the Error Correction Options smart tag to fix the reference; it suggests that you qualify, or disambiguate, the class name with the namespace name (as in the sketch below). If you create a solution that needs the Globalization classes throughout a module or class, you should instead import the namespace by adding an Imports statement to the top of the file.

Adding the Imports statement to the top of a code file tells the Visual Basic compiler which namespaces to look in to resolve ambiguous class names. After importing the System.Globalization namespace, you can use DateTimeFormatInfo without specifying the Globalization namespace explicitly. However, if you import two namespaces that each contain classes with the same names, you need to disambiguate the references by using the complete namespace name. (This does not happen often within the .NET Framework itself, but it is more likely when you start using modules and namespaces created by others.)

Before you import a namespace, your project must have a reference to the assembly that provides the namespace.
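A sketch of both spellings, fully qualified and imported; the pattern written to the Output window is just an example, and the commented-out alias assumes a reference to the Word interop assembly:

Imports System.Diagnostics
Imports System.Globalization

' An alias is also possible once the assembly is referenced, for example:
' Imports Word = Microsoft.Office.Interop.Word

Module NamespaceSketch
    Sub Demo()
        ' Fully qualified: works with or without an Imports statement.
        Dim qualified As New System.Globalization.DateTimeFormatInfo()

        ' Short form: legal because of the Imports statement above.
        Dim imported As New DateTimeFormatInfo()

        Debug.WriteLine(imported.LongDatePattern)
    End Sub
End Module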
It is easy to confuse importing a namespace with setting a reference to the assembly that provides the namespace. Importing is a coding convenience that saves you a lot of typing, because you do not have to fully qualify class names in your code. Setting a reference to an assembly is what makes a namespace available for importing by your project in the first place.

As an example, view the Microsoft Office Excel add-in project you created earlier. In the sheetInfoButton Click event handler, you used the class Excel.Worksheet. The namespace for Excel is Microsoft.Office.Interop.Excel; the Excel in Excel.Worksheet is an alias for the full namespace name. The template for your add-in contains a reference to the Excel namespace, and because your add-in imports the namespace at a project level, you can use its members in your code. If you want to create your own namespace aliases, the syntax is an Imports statement of the form alias = namespace; a real example looks like Imports Word = Microsoft.Office.Interop.Word (as sketched earlier).

If you use Visual Studio Tools for Office Second Edition to extend a 2007 Office system product, and you want your code to interact with another product, you need to set a reference just as you did when you used VBA. Follow these steps to add a reference to Office Word, and an Imports statement that includes an alias for the Word namespace, to the Excel 2007 add-in you created earlier.

To add a reference to Word and an Imports statement:

1. Start with the add-in you created earlier, or create an add-in for Excel 2007.
2. On the Project menu, click ExcelAddIn1 Properties (your project may have a different name than ExcelAddIn1).
3. In the Properties window, click the References tab. The References page has two parts, References and Imported namespaces. You can use the Imported namespaces part to import a referenced namespace so that it is available throughout your project.
4. Under the list of references, click Add. Although there is a Microsoft.Office.Interop.Word namespace in the .NET list, select the Microsoft Word 12.0 COM library instead. After you set the reference, you can use the namespace.
5. In the Add References dialog box, on the COM page, find and select Microsoft Word 12.0 Object Library. Click OK to add the reference to the project. The library now appears in the list of references.
6. In Solution Explorer, open Ribbon1.vb. Notice that there are already several Imports statements in the Declarations section. Add an Imports statement for the Word namespace, along with an alias for the namespace.
7. Change the OnClick procedure you created earlier so that clicking the button opens an instance of Word.
8. Save and run the project. Click the button in the Excel Add-In Ribbon, and Word opens.
9. Exit Word and Excel.

Shared Members and Instance Members

When you work with the .NET Framework, you need to know what the Shared keyword means. Some members of some classes include this keyword: it means that you can call the member (a property or method) without creating an instance of the class. In other words, shared members apply to all instances of the class. Standard members (normally called instance members) apply to a specific instance of the class.

For example, find the System.String class in the .NET Framework documentation. (The String class is within the System namespace.) Figure 4 shows a small subset of the documentation for the methods of the String class. The methods that include a graphical letter S are shared members; they are not instance members.

Figure 4. The letter S indicates shared members

To use shared members, you do not need an instance of the class; to use instance members, you do. Therefore, to call the shared String.Compare method, you write code like String.Compare(string1, string2), assuming that string1 and string2 are defined as String types. This code does not create a specific String and call its Compare method; instead, it calls the Compare method of the String class itself. For instance members, you must create an instance of the class before using the member. For example, to use the Contains method, you must first create a String instance, give it a value, and call the Contains method of that particular string, as in string1.Contains("some text"), assuming that string1 is a String.

Tip #8: Use the .NET Framework to Write Readable and Consistent Code

The .NET Framework provides more native functionality than VBA does, and its classes and members behave consistently. You can find a namespace, class, and member for practically any task and, because of the consistency of the .NET Framework, the resulting code is easy for others to understand. This tip shows you how to use the .NET Framework to perform file input/output and to retrieve system information, two operations that are harder to perform in VBA.

File Handling

In contrast to VBA, accessing the file system using the .NET Framework is consistent and straightforward. All the classes for reading and writing files are in the System.IO namespace. If you plan to use the System.IO namespace many times in a particular file, we recommend that you add Imports System.IO at the top of the file. The following examples assume you import the System.IO namespace.

Retrieving a List of Files in a Folder

Retrieving a list of files using the .NET Framework is easy. The System.IO.Directory class can return the files in a folder as an array of strings (one path per file), and you can iterate through the array to handle the list of files: for example, retrieving all the files in the C:\ folder and displaying each one in the Output window.

Retrieving the Contents of a File

Using the File class, you can open a file and retrieve its contents into a string. The shared ReadAllText method does this for you in a single call, such as File.ReadAllText("C:\Test.txt"), whose result you can then display in the Output window.

Writing a Line of Text to a File and Closing the File

Often, you want to log information to a text file. Although you can use the methods in the System.Diagnostics namespace to log information, sometimes it is easier to perform this task yourself: for example, a Button control's Click event handler can append a line of text to a log file each time you click the button. Use the String.Format method to create a template with numbered placeholders. At run time, the method replaces each placeholder with the item from the same position in the list of arguments that follows; in the logging example, the method replaces the {0} placeholder with the current date and time value, and the {1} placeholder with a NewLine character.

Reading Lines of Text in a File into an Array

Using the File.ReadAllLines method, you can read the contents of a text file as an array of strings, then iterate through all the lines in the returned array, displaying each in the Output window.

System Information

If you have to interact with the environment using VBA, you can use the Win32 API to solve your problems. Because the .NET Framework provides many classes that interact with the environment, you never have to use the Win32 API.
The following code displays useful information about the current environment in the Output window, without using the Win32 API:

Debug.WriteLine("Current directory: " & _
    System.Environment.CurrentDirectory)
Debug.Write("Logical drives : ")
For Each drive As String In _
    System.Environment.GetLogicalDrives()
    Debug.Write(drive & " ")
Next
' Insert a new line.
Debug.WriteLine("")
Debug.WriteLine("Machine name : " & _
    System.Environment.MachineName)
Debug.WriteLine("OS Version : " & _
    System.Environment.OSVersion.VersionString)
Debug.WriteLine("Processor Count : " & _
    System.Environment.ProcessorCount)
' This is just one of many special directories.
Debug.WriteLine("Desktop Directory: " & _
    System.Environment.GetFolderPath( _
        Environment.SpecialFolder.DesktopDirectory))
Debug.WriteLine("User Domain Name : " & _
    System.Environment.UserDomainName)
Debug.WriteLine("User Name : " & _
    System.Environment.UserName)
Debug.WriteLine("Framework version: " & _
    System.Environment.Version.ToString())

For more information, see the following references:

- What's New in Developer Content discusses how to use the Help files, and how to decipher the object model information.
- Language Changes for Visual Basic 6.0 Users

Tip #9: Learning New Ways to Work with Data

If you have a VBA solution that interacts with data, you probably used either Data Access Objects (DAO) or ActiveX Data Objects (ADO) to retrieve and update the data. Although Word and Excel try to simplify data access, neither application uses a consistent, simple solution. Although we cannot cover the .NET Framework's data API (called ADO.NET) in detail, we can demonstrate an example that shows how easy it is to bind forms to data, and then use that data.

ADO.NET provides classes and members in the System.Data namespace (and subsidiary namespaces) that allow you to easily connect to, retrieve, and update data from OLE DB, Microsoft SQL Server, and other data sources. After you create a connection by using a connection string (similar to connection strings in ADO), you can create command objects that retrieve and update data. You can remain connected to the data source and iterate through all the rows of data, using a SqlDataReader or OleDbDataReader (other data sources provide their own data reader classes). Or you can create and fill a client-side cache for the data, using the DataTable class or the DataSet class (a group of related DataTable instances). In that case, your code is independent of the original data source.

Although you can write code that manipulates data using the classes and members of the System.Data namespace, you do not need to. The .NET Framework provides superb data-binding functionality, and you can access the data without much code. To demonstrate this functionality, use the Excel add-in you created earlier in this article, or create an additional add-in for Excel, and add a custom task pane named CalendarTaskPaneControl (to match the previous example's demo). See Tip #4: Add Custom Task Panes for help. The steps below add support for displaying data from the SQL Server Northwind sample database in a custom task pane.

To follow this demo, you need to have Microsoft SQL Server 2000, SQL Server 2005, or SQL Server 2005 Express installed and configured, along with the Northwind sample database. If Northwind is not installed, download and install the sample from Northwind and Pubs Sample Databases for SQL Server 2000.

Follow these steps to add a data source that refers to the Northwind sample database.
To add a data source that refers to the Northwind sample database:

1. In Visual Studio 2005, click View, and then click Server Explorer. If you have already set up a connection to the Northwind sample database, skip to step 8.
2. In the Server Explorer window, right-click Data Connections, and click Add Connection.
3. In the Choose Data Source window, click Microsoft SQL Server, and then click Continue.
4. In the Add Connection dialog box, in the Server name field, enter the value .\SQLEXPRESS. (If you are using a different instance of SQL Server, enter the name of that instance instead.)
5. If you have already attached the Northwind sample database to your SQL Server instance, select Northwind in the Select or enter a database name drop-down list, and skip to step 7. If you do not see Northwind in the list, execute the instructions in step 6.
6. Select the Attach a database file option, click Browse, and find the NORTHWND.MDF file. Set the Logical Name field to Northwind.
7. Click Test Connection to verify that you can access the data in the Northwind sample database, and then click OK to dismiss the dialog box. The Server Explorer window now displays a new data connection. If you see a name other than Northwind in the Server Explorer window, you can right-click the connection, click Rename, and enter Northwind as the connection name.
8. Click Data, and then click Show Data Sources. Visual Studio 2005 displays the Data Sources window.
9. Click Data, and then click Add New Data Source (or click the link in the Data Sources window to create a data source).
10. In the Data Source Configuration Wizard, on the Choose a Data Source Type page, select Database, and then click Next.
11. On the Choose Your Data Connection page, select Northwind from the drop-down list, and click Next. Depending on how you configured your data source, you may be asked whether you want to copy the attached file into the current project. Although there are benefits to copying the data file locally (data separation, for example), you do not need to copy it for this example; click No if prompted.
12. On the Save the Connection String to the Application Configuration File page, accept the default option, and then click Next.
13. On the Choose Your Database Objects page, select Categories and Products, and change the DataSet name to NorthwindDataSet. When you are finished, the dialog box should look like Figure 5. Click Finish. Your project now includes a new item, NorthwindDataSet.xsd, and its supporting files.

Figure 5. Choose Your Database Objects page

14. Right-click the Products TableAdapter, click Add, and then click Query.
15. On the Choose a Command Type page, click Next to accept the default option to create a query using a SELECT SQL statement.
16. On the Choose a Query Type page, click Next to accept the default option to create a SELECT statement that returns rows.
17. On the Specify a SQL SELECT statement page, modify the existing SQL statement by adding a WHERE clause that filters on the category, such as WHERE CategoryID = @CategoryID. Click Next.
18. On the Choose Methods to Generate page, change the names of the methods to FillByCategoryID and GetDataByCategoryID. Click Finish. Note that the Products TableAdapter now displays a second row containing the new queries.
19. Click File, and then click Save All. Click Window, and then click Close All Documents.
20. In Solution Explorer, double-click CalendarTaskPaneControl.vb, loading it into the designer.
21. In the Data Sources window, expand the Categories table.
22. Select the CategoryName field, and from the drop-down list to the right of the field name, select ComboBox. (This tells Visual Studio what type of control to create when you drag the field onto a design surface.) Figure 6 shows the drop-down list.

Figure 6. Select the type of control you want to create

23. From the Data Sources window, drag the CategoryName field to the CalendarTaskPaneControl designer, placing it below any existing controls. You can arrange the controls so that the Label control is above the ComboBox control. You can delete the CategoriesBindingNavigator that was added when you placed the ComboBox; you do not need it for this example.
24. Select the combo box, and rest the pointer over the control. A tiny arrow button appears in the upper-right corner of the control. Click the button, and a properties window appears. Set the Data Source property to CategoriesBindingSource, as shown in Figure 7.

Figure 7. Set the binding properties of the ComboBox control

25. Add a Button control to the task pane, and set its name to insertButton. Set its Text property to Insert.
26. You must add a single line of code to fill the local data cache. To do that, double-click the custom task pane in the designer, which takes you to the Load event handler for the task pane, and add a line like Me.CategoriesTableAdapter.Fill(Me.NorthwindDataSet.Categories) to the handler (your designer-generated names may differ slightly). This code fills a local DataTable named NorthwindDataSet.Categories with information about all the categories.
27. The Insert button you created inserts a list of products from the selected category into the current worksheet, starting at the current cell. Double-click the Insert button, and add the following code to its Click event handler. This code retrieves the Products table adapter (including the query you created previously), executes the query to retrieve a DataTable containing the products matching the selected category, and places the list of products into the worksheet at the current location:

' Retrieve a reference to the Products table adapter.
Dim adapter As New NorthwindDataSetTableAdapters.ProductsTableAdapter

' Retrieve the products associated with the selected category.
Dim products As NorthwindDataSet.ProductsDataTable = _
    adapter.GetDataByCategoryID(Me.CategoryNameComboBox.SelectedValue)

' Insert the data into Excel.
Dim rng As Excel.Range = Globals.ThisAddIn.Application.ActiveCell
Dim i As Integer
For Each product As NorthwindDataSet.ProductsRow In products
    rng.Offset(i, 0).Value2 = product.ProductName
    i += 1
Next

28. Finally, save and run the project. In Excel, in the custom task pane, select a category from the list of categories. Click Insert, and the code inserts a list of products from the selected category into the current cell and the cells below it.

This example requires only a small amount of code to perform its work; some situations require even less. If you choose, you can write all the code yourself instead of using the tools that Visual Studio and Visual Studio 2005 Tools for Office SE provide. However you choose to approach the problem, ADO.NET provides a wide range of tools, classes, and data-binding support that you can use when building applications in Visual Basic.
For information on ADO.NET, see the following references:

- Developing Application-Level Data-Centric Solutions Using Visual Studio 2005 Tools for the Office System SE
- Converting a Data-Oriented Application from Visual Basic 6 to Visual Basic 2005, Part 1
- Converting a Data-Oriented Application from Visual Basic 6 to Visual Basic 2005, Part 2
- Converting a Data-Oriented Application from Visual Basic 6 to Visual Basic 2005, Part 3
- Data Access for Visual Basic 6.0 Users

Tip #10: Learn About Security Policies

When you develop VBA solutions, deployment is simple. For document-based solutions, you distribute the document and tell the user how to open it. Deploying Microsoft Office add-ins is slightly more complicated, because you may need to instruct the user to change the macro security settings for the add-in to work; but if you registered the add-in correctly, it should work.

By default, Windows applications based on the .NET Framework have full rights to run on any computer they are installed on. The same is not true for Visual Studio 2005 Tools for Office SE add-ins, which by default have no trust and run only on the computer on which they were created. When you create an add-in project, Visual Studio sets up the necessary security to allow the add-in to run on the current computer.

Visual Studio 2005 Tools for Office SE add-ins, which are installed within a Microsoft Office application and could execute unsafe code, require special security rights to run. Specifically, you must create and install a security policy that grants trust to the add-in based on criteria that you specify in the policy. For example, you can grant trust based on a cryptographic key, or on the location of the add-in. You cannot run the add-in without installing a security policy.

Setting up and configuring add-in security is beyond the scope of this article, but you must be aware of .NET Framework security and how it applies to Visual Studio 2005 Tools for Office SE add-ins. When you create an add-in, you need to set aside some time to create and manage security policies.

The Visual Studio 2005 Tools for Office SE project template includes a Setup project along with your add-in project. You can explicitly build this project when you are ready to deploy your add-in (right-click the project in Solution Explorer and select Build from the context menu to build the Setup executable). After you create the Setup application, you also need to create the security policy for client computers. The following articles describe the process in more detail:

- Deploying Visual Studio 2005 Tools for the Office System SE Solutions Using Windows Installer (Part 1 of 2)
- Deploying Visual Studio 2005 Tools for the Office System SE Solutions Using Windows Installer: Walkthroughs (Part 2 of 2)
- VB.NET Installer class for setting Code Access Security with Uninstall

Conclusion

We hope these tips help you use your VBA knowledge as you migrate to Visual Studio 2005 and Visual Studio 2005 Tools for Office SE. The tips discussed here are a good place to start, but nothing beats experience. Take the time to convert an existing VBA customization to Visual Studio 2005 Tools for Office SE, and as you do so, keep in mind the issues discussed in this article. Remember that a customization can contain both VBA and Visual Basic code; when you create your add-in project, you can convert your VBA code to Visual Basic code one function at a time.
About the Authors

Ken Getz is a developer, writer, and trainer, working as a senior consultant with MCW Technologies, LLC, a Microsoft Solution Provider. Ken co-authored AppDev's C#, ASP.NET, VB.NET, and ADO.NET courseware (see Microsoft Training at AppDev), and he speaks at many industry events, including Advisor Media's Advisor Live events. His co-author has contributed to books about Microsoft Office, written white papers for publication on MSDN, and created samples designed to help developers get up to speed quickly on new Microsoft products and features.

Additional Resources

To learn about the Visual Studio 2005 user interface, read Don't Freak Out About Visual Studio. Although the article discusses Visual Studio 2003, much of its content applies to Visual Studio 2005 as well.

To see how to apply these tips in actual applications, see the following white papers:

- Migrating a Word VBA Solution to Visual Basic Using Visual Studio 2005 Tools for Office
- Migrating Excel VBA Solutions to Visual Studio 2005 Tools for Office
- Redesigning an Excel VBA Solution for .NET Using Visual Studio 2005 Tools for Office
- Migrating a VBA Solution to a Visual Studio 2005 Tools for the Office System SE Add-in

The following books by members of the Visual Studio 2005 Tools for Office SE team can help you transition to the world of managed code and Visual Studio 2005 Tools for Office SE add-ins:

- VSTO for Mere Mortals(TM): A VBA Developer's Guide to Microsoft Office Development Using Visual Studio 2005 Tools for Office (For Mere Mortals)
- Visual Studio Tools for Office: Using Visual Basic 2005 with Excel, Word, Outlook, and InfoPath (Microsoft .NET Development Series)

For more information about the Visual Basic language, we recommend the following books:

- The Visual Basic .NET Programming Language (Microsoft .NET Development Series)
- Start to Finish Visual Basic 2005

Read part one: 10 Tips for Migrating VBA to Visual Basic Using Visual Studio Tools for Office Second Edition (Part 1 of 2)
http://msdn.microsoft.com/en-us/library/bb974399(d=printer,v=office.12).aspx
> Bryan O'Sullivan wrote:
>> apfelmus wrote:
>
> Thank you for the review and constructive criticism.
> Although highly amusing, some of what you write is elliptical
> enough in style that I'm having trouble following a few of your points.
> Doubly so since I've thus far gone years without paying attention to
> Control.Applicative

Oui, you're right. I'm going to elaborate, although critique incompréhensible is a must, otherwise it would quickly become obvious that the critic has absolutely no clue ;)

>> The RecursionPredicate decides whether to recurse into a
>> sub-directory or not (it could be mentioned explicitly
>> that the predicate is only invoked on directories).
>
> Good Point!

The intention is to present the predicates (FileInfo -> a) in a way that does not mention the parameter over and over. I mean, one could write the example as

  \info -> (extension info == ".hs") || (extension info == ".lhs")

but threading the parameter around will quickly become a nuisance with more complex queries. One solution is to use the Reader monad, and in essence that's what System.FilePath.Find currently does. But I think that

a) the Reader is better seen as an applicative functor than as a monad, and
b) making (FindClause a) a mere type synonym of (FileInfo -> a) has the benefit that the user can choose whether he wants to use monads or applicative functors via the respective instances, or whether he does not.

Programming with applicative functors is like programming with monads, but you're only allowed to use return and `ap`. The example can be written for monads as

  return (||)
    `ap` (return (== ".hs")  `ap` extension)
    `ap` (return (== ".lhs") `ap` extension)

or for general applicative functors as

  (||) <$> ((== ".hs") <$> extension) <*> ((== ".lhs") <$> extension)

In the end, one will probably use the custom combinators (==?) and (||?) anyway, so that it doesn't matter whether it's monad or applicative or whatever. But not making (FindClause a) opaque gives more freedom to the user.

Note that providing functionality by making things instances of very powerful classes should be documented explicitly. A recent question on haskell-cafe was about Data.Tree, and one can argue that the innocent-looking instances for Foldable, Functor and Traversable are maybe 75% of the functionality that Data.Tree offers :)

>> Abstracting the concrete representation away into FindClause hinders
>> reuse of already existing functionality from System.FilePath,
>
> Yes, that's unfortunate.

>> The monad could make sense if the predicate might involve not only the
>> file status but also looking inside the file.
>
> How would you suggest doing so? Just a simple unsafeInterleaveIO
> readFile?
>
>> Returning all files with a
>> certain magic number would be such a use case but System.FilePath.Find
>> currently does not offer zis possibilité.
>
> I'm certainly all for improving it, so this kind of suggestion is most
> welcome.

I guess that using (FileInfo -> IO Bool) and providing a specialized (FileInfo -> Bool) version is probably a safe way. But with IO, the traversal could even change files in place, which is probably not a good idea. Maybe unsafePerformIO is the best solution, because you may safely close the file after having evaluated

  predicate (unsafePerformIO $ hGetContents handle) fileinfo

to weak head normal form, i.e. True or False. I think it fits perfectly.

>> [...]
>
> But the FilePath is embedded in a FileInfo, so a double traversal is
> unneeded.

Yes. I've not been clear; I meant it the other way round:

  find :: ... -> IO [FileInfo]

If you only want a list of file paths, you can always do

  liftM (map filepath) . find ...

Of course, this leads to the question whether find should be factored even more into generate & prune,

  find r p dir = map filepath . filter p . directoryWalk r dir

with the intention that System.FilePath.Find only exports directoryWalk. Which leads to the question whether directoryWalk can be factored as well, which in turn leads to the next question:

>> [...]
>
> The darcs authors already tried this, and gave up on the idea.
> Once you have a pure data structure, you start developing notions that
> it makes sense to manipulate it, and then all is lost once you turn
> your mind to applying those manipulations to the real world.

Yes, the data structure ought to be read-only. But by making it an abstract data type with proper access functions, maybe this goal can be achieved. In any case, with such a data structure, directoryWalk can be factored as well. In particular, I have something in mind, based on the following question: "Given a directory tree, what are you going to do with it?" Answers: "Well, I rename every file" (=> map), "Calculate total size" (=> fold).

The assumption is that everything you'll ever do with a huge directory tree is to map or fold it. So, here comes my crazy speculation: make the directory tree a phantom type

  data DirTree a
  type DirectoryTree = DirTree FileInfo

and implement Functor, Foldable and Traversable for it. Printing out all files would then become

  Data.Foldable.mapM_ (putStrLn . filepath)
    :: DirTree FileInfo -> IO ()

In conclusion, there are several ways to generalize an interface. One way is to add more options and parameters to a function. But the other way is to shatter a monolithic function into tiny pieces that can be reassembled and composed at will. I think that the latter is the spirit and the source of power of functional programming.

Regards,
apfelmus
http://www.haskell.org/pipermail/libraries/2007-June/007682.html
I recently had a chance to play with Elixir and Phoenix as part of my Cookies Lab. The following is a whistle-stop tour of my first impressions.

Functional, not Object-Oriented

At first glance, Elixir looks like Ruby, but it is quite different underneath thanks to the functional language paradigm. Having said that, the semi-familiar syntax does make understanding some of the new concepts easier. In Elixir's case, functional (generally) means more explicit: for example, passing around the conn struct in controllers, rather than using instance variables (there are no instances!). While this can mean a bit more boilerplate, the extra clarity can be worth it; of course, on the flip side, you can write Fortran in any language ;-).

One of the 'disadvantages' of functional languages is having deeply-nested function calls (e.g. foo(bar(baz(1)))), which need to be read 'inside-out' to be understood. Fortunately, Elixir has the pipe operator, which allows us to construct functions as a pipeline (much like the Unix command line). For example:

String.split(String.reverse(String.capitalize("elixir phoenix")), " ")

can be rewritten as the much clearer:

String.capitalize("elixir phoenix") |> String.reverse |> String.split(" ")

# or over multiple lines:
String.capitalize("elixir phoenix")
|> String.reverse
|> String.split(" ")

which compares favourably with the Ruby equivalent:

"elixir phoenix".capitalize.reverse.split(" ")

As part of its Erlang heritage, all data in Elixir is immutable (this is part of the Erlang VM's 'shared nothing' method of data sharing for concurrency), which means you can't alter things on the fly as you might do in a Rails controller (e.g. adding the current user ID to the session); any changes you make have to be returned by your function.

Pattern matching

One thing that takes some getting used to is Elixir's pattern matching. At first glance, the syntax for assigning variables looks very familiar (a = 1), but Elixir isn't assigning variables in the traditional sense ('put this data in some address in memory and let me refer to it as "a"'), since all values are immutable.

You've probably come across some pattern matching in Ruby (or CoffeeScript); this is generally positional and for arrays. For example:

a, b, c = [1, 2, 3]
# gives:
# a = 1
# b = 2
# c = 3

[["a", 1], ["b", 2]].each do |(letter, number)|
  puts letter
  puts number
end
# gives (on STDOUT):
# a
# 1
# b
# 2

Elixir takes this a step further and allows pattern matching on other data structures (e.g. maps, which are like Ruby hashes), as well as on function arguments. This means functions are often 'defined' more than once with different arguments, with the runtime deciding which definition to use based on the arguments passed in. Functions can also be defined multiple times with different arities; this is why Elixir functions are named with a slash and a number at the end, indicating arity (e.g. foo/1, bar/2 etc.).

def belongs_to_user?(%{user_id: user_id}, user_id) do
  true
end

# Default case, handling nil or other values
def belongs_to_user?(_, _) do
  false
end

I found pattern matching the hardest to grasp, and I feel like getting to 'idiomatic' usage would take a while. It's a cool idea, though, and certainly removes a lot of ifs and nesting.

Ecto

Elixir's 'equivalent' of ActiveRecord is Ecto, although it's quite different. Models are simply structs (typed maps), with validations and changes being handled by changesets.
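To give a flavour of what that looks like, here's a rough sketch of an Ecto schema with a changeset pipeline; the module, table, and field names are made up for illustration, and this isn't code from the project itself:

defmodule MyApp.User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field :name, :string
    field :email, :string
  end

  # Validation lives in a composable pipeline of functions, not on
  # the model itself; different contexts (signup, admin edit, etc.)
  # can simply define different pipelines.
  def changeset(user, attrs) do
    user
    |> cast(attrs, [:name, :email])
    |> validate_required([:name, :email])
    |> validate_format(:email, ~r/@/)
  end
end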
Changesets represent a different way of thinking about your models, but they really stand out when dealing with changes to models under different circumstances (create vs. update, admin vs. normal user etc.). They avoid the problems with conditional validations or callbacks in ActiveRecord, and this is where the idea of a pipeline of functions really shines. In Rails, form objects and interactors cover similar ideas.

Ecosystem

As is to be expected, Elixir has a relatively immature ecosystem compared to Rails: the language itself is in flux (although it seems pretty stable) and there are not so many battle-tested libraries to just drop into your project. Authentication is an area I found to be pretty sparse compared to Rails; there are a few libraries, but nothing that matches the completeness of Devise. Efforts do seem to be underway, though, so this should improve in the future.

The standard library itself is pretty good, and you have the advantage of interoperating with the Erlang standard library and packages as well. There are similar data types to Ruby (arrays, hashes, strings, symbols, along with a few others), although the terminology is a little different (hashes are "maps", symbols are "atoms").

Documentation is also pretty good, with the Hexdocs for the major packages (Phoenix, Ecto etc.) well-documented and understandable for novices.

Developer experience

This feels like another area that's still developing. Error messages can often be unclear, frequently because a pattern match failed somewhere, and finding where can be difficult. I also found that Elixir doesn't quite have the Ruby-like property of 'just try something how you'd expect it to work and it probably will' (the principle of least surprise), or at least not for 'least surprise' that's shaped by Ruby experience.

In contrast, the front-end side feels more advanced than Rails: Phoenix offloads handling of the front-end to Brunch, which gives you code reloading out of the box, along with access to the wider NPM ecosystem (for better or worse).

Conclusion

All in all, I enjoyed my brief foray into Elixir and Phoenix. The language, and the framework, seem well-established, although the wider ecosystem has a while to go to match that of Ruby and Rails. Inevitably, my basis for comparison has been Ruby/Rails (since that's my day-to-day language/framework), but it's worth remembering that the two are different languages/frameworks with differing goals, and so a one-to-one comparison will always be a bit reductionist.

If you're a Rubyist, give Elixir a try!
https://www.cookieshq.co.uk/posts/initial-impressions-of-elixir-and-phoenix
The biggest story this week was the 24,000-word epic detailing the entirety of a 48-hour-long game jam, where teams compete to create the best game in a brutally short amount of time. We also talked about why the press is often shown older code when conducting previews of upcoming games, and we give our thoughts on a number of big games. Battlefield 3 launches next week. Are you ready?

"I think they're mad": Inside a 48-hour battle to build the best video game: We send one writer to follow 20 teams of game developers as they fight exhaustion and each other to create the best badger-themed video game in only two days. What's at stake? Pride—and jellybeans.

Skylanders: a toy-based RPG for kids (that adults will love): Skylanders: Spyro's Adventure wants to erase the line between toys and video games, and it does so quite effectively thanks to the cross-platform nature of the statues and games that are actually fun to play.

Why both gamers and the press play old code close to a game's release: Pre-alpha code a month from launch? A beta that's rough and glitchy weeks from the game's release? Why Skyrim and Battlefield 3 suffered from mistaken assumptions about betas and press previews.

As Xbox Live-FIFA 12 fraud continues, Microsoft's response becomes maddening: Microsoft and EA may claim there is no problem with security, but with so many people suffering from hacked Xbox Live accounts and fraudulent charges, the head-in-the-sand approach may begin to do more harm than good for both companies.

Watch Uncharted 3's launch trailer, then remove jaw from floor: It's a little early for a launch trailer, but Naughty Dog's Uncharted 3 has been given one anyway. Our verdict is simple: holy [redacted]!

Playing the Star Wars: The Old Republic beta—the birth of a Jedi knight: As the Star Wars: The Old Republic MMORPG grows closer to its December release, we spend some time in the beta with the Republic classes, and see how a Jedi Knight gains her footing in the game.

Alone and an easy target: our thoughts on the first half of Arkham City: Batman: Arkham City may suffer from some small issues, but the first half of the game reveals a city filled with the insane and dangerous, and one hero who chooses to go in to try to control the chaos.

Walking Dead premiere offers scares, mixed with limp soap opera (Full spoilers): Season two of The Walking Dead has a few wonderful scenes of horror, but the show continues to handle its characters and central conflicts in an inept and boring manner. Caution: full spoilers inside.

Ace Combat: Assault Horizon sacrifices control for glitz: What happens if we take away player control in order to add more exciting dogfights and explosions? Ace Combat: Assault Horizon happens. While the game is often exciting, it's hard to forgive the minigame-like nature of the core experience.

The Secret World MMORPG: absorbing lore, wooden characters: The Secret World, an MMORPG due for launch next spring, has built a game world that's already won many rabid fans, but a gameplay demo at NY Comic Con left us cold.
https://arstechnica.com/gaming/2011/10/week-in-gaming-48-game-jam-epic-arkham-city-the-old-republic/
JSON God

NOT YET PRODUCTION READY

The definitive solution for JSON in Dart.

Contents

- JSON God
  - About
  - Installation
  - Usage
    - IMPORTANT: Dart2JS Compatibility
    - Serializing JSON
    - Deserializing JSON
    - Deserializing to Classes
  - Compatibility with JsonObject
    - objectToJson
    - enableJsonObjectDebugMessages
  - Thanks

About

I think Dart is really freaking cool, and far better than Javascript. However, coming from a Javascript background, I really am not a fan of native JSON support only coming in the form of Maps. What about classes? What if I want to serialize my precious objects? What if I want to validate JSON input? My main problem with Dart is that it does not natively support this. And the standard JsonObject library is outdated and incompatible with SDK 1.0+. To remedy this, I wrote JSON God.

JSON God exposes a class called God. God contains just two methods: serialize and deserialize. You can call these synchronously, and even in the browser. Cool, right? I made a few provisions for compatibility with JsonObject, too.

Bow down to JSON God.

Installation

To install JSON God for your Dart project, simply add json_god to your pub dependencies. If you use an old version of the SDK (<1.0), stick with JsonObject.

dependencies:
  json_god: any

I do not plan any major changes to the API, and future releases will be, for the most part, backwards-compatible. However, if you are paranoid, apply the version constraint '^1.0.0' instead of 'any'.

Usage

Serialization and deserialization are run through the God class. Instantiate a new God to begin.

import 'package:json_god/json_god.dart';

God god = new God();

// Set .debug to true to print debug output.
// CAUTION: It is very verbose.
// god.debug = true;

IMPORTANT: Dart2JS Compatibility

Reflection through dart:mirrors is not yet perfect in Dart2JS. Make sure to add a @MirrorsUsed() annotation to any classes you want to serialize/deserialize.

library app;

@MirrorsUsed(targets: 'app')
import 'dart:mirrors';

@MirrorsUsed documentation can be found here.

Serializing JSON

Simply call god.serialize(x) to synchronously transform an object into a JSON string.

Map map = {"foo": "bar", "numbers": [1, 2, {"three": 4}]};
String json = god.serialize(map);
print(json);
// Output: {"foo":"bar","numbers":[1,2,{"three":4}]}

You can easily serialize classes, too. JSON God also supports classes as members.

class A {
  String foo;
  A(this.foo);
}

class B {
  String hello;
  A nested;

  B(String hello, String foo) {
    this.hello = hello;
    this.nested = new A(foo);
  }
}

main() {
  God god = new God();
  print(god.serialize(new B("world", "bar")));
}

// Output: {"hello":"world","nested":{"foo":"bar"}}

Deserializing JSON

Deserialization is equally easy, and is provided through god.deserialize.

Map map = god.deserialize('{"hello":"world"}');
int three = god.deserialize("3");

Deserializing to Classes

JSON God lets you deserialize JSON into an instance of any type. Simply pass the type as the second argument to god.deserialize.

class Child {
  String foo;
}

class Parent {
  String hello;
  Child child = new Child();
}

main() {
  God god = new God();
  Parent parent = god.deserialize('{"hello":"world","child":{"foo":"bar"}}', Parent);
  print(parent);
}

Any JSON-deserializable class must be initializable without parameters. If new Foo() would throw an error, then you can't use Foo with JSON. This allows for validation of a sort, as only fields you have declared will be accepted.
class HasAnInt {
  int theInt;
}

// Throws an error:
HasAnInt invalid = god.deserialize('["some invalid input"]', HasAnInt);

Compatibility with JsonObject

As mentioned before, there is some compatibility with JsonObject for projects that already use that library.

objectToJson

JsonObject, as I said before, is old, and uses the older Future API. It exposes a function called objectToJson, returning a Future<String>, that serializes an object. I felt like this was worth keeping, so I kept it.

main() async {
  Map map = {"hello": "world"};

  // Using await
  String json = await objectToJson(map);

  // Using Future.then
  objectToJson(map).then(print);
}

It's marked as deprecated, because with JSON God you really should just stick with god.serialize.

enableJsonObjectDebugMessages

This boolean flag is used internally by the God class, and is the equivalent of god.debug.

Thank you for using JSON God

Thank you for using this library. I hope you like it.

Feel free to follow me on Twitter.
https://www.dartdocs.org/documentation/json_god/1.0.0-beta.1/index.html
#include <ConstrainedLS.H>

Solve the bound-constrained least squares problem. Uses the Lawson-Hanson active set method as given by P.B. Stark and R.L. Parker, "Bounded-Variable Least-Squares: An Algorithm and Applications," Computational Statistics, 10:129-141, 1995. Uses solveUnconstrained for sub-problems.

Referenced by LSProblem<dim>::invertNormalEq().

Solve the unconstrained least squares problem.

Get a vector of Bound values indicating constraint activity.

Query the number of active constraints. Bails if an LS problem has not been solved yet.

Referenced by LSProblem<dim>::invertNormalEq().

Get the LS residual. Bails if an LS problem has not been solved yet.

Solve the least squares problem. Uses successive Householder rotations.
http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.1/classConstrainedLS.html
Errata for The RSpec Book

The latest version of the book is P2.1.

(08-Nov-11), PDF page 0:
After location 62 on Kindle, there are some doubled hyphens, which are probably meant to be en-dashes: "Test--Driven". --Ilmari Vacklin

Reported in: P2.1 (03-Dec-13), PDF page 1:
The "should" expectation syntax (e.g. "[].should be_empty") has been deprecated since RSpec 2.11. Hopefully you'll be updating the code samples to use the preferred syntax, e.g. "expect([]).to be_empty". --Richard Murnane

Reported in: P2.1 (07-Nov-14), PDF page 3:
Broken link in footnote 1; the current link is http://blog.daveastels.com.s3-website-us-west-2.amazonaws.com/2014/09/29/a-new-look-at-test-driven-development.html --Anand C Ramanathan

Reported in: P2.1 (09-Jan-16), Paper page 12:
The line

greeting.should == "Hello RSpec!"

is now deprecated in RSpec 3.0 due to the use of should. If you are using RSpec 3.0, you can use the transpec gem (i.e. gem install transpec) to convert the code. The line will be converted to:

expect(greeting).to eq("Hello RSpec!")

--Darren Davie

Reported in: P2.1 (10-Nov-15), Paper page 14:
The should syntax has been deprecated; replace it with expect.

Section 2.3 Hello Cucumber, hello/3/features/greeter_says_hello.feature:

Scenario: ........
  Then I expect "Hello Cucumber !"

hello/4/features/step_definitions/greeter_steps.rb:

Given ....
When ...
Then(/^I expect "([^"]*)"$/) do |greeting|
  expect(@message).to eq(greeting)
end

--MelHay

Reported in: P2.1 (17-Jan-13), PDF page 14:
Last Then line missing closing $/ --Brian maltzan

Reported in: P2.1 (06-Oct-12), PDF page 16:
The code sample hello/5/features/step_definitions/greeter_steps.rb appears to be missing a blank line before "Given...". --Wayne Conrad

Reported in: P1.0 (11-Apr-11), PDF page 31:
checking

Reported in: P2.1 (09-Nov-13), PDF page 41:
"Cucumber will load features/support/env.rb, which now requires lib/code-breaker.rb, which"
lib/code-breaker.rb should be lib/codebreaker.rb (i.e. remove the dash). Page 40 tells you to make a codebreaker.rb file; page 41 refers to it with a dash in the name.

Reported in: P2.1 (05-Dec-13), PDF page 42:
There is some confusion about using the "puts" method on the Output object, which makes it feel like an IO object, and because it is later replaced with a Test Double and a message expectation is set on "puts", it feels like mocking a low-level API. Recommend renaming Output to View or Display or similar, so we end up mocking e.g. view.display or display.show rather than output.puts. --David Chelimsky

Reported in: P2.0 (25-Aug-11), Paper page 42:
The code example is shown as

Then /^I should see "([^"]*)"$/ do ...

but in the following paragraph the regex is shown as ([^\"]*) (note the added backslash). The presence or absence of the backslash does not make any difference in the operation of the code. However, TextMate gets pretty confused (gets out of phase about which is the open/closed quote) without the backslash. The downloadable code examples do not have the backslash. --Dan Nachbar

Reported in: P2.0 (17-Jan-15), Paper page 43:
On completion of the code on page 43, when running Cucumber I get an "undefined method" error for messages. Code snippet:

class Output
  def messages
    @messages ||= []
  end

  def puts(message)
    messages << message
  end
end

def output
  @output ||= Output.new
end

Then /^I should see "([^"]*)"$/ do |message|
  output.messages.should include(message)
end

--Jeff Brown

Reported in: P2.0 (17-Jan-15), Paper page 43:
Found a correction to the previous error on the forum.
The following code causes a clash with RSpec's BuiltIn::Output def output @output ||= Output.new end Then ..... output.messages.should ...... end Changed this to: def msg_out @output ||= Output.new end Then ..... expect(msg_out.messages).to include(message) end Now receiving the correct "Expectations" failure of '....expected [] to include.....'--Jeff Brown - Reported in: P2.0 (06-Nov-11) PDF page: 47 The "original narrative" shown with the scenario in the lower half of this page is different from the original narrative as it appeared on the previous page. --Justin Forder - Reported in: P2.1 (31-Jul-13) PDF page: 59 "subimt a guess" should read "submit a guess" in the third line of the first paragraph on page 59 of the PDF.--Daniel - Reported in: P1.0 (28-Apr-11) PDF page: 59 A hypen is included in the epub version of the book in the case where the hypen indicated a end of line wrap in the PDF version. The hypen does not occur at the end of the line in the epub version and looks like it is part of the file name (code-breaker.rb) Cucumber will load features/support/env.rb, which now requires lib/code- breaker.rb, which, in turn, requires lib/codebreaker/game.rb, which is --Mark Anderson - Reported in: P2.0 (16-May-12) PDF page: 60 The code in step definition Then /^I should see "([^"]*)"$/ do |message| output.messages.should include(message) end is using should method. When I followed the book using Ruby 1.9.3-p125 I was running into an error: undefined method `should' for #<Object:0x007f9580b0bc70> (NoMethodError) Only when I included RSpec it could use all the 'should' methods. I included RSpec using a Gemfile, but as the book does not say anything about Gemfile, maybe including RSpec in a different way would work as well. --JjP - Reported in: P2.0 (16-May-12) PDF page: 60 The need for including RSpec appeared only when I used Guard for automating running Cucumber acceptance tests. When running Cucumber manually it all works fine. I'm sorry for the confusion, but in case someone else is using Guard it might be useful. I will also look into Guard to see why it runs Cucumber differently in that case than when Cucumber is run manually.--JjP - Reported in: P2.1 (20-Jan-13) PDF page: 61 Section 5.1, first paragraph: "game_spec.rb). See Shouldn't We Avoid a One-to-One Mapping?, on page 46 for" should be "game_spec.rb). See Shouldn't We Avoid a One-to-One Mapping?, on page 62 for" --Ralph Hoffmann - Reported in: P2.0 (16-Jul-12) PDF page: 61 [Using ruby 1.9.3-p194] After modifying the line game = Codebreaker::Game.new(output) and running Cucumber the book indicates that the test will fail due to the incorrect number of arguments. However, it actually first fails because the variable output is undefined. The error message is: NameError: undefined local variable or method `output' for #<Object:0x00000002620348> adding the line output = Output.new (shown below) initializes output so that the call to Codebreaker::Game.new can occur -- at which time it will fail in accordance with the book's text. The step definition change described above is shown here: When /^I start a new game$/ do #output = Output.new #<-- added line Codebreaker::Game.new(output) game.start end --Coby Pachmayr - Reported in: P1.0 (03-Aug-11) PDF page: 73 I suggest that the definition of the let() method to be updated/verified. 
let(<name>) { <block> } i'm beginning to have a suspicion that the "name" which you feed into a call to let() is to be defined as "the name of the resource which the contents of the block are to be assigned to. For example, the generated of a mocked resource object that is to be used throughout a spec".--Gordon Yeong - Reported in: P2.1 (03-Dec-13) PDF page: 74 Paper page: 59 Section 6.1, first paragraph: "subimt" should be "submit".--Richard Murnane - Reported in: P2.1 (22-Jan-13) PDF page: 74 Chapter 6, introduction, first paragraph feature we’re going to tackle is to subimt a guess and get feedback from the "to subimt a" -> "to submit a" --Ralph Hoffmann - Reported in: P2.1 (10-Jun-13) PDF page: 74 The third sentence contains 'subimt', should be 'submit'--Robert Newbould - Reported in: P2.0 (26-May-11) Paper page: 75 "Run the specs, and they should all pass." in 2nd para, line 1 To make this a true statement, number_match? need to be in codebreaker/game.rb, which is missing upto this point. Feel free to reach me at khan [dot] mohammad [at] acm [dot] org, if you need further clarification.--MOHAMMAD H KHAN - Reported in: P2.0 (25-Aug-12) PDF page: 77 This is a dummy. Throw me away. - Reported in: P2.0 (17-Jul-12) PDF page: 78 [Ruby 1.9.3p194; Cucumber 1.2.1; RSpec 2.11.0] Running cucumber on codebreaker_submits_guess.feature, shows 5 passing scenarios, not 14 scenarios. The code for all Scenarios: below the Scenario outline should be changed Examples: For example: Scenarios: no matches | code | guess | mark | | 1234 | 5555 | | Becomes.. Examples: no matches | code | guess | mark | | 1234 | 5555 | | --Coby Pachmayr - Reported in: P2.0 (21-Apr-12) PDF page: 80 Earlier in the book "before(:each)" was introduced and the lines were changed to use @output.should_receive. Now in this section the code is back to output.should_receive--NewToRubyRailsCucSpec - Reported in: B13.0 (19-Mar-13) PDF page: 80 Paper page: 92 undefined method `[]' for nil:NilClass (NoMethodError) ./lib/codebreaker/game.rb:30:in `exact_match?' ./lib/codebreaker/game.rb:17:in `block in guess' ./lib/codebreaker/game.rb:16:in `each' ./lib/codebreaker/game.rb:16:in `map' ./lib/codebreaker/game.rb:16:in `guess' ./features/step_definitions/codebreaker_steps.rb:36:in `/^I guess "(.*?)"$/' features/codebreaker_submits_guess.feature:18:in `When I guess "<guess>"' - Reported in: P2.0 (17-Jul-12) PDF page: 81 The book states that running the codebreaker_starts_game will fail because of the missing argument... however, it does not fail since a default is provided in the start method of Game. That is (in the game.rb file): def start(secret=nil) .. end If one removes the default (=nil) from the start arguments it will fail as the book indicates. --Coby Pachmayr - Reported in: P2.0 (01-Mar-12) Paper page: 88 missing change mark before: def initialize(secret, guess)--Carlos Silva - Reported in: P2.0 (05-Jul-12) PDF page: 91 I am using ruby --version ruby 1.9.3p194 (2012-04-20) [x86_64-linux] rspec --version 2.10.1 cucumber --version 1.2.1 The example code fails in This version of cucumber with the error shown below. The error can be resolved by removing the "#" from the #START_HIGHLIGHT & #END_HIGHLIGHT in the Feature section. The cucumber error message is: features/codebreaker_submits_guess.feature: Lexing error on line 12: ' Each position in the secret code can only be matched once. For example, a'. See http<colon>//wiki<dot>github<dot>com/cucumber/gherkin/lexingerror for more information. 
(Gherkin::Lexer::LexingError) /parser/parser.rb:31:in `parse' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/feature_file.rb:37:in `parse' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/runtime/features_loader.rb:28:in `block in load' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/runtime/features_loader.rb:26:in `each' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/runtime/features_loader.rb:26:in `load' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/runtime/features_loader.rb:14:in `features' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/runtime.rb:170:in `features' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/runtime.rb:46:in `run!' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/cli/main.rb:43:in `execute!' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/lib/cucumber/cli/main.rb:20:in `execute' /usr/local/lib/ruby/gems/1.9.1/gems/cucumber-1.2.1/bin/cucumber:14:in `<top (required)>' /usr/local/bin/cucumber:23:in `load' /usr/local/bin/cucumber:23:in `<main>' --Kevin Slade - Reported in: P2.0 (07-Oct-11) PDF page: 92 "Run the specs, and they should all pass. Run the scenarios, and you should see that twelve are passing, leaving only three failing scenarios to go." There are 14 tests in total where 11 now are passing (not 12), and 3 fail.--Kjetil Klaussen - Reported in: P2.0 (28-Feb-12) PDF page: 94 Chapter 7.3 is all about "Refactor to Express Intent". A minor improvement could be made to the refactored `number_match?` method to better convey the intent of the code. I suggest changing: def number_match?(guess, index) @secret.include?(guess[index]) && !exact_match?(guess, index) end ...to... def number_match?(guess, index) @secret.include?(guess[index]) unless exact_match?(guess, index) end While the "a && !b" expression makes sense to programmers familiar with C-style syntax, I would argue that "unless" is not only more appropriate for Ruby, but does a superior job of conveying the intent of the method.--Sean McCann - Reported in: P2.0 (03-Mar-12) Paper page: 100 I think that the marker_spec.rb code presented on this page leads one to think that there is only one context at this point in the "describe Marker do … end" when that is not true.--Carlos Silva - Reported in: P2.0 (09-Sep-11) PDF page: 102 It's being picky, but following the mantra that readability is paramount, isn't this rather cryptic? count + (number_match?(guess, index) ? 1 : 0) I'd write this instead: count += 1 if number_match?(guess, index) Or I'm just not a fan of the ternary operator ;)--Martin Svanberg - Reported in: P2.0 (01-Apr-12) Paper page: 103 @guess is a string being passed to Marker, map is not a method available to the String class, so use each_char instead, or convert the String to an Array--Andre Dublin - Reported in: P2.0 (22-Jan-14) PDF page: 117 marker = Marker.new('1234','1155') marker.number_match_count.should == 0 should be marker = Marker.new('1234','1155') marker.number_match_count.should == 1 because the first 1 in the guess does match the first 1 in the secret--Matías M.
- Reported in: P2.0 (07-Apr-12) PDF page: 118 You do realize that this new implementation could have been achieved by just reversing the order of the checking like this: @secret.include?(@guess[index]) && !exact_match?(index) reverse it to @guess.include?(@secret[index]) && !exact_match?(index)--João Soares - Reported in: P2.1 (12-Feb-14) PDF page: 139 The third paragraph starts with so you can refactor which is suspended in mid-air. --Bernard Kaiflin - Reported in: P2.1 (05-Jun-13) PDF page: 143 I try to implement all the suggestion you guys have provide but none of one work--Amrit Deep Dhungana - Reported in: P2.1 (25-Oct-12) PDF page: 145 In the middle of the page it says: "Assuming that Thing’s initialize() method does this and set_status() does as well, you can write the previous like this:" The "and set_status() does as well" should be removed since Thing.new will not return the return value of the initialize method but will return the newly created Thing instance.--Peter Lohmann - Reported in: P2.1 (12-Feb-14) PDF page: 145 In Ruby 1.8.7, the block passed to "yield Thing.new" is executed twice, and the block passed to given_thing_with is not executed. In Ruby 2.0.0 : block given to yield (SyntaxError). To make it work as expected, split the yield : result = Thing.new do |thing| … end yield result Another solution is : yield(Thing.new { |thing| … }) ---------- file to test it ---------- class Thing def initialize puts "in Thing initialize" yield self end def do_fancy_stuff(*p) puts "in do_fancy_stuff" end def set_status(p) puts "in set_status(#{p})" end end describe Thing do def given_thing_with(options) puts "in given_thing_with(#{options.inspect})" # yield Thing.new do |thing| # result = Thing.new do |thing| yield(Thing.new { |thing| puts "in block to Thing.new in given_thing_with, thing=#{thing.inspect}" thing.set_status(options[:status]) # end }) # yield result end it "should do something when ok" do puts "in test should do something when ok" given_thing_with(:status => 'ok') do |thing| puts "in block to given_thing_with in it ... ok, thing=#{thing.inspect}" thing.do_fancy_stuff(1, true, :move => 'left', :obstacles => nil) end end end ---------- output ---------- $ rspec thing_spec.rb --format doc Thing in test should do something when ok in given_thing_with({:status=>"ok"}) in Thing initialize in block to Thing.new in given_thing_with, thing=#<Thing:0x007fa964961aa8> in set_status(ok) in block to given_thing_with in it ... ok, thing=#<Thing:0x007fa964961aa8> in do_fancy_stuff should do something when ok Finished in 0.00045 seconds 1 example, 0 failures --Bernard Kaiflin - Reported in: P2.1 (03-Dec-13) PDF page: 149 Paper page: 139 Third paragraph starts with a sentence fragment: "----> so you can refactor <---- There are a couple of.... etc" (arrows added for emphasis) --Richard Murnane - Reported in: P1.0 (20-May-12) Paper page: 150 `yield Thing.new do; end` binds the block to `yield` instead of `Thing.new`. Need to use `yield Thing.new { ... }` instead.--David Chelimsky - Reported in: P2.1 (13-Feb-14) PDF page: 165 In section "Owned Collections", paragraph 5, line 2 : players_on(). 
When it receives a message, it doesn’t understand (like players_on()), ----> I would suppress the comma after When it receives a message [ meaning : When it receives a message that it doesn’t understand (like players_on()) ] --Bernard Kaiflin - Reported in: P1.0 (08-Jun-11) Paper page: 169 Third paragraph: Currently Reads: "...anything that begins with have_ to a predicate on the target object beginning with has_" have_ and has_ should be switched. It SHOULD read: "...anything that begins with has_ to a predicate on the target object beginning with have_" - Reported in: P2.1 (08-Dec-12) PDF page: 179 Book has test-specific subclasspattern (with no space between subclass and pattern) --Perry Smith - Reported in: P2.1 (14-Feb-14) PDF page: 182 Executing the example "describe WidgetsController". $ gem install rails … Successfully installed rails-4.0.2 I have added four missing do's and changed Widget to widget (except Widget.new()). WidgetsController PUT update with valid attributes redirects to the list of widgets (FAILED - 1) PUT update with valid attributes finds the widget (FAILED - 2) updates the widget's attributes (FAILED - 3) NoMethodError: undefined method `put' for #<RSpec::Core::ExampleGroup::Nested_1::Nested_1:0x007fb8a1bf5200> --Bernard Kaiflin - Reported in: P2.1 (13-Feb-14) PDF page: 183 In section Stub Chain, second paragraph : Article.recent.published.authored_by(params[:author_id]) ----> shouldn't it be article ? Also 3rd line from bottom : Article.stub(:recent).and_return(recent) Also Article.stub_chain on PDF page 184. --Bernard Kaiflin - Reported in: P2.1 (14-Feb-14) PDF page: 188 Section Custom Argument Matchers. I had difficulties to make an example with calculator.should_receive(:add).with(greater_than_3) working. During a Google search with GreaterThanThreeMatcher, I have found a Github page which was actually a copy of code/mocking/custom_arg_matchers_spec.rb. ----> it would be nice to add a green bar with this path before class GreaterThanThreeMatcher (fourth paragraph) and also on the next page before class GreaterThanMatcher. Note that I had to change module Spec to module RSpec. ----> and the error message for calculator = double('calculator') calculator.should_receive(:add).with(greater_than_3) calculator.add 3 is Double "calculator" received :add with unexpected arguments expected: (#<RSpec::Mocks::ArgumentMatchers::GreaterThanThreeMatcher:0x007f921430d218>) got: (3) --Bernard Kaiflin - Reported in: P1.0 (07-May-13) Paper page: 201 In the first paragraph on the page, the book implies that it is the stub that causes any subsequent calls to log() to be ignored: <quote> In the second example, we override that stub with the expectation. Once the expectation is met, any subsequent calls to log() are caught by the stub and essentially ignored. </quote> My testing indicates otherwise. It is the '.as_null_object()' that causes any subsequent calls to log() to be ignored. To prove this, simply create code that causes log() twice, observe that everything is ok, and then take out the '.as_null_object()' (and the "@database.(begin|end)" lines if u have them. Observe then that repeated calls to log() will FAIL, and that they are not "caught by the stub and essentially ignored"!--Jeff Lim - Reported in: P2.0 (09-Feb-12) PDF page: 203 "Article.stub_chain" should be "article.stub_chain" - (15-Feb-14) PDF page: 215 Section 16.1 Metadata ----> it would be nice to have a green bar with the path of the example : code/extending_rspec/metadata.rb before the example(s). 
--Bernard Kaiflin - Reported in: P2.1 (08-Dec-12) PDF page: 215 Suggestion to Pragmatic: it would be nice if I could add and edit previous "erratas". Now to the errata. This is the third one. This isn't a big deal. I'm just trying to give you more samples of what to look for. The book has: The output contains the contents of a hash with keys such as:example_group, :description,:location,:caller, and so on. There is a line wrap just before :description. The :foo words are in italics(?). Again, we see no space before the :foo words. Other examples, like on page 216: RSpec exposes a configuration object that supports the definition of global before, after, and around hooks, as well as hooks to include modules in examples or extend example group classes. format nicely (with "before", "after", "around", and "include" in a different font (tt?) HTH --Perry Smith - Reported in: P1.0 (18-Jun-11) Paper page: 221 The code presented to add a rspec:rcov rake task is no longer needed as it is added rails rspec gems. This should at least be mentioned for those using rails.--Harry Hornreich - Reported in: P1.0 (04-Oct-11) Paper page: 225 To run the example it says to type "rspec group_example.rb" but given the filename shown above this should be "rpsec focused_group.rb"--Nigel Lowry - Reported in: P2.0 (17-Jul-12) Paper page: 232 Under the "Matcher Protocol" subsubsection of subsection 16.7, under the "matches?" list item, the second sentence contains a typo: "Return true for a passing expection or false for a failure." (should be "expectation" not "expection") - Reported in: P2.0 (01-Jun-12) PDF page: 232 "spec_opts takes an array of strings" (first sentence right under the "About Code Coverage" sidebar) but the (non-deprecated) option's name is actually "rspec_opts".--Frederic BLANC - Reported in: P2.1 (08-Nov-12) PDF page: 237 "pending_count)" missing a parentheses, should be "pending_count()". I'm really enjoying this book so far. Thanks!--Alex Plescan - Reported in: P2.0 (01-Jun-12) PDF page: 238 network.rb require 'ping' ping has been deprecated after ruby 1.8.7 The book needs to show another way of doing ping for ruby 1.9.x--Jim Oser - Reported in: P1.0 (02-Dec-10) PDF page: 251 On the Kindle 3, the word "fiancé" in the "Cucumber Seeds" box is rendered as "fiancée". The PDF page # given above is actually the location number.--Shea Levy - Reported in: P2.0 (21-May-12) PDF page: 253 In the footnote, the URL to the Cucumber wiki is wrong. Cucumber has moved to another Github repository.--Ulrich Sossou - Reported in: P2.1 (16-Feb-14) PDF page: 258 Bottom of the page, the link in footnote 2 h__p://wiki.github.com/aslakhellesoy/cucumber/rdoc gives an error 404. --Bernard Kaiflin - Reported in: P2.0 (11-Jan-12) PDF page: 270 s/pending/undefined/ "Several things changed when we added the step definition. First, the scenario and step are no longer pending, but passing." The scenario and step were previously undefined rather than pending. If we had run the cucumber command after pasting the cucumber-provided snippet but before editing it, then they would have temporarily been pending.--ibroadfo - Reported in: P1.0 (02-Dec-10) PDF page: 273 In the Kindle edition, chapter 1 has footnotes 2,3 and 4 not no footnote 1. 2, 3 and 4 correspond to 1, 2, and 3 of the PDF edition. 
PDF page # given above corresponds to Kindle location.--Shea Levy - Reported in: P2.0 (15-Dec-11) PDF page: 274 At the beginning of the last sentence of the fifth paragraph (immediately before the "Tagged Hooks" subsection): "When we do,…" is somewhat confusing. Perhaps something like, "We can use a tagged hook if we don't want that hook to run for all scenarios." would be clearer.--Bruce Hobbs - Reported in: P2.0 (31-Jul-12) PDF page: 278 The url in the footnote does not work. This form doesn't let me input urls to this box so I can't give you the correct one. --Joseph Shraibman - Reported in: P2.0 (02-Apr-12) PDF page: 287 In 19.2 "Setting up a Rails 3 Project", running 'bundle install' gives the following error: % rails generate rspec:install WARNING: Cucumber-rails required outside of env.rb. The rest of loading is being defered until env.rb is called. To avoid this warning, move 'gem cucumber-rails' under only group :test in your Gemfile create .rspec create spec create spec/spec_helper.rb This appears to be a change introduced about 6 months ago (see blog post here): See github.com/cucumber/cucumber-rails/pull/171 (can't provide a proper URL because of the submission filter). Setting up a separate group with :test only and moving 'gem "cucumber-rails"' into this group gets past this issue. e.g. group :test do gem "cucumber-rails", ">= 0.3.2" end --Doug Morris - Reported in: P2.0 (21-Jul-11) PDF page: 288 Cucumber on rails 3 requires database_cleaner to be added explicitly to the gemfile. Following the steps here gives an error that database_cleaner is uninitialized.--Anand C Ramanathan - Reported in: P1.0 (05-Jul-12) Paper page: 296 Content of simulated_browser\05\app\models\genre.rb I downloaded from the website should be: class Genre < ActiveRecord::Base attr_accessible :name end Rather than: class Genre < ActiveRecord::Base end The missing line causes the When step and the scenario to fail. --Shaun Dashjian - Reported in: P1.0 (23-Aug-11) PDF page: 326 “The Webrat’s default timeout” should probably drop the “the”, agree? - Reported in: P2.1 (06-Nov-12) PDF page: 338 The book says "The assigns() method returns a hash representing instance variables that were assigned to the view by the controller. Run rake spec:controllers, and the new example fails with this: expected #<Message:0x81b0b900 @ got nil however my failure message was: /Users/akkdio/.rvm/rubies/ruby-1.9.3-p194/bin/ruby -S rspec ./spec/controllers/messages_controller_spec.rb ....F* Pending: MessagesController POST create when the message fails to save successfully renders the new template # Not yet implemented # ./spec/controllers/messages_controller_spec.rb:38 Failures: 1) MessagesController POST create when the message fails to save successfully assigns @message Failure/Error: post :create ActionView::MissingTemplate: Missing template messages/create, application/create with {:locale=>[:en], :formats=>[:html], :handlers=>[:erb, :builder, :coffee]}. Searched in: * "#<RSpec::Rails::ViewRendering::EmptyTemplatePathSetDecorator:0x00000102174f78>" # ./spec/controllers/messages_controller_spec.rb:34:in `block (4 levels) in <top (required)>' Finished in 0.10912 seconds 6 examples, 1 failure, 1 pending I added the following as the book suggests: def create @message = Message.new(params[:message]) if @message.save flash[:notice] = "The message was saved successfully." ➤ redirect_to :action => "index" ➤ ➤ else ➤ render :action => "new" ➤ end end and the example passes. 
I don't know why but I never get the failure the book says I should get. --akkdio - Reported in: P2.1 (06-Nov-12) PDF page: 340 the book text indicates that there is a change on the line: context "when the message fails to save" do it "assigns @message" do ➤ message.stub(:save).and_return(false) post :create assigns[:message].should eq(message) end Yet this is the same code as on page 339: it "assigns @message" do message.stub(:save).and_return(false) post :create assigns[:message].should eq(message) end - Reported in: P2.0 (11-Jul-11) Paper page: 347 Example rails_controllers/messages/13/spec/controllers/messages_controller_spec.rb is missing the line Message.stub(:new).and_return(message). This winds up being fixed in following examples due to the refactoring in the "Tidy Up" section.--Cailin Nelson - Reported in: P2.0 (11-Jul-11) Paper page: 348 In the example rails_controllers/messages/15/spec/controllers/messages_controller_spec.rb the before block needs job.stub(:save).and_return(true) to make the "saves the message" test pass.--Cailin Nelson - Reported in: P2.0 (03-Feb-12) PDF page: 353 In this section, the messages_controller is altered so that it stores a notice in flash when the message is successfully saved. In the examples, the #save method has not been stubbed, so we're relying on the behavior of Mock#as_null_object to return a truthy response. In rspec-rails 2.8.1, there is a bug which causes as_null_object to return nil instead (GitHub rspec-rails issue 488). The bug was fixed in commit 7fa15c4157, but has yet to be released. So when you run rspec, the test fails. I checked the book errata and this wasn't mentioned it, and googling eventually let me to the bug and its fix on GitHub. I thought I'd mention it so that you can list it on the errata page while a fixed release is pending. Thanks! Justin Relevant text: Download rails_controllers/messages/17/app/controllers/messages_controller.rb def create message = Message.new(params[:message]) if message.save flash[:notice] = "The message was saved successfully." end redirect_to :action => "index" end Run that, and you’ll see that it passes. Now we have two passing exam- ples that specify the happy path, so let’s move on to examples of what should happen when the save fails. --Justin Force - Reported in: P1.0 (31-Mar-12) Paper page: 353 First paragraph reads: "The most obvious bit is the duplication in the past two examples" "past" should be "last"--Nigel Lowry Stuff To Be Considered in the Next Edition - Reported in: P1.0 (16-Dec-10) PDF page: 1 # Quick Thoughts on "The RSpec Book" My rating: 10/10. Heavens gift book! Really. That said, I have a few thoughts... ## On brute-forcing I somehow felt sad because "disciplined approach" detailed part from previous versions was omitted/under-emphasized. It was a great value for me to read it, and definitely helped my career a lot, per se, "you're not silly, you're on the right path". ## On testing Test everything that could possibly break. A lot of debate here. Not even a mention in the book. I know it's hard to talk on this, but it's worth at least mentioning it. ## On workflow I'm unaware of the constraints, but I would like to see a section of BDD DVCS workflow and deployment. I'm not sure if they're on scope, but feature branches, staging servers, deployment from day one, and a few other topics are often under-valued or unpracticed on communities, I'm a strong believer that stuff works better that way, and I was wondering if there are any plans about it. Agile++. 
## On autotesting Also, although the some implementations are a little bit buggy at this time, I would also suggest to focus on automated testing on your development machine, projects such as autotest-[notification], spork and possibly guard, and the rspec/cucumber support are worth mentioning. I'd also like to see to some point CI scenarios. ## On mocks On test doubles, I have almost nothing to say. Great coverage. What's not clear to me by reading it, it's the role of mock objects on high level testing, I mean the cucumber part. I haven't faced this situation, but I do have the doubt. I mean, a feature is supposed to be integrated, it's supposed to test your entire stack, but might also be supposed to be contained, isn't? My doubt here is probably just an indicator of bad design, but what happens when you feel the need for a mock object on your features? Their role on rspec is well understood, as a matter of fact, I'm confident with them since I've got another layer of testing that allows me to see how are the parts working, that's why it seems a big no-no to me to use of them on cucumber, but again, what do you when you feel the need? ## On milestones An excelent part that one about release planning and milestones: This gets in, that gets out. A little more emphasis could be added into this customer-driven iteration negotiation, and even if they're other sources that deal with the topic I would like to see more about reacting to change, new features breaking schedule, etc. Absolutely awesome the part of one-week iterations, I have to say I was shocked, I've been ranging on 2-4 weeks iterations, definitely I must try that in the near future. ## On Cucumber I've experienced in the past, begginners at BDD have problems with Cucumber, because they tend to be much specific and they abstract away from the business requirements on writing the scenarios, they get to deep into implementation, and they even have problems with Regexps because of that. Although that though can be extracted from the book, I've felt somehow it lacks a little more emphasis on this part. ## On Rails BDD For the rails part, I must only say I was one of the ones that were expecting a real project in the "Agile Web Development with Rails 4" way. Totally omitted capybara, I assume there's a very good reason on doing that. What about database_cleaner? Wasn't database_cleaner on the Gemfile sufficient? On view testing, three state models, only one mentioned: there are factories, fixtures and mocks. Each with it's own advantages and disavantages, but certainly each of them are widely used. Suggest something like "def and; self; end" to rspec authors, allowing: mock_model("Message").as_new_record.and.as_null_object # instead of # mock_model("Message").as_new_record.as_null_object I don't like script/rails at all, who does that? rails g controller messages instead of: script/rails generate controller Messages Also, seems like --no-helper is no longer valid. A rails bug maybe? It displays '--helper', and even that isn't allowed. Migrations: I feel something missing here. Shouldn't be possible to explain at least something like nulldb or similar, can't remember exactly, adding to migrations continously while a model evolves, or generating a model from a set of expectations on what we might need? Shouldn't we favoring field_id:references instead of a simple field_id:integer? The shoulda people will feel a little offended, not a single mention to their matchers ;) "is valid with valid attributes" <- pointless? This is not behavior. 
What makes valid attributes? On associations, agree you shouldn't spec them directly. ## On Progressive Enhancement (or the lack of) Based on experience, and in this book's scope, I was sad that progressive enhancement wasn't mentioned at all in the book. We aren't all the time building RIAs in the strict sense, so this model of FE developing actually helps you on testing your app. It brings a lot of benefits, but from a testing standpoint (or a lazy standpoint maybe), it allows you to focus on testing your app entirely without javascript at first, and then adding those automated browser scenarios only on those enhancements. Ex. if you've got a file upload field which gets replaced by a drop area on a drag-drop UX, you just abstract away from that initially, make sure the form is working, the views are rendered and the file is uploaded, then your selenium tests complements that. Narrowed down concept, simple enough, but missing from the book. ## Colophon Finally, I wouldn't expect you to take my suggestions to the line, you're far better than that, but since this has become (or I expect it to become) the point of authority for BDD, I believe some of those points will fill the gaps in such an important book. Nevertheless, a terrific work, a must-have book, and an amazing source of knowledge. Didn't change the way I program, I've been on this for a while now, but it certainly enhanced the way I do it, I feel a better developer already. Keep going, since you guys are my model of inspiration. Thanks ** NaN ;) On Cucumber part, there are still examples which use b/w instead of +/-.--Adrian Perez - Reported in: B16.0 (24-Nov-10) PDF page: 22 In section 1.2, end of the second paragraph: the word "is" should be removed from "...but even then, they generally mean is that it's stored somewhere and they can get it back."--Brian Darrell Green - Reported in: P1.0 (05-Dec-10) PDF page: 34 The phrase "go ahead and <x>" is used too much throughout the book. It would improve the writing to remove that part and just state the "<x>". For example, on page 34, the phrase could be removed from the sentence beginning with "Go ahead and add a step_definitions directory."--Jimmy Cuadra - Reported in: P1.0 (03-Dec-10) PDF page: 43 So, now we have our release plan with 3 stories. It’s time to start breaking it down into iterations. Yet only 2 stories are worked out and page 52 reads: We picked out two stories that will result in working software sufficient to interact with it in a meaningful way. Poor third story…--Thomas Maas - Reported in: B16.0 (24-Nov-10) PDF page: 59 Under section 4.3 Test Double: "A fake object that pretends to be real object..." should be "A fake object that pretends to be a real object...".--John Topley - Reported in: P1.0 (03-Dec-10) PDF page: 67 Throughout the PDF, output is colored as though it were code. On page 67, in "prompts for the first guess" the "for" is highlighted as a keyword; in "Codebreaker::Game#start", the # is treated as the start of a comment. This could be overlooked were it not for the explicit instruction to "use the --color flag".--Thom Raybold - Reported in: P1.0 (23-Dec-10) Paper page: 73 The path cb/325/... threw me for a minute, after seeing a steady progression of 27,28,30,32. I guess you meant 32.5. In the next edition, I would make sure the numbers always progress in sequence.--Dale Visser - Reported in: P1.0 (10-Mar-11) Paper page: 85 Recommend Enumerable#count, not enumerable#inject. The #count method does exactly what's required here. 
Using #inject makes the code harder to read to save a temporary variable. Using #count still gets rid of the temporary variable, and improves readability to boot.--Sheldon Hearn - Reported in: B16.0 (27-Nov-10) PDF page: 106 "Create a marker.rb file in lib/codebreaker/, open the Codebreaker module, and copy the Marker into that file." This is unclear to me. I think it means simply "Create a marker.rb file in lib/codebreaker and move the Marker class into that file." With no module around it? That works, but I'm not sure it's just what was meant. It's a Ruby-newbie issue but it might be worth trying to clarify a bit.--Mike Blyth - Reported in: P1.0 (14-Mar-11) PDF page: 120 In Ruby 1.9.2 there isn't a String#map method any more. but in this example you could use String#each_char instead. Perhaps you could mention it in a future print run--Jens Fahnenbruck - Reported in: P1.0 (01-Dec-10) PDF page: 130 The claim that exponential cost increase in later bug fixes comes from civil engineering would be nice to have footnoted with a source--Greg Cox - Reported in: P1.0 (01-Feb-11) PDF page: 152 Your code snippet formatter has decided that the text following => is ruby code and highlighted "require" and "and" inappropriately in the second line: describe User, "should require password length between 5 and 40" { ... } => User should require password length between 5 and 40 --Jonathon Abbott - Reported in: B16.0 (14-Nov-10) PDF page: 167 There's a missing space after the comma: "It /is/ DRY,/and/ it's so complicated."--Adam Spiers - Reported in: B16.0 (14-Nov-10) PDF page: 170 I think "nontechnical" should be hyphenated: "non-technical"--Adam Spiers - Reported in: B16.0 (25-Nov-10) PDF page: 176 Ruby on RailsRuby on Rails extends ...--Rich Morin - Reported in: P1.0 (03-Apr-11) Paper page: 181 In "/... for Aslak/", the word for should not be printed like a keyword. This occurs on pages 181, 184 and 200(4).--Andreas Kemkes - Reported in: B16.0 (28-Nov-10) PDF page: 183 There should be no comma in "When it receives a message, it does not understand" on line 2. - Reported in: B16.0 (28-Nov-10) PDF page: 193 "All of the other patterns we’ll talk about and you’ll read about elsewhere are usually variations of method stubs and method expectations..." Does this means that *every* other pattern is *usually* (but not always) a variation, or the *nearly every* (but not all) patterns are variations? There is a difference in meaning.--Mike Blyth - Reported in: B16.0 (26-Nov-10) PDF page: 200 "The first example specifies that the WidgetsController finds the widget, so we set an expectation that the Widget class should receive the find() method." This should be "find() message" rather than "find() method" because messages are received, not methods.--John Topley - Reported in: P1.0 (19-Dec-10) PDF page: 230 The autotest command appears to be part of the ZenTest gem, which is not listed on page 18. While autotest appears to have been its own gem for a while, it was also not listed on page 18. Please consider adding the appropriate gem to page 18 or mentioning where to find it on page 230.--Daniel Hedlund - Reported in: B16.0 (28-Nov-10) PDF page: 232 RSpec::Core::RakeTask.new do |t| t.rspec_opts = ["--color"] end spec_opts takes an array of strings,... Should "spec_opts" be "rspec_opts"?--Mike Blyth - Reported in: B16.0 (29-Nov-10) PDF page: 238 Under the Exclusion heading, "...we tend to try to disable them so we can run rest of the suite..." 
should be "we tend to try to disable them so we can run the [sic] rest of the suite..."--John Topley - Reported in: P1.0 (03-Apr-11) Paper page: 280 In ":require", the word require should not be printed like a keyword.--Andreas Kemkes - Reported in: P1.0 (30-Dec-10) PDF page: 287 The autotest/discover.rb would not be any more generated since this commit f47e87b39a6f2bc24b71d701c8b509fd6e32acb1 in the rspec-rails repository.--Daniel Spangenberg - Reported in: B16.0 (29-Nov-10) PDF page: 300 "We’re going to focus on Cucumber with Webrat and Selenium, so we’re going to skip over some of the low-level details that we use RSpec for in practice." Awkward sentence with "for in practice." Consider rewording at least to "that in practice we use RSpec for," if not the more formal "for which in practice we use RSpec."--Mike Blyth - Reported in: P1.0 (03-Apr-11) Paper page: 313 In "/... logged in as .../", the word in should not be printed like a keyword.--Andreas Kemkes - Reported in: P1.0 (05-Dec-10) PDF page: 338 The messages example in the rails_view chapter works with rails 3.0.0 but does not work when running on rails 3.0.3. The refactoring of new.html.erb_spec.rb on pdf page 338 results in a an unintended failure: Failures: 1) messages/new.html.erb renders a text field for the message title Failure/Error: form.should have_selector("input", type: "text", name: "message[title]", value: "the title")02adcd18>"><input id="message_submit" name="commit" type="submit" value="Save"> </form> --Jeff Hutchison - Reported in: P1.0 (25-Jan-11) PDF page: 344 "Use Webrat’s have_xpath( ) and have_selector( ) matchers for view specs." I believe this is misplaced and should be moved to the summary of Chapter 21, where they are actually talked about and used.--Ted Milker - Reported in: B16.0 (29-Nov-10) PDF page: 367 The database migration on this page specifies "recipient_id": "script/rails generate model message title:string text:text recipient_id:integer" On PDF page 371 and 372, the reference is to "recipient" (no "_id"). Admittedly I'm a beginner in Ruby but for me this example fails. For context, I'm running Rails 3.0.3 and Ruby 1.9.2p0.--Sully Syed
https://pragprog.com/titles/achbd/errata
CC-MAIN-2016-18
refinedweb
7,570
67.76
Attiny85-shiftlcd Direction Detection Introduction: Attiny85-shiftlcd Direction Detection This instructable combines a few instructables projects that I have done into one. It starts off with a 5V power supply breadboarded from a 9V battery, a 7805 regulator, a switch, two 10uF electrolytic caps, a 103 ceramic cap and, just for visual confirmation, an LED and a 330 ohm resistor to show when the power is on. Then the LCD is wired up to a 74HC595 shift register. The LCD via shift register is attached to the attiny85 along with a 100k Ohm potentiometer. Before I actually hooked up to the attiny I had to wire up a little attiny programmer to the Arduino and upload the sketch. Then I could transfer the ATtiny to the project breadboard. (I uploaded and transferred too many times during debugging.) If you have questions or want to discuss this project please comment. I would love to get some feedback.

Step 1: Hardware This instructable consists of several smaller instructables working together. 5v 7805 power supply 1602 LCD 74HC595 shift register for LCD* Attiny85 micro controller over Arduino programmer *I've noticed while I was making the shift register for the LCD that there is a transistor that the shift register controls to turn the backlight of the LCD on/off. I wasn't sure if it would be needed as I elected to have my LCD backlight stay on.

Step 2: Software Other than the code itself, the only additional need is the LiquidCrystal595.h library, which you can download and install to your Arduino IDE. There are several sources. There are a few different ways to load the code to the Attiny85 microcontroller but in this instructable the Arduino as ISP method was used. I've seen in some publications that a 10uF capacitor was needed but for me it was not. To save yourself some hassle, be sure to connect your arduino USB directly to your computer, and double check the following:
Board: ATtiny
Processor: ATtiny85
Clock: 1MHz internal (##DO NOT SELECT EXTERNAL UNLESS YOU KNOW WHAT YOU'RE DOING##)
Port: this may vary
Programmer: Arduino as ISP*
*Follow directions on how to use the Arduino as ISP. Don't connect the ATtiny85 for programming until all of the above conditions are met.

Sketch

#include <LiquidCrystal595.h> // include this library; it provides the LiquidCrystal595 class used below (not the plain LiquidCrystal.h)

LiquidCrystal595 lcd(0,1,2); // datapin, latchpin, clockpin

int pot = A2; // input potentiometer
int val;      // current pot value
int prev;     // previous pot value

void setup()
{
    lcd.begin(16,2);     // 16 characters, 2 rows
    lcd.clear();         // clear screen
    pinMode(pot, INPUT); // declares A2 an INPUT
}

void loop()
{
    prev = val;
    val = analogRead(pot);          // read the pot as the value to be mapped
    val = map(val, 0, 1023, 0, 99); // maps the val to 0-99; this could be any range you choose

    lcd.setCursor(0,1);  // 2nd row far left
    if (val < prev) {    // CCW direction
        lcd.print("<<<<");
    } else
        lcd.print("----");

    lcd.setCursor(6,1);  // 2nd row middle
    if (val == prev) {   // no turn
        lcd.print("STOP");
    } else
        lcd.print("----");

    lcd.setCursor(11,1); // 2nd row far right
    if (val > prev) {    // CW direction
        lcd.print(">>>>");
    } else
        lcd.print("----");

    lcd.setCursor(7,0);
    lcd.print(val);      // prints the mapped val
    delay(10);
    lcd.print(" ");      // blanks out a leftover digit so it doesn't print what isn't supposed to show
    delay(1);
}

Step 3: Results The potentiometer's analog reading is sent to the LCD via the shift register, noting a direction of left/CCW or right/CW (relative to the potentiometer's orientation) by comparing two variables holding the current value and the previous value. When both variables are identical, representing no change, the LCD displays STOP.
I like my breadboards to be neat and modular for easy storage and hookup. It helps when you're trying to troubleshoot and if other people are trying to do the same. I'm not posting any schematics because all of the different hardware modules are fairly common and there are instructables on them. I may at a later time if requested. Just leave a comment. I'm not sure of any real world applications this instructable would relate to, but I'm sure with some modification to the code you could display any kind of analog reading, whether it be temperature, light intensity, distance, pressure, etc. *If you replace the pot with a joystick and modify the code to display the analog reading, you can get the single-axis center of the joystick. I've noticed some joysticks with center readings that were 513 (Dominion-Network's Arduino Joystick with LCD). My Parallax joystick was at 509 and 523. Here's a video demonstration via Instagram @kayohzee Very good work, enlightening and versatile. I know it's not anything spectacular but I mostly did the instructable to demonstrate the modular aspect of doing projects like these. It was also my first time programming for the attiny85. Lots of research and troubleshooting throughout. I don't know if there would be any real world applications for a project like this but it was fun and a genuine learning experience. Very cool project, nicely done. First instructable too!
http://www.instructables.com/id/Attiny85-shiftlcd-Direction-Detection/
CC-MAIN-2017-47
refinedweb
844
62.98
Anthony DeRobertis wrote: > On Feb 12, 2004, at 12:45, Henning Makholm wrote: >>> Putting our own forms into TEI's namespace would be similar to >>> claiming that they said something they did not. >> >> No it isn't - not unless you *explicitly* claim that it was TEI who >> said it. > > If TEI's namespace is something like 'tei-consortium' and there is no > technical reason things have to be added there (I don't think there is; > aren't namespaces just to prevent collisions?) then I think it is > perfectly reasonable to argue that things in there appear to originate > with the TEI Consortium. > > I think the TEI Consortium would have a reasonable case against --- > under trademark law --- someone who did it without making it *very* > clear that his extensions are non-standard and are not endorsed by the > TEI Consortium. > > Is there any good reason that I'd ever want to infringe on TEI's > namespace, as opposed to using my own? I was under the impression that > the whole point of namespaces was to make it clear what standard > something comes from, and to prevent collisions. > Claiming endorsement by TEI without permission is definitely not allowed, and this restriction is perfectly DFSG-free. However, trademarks cannot be applied to functional elements, and a namespace seems like a functional element, since a program reading the XML/SGML could check the namespace and fail if it is not a given value. Any restriction that attempts to require modified specifications or files to be distinguishable from the originals _by a program_ would be non-free, and would not be allowed by trademark law anyway. The desired restriction here, which is perfectly free, is that a _person_ can distinguish between the original and the modified version, because of the lack of endorsement. - Josh Triplett
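To make the technical point concrete: a consuming program really can treat the namespace as a functional switch. Here is a minimal sketch in Java, assuming a namespace-aware DOM parse; the URI below is a made-up placeholder, not TEI's actual namespace:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class NamespaceCheck
    {
        // Placeholder value for illustration only -- not the real TEI namespace URI.
        static final String EXPECTED_NS = "http://example.org/ns/tei";

        public static void main(String[] args) throws Exception
        {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true); // otherwise getNamespaceURI() returns null
            Document doc = factory.newDocumentBuilder().parse(new File(args[0]));

            String ns = doc.getDocumentElement().getNamespaceURI();
            if (!EXPECTED_NS.equals(ns))
                throw new IllegalArgumentException("unexpected root namespace: " + ns);
        }
    }

A program like this fails on any document whose root element is not in the expected namespace, which is why the namespace behaves as a functional element rather than purely as a mark of origin.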
https://lists.debian.org/debian-legal/2004/02/msg00220.html
CC-MAIN-2017-30
refinedweb
301
53.34
On Tue, Dec 4, 2012 at 9:58 PM, Don Moir <donmoir at comcast.net> wrote: >> On date Tuesday 2012-12-04 04:22:06 -0200, Ramiro Polla encoded: >>> >>> Hi, >>> >>> I've sent this patch before, but I had forgotten why I had written it. >>> Thanks to Don Moir for pointing out why it was necessary. >>> >>> Ramiro >> >> >>> From d2d9631a110bbb28d5aa2082fe508df5739543ac Mon Sep 17 00:00:00 2001 >>> From: Ramiro Polla <ramiro.polla at gmail.com> >>> Date: Tue, 4 Dec 2012 02:29:43 -0200 >>> Subject: [PATCH 1/2] dshow: fix return code when opening device >>> >>> Successfully opening a device altered the ret variable, making the >>> function >>> not cleanup properly and return an incorrect value for errors that >>> happened >>> afterwards. >>> --- >>> libavdevice/dshow.c | 18 ++++++++---------- >>> 1 files changed, 8 insertions(+), 10 deletions(-) >>> >>> diff --git a/libavdevice/dshow.c b/libavdevice/dshow.c >>> index ea01b2a..18a4ee9 100644 >>> --- a/libavdevice/dshow.c >>> +++ b/libavdevice/dshow.c >>> @@ -929,20 +929,18 @@ static int dshow_read_header(AVFormatContext >>> *avctx) >>> } >>> >>> if (ctx->device_name[VideoDevice]) { >>> - ret = dshow_open_device(avctx, devenum, VideoDevice); >>> - if (ret < 0) >>> - goto error; >>> - ret = dshow_add_device(avctx, VideoDevice); >>> - if (ret < 0) >>> + if ((r = dshow_open_device(avctx, devenum, VideoDevice)) < 0 || >>> + (r = dshow_add_device(avctx, VideoDevice) < 0)) { >>> + ret = r; >>> goto error; >>> + } >> >> >> Isn't this perfectly equivalent to the pre-existing code? The only >> difference is that in case of success it sets the ret value to the >> return code of dshow_open/add_device, which is finally returned (while >> the other failures, e.g. if (!ctx->mutex), don't change the default >> return value of 0, so won't cause a failure, is this intended?). > > > The problem with the pre-existing code is that ret has been set to a zero > value indicating success. If it fails after the above chunk of code it just > does a goto error with a successful return (ret) value. That patch has a ')' > wrong in 2 places though. > > When a device it already open in another app and you try to open it in > ffmpeg what would happen is all would succeed until you hit the > IMediaControl_Run statement which would fail and return success to the > calling function since the error ret value was over written with a success > value. Patch updated. -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-dshow-fix-return-code-when-opening-device.patch Type: text/x-patch Size: 1621 bytes Desc: not available URL: <>
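The ')' slip Don points out is the classic precedence trap: in C, r = dshow_add_device(avctx, VideoDevice) < 0 evaluates the comparison first and assigns 0 or 1 to r, so the if still fires but ret loses the real error code. For what it's worth, here is the same pattern sketched in Java (function names are hypothetical stand-ins), where the compiler happens to reject the broken form because the comparison yields a boolean:

    public class PrecedenceTrap
    {
        static int openDevice() { return -1; } // stand-in: negative means failure
        static void cleanup() { }

        public static void main(String[] args)
        {
            int r;
            // r = openDevice() < 0;      // rejected by javac: a boolean cannot be
            //                            // assigned to an int -- the C bug, caught
            if ((r = openDevice()) < 0) { // correct: parenthesize the assignment,
                cleanup();                // then compare its result against zero
            }
        }
    }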
http://ffmpeg.org/pipermail/ffmpeg-devel/2012-December/135340.html
CC-MAIN-2016-22
refinedweb
391
61.06
The basic idea is this: write your own sorting method without using indices (square bracket notation [ ]). For example, in a normal sorting method you might want to swap positions i and j, with code like this:

n[i], n[j] = n[j], n[i]

This is not allowed. As a further restriction, I am not allowing any square brackets in your code, even if they aren't indexing anything. Time to think outside the box a bit. For example, when you have written your function, use a random number generator to test it:

import random

for i in range(5):
    sortedIntegers = mySortingMethod(list(random.randint(0, 100) for x in range(20)))
    print sortedIntegers

....

[1, 2, 2, 11, 11, 15, 16, 18, 23, 27, 35, 38, 39, 57, 57, 57, 64, 81, 81, 97]
[0, 5, 6, 6, 15, 21, 21, 35, 46, 49, 62, 62, 65, 67, 70, 73, 78, 80, 89, 90]
[0, 0, 6, 14, 14, 27, 31, 33, 49, 52, 53, 57, 58, 60, 69, 86, 90, 93, 95, 100]
[2, 8, 8, 15, 20, 20, 22, 22, 33, 35, 37, 41, 44, 44, 45, 70, 79, 84, 88, 96]
[2, 8, 12, 15, 19, 19, 30, 37, 40, 41, 46, 51, 60, 63, 65, 75, 79, 79, 92, 99]

A few rules:
* It must be a single top-level function (nested helper functions are fine) that sorts a random LIST of integers into ascending order
* Built-in sorting functions are not allowed, nor functions from another module not packaged with Python
* There are allowed to be repeats in the random sequence of integers, it must deal with this
* Right now, only Python entries will be accepted, I may open it up to other languages in the future, implementations in other languages are fine but will not be judged in the challenge.
* Python 2 or 3 is fine
* The winner will be the person who enters the fastest sorting method timed by me, there is also an award for the most creative submission.

After a few entries or a bit of interest I'll release the solution I've already written, for you all to have a peek at. Happy Challenging! Let's see what you can do.

EDIT: I should really set a deadline; 2 weeks from today should be good to get plenty of entries, so Sunday 16th September 2012 will be the cut off.
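Not an entry (only Python is judged, per the rules), but for anyone porting the exercise to another language, here is a rough sketch of the same constraint in Java: a selection-style sort that never uses an index or a square bracket anywhere in the source. The class and method names are my own invention, not part of the challenge:

    import java.util.LinkedList;
    import java.util.List;

    public class NoIndexSort
    {
        // Repeatedly move the smallest remaining element into the output list.
        // LinkedList lets us remove by value, so no element is ever indexed.
        static List<Integer> mySort(List<Integer> input)
        {
            LinkedList<Integer> rest = new LinkedList<Integer>(input);
            LinkedList<Integer> sorted = new LinkedList<Integer>();
            while (!rest.isEmpty()) {
                int min = rest.peekFirst();
                for (int v : rest)
                    if (v < min) min = v;
                rest.removeFirstOccurrence(min); // duplicates come out one at a time
                sorted.addLast(min);
            }
            return sorted;
        }
    }

Like most remove-by-value approaches it is O(n^2), so it would not win the speed prize, but it shows the flavor of sorting without subscripts.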
http://www.dreamincode.net/forums/topic/290729-sorting-without-indices-challenge/
CC-MAIN-2016-44
refinedweb
414
63.26
When running the binary it asks for an input to validate. Entering "test" returns "nooooh. try harder!". So I thought that the right flag would be the valid input. Putting a single space or multiple characters separated by a space returns various bash errors. Firing up gdb shows that various subprocesses are spawned: first an instance of /bin/bash is spawned, after that guess3 is spawned again, then another instance of /bin/bash is spawned. At this point we are asked for the input. So I thought that maybe the second bash call actually passes an inline bash script which is then evaluated and does the input check. So, I fired up gdb again, put a breakpoint on subprocess spawning, and after the first call to /bin/bash I exchanged /bin/bash with a simple C program which simply prints all parameters, and continued the execution.

#include <stdio.h>

int main(int args, char *argv[])
{
    int i = 0;
    /* print every argument the program was called with, one per line */
    for (i = 0; i < args; i++)
        printf("\n%s", argv[i]);
    return 0;
}

This reveals that /bin/bash is called with the parameters ["./guess3", "-c", " *4096 spaces* #!/bin/bash\n\nread -p \"Your input: \" input\n\nif [ $input = \"HV19{Sh3ll_0bfuscat10n_1s_fut1l3}\" ] \nthen\n echo \"success\"\nelse \n echo \"nooooh. try harder!\"\nfi\n\n", "./guess3"]. So the flag is HV19{Sh3ll_0bfuscat10n_1s_fut1l3}.
https://blog.sebastianschmitt.eu/challenges/hackvent-2019/hv19-10-guess-what/
CC-MAIN-2022-21
refinedweb
217
63.39
As a developer, I spend a lot of time watching events in the Event Logs. The Event Logs provide quick tracing/debugging functionality for application developers, especially where applications run in non-graphical environments (like the BizTalk environment). Some people even use it to store objects in a serialized state. As such, it proves to be a huge source of information. I've grown tired of Microsoft's standard Event Log viewer (the snap-in for MMC). Smoothy is still a work in progress; its current feature set is described below. This article provides an event log viewer control (which incidentally inspired a couple of layout ideas), but it's always fun to write your own, isn't it? Event Log entries are stored in the registry, together with its associated log and the application sources registered for that log. The Event Logs can be accessed through several classes found in the System.Diagnostics namespace in .NET. The main classes used are System.Diagnostics.EventLog and System.Diagnostics.EventLogEntry. They provide an easy to use, intuitive interface for read and write access to the Event Logs, either on the local machine or a remote machine through WMI. Using it? Easy, run it. Auto-refresh is not supported at the moment (although the code is in there), because the DataGridView doesn't update (paint) correctly in this case when adding new items. Custom painting for the DataGrid might be needed to solve that issue. Clicking on the information, warning, etc., buttons filters by event entry type. Searching can be toggled by the search buttons next to the search textbox. The application creates event logs by registering a user supplied source in the new event log. Deletion of custom event logs is possible, but I wouldn't go about deleting every log you can find - some applications might depend on them. The column sorting still needs some work. Microsoft's Event Log viewer has a "dual" sorting mode that it achieves by sorting the column requested and sorting the index fields on the entries in the same direction as the sorted column. Sorting by two columns on a DataGridView isn't possible, so if possible sort on the index column (the default sorting column) to ensure the right chronological order. This program was born out of an interest in testing the new functionality of the DataGridView in .NET 2.0. It makes extensive use of the BindingSource class to provide sorting and filtering (job well done on this, Microsoft!). Accessing multiple entries in the event log seems to be the slowest operation possible. Performance was improved in version 1.1 by using a mixture of WMI and the standard EventLog classes: a WMI query is executed to retrieve only the relevant information for display (without the messages), and the messages are then lazy-loaded on the SelectionChanged event provided by the DataGridView. Some features are still outstanding for Smoothy. This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)
http://www.codeproject.com/Articles/15201/Smoothy-Event-Log-Viewer-1-2?msg=3165143
CC-MAIN-2013-48
refinedweb
541
54.63
- CLASSES, SUPERCLASSES, AND SUBCLASSES
- Object: THE COSMIC SUPERCLASS
- GENERIC ARRAY LISTS
- OBJECT WRAPPERS AND AUTOBOXING
- METHODS WITH A VARIABLE NUMBER OF PARAMETERS
- ENUMERATION CLASSES
- REFLECTION
- DESIGN HINTS FOR INHERITANCE

Chapter 4 introduced you to classes and objects. In this chapter, you learn about inheritance, another fundamental concept of object-oriented programming. The idea behind inheritance is that you can create new classes that are built on existing classes. When you inherit from an existing class, you reuse (or inherit) its methods and fields and you add new methods and fields to adapt your new class to new situations. This technique is essential in Java programming. As with the previous chapter, if you are coming from a procedure-oriented language like C, Visual Basic, or COBOL, you will want to read this chapter carefully. For experienced C++ programmers or those coming from another object-oriented language like Smalltalk, this chapter will seem largely familiar, but there are many differences between how inheritance is implemented in Java and how it is done in C++ or in other object-oriented languages. This chapter also covers reflection, the ability to find out more about classes and their properties in a running program. Reflection is a powerful feature, but it is undeniably complex. Because reflection is of greater interest to tool builders than to application programmers, you can probably glance over that part of the chapter upon first reading and come back to it later.

Classes, Superclasses, and Subclasses

Let's return to the Employee class that we discussed in the previous chapter. Suppose (alas) you work for a company at which managers are treated differently from other employees. You need a new Manager class that adds functionality while retaining everything already programmed in the Employee class. Here is how you define a Manager class that inherits from the Employee class:

class Manager extends Employee
{
   added methods and fields
}

The keyword extends indicates that you are making a new class that derives from an existing class. The existing class is called the superclass, base class, or parent class. The new class is called the subclass, derived class, or child class. The terms superclass and subclass are those most commonly used by Java programmers, although some programmers prefer the parent/child analogy, which also ties in nicely with the "inheritance" theme. The Employee class is a superclass, but not because it is superior to its subclass or contains more functionality. In fact, the opposite is true: subclasses have more functionality than their superclasses. For example, as you will see when we go over the rest of the Manager class code, the Manager class encapsulates more data and has more functionality than its superclass Employee. Our Manager class has a new field to store the bonus, and a new method to set it:

class Manager extends Employee
{
   public void setBonus(double b)
   {
      bonus = b;
   }

   private double bonus;
}

We also override the getSalary method so that it reports the base salary plus the bonus, calling the superclass method through the super keyword:

public double getSalary()
{
   double baseSalary = super.getSalary();
   return baseSalary + bonus;
}

As you saw, a subclass can add fields, and it can add or override methods of the superclass. However, inheritance can never take away any fields or methods. Finally, let us supply a constructor:

public Manager(String n, double s, int year, int month, int day)
{
   super(n, s, year, month, day);
   bonus = 0;
}

Here, super(n, s, year, month, day) is shorthand for "call the constructor of the Employee superclass with n, s, year, month, and day as parameters." If the subclass constructor does not call a superclass constructor explicitly, the default (no-parameter) constructor of the superclass is invoked. If the superclass has no default constructor and the subclass constructor does not call another superclass constructor explicitly, then the Java compiler reports an error. Having redefined the getSalary method for Manager objects, managers will automatically have the bonus added to their salaries. Here's an example of this at work: we make a new manager and set the manager's bonus:

Manager boss = new Manager("Carl Cracker", 80000, 1987, 12, 15);
boss.setBonus(5000);

When we later print everyone's salary with a loop of the form for (Employee e : staff), the variable e can refer to an object of type Employee or Manager, and the appropriate getSalary method is called in each case. The mechanisms behind this are polymorphism and dynamic binding; we discuss both topics in more detail in this chapter. Listing 5-1 contains a program that shows how the salary computation differs for Employee and Manager objects.

Listing 5-1. ManagerTest.java
    import java.util.*;

    /**
     * This program demonstrates inheritance.
     * @version 1.21 2004-02-21
     * @author Cay Horstmann
     */
    public class ManagerTest
    {
       public static void main(String[] args)
       {
          // construct a Manager object
          Manager boss = new Manager("Carl Cracker", 80000, 1987, 12, 15);
          boss.setBonus(5000);

          Employee[] staff = new Employee[3];

          // fill the staff array with Manager and Employee objects
          staff[0] = boss;
          staff[1] = new Employee("Harry Hacker", 50000, 1989, 10, 1);
          staff[2] = new Employee("Tommy Tester", 40000, 1990, 3, 15);

          // print out information about all Employee objects
          for (Employee e : staff)
             System.out.println("name=" + e.getName() + ",salary=" + e.getSalary());
       }
    }

    class Employee
    {
       public Employee(String n, double s, int year, int month, int day)
       {
          name = n;
          salary = s;
          GregorianCalendar calendar = new GregorianCalendar(year, month - 1, day);
          hireDay = calendar.getTime();
       }

       public String getName()
       {
          return name;
       }

       public double getSalary()
       {
          return salary;
       }

       public Date getHireDay()
       {
          return hireDay;
       }

       public void raiseSalary(double byPercent)
       {
          double raise = salary * byPercent / 100;
          salary += raise;
       }

       private String name;
       private double salary;
       private Date hireDay;
    }

    class Manager extends Employee
    {
       /**
        * @param n the employee's name
        * @param s the salary
        * @param year the hire year
        * @param month the hire month
        * @param day the hire day
        */
       public Manager(String n, double s, int year, int month, int day)
       {
          super(n, s, year, month, day);
          bonus = 0;
       }

       public double getSalary()
       {
          double baseSalary = super.getSalary();
          return baseSalary + bonus;
       }

       public void setBonus(double b)
       {
          bonus = b;
       }

       private double bonus;
    }

Inheritance Hierarchies

Inheritance need not stop at deriving one layer of classes. We could have an Executive class that extends Manager, for example. The collection of all classes extending from a common superclass is called an inheritance hierarchy, as shown in Figure 5-1. The path from a particular class to its ancestors in the inheritance hierarchy is its inheritance chain.

Figure 5-1: Employee inheritance hierarchy

There is usually more than one chain of descent from a distant ancestor class. You could form a subclass Programmer or Secretary that extends Employee, and they would have nothing to do with the Manager class (or with each other). This process can continue as long as is necessary.

Polymorphism

A simple rule enables you to know whether or not inheritance is the right design for your data: every object of the subclass must be usable in place of a superclass object (the "is-a" rule). In the Java programming language, object variables are polymorphic. A variable of type Employee can refer to an object of type Employee or to an object of any subclass of the Employee class (such as Manager, Executive, Secretary, and so on). We took advantage of this principle in Listing 5-1:

    Manager boss = new Manager(. . .);
    Employee[] staff = new Employee[3];
    staff[0] = boss;

In this case, the variables staff[0] and boss refer to the same object. However, the compiler considers staff[0] to be only an Employee object.

Dynamic Binding

It is important to understand what happens when a method call is applied to an object. Here are the details. The compiler looks at the declared type of the object and the method name. Let's say we call x.f(param). The compiler enumerates the accessible methods with that name in the declared class and its superclasses and picks the one that best matches the supplied parameter types; this process is called overloading resolution. If the compiler cannot find any method with matching parameter types, or if multiple methods all match after applying conversions, then the compiler reports an error. Now the compiler knows the name and parameter types of the method that needs to be called.
Next, if the method is private, static, final, or a constructor, then the compiler knows exactly which method to call; this is called static binding. Otherwise, the method to be called depends on the actual type of the object that the implicit parameter refers to, and dynamic binding must be used at runtime. When the program runs, the virtual machine looks for an entry f(param) in the method table of the object's actual class; if no entry is found there, then the virtual machine consults the method table of the superclass of the implicit parameter, and so on up the inheritance chain.

Let's look at this process in detail in the call e.getSalary() in Listing 5-1. The declared type of e is Employee. The Employee class has a single method, called getSalary, with no method parameters. Therefore, in this case, we don't worry about overloading resolution. Because the getSalary method is not private, static, or final, it is dynamically bound: the virtual machine calls Employee.getSalary or Manager.getSalary depending on the actual type of the object to which e refers at runtime. Dynamic binding has an important property: it makes programs extensible without the need for modifying existing code.

Preventing Inheritance: Final Classes and Methods

Occasionally, you want to prevent someone from forming a subclass from one of your classes. Classes that cannot be extended are called final classes, and you use the final modifier in the definition of the class to indicate this. For example, let us suppose we want to prevent others from subclassing the Executive class. Then we simply declare the class using the final modifier, as follows:

    final class Executive extends Manager
    {
        . . .
    }

You can also make a specific method in a class final. If you do this, then no subclass can override that method. (All methods in a final class are automatically final.) For example:

    class Employee
    {
        . . .
        public final String getName()
        {
            return name;
        }
        . . .
    }

Casting

Recall from Chapter 3 that the process of forcing a conversion from one type to another is called casting. The Java programming language has a similar notation for casting object references: you surround the target class name with parentheses and place it before the reference you want to convert. In the ManagerTest class, the staff array had to be an array of Employee objects because some of its entries were regular employees; to apply a Manager-specific method to staff[0], we must cast it back, as in (Manager) staff[0].

The compiler checks that you do not promise too much when you store a value in a variable. If you assign a subclass reference to a superclass variable, you are promising less, and the compiler will simply let you do it. If you assign a superclass reference to a subclass variable, you are promising more. Then you must use a cast so that your promise can be checked at runtime.

What happens if you try to cast down an inheritance chain and you are "lying" about what an object contains?

    Manager boss = (Manager) staff[1]; // ERROR

When the program runs, the Java runtime system notices the broken promise and generates a ClassCastException. If you do not catch the exception, your program terminates. Thus, it is good programming practice to find out whether a cast will succeed before attempting it, using the instanceof operator. To sum up:

- You can cast only within an inheritance hierarchy.
- Use instanceof to check before casting from a superclass to a subclass.

Actually, converting the type of an object by performing a cast is not usually a good idea. In our example, you do not need to cast an Employee object to a Manager object for most purposes. The getSalary method will work correctly on objects of both classes. The dynamic binding that makes polymorphism work locates the correct method automatically. The only reason to make the cast is to use a method that is unique to managers, such as setBonus. If for some reason you find yourself wanting to call setBonus on Employee objects, ask yourself whether this is an indication of a design flaw in the superclass. It may make sense to redesign the superclass and add a setBonus method. Remember, it takes only one uncaught ClassCastException to terminate your program. In general, it is best to minimize the use of casts and the instanceof operator.
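To make the check-then-cast pattern concrete, here is a minimal sketch (not one of the chapter's listings) that reuses the staff array from Listing 5-1 and grants a bonus only to the entries that actually are managers:

    // walk an Employee[] array and touch only the Manager entries
    for (Employee e : staff)
    {
        if (e instanceof Manager) // cast only after the check succeeds
        {
            Manager m = (Manager) e;
            m.setBonus(1000); // safe: e is known to refer to a Manager here
        }
    }

A useful side effect of this idiom is that instanceof returns false for null references, so the check also guards against null entries in the array.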
Abstract Classes

As you move up the inheritance hierarchy, classes become more general and more abstract. At some point, the ancestor class becomes so general that you think of it more as a basis for other classes than as a class with specific instances you want to use. Consider, for example, a Person class with subclasses Employee and Student. Figure 5-2 shows the inheritance relationships between these classes.

Figure 5-2: Inheritance diagram for Person and its subclasses

A class with one or more abstract methods must itself be declared abstract. In addition to abstract methods, abstract classes can have fields and concrete methods. For example, the Person class stores the name of the person and has a concrete method that returns it:

    abstract class Person
    {
        public Person(String n)
        {
            name = n;
        }

        public abstract String getDescription();

        public String getName()
        {
            return name;
        }

        private String name;
    }

Abstract methods act as placeholders for methods that are implemented in the subclasses. When you extend an abstract class, you have two choices. You can leave some or all of the abstract methods undefined; then you must tag the subclass as abstract as well. Or you can define all methods; then the subclass is no longer abstract. For example, we will define a Student class that extends the abstract Person class and implements the getDescription method. Because none of the methods of the Student class are abstract, it does not need to be declared as an abstract class. A class can even be declared as abstract even though it has no abstract methods.

Abstract classes cannot be instantiated. That is, if a class is declared as abstract, no objects of that class can be created. For example, the expression new Person("Vince Vu") is an error. You can, however, create object variables of an abstract class, as long as they refer to objects of concrete subclasses.

Let us define a concrete subclass Student that extends the abstract Person class. The program in Listing 5-2 defines the abstract superclass Person and the two concrete subclasses Employee and Student; it fills an array of Person references with employee and student objects and then prints the names and descriptions of these objects:

    for (Person p : people)
        System.out.println(p.getName() + ", " + p.getDescription());

Some people are baffled by the call

    p.getDescription()

Isn't this a call to an undefined method? Keep in mind that the variable p never refers to a Person object because it is impossible to construct an object of the abstract Person class. The variable p always refers to an object of a concrete subclass, and every one of those objects defines the getDescription method.

Could you have omitted the abstract method from the Person superclass and simply defined getDescription in the subclasses? If you did that, then you wouldn't have been able to invoke the getDescription method on the variable p. The compiler ensures that you invoke only methods that are declared in the class.

Abstract methods are an important concept in the Java programming language. You will encounter them most commonly inside interfaces. For more information about interfaces, turn to Chapter 6.

Listing 5-2. PersonTest.java

    import java.util.*;

    /**
     * This program demonstrates abstract classes.
     * @version 1.01 2004-02-21
     * @author Cay Horstmann
     */
    public class PersonTest
    {
       public static void main(String[] args)
       {
          Person[] people = new Person[2];

          // fill the people array with Student and Employee objects
          people[0] = new Employee("Harry Hacker", 50000, 1989, 10, 1);
          people[1] = new Student("Maria Morris", "computer science");

          // print out names and descriptions of all Person objects
          for (Person p : people)
             System.out.println(p.getName() + ", " + p.getDescription());
       }
    }

    abstract class Person
    {
       public Person(String n)
       {
          name = n;
       }

       public abstract String getDescription();

       public String getName()
       {
          return name;
       }

       private String name;
    }

    class Employee extends Person
    {
       public Employee(String n, double s, int year, int month, int day)
       {
          super(n);
          salary = s;
          GregorianCalendar calendar = new GregorianCalendar(year, month - 1, day);
          hireDay = calendar.getTime();
       }

       public double getSalary()
       {
          return salary;
       }

       public Date getHireDay()
       {
          return hireDay;
       }

       public String getDescription()
       {
          return String.format("an employee with a salary of $%.2f", salary);
       }

       public void raiseSalary(double byPercent)
       {
          double raise = salary * byPercent / 100;
          salary += raise;
       }

       private double salary;
       private Date hireDay;
    }

    class Student extends Person
    {
       /**
        * @param n the student's name
        * @param m the student's major
        */
       public Student(String n, String m)
       {
          // pass n to superclass constructor
          super(n);
          major = m;
       }

       public String getDescription()
       {
          return "a student majoring in " + major;
       }

       private String major;
    }

Protected Access

There are times when you want a subclass method to have access to a field of the superclass. In that case, you declare the field as protected. For example, if the Employee class declared its hireDay field as protected instead of private, then the Manager methods could access it. However, the Manager class methods can peek inside the hireDay field of Manager objects only, not of other Employee objects. This restriction is made so that you can't abuse the protected mechanism and form subclasses just to gain access to the protected fields.

In practice, use protected fields with caution. Suppose your class is used by other programmers and you designed it with protected fields. Unknown to you, other programmers may inherit classes from your class and start accessing your protected fields; then you can no longer change the implementation of your class without upsetting them. Protected methods are more defensible; see Chapter 6 for more details.

Here is a summary of the four access modifiers in Java that control visibility:

- Visible to the class only (private).
- Visible to the world (public).
- Visible to the package and all subclasses (protected).
- Visible to the package, the (unfortunate) default. No modifiers are needed.
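To see the four levels side by side, here is a compact sketch; the package and class names (hr, StaffRecord) are hypothetical, not from the chapter:

    package hr; // hypothetical package

    import java.util.Date;

    public class StaffRecord
    {
        public String id;        // public: visible to the world
        protected Date hireDay;  // protected: visible within package hr and to all subclasses
        String department;       // default (package) access: visible within package hr only
        private double salary;   // private: visible to StaffRecord methods only
    }

A subclass of StaffRecord living in another package could read hireDay through inheritance, but it could not touch department or salary.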
01 September 2010 19:55 [Source: ICIS news] (adds capacity information in paragraph 3)

LONDON (ICIS)--The cracker at Saudi Kayan's petrochemical complex in Saudi Arabia has started up, engineering group KBR said in a statement on Tuesday.

KBR carried out the engineering, procurement and construction of the cracker, which it said has a nameplate capacity of 1.35m tonnes/year. According to Saudi Kayan, the cracker can produce 1.48m tonnes/year of ethylene.

ICIS reported on 26 July that the cracker had started up, but the Tuesday statement was the first official confirmation.

"The unit started up successfully with on-specification ethylene being produced at the end of July 2010," KBR said.

"The cracker....currently feeds predominantly butane and ethane with total final products [ethylene, propylene and benzene] totalling 2.1m ..., making it one of the largest crackers in the world," it added.

Additionally, the statement said that the complex would cater to high-value sectors of the polyethylene, polypropylene and ethylene glycol markets and would introduce high value-added products and derivatives, such as polycarbonate and amines, for the first time in Saudi Arabia.

Saudi Kayan said in July that it expected the project to run 24%, or Saudi riyals (SR) 9bn ($2.4bn), over cost and was working with "one or more banks to cover the increase in costs, with support from major shareholders to ensure complete implementation of all plants at the company's complex on time".

SABIC has a 35% stake in the venture and Al-Kayan Petrochemical Company 20%. The remaining shares are publicly traded on the Saudi bourse.

($1 = SR3.75)

Read John Richardson and Malini Hariharan's Asian Chemical Connections.
v2.015 2015-11-22 08:52:22-05:00

General Notes:
* PDL-2.015 is a clean-and-polish release. It fixes some problems discovered with PDL-2.014 and improves the 64bit integer support.

Highlights:
* Fixes to improve compile and test of F77 modules on OS X (i.e., don't build i386 versions)
* Basic/Ops/ops.pd - make compatible with MSVC++ 6.0
* Fix win10 breakage in t/flexraw_fortran.t. Apparently, temp files were not being released after use, which was preventing the tests from passing.
* Fix missing PDL license information
* Fix sf.net bug #403: reshape() failed when applied to a piddle with dataflow. Now, changing the shape of a PDL that already has dataflow is prevented and an error message given.
* Fix sf.net bug #406: added missing logic for clump(-N) and minor cleanup of the perl wrapper.
* Force new_pdl_from_string to return a piddle with the P=physical flag
* Add $PDL::indxformat for PDL_Indx. This avoids loss of significance when 64bit PDL_Indx values are printed.
* Make new_pdl_from_string() avoid converting IV values to NVs. This allows pdl(indx, '[ 9007199254740992 ]') to work without rounding due to the 52bit double-precision mantissa versus the 63 bits + sign for PDL_Indx.
* Add type support info to the pdl() constructor documentation. pdl() has always supported this usage but examples were missing from the documentation.
* Improved PDL::GSL::RNG documentation
* Removed spurious '}' from the gnuplot demo

v2.014_03 2015-11-19 12:37:00-05:00

General Notes:
* This quick release is to verify the fix for the PDL license information.

Highlights:
* Some updates to Changes and Known_problems as well.

v2.014_02 2015-11-17 09:20:23-05:00

General Notes:
* This is the 2nd release candidate for PDL-2.015

Highlights:
* Same as PDL-2.014_01 but with a couple of F77 build fixes from Karl to support MacOSX builds and, we hope, a SciPDL to go with PDL-2.015!

v2.014_01 2015-11-14 14:01:28-05:00

General Notes:
* This is PDL-2.014_01, a cleanup and bug fix release.

Highlights:
* Add $PDL::indxformat for PDL_Indx and make new_pdl_from_string() avoid converting IV values to NVs. PDL_Indx values (type indx) now print with an integer format specification so all digits get printed. In addition, pdl(indx, '[ 9007199254740992 ]') works going the other direction.
* Fix sf.net bug #403: reshape can't handle piddles with the -C flag. reshape() on a piddle with dataflow isn't meaningful; now a warning is given. You can $pdl->sever first, and then reshape() can be applied.
* Fix sf.net bug #406: clump() produces bogus dims
* Various build improvements and documentation fixes:
  - force new_pdl_from_string to return a piddle with the P=physical flag
  - remove spurious '}' from the gnuplot demo
  - Basic/Ops/ops.pd - make compatible with MSVC++ 6.0
  - fix win10 breakage in t/flexraw_fortran.t
  - improve PDL::GSL::RNG documentation
  - add type conversion info to the POD for the pdl() constructor

v2.014 2015-10-12 11:44:10-04:00

v2.013_06 2015-10-10 16:04:14-04:00

General Notes:
* This is PDL-2.013_06, which is RC 2 for PDL-2.014 and likely the final one before the official release. Please report any final issues and doc patches ASAP.

Highlights:
* Mark some failing tests in t/primitive.t as TODO to avoid CPAN Testers failures.
* Add IPC::Cmd to TEST_REQUIRES

v2.013_05 2015-10-08 07:14:19-04:00

General Notes:
* This is PDL-2.013_05 (a.k.a. PDL-2.014 rc 1), the fifth CPAN developers release for PDL with newly completed support for 64bit indexing.
* Needs testing for piddles with more than 2**32 elements, but all checks pass so far.

Highlights:
* Fix problem with broken t/opengl.t for testers

v2.013_04 TBD

General Notes:
* This is PDL-2.013_04, the fourth CPAN developers release for PDL with newly completed support for 64bit indexing.
* Needs testing for piddles with more than 2**32 elements, but all checks pass so far.

Highlights:
* t/opengl.t now skips the dynamic GL window creation tests if $AUTOMATED_TESTING is true.
* A new ipow() routine for integer exponentiation
* Corrected return types of intover, borover, bandover, nbadover, and ngoodover.
* Fixed a compile problem in clang from using finite() on an integer datatype.

v2.013_03 2015-10-04 12:21:30-04:00

General Notes:
* This is PDL-2.013_03, the third CPAN developers release for PDL with newly completed support for 64bit indexing.
* Needs testing for piddles with more than 2**32 elements, but all checks pass so far.

Highlights:
* More clean-up to handle perls with long double NVs. Loss of precision will be warned about on "use PDL;"
* Skipping t/bigmem.t to avoid OOM CPAN Testers fails.
* Minor fixes to C code to meet stricter compiler and C99 requirements.

v2.013_02 2015-10-03 08:40:08-04:00

General Notes:
* This is PDL-2.013_02, the second CPAN developers release for PDL with newly completed support for 64bit indexing.
* Needs testing for piddles with more than 2**32 elements, but all checks pass so far.

Highlights:
* Clean-up to handle perls with long double NVs
* Various bugs closed
* PDL::IO::Storable is now loaded with "use PDL;"

v2.013_01 2015-09-26 17:39:41-04:00

General Notes:
* This is PDL-2.013_01, the first CPAN developers release for PDL with newly completed support for 64bit indexing.
* Needs testing for piddles with more than 2**32 elements, but all checks pass so far.

Highlights:
* TBD

v2.013 2015-08-14 08:37:15-04:00

General Notes:
* This is PDL-2.013. It is PDL-2.012 with some fixes for badvalue problems and Solaris make issues.
* See the PDL 2.012 notes below.

Highlights:
* Fix for sf.net bug #390: a scalar PDL with a badvalue always compares BAD with perl scalars. Now a warning is given if the badvalue could conflict with the results of a logical or comparison operation.
* Fixed a makefile construct which was ambiguous and caused build failures on Solaris using their make. GNU make was not affected, even on Solaris.

v2.012_01 2015-08-01 15:47:40-04:00

General Notes:
* This is PDL-2.012_01. It is PDL-2.012 with some fixes for badvalue problems when the badvalue was 0 or 1.
* See the PDL 2.012 notes below.

Highlights:
* Candidate fix for sf.net bug #390: a scalar PDL with a badvalue always compares BAD with perl scalars

v2.012 2015-06-14 08:27:01-0400

General Notes:
* This is PDL-2.012. It is essentially PDL-2.011 with fixes for some minor issues that only came to light with a new official release.
* See the PDL 2.011 notes below.

Highlights:
* Add package statements so PDL::Lite and PDL::LiteF are indexed correctly
* Give PDL::NiceSlice a non-developer release for indexing
* Fix a build regression that broke ActiveState perl builds for many perl versions and OS platforms.

v2.011 2015-06-02 17:01:22-0400

General Notes:
* This is PDL-2.011. It is essentially PDL-2.008 with fixes for some minor issues.
* PDL::Graphics::PLplot is no longer included in the PDL core distribution. Please install it from CPAN directly.
* See the PDL 2.008 Highlights below.

v2.010 2015-06-02 14:40:15-0400

General Notes:
* Another indexing regression. Sigh.
v2.009_01 2015-05-29 17:47:57-0400

General Notes:

Highlights:
* Removal of PDL::Graphics::PLplot since it exists as a separate CPAN distro

v2.009 2015-05-29 12:26:25-0400

General Notes:
* This is PDL-2.009. It has tweaks to fix PAUSE/CPAN indexing problems in 2.008.
* Known_problems updated to reflect a seldom-seen pdldoc installation problem for certain custom perl configurations on cygwin. A workaround is known. Please contact the PDL mailing list if you have this problem; see the sf.net bug report for more information.
* See the Release Notes for PDL 2.008 below for more.

v2.008 2015-05-24 18:42:22-0400

General Notes:
* This is PDL-2.008! Yay!

Highlights:
* New improved vsearch functionality, interfaces, and documentation (Diab Jerius)
* PDL::IO::Storable is now robust against version and platform endianness, and supports the new 64bit PDL_Indx data type (Dima Kogan)
* Clean-up of PDL/Basic/Core code to remove cruft and to simplify the evolution toward coming improvements (Craig DeForest)
* Major clean-up, de-crufting, and streamlining of the entire PDL ExtUtils::MakeMaker build process (Ed J)
* Standardizing and updating the entire PDL test suite to a common basis (use Test::More) and coding to more consistent best practices, e.g., use strict. This is a huge (ongoing) effort, but a comprehensive test suite is needed for regression tests to validate compatibility of coming PDL3 architecture changes. (Zakariyya Mughal)
* You can now call the PDL::Graphics2D twiddle() routine with an argument of 0 or 1 (i.e., false or true) to set whether the twiddle loop is run.
* Library dependency detection improvements, including PROJ4 and GD. A number of improvements in this area are for strawberry perl on windows (kmx)
* The PDL distribution process now generates the documentation for the modules using the automated code generation process. This makes all the PDL docs available for your web browser on metacpan.org and search.cpan.org. (kmx)
* Improved support to build XS/C extensions: (Ed J)
  - You can now: "use Inline with => 'PDL';", see PDL::API
  - You can, in your module's Makefile.PL: "$p = new ExtUtils::Depends 'MyMod', 'PDL'"
* MANY sf.net tickets closed:
  377 PDL::Transform::Proj4 not building under latest dev EUMM
  375 Storable files incorrectly processed from older versions
  374 CONFIGURE_REQUIRES => Devel::CheckLib
  373 2.007_11 MANIFEST
  372 2.007_11 fails on MS Windows 64bit
  371 PDL-2.4.11 medover failure in perl 5.20.1
  370 PDL-2.007 can fail to build Basic/Core files
  369 slice fails with subclass index
  368 PDL::Slatec::polyfit ignores incorrect length of weight piddle...
  367 BAD value parsing breakage
  365 CPAN PDL install broken due to breakage in Module::Compile 0.34
  363 PP "OtherPars" namespace is not clean
  362 rcols COLIDS need chomp-ing
  361 vsearch bug w/ reversed list
  360 subtle & rare bug in vsearch
  359 Improved documentation for vsearch
  358 one2nd() has unexpected behaviour when given a Perl scalar rather than a piddle
  357 Android support
  356 overload::Method() does not return coderef for stringification
  355 dog creates 0 dim piddle which squeezes to 1 dim
  353 imag2d not 'use warnings' safe and no way to disable twiddle
  352 reorder() not an lvalue sub
  351 PDL_BOOT has wrong XS code
  350 Modules using PDL::Slatec will get installed even if Slatec is disabled
  349 PDL needs integrated support for PDL::PP and CPAN indexer
  348 PDL->null->slice('')->nelem results in error
  345 documentation of WITH_PLPLOT in perldl.conf incorrect
  344 Current version numbering is broken
  342 BUGS file not indexable or discoverable
  337 rangeb() broken for 64bit index support on 32bit perl
  332 "isn't numeric in null operation" warning could be more helpful
  331 uniq does not always return proper object
  329 t/picrgb.t fails in PDL build
  321 Several core modules do not have man pages
  319 PDL::Index does not render on websites
  316 plplot.t failure when building 2.4.11
  314 conv1d bad value broken
  313 clip() edge case not handled right
  312 Wrong results in corner empty-set cases
  283 PDL::IO::FITS::wfits corrupting FITS image
  272 indexND of Empty pdls can segfault
  268 PLplot still unusable with X
  261 max() fails on nan
  256 Threadable version of PDL::MatrixOps::inv
  232 perl -d chokes on lvalue functions
  227 PGPLOT module doesn't work in PDL
  224 Ctrl-C kills perldl in win32 console
  207 Name "PDL::SHARE" used only once
  63 Unable to get the 3d demos
  51 justify option fails in imag (PDL2.3.4)

v2.007_17 2015-05-06 13:35:57-0400

General Notes:
* This is PDL-2.008_rc4!

Highlights:
* Clean-up of large-number modulo tests; marked TODO for the PDL-2.008 release.
* Fix build/configure problems from CPAN Testers reports.
* Quiet excessive warnings in perldl and pdl2doc

v2.007_16 2015-04-22 10:23:46-0400

General Notes:
* This is PDL-2.008_rc3!

Highlights:
* Various clean-up and doc fixes
* Add more of the PDL prerequisites explicitly to handle missing core functionality for CPAN Testers.

v2.007_15 2015-04-19 17:08:55-0400

General Notes:
* This is PDL-2.008_rc2!

Highlights:
* Build issues with PROJ4 detection and linking on cygwin platforms have been worked around.
* Failing tests in t/ops.t for the new 64bit modulus support have been marked TODO.

v2.007_14 2015-04-11 14:28:07-0400

General Notes:
* This is PDL-2.008_rc1!

Highlights:
* More cleanup and a couple of build issues fixed with PROJ4.
* Various test suite improvements.

v2.007_13 2015-03-22 16:00:03-0400

General Notes:
* Counting down to a PDL-2.008 release this April 2015

Highlights:
* Travis on Github now routinely tests on dev ExtUtils::MakeMaker and clang
* Coveralls on Github now lists test coverage
* Many tests updated to use Test::More, Test::Deep, Test::Exception
* PDL::FFTW is now removed from the PDL core. Use PDL::FFTW3 instead.
* Prototype use of Alien::Proj4 to encapsulate install/build info
* Fix warnings compiling under clang
* Addition of "core" and "coretest" targets for a quicker build and dev cycle
* Make Filter::Util::Call the default engine for PDL::NiceSlice
* Make PDL_Anyval type, for 64-bit purposes
* Clean up and better-comment pdl*.h
* Make the "isn't numeric in null operation" warning more helpful

v2.007_12 2015-03-06 09:18:04-05:00

General Notes:
* Counting down to a PDL-2.008 release this February 2015
* This release marks the completion of almost all the priority issues needed for PDL-2.008. Expect feature freeze, final shakedown, and release to come!

Highlights:
* Fixed sf.net bug #373: 2.007_11 MANIFEST
* Implemented 'core' and 'coretest' targets for quick testing.
* Fix quote/whitespace build problems
* Fix threading problem discovered in PDL::MatrixOps::inv()
* Build improvements and support for automated commit testing via the travis-ci infrastructure
* Fixed sf.net bug #368: PDL::Slatec::polyfit ignores incorrect length of weight piddle
* Fixed sf.net bug #374: CONFIGURE_REQUIRES => Devel::CheckLib
* Tests and fixes for the modulus operator for 64bit integer operands. Tests on 32bit systems welcome.
* Lots of tweaks and cleanup...

v2.007_11 2015-02-24 16:08:36-05:00

General Notes:
* Counting down to a PDL-2.008 release this February 2015

Highlights:
* The new Filter::Simple engine for PDL::NiceSlice is now the default. This fixes problems where PDL::NiceSlice was applying the source filter to the content of comments and strings. Still to do: implement it for command line use in the perldl/pdl2 shells.
* Added the ability to call the PDL::Graphics2D twiddle() routine with an argument of 0 or 1 (technically false or true) to set whether the twiddle loop is run. Also fixed a minor warning generated with 'use warnings'. This closes bug #353.
* Lots of clean-up and build process improvements.

v2.007_10 2015-02-02 10:59:22-05:00

General Notes:
* Counting down to a PDL-2.008 release this February 2015

Highlights:
* More clean-up to the build process.

v2.007_09 2015-01-29 11:01:24-05:00

General Notes:
* Counting down to a PDL-2.008 release this February 2015

Highlights:
* perl 5.10.x is now the minimum version of perl supported for this release. Please test.
* Much clean-up of the EU::MM build process by Ed.

v2.007_08 2015-01-20 18:24:01-05:00

General Notes:
* Counting down to a PDL-2.008 release this January 2015

Highlights:
* Some ExtUtils::MakeMaker fixes and clean-up for the PDL build process.
* Fix a non-portable usage bug in t/vsearch.t which prevented the test from passing on perls 5.12 and earlier

v2.007_07 2015-01-06 17:44:08+11:00

General Notes:
* Counting down to a PDL-2.008 release this January 2015

Highlights:
* This release includes the new pre-generated pm/pod to clean up the docs available on-line on metacpan.org and search.cpan.org.
* Bug fix in t/vsearch.t to support perl 5.12 and earlier. PDL currently supports perl 5.8.x and later.

v2.007_06 2015-01-05 13:31:13-05:00

General Notes:
* Counting down to a PDL-2.008 release this January 2015

Highlights:
* Fixed a number of bugs on the sf.net tracker
* Fix for the EU-MM-7.0-and-later problem with dmake
* Include generated pod in the distribution so that metacpan.org and search.cpan.org have better/working online docs.

v2.007_05 2014-12-24 09:24:04-05:00

General Notes:

Highlights:
* You can now: "use Inline with => 'PDL';" - see PDL::API
* You can, in your module's Makefile.PL: "$p = new ExtUtils::Depends 'MyMod', 'PDL'"
* Various bugs fixed
* New vsearch() implementations with more features and flexibility

v2.007_04 2014-09-09 00:44:29+01:00

General Notes:

Highlights:
* You can now: "use Inline with => 'PDL';" - see PDL::API
* You can, in your module's Makefile.PL: "$p = new ExtUtils::Depends 'MyMod', 'PDL'"

v2.007_03 2014-07-01 16:54:59-04:00

General Notes:

Highlights:
* Fix documentation builds for installs into the vendor directory.
* Fixes for library detection on MS Windows
* Fix incompatibility of PDL::IO::Storable with perl versions < 5.10.x

v2.007_02 2013-11-25 14:10:22-05:00

General Notes:

Highlights:
* This release should provide a working PDL::IO::Storable that is compatible with the new 64bit index support.
* PDL::IO::Storable now requires perl 5.10.x or greater, although the overall distribution requirements are not planned to move to 5.10.x until the completion of fixes for the longlong hidden double-precision conversion/truncation bug.

v2.007_01 2013-11-17 16:31:17-05:00

General Notes:

Highlights:
* Added a FAQ entry on the PDL version numbering change and how to specify required PDL versions.
* Corrected the perldl.conf docs for WITH_PLPLOT in the comments
* Update PDL::IO::Storable to work with the new PDL_Indx data type. Also made the code backwards compatible to read files written by older PDL releases.
* Fixed NaN handling for min/max and cleaned up handling of empty sets.
* Various enhancements to PDL::Transform

v2.007 2013-10-12 12:56:52-04:00

General Notes:
* PDL computations now use 64bit indexing/addressing if your platform supports it (i.e., your perl configuration has $Config{ivsize} == 8).
  - You can process piddles with more than 2**32 elements.
  - Memory mapped file IO supports up to 8TB files, which allows much simpler processing of large data files. (See mapflex in PDL::IO::FlexRaw for details.)
* PDL-2.007 has a new, unified slicing engine and syntax that consolidates the multiple slicing variants into a backward compatible but now 64bit-aware slice. See PDL::Slices for the new syntax that is enabled.
* PDL::FFTW has moved to its own distribution on CPAN and is no longer in the PDL core distribution. Look for PDL::FFTW3 coming to CPAN soon.
* Some required dependencies have been updated to more recent versions:
  - ExtUtils::MakeMaker now requires version 6.56 or higher, the minimum version with CONFIGURE_REQUIRES support.
  - Perl OpenGL 0.6702 is now required for PDL::Graphics::TriD to build. This fixes a number of critical bugs and should be a seamless upgrade.
  - File::Map version 0.57 is required. This fixes map_anonymous support for the >2**32 sizes needed for 64bit support. Legacy mmap support for unix platforms is no longer supported; the distribution requires File::Map, so you should not notice the change.
* Incompatible Changes:
  - PDL::FFT now uses the same sign convention as FFTW and the rest of the world: -1/+1 for forward and reverse FFT, respectively.
  - The C/XS API of PDL-2.007 is incompatible with previous PDL releases. If you upgrade to PDL-2.007, you *will* need to re-install or upgrade *all* dependent XS- or PP-based modules.
  - PDL now auto-promotes array refs in many places that previously required a piddle (so you can say, e.g., "$a->index([2,3])" instead of "$a->index(pdl(2,3))").
  - String syntax for slice() specifications now ignores white space.
* The clean-up of the PDL core distribution continues, and PDL-2.007 is no exception. Many bug fixes, documentation updates, and code and implementation improvements make this the best-testing PDL release to date.

Highlights:
* FITS IO improvements and fixes:
  - Added an 'afh' option to rfits to allow explicit use of the legacy hash parser for performance reasons.
  - New multiple extension writing support for wfits.
* Added pp_deprecate_module() to PDL::PP
* New mode/modeover functions in PDL::Ufunc
* Made exception handling in slices more robust.
* PDL::CLONE_SKIP added for improved ithread support.
* Updated graticule() in PDL::Transform::Cartography to support NaN-delimited output.
* Bug fixes:
  - Fix %hash randomization bugs in PDL tests
  - Fix OpenBSD pthread build problem for non-threaded perls
  - Fix PDL::shape to return a vector for 1-D piddles
  - Fix badvalue-on-truncate support for map and for interpND
  - Fix for MSVC++ 6.0 to build on 32bit systems. MSVC++ 6.0 cannot be used to build 64bit index support.
  - Fix polyfit() handling of BAD values and various edge cases.
  - Fix rare "Bizarre copy of HASH in scalar assignment"
  - Fix rcols with colsep and $PDL::undefval
  - Fix sf.net bug #331 "uniq does not always return proper object"
  - Fix sf.net bug #338 PDL::FFT uses backwards sign convention from FFTW
  - Make PDL::NiceSlice preserve line numbering (sf.net feature #75)
  - PDL::IO::GD->new() is now less picky about its args, and no longer crashes
  - Two bug fixes to transform.pd, and an augmentation

v2.006 2013-03-23 10:02:31-04:00

General Notes:
* Change to the version number scheme used for PDL, from the dotted-integers format back to plain old decimal numbers. Unfortunately, PDL has used both alternatives before, in an inconsistent, out-of-order way. With this release, the current version will also be the most recent version with respect to both numbering schemes. For more details see David Golden's blog post on the topic and the pdl-porters list discussion.
* PDL-2.006 also showcases the demos of two new PDL graphics modules in the perldl/pdl2 shells:
  - PDL::Graphics::Gnuplot
  - PDL::Graphics::Prima
  Both modules install on all supported PDL platforms. A recent addition is PDL::Graphics::Simple, which provides a uniform presentation of the variety of available PDL plot/view/print options.
  Let us know how they work for you. As they are relatively "young" contributions, your feedback and questions are always welcome.
* PDL distribution-related updates:
  - Fixes a build issue for PDL at ASPerl
  - Many fixes for debian distributions.
  - PDL::IO::NDF has been moved to its own distribution on CPAN. This could affect upgrades from older PDL installs.

Highlights:
* New support for reading IDL format files via PDL::IO::IDL.
* Added an unpdl method which is (roughly) the inverse operation of pdl (Joel Berger).
* Updated the polyfill and polyfillv routines to the algorithm from pnpoly: more accurate on edge pixels and faster due to its PP implementation (Tim Haines).
* Added a Boundary => 'Replicate' option to conv2d and med2d (chm).
* Support for new additional random number generators in PDL::GSL (John Lapeyre).
* Add lgamma support for MinGW-built perls, with tests to match (sisyphus).
* Many improvements to the docs and their generation from PDL sources. Specific new functionality:
  - Newly refactored docs engine using core perl modules rather than PDL-only ones (Joel Berger)
  - New FullDoc key added to PDL::PP makes writing CPAN-friendly .pd files much, much easier (David Mertens).
  - New PDL::Doc::add_module() routine to add an external module's POD (with PDL::Doc conventions) to the PDL docs on-line database (Craig DeForest).
* Many bugs fixed, some even before a ticket could be opened!
  - Sf.net bug #3607936: Fixed a bug causing crashes due to using inplace with a duplicate argument.
  - Sf.net bug #3603249: AutoLoader leaks $_ into local context; reported and fixed by Craig.
  - Sf.net bug #3588182: Fixed hist() handling of the case of fractional steps in integral input data types.
  - Sf.net bug #3563903: Fixed bug in PNG format detection on win32 platforms.
  - Sf.net bug #3544682: Fixed an error report bug in perldl that resulted from a change in the way perl handles eval exception reporting.
  - Sf.net bug #3539760: qsort[vec] are now inplace aware.
  - Sf.net bug #3518190: Potential fix for t/gd_oo_tests.t test crashes.
  - Sf.net bug #3515759: Work-around for PDL::GIS::Proj build failures with proj-4.8.0.
  - Sf.net bug #3479009: Fixed dummy() to generate valid syntax for the underlying call to slice().
  - Sf.net bug #3475075: Fixed 16-bit PNM raw format handling.
  - Added a warning if conv1d is used on a piddle with the badflag set.
  - Fix NaN sign issues as reported (and fixed!) by Matthew McGillis, with contributions by Sisyphus.
  - Fix the rim() 3-arg form. Added tests to support and verify the development.
  - Fixed a problem with multiple windows and imag2d and imag2d_update.
* The PDL shells keep getting better:
  - New feature in perldl and pdl2 where a pattern matching the PDL shell prompt (in $PERLDL::PREFIX_RE) will get stripped off of input lines before eval. This makes it easier to cut-and-paste example text from PDL shell sessions or from the PDL Book into an active session.
  - Added a demo for PDL::Graphics::Prima to the PDL shells.
  - Added a demo for gnuplot to the PDL shells.
  - The p shortcut to display output in the PDL shells has been reverted to its previous 2.4.10 behavior. If you wish it to be an exact alias for print, just override it in your .perldlrc or local.perldlrc file.

v2.4.11 2012-05-20 13:32:17-04:00

General Notes:
* This is a point release of PDL to support the coming perl 5.16.0 release.

Highlights:
* A new implementation of the mapflex and mapfraw routines provides memory-mapped IO for all platforms, including win32 systems.
* The new memory-mapped IO support is implemented using File::Map, so version 0.47 has been added as a required dependency; an automated build will need this dependency installed. NOTE: For systems having POSIX mmap, a manual build of PDL will automatically use the legacy implementation.
* Various cleanup of existing code to fix warnings generated by perl versions 5.15.x and higher:
  - Remove deprecation warning in PGPLOT/Window/Window.pm
  - complex.pd: fix attempts to overload '<=>=' and '=>'
* Sf.net bugs fixed:
  3518253 Make PDL::flat work under perl 5.16 (thanks sprout!)
  3516600 pdl_from_string.t fails w/ BADVAL_USENAN=1
  3487569 PDL::IO::Misc: rcols problem (thanks bperret!)
  3476648 PDL build of HTML docs fails on recent bleed Perl
* Other bugs fixed:
  - Fix the check for glutRunning logic for imag2d
  - Fixed a bug in cat's error reporting.
  - Added lvalue awareness to whereND
* New and improved tests have been added to the test suite. Tests t/gd_oo_tests.t and t/inline-comment-test.t are skipped on BSD platforms (see sf.net bugs #3518190 and #3524081 to track their issues).
* New support for multi-line comments in PP code. See the docs for PDL::PP for details (e.g., pdldoc PP).
* Various enhancements to barf/croak output and messages to make error reports and stack traces more useful and readable.
* There is a new changes (or Changes) target for the PDL Makefile, a convenience target that allows one to regenerate the Changes file from git.

v2.4.10 2012-02-03 18:44:47-05:00

General Notes:

New Stuff:
* PDL::Constants module provides E, PI, I (or J) and more.
* PDL::NiceSlice has a new engine based on Filter::Simple which is more selective about where the PDL::NiceSlice source filter is applied.
v2.4.9 2011-04-09 10:05:43-04:00

General Notes:
* Fixes a couple of surprise bugs that were discovered immediately after the PDL-2.4.8 release.
* See also the Release Notes for PDL-2.4.8 below.

Highlights:
* Fix sf.net bug #3267408 "t/slice.t crashes in tests 68-70 for BSD"
* Fix sf.net bug #3190227 "PDL build fails with parallel GNU make -j3"
* Fixed various tempfile name generation problems by switching to File::Temp instead of hand-rolled solutions. This is the recommended approach going forward.
* Force Convert::UU usage for BSD to work around a t/dumper.t failure on MirBSD.

v2.4.8 2011-03-29 17:12:41-04:00

General Notes:
* The deprecated Karma imaging library support code has been removed from the PDL distribution.
* Perl OpenGL (POGL) is now the only build option for 3-D graphics support in PDL. The POGL build has proven to be portable and reliable. This prepares TriD graphics development for the next stage of re-factoring for support and new features.
* Many improvements to the PDL configuration, build, and test process make this the most robust PDL yet.
* PDL::IO::FlexRaw now supports automatic header file creation when writeflex() is given a filename argument for writing. readflex/writeflex/mapflex now support reading and writing piddles with bad values in them.
* New PDL::Constants module provides PI and E.
* PDL::Complex now supports in-place operations.
* Added $PDL::toolongtoprint to set the maximum piddle size allowed to print as a string. This was added to the default.perldlrc to make it easier for users to discover.
* wmpeg() from PDL::IO::Pic uses the new ffmpeg back-end and can create many additional file formats beyond MPEG alone, including MP4 and animated GIF. See the documentation for details.
* Lots of improvements to the documentation, overall usability, and many bugs fixed!

Highlights:

Build and Test Enhancements:
* Karma support code has been *removed* from the PDL distribution. The last stable PDL distribution with Karma code was PDL-2.4.7.
* You must use the Perl OpenGL module to build the PDL 3-D graphics module, PDL::Graphics::TriD. OPENGL_LIBS, OPENGL_INC and OPENGL_DEFINE are no longer used by perldl.conf for the configuration process.
* Added a check for mis-installed PROJ4 libraries. If the library does not initialize (even if present), then PDL will not build the PROJ4 modules. This is sf.net feature #3045456.
* GD, HDF, PROJ4, OpenGL, and GSL tests will not be run unless the corresponding module was configured to be built. This addresses the possibly mysterious test failures caused by previous PDL installations in the perl path at build time.
* Use of Test::More TODO {} blocks allows tests for known bugs to be added to the test suite without causing the suite to fail. This replaces the previous SKIP_KNOWN_PROBLEMS option and should better enable test-first development and debugging.
* utils/perldlpp.pl is a new script for off-line source filtering to pre-filter PDL source files with NiceSlice constructs. This allows PDL to use NiceSlice constructs in the core functionality while still allowing PDL to work in environments where source filters are not supported.
* The 'perl Makefile.PL' response to detecting another PDL in the build path has changed. If such a pre-existing PDL installation is detected, the user is warned, *but* configuration and build will proceed nonetheless.
* Clean-up and fixes to demos and tests for reliability and portability.

Documentation:
* Added an INTERNATIONALIZATION file with i18n notes. PDL does not yet have internationalization support beyond that provided by perl itself.
* Cleared up the documentation on when to use lu_decomp versus lu_decomp2. Now that lu_decomp is threaded, it is the preferred implementation.
* wmpeg() with the ffmpeg converter supports generation of many different output video file formats including MPEG, MP4, and animated GIF. Documentation on these uses was added.
* New example code refresh.pdl in Example/PLplot to provide, for PLplot, some of the same functionality as in PDL::Graphics::PGPLOT.
* Other documentation updates for clarity and correctness.

New Features or Functionality:
* New PDL::Constants module providing PI and E (so far)
* Inplace support added for PDL::Complex operations
* pdldoc and the pdl2/perldl help commands now print all matches by default when multiple matches are found.
* A do_print command was added to the pdl2 shell which toggles the input mode between printing and not printing the return value of each command.
* readflex/writeflex/mapflex now support reading and writing piddles with bad values in them. This was sf.net feature request #3028127, "add badvalue support to FlexRaw".
* writeflex now supports automatically calling the writeflexhdr() routine if you have set the variable $PDL::FlexRaw::writeflexhdr to a true value and are writing to a file given by filename as argument.
* Updated the error handling for GSL::INTERP to match other GSL module usages.
* Applied sf.net patch #3209075 IO::HDF square sds
* New binary blob support in PDL::IO::GD::OO

Bugs Fixed:
* Applied Christian Soeller's patch for FFTW on 64-bit systems. This resolves bug #3203480 "t/fftw.t fails on 64-bit systems".
* Fixed sf.net bug #3172882 re broken threading in inv(). inv() and lu_backsub() now handle threading. Updated documentation for lu_decomp, lu_decomp2, and lu_backsub.
* Fixed sf.net bug #3171702 "missing podselect command breaks PDL build"
* Fixed sf.net bug #3185864 (bad adev in statsover)
* Fixed sf.net bug #3139697: fixed imag2d() to work better with Mac OS X GLUT and not just FreeGLUT.
* Fixed uniqind bug #3076570
* Fixed SF bug #3057542: wmpeg doesn't error on missing ffmpeg program. Now wmpeg returns 1 on success and undef on error. If ffmpeg is not in PATH, it just fails immediately.
* Fixed SF bug #3056142: pdl2 fallback to perldl broken on win32
* Fixed SF bug #3042201: t/dumper.t fails mysteriously
* Fixed SF bug #3031068: PDL::IO::FlexRaw mapflex memory mapping fails
* Fixed SF bug #3011879, "pdl() constructor crashes perl for mixed ref/piddle args", and #3080505 and #3139088. This fix also includes a larger and more complete set of tests.
* Fixed a segfault in plplot.t with a work-around.
* Fixed a bug in readenvi.pdl header list value processing and added support for embedded file headers.
* Fixed a bug in FlexRaw.pm support for headers with Type passed as a string.
* Fixed imag2d() in PDL::Graphics2D. It no longer calls exit on ESC if run within the pdl2/perldl shell. Also did some clean-up of key controls and module mechanics.
* Fixed an upstream bug in Devel::REPL for MultiLine continuation. Now incomplete q[] and qq[] constructs continue reading until properly closed. See the Known_problems file for details.

v2.4.7 2010-08-18 20:55:52-04:00

General Notes:
* New requirements:
  - perl version 5.8.x and higher
  - Convert::UU
* PDL::Graphics::TriD now requires OpenGL-0.63.
v2.4.6 2009-12-31 23:06:11-05:00

General Notes:
* Mainly a bug fix and polishing release

Highlights:
* Improved 3D graphics and OpenGL functionality
* imag2d() routine for multi-image (photo) display
* Many fixes for the Debian package release
* Several little bugs fixed since PDL-2.4.5
* Fixed some issues with PDL convolution routines
* Improved documentation, release notes, and files
* Padre and enhanced perldl shell integration begun

Summary of Changes:
* Improved 3D graphics and OpenGL functionality
  Perl OpenGL 0.62 is the minimum required version for PDL::Graphics::TriD to build. TriD now builds correctly for Mac OS X systems without X11 installed. Autoprobe for the build of 3D graphics and the use of the Perl OpenGL library has been implemented; the default perldl.conf setting is to check. Improved multi-window support for PDL::Graphics::TriD display windows: the GLUT window ID is now part of the default window title for convenience, and redraws with multiple open TriD windows are handled correctly.
* imag2d() routine for multi-image (photo) display
  REQUIRES: the Perl OpenGL TriD interface and FreeGLUT. IMPORTANT: legacy X11 TriD is *not* supported! It is implemented in the imag2d.pdl file for autoloading via PDL::AutoLoader. To use, copy the imag2d.pdl file to somewhere in your PDLLIB path or add the location to your PDLLIB environment variable. It works with multiple simultaneous image windows and appears to work side-by-side with TriD graphics windows. After you have imag2d.pdl in your @PDLLIB list, you can use 'help imag2d' to get usage instructions and documentation. This implements the basic functionality planned for an improved imagrgb() routine.
* Many fixes for the Debian package release
  This should allow PDL-2.4.6 to be more readily released as a Debian package. The general clean-up involved improves PDL portability and robustness generally.
* Several little bugs fixed since PDL-2.4.5
  The number of history lines when you use Term::ReadLine::Perl with perldl is now set correctly to $PERLDL::HISTFILESIZE; the default value is 500. A number of minor internal fixes for portability and implementation improvements:
  - Add comment re fix for defined(%hash) usage
  - Fix annoying PGPLOT::HANDLE warning message
  - Replace GIMME by GIMME_V in Core.xs
  - Update to v3.14 of ppport.h portability
  Fixed a MINUIT build problem where non-standard code was being generated, which caused problems with rigorous compiler settings. This was SF bug #2524068.
* Fixed a number of issues with PDL convolution routines
  The conv1d() algorithm was adjusted to match conv2D() and convolutionND(). Documentation on the exact computation being performed in conv1d() was added. Fixes bug #2630369 with fftconvolve(): it now gives the same results as conv1d(), conv2d(), etc., except perhaps with respect to the boundary condition handling.
* Improved documentation, release notes, and files
  Updated the PDL::FAQ. Lots of little changes to bring documentation in line with current PDL functionality. Volunteer editors and contributors always welcome!
* Padre and enhanced perldl shell integration begun
  There is a new PDL-2.4.6/Padre/ subdirectory in the PDL source tree which contains work towards Padre integration and a 2nd-generation, enhanced perldl shell for PDL. E.g., an *experimental* plug-in giving PDL::NiceSlice support to the Devel::REPL shell is included. See the Padre/README file for instructions to get you started.

v2.4.5 2009-10-24 11:56:23-04:00

Highlights:
* 3D graphics modules now run on win32 and Mac OS X systems without requiring X11 to be installed. The only prerequisites are OpenGL and FreeGLUT/GLUT.
* Release documentation and FAQ have been updated to be more useful (and more accurate).
* PDL build, test, and run-time diagnostic messages have been made clearer and more helpful.
* Many bugs have been fixed since PDL-2.4.4, so this is the most reliable PDL ever.
* PDL now requires perl 5.6.2 or greater and has updated DEPENDENCIES information and code. This should improve the buildability of PDL.

General Notes:
This is the first PDL release supporting the new build strategy for the PDL::Graphics::TriD modules. The result is that it now builds on more platforms than ever. You'll need to install the OpenGL module and have FreeGLUT or GLUT (for Mac OS X) on your system.

If you have problems with the new TriD build (that you did not have before), edit perldl.conf and set USE_POGL to 0. That should enable you to build the legacy TriD code, but you *will* want to submit a bug report; see the next point.

IMPORTANT: Given the increased portability and generality of the new TriD interface approach, it is expected that the legacy TriD build interface (based on X11) will be deprecated soon (almost immediately) and removed after that. (N.B. It has been effectively unsupported for some time.)

If you are new to PDL, we recommend joining the perldl mailing list for discussion and questions; see the PDL web pages for how to sign up and for links to and searches of the list archive discussions.

Summary of Changes:
New perldl.conf configuration parameters controlling the build of TriD with Perl OpenGL (a.k.a. POGL), with the following default values:

  USE_POGL:
    1     -- build using POGL
    0     -- build using the legacy build process
    undef -- build with POGL if possible
  POGL_VERSION:
    0.60  -- minimum required version of OpenGL
  POGL_WINDOW_TYPE:
    'glut' -- use a GLUT GUI for window creation
    'x11'  -- use GLX and X11 for the GUI (this is a "compatibility mode" to support PDL::Graphics::TriD::Tk widgets)

NOTE: Set WITH_3D => 0 in perldl.conf to disable the TriD build completely. Just setting USE_POGL => 0 is not enough.

The OpenGL tests in t/opengl.t now respect the interactive setting from the PDL_INT environment variable.

Two TriD check programs, 3dtest.pl and line3d.pl, are added to the main PDL build directory. They can be run as quick checks of the new functionality and are short enough to run under the perl debugger if needed, e.g.:

  perl -Mblib 3dtest.pl
  OR
  perl -Mblib line3d.pl

OpenGL (a.k.a. GL) is the default TriD output device on all platforms. VRML does not work at the moment. GLpic is not tested but may work.

Closed SF bug #1476324 by adding a FAQ entry on clean installs.

Fix qsorti(null()) crash bug from SF bug #2110074: make qsorti() return quietly when given a null() piddle input.

Fix broken PP typemap finding code; thanks to CS for the final code and many testers!

Fix t/autoload.t tilde expansion bugs and test failures. Tilde expansion seems to work consistently with bash now.

Partial fix implemented for PDL::IO::Browser. The code has only been tested with cygwin, but it should work on systems with ncurses in the "right place". This is **not tested**, but set WITH_IO_BROWSER => 1 if you wish to try.

If the perldl shell is unable to load PDL for some reason and defaults to basic perl support, the prompt now changes to perl> to reflect that fact.

readflex() now works with File::LinearRaid.
Many win32 fixes to tests and build process which make things work more smoothly on win32 platforms. See the Changes file or run 'git log --stat' for the detailed list of changes. v2.4.4 2008-11-12 19:16:53-10:00 General Notes: - Bad value support is now enabled by default for PDL builds. This feature allows simpler handling of missing or invalid data during processing. For example, missing pixels could be interpolated across. Calculations could be skipped for missing data points... Edit the perldl.conf file to turn off bad value support before compiling if needed. - This release includes significant improvments in portability, enhancements in functionality, and *many* bugs fixed. - PDL::Graphics::TriD modules for 3-D and image visualization are being refactored for improved portability and performance. Preliminary hooks are in PDL-2.4.4 to support the new functionality. Announcements to the perldl mailing list will be made when the PDL::Graphics::OpenGL::Perl and Term::ReadLine::FreeGLUT suport is on CPAN. - Builds out-of-the-box on cygwin and win32 - perl 5.6.x is explicitly required to configure and will go away in future versions. 5.8.x and above are the recommended versions Summary of Changes: - Improve uuencode support in Dumper for additional OSes such as *BSD varieties that may need additional options to prevent stripping of the leading slash in pathnames including: darwin, netbsd, openbsd, freebsd, and dragonfly. - Updated more PDL tests to use the recommended Test::More - Updated PDL::Graphics::PLplot build support for more 5.9.0 specific features - AutoLoader ~ expansion has been updated to conform more closely to the ~ expansion in the bash shell - Better checks for a valid PROJ4 build environment are now performed before attempting to compile PDL modules using it - PDL now builds and runs on perl-5.10.x - The perldl shell has added support for using FreeGLUT for display in analogy with the existing Tk event loop support. This enables refactoring of the TriD modules to use the Perl OpenGL module (a.k.a. POGL) instead of the internal, and deprecated, PDL::Graphics::OpenGL et. al. - The perldl acquire/filter/execute loop is now $_-safe by using $lines instead of $_ for all the central modifications. Avoids problems with some AUTOLOAD functions that leaked $_. - Removed ExtUtils::F77 from the required prerequisites for PDL to improve the buildability on platforms without an available fortran compiler. If you have a fortran compiler and EU::F77 installed, PDL::Slatec will be built. - zeros function added as an alias for the zeroes function - Many warning messages that were not actually problems have been quieted, especially many pointer to int conversion messages - Added $PERLDL::HISTFILESIZE to allow configuration of the number of lines of history to be saved by the interactive PDL shell. - Fixed implementation of the pctover() function to address bug #2019651 on sf.net. Added explicit documentation in the code on the algorithm being used. 
- Various updates to the PDL FAQ.
- Implemented a PDL interface to the Minuit minimization library from CERN.
- Removed circular dependency in the PDL Makefile.PL Makefile generation process which caused builds with some versions of make to fail.
- Multiple fixes to enhance configuration and build for win32.
- Added basic table-inversion to t_lookup for PDL::Transform.
- Fixed problem in uniqvec() where it failed to generate a correct result if all the input vectors were the same; fixed bug #1869760.
- Add improved 16-bit image support for IO with rpic() and wpic(), provided you have a recent version of the NetPBM library that supports 16-bit images.
- Enabled building of GSL on Win32.

v2.4.3 2006-08-20 06:07:30-10:00

General Notes:

- Again, mainly a bugfix and polishing release.
- Builds out-of-the-box on cygwin and win32; the build environment has been significantly improved.
- perl 5.6.x is now deprecated; 5.8.x is recommended. Support for 5.6.x may go away in future versions.

Summary of Changes:

- PDL now builds under cygwin on windows PCs, including TriD (OpenGL) 3D graphics and PGPLOT and PLplot 2D graphics support. See PDL/cygwin/ and the files README and INSTALL for details and how to build/install external dependencies.
- The win32 build has been improved. See PDL/win32/INSTALL for details.
- Many fixes from the Debian build patches have been incorporated. See PDL/debian for specifics.
- 64bit platform build has been improved.
- New functionality, functions and modules:
  * Bad value support has been extended to per-PDL bad values as an experimental feature. To use, configure WITH_BADVAL => 1 and BADVAL_PER_PDL => 1 in perldl.conf before building.
  * PDL::GSL::INTEG now supports the calculation of nested integrals.
  * New function tcircle (threaded circle) added to PDL::Graphics::PGPLOT. This draws multiple circles in one go (see also tpoints and tline).
  * Added set operation routines for pdls treated as sets (help setops).
  * PDL::IO::GD module interface to the GD runtime image library has been integrated.
  * The PDL::GIS::Proj and PDL::Transform::Proj4 modules to interface to the PROJ4 Cartographic Projections Library have been added.
  * PDL::IO::HDF provides an interface to the HDF4 library.
- The PDL test suite (i.e. tests in PDL/t) has been enhanced. Coverage has improved and output diagnostic messages are more useful. Test::More is becoming the preferred test module. The vanilla Test and Test::Simple may be deprecated in the future.
- PDL core code has been fixed to address valgrind-detected errors and to enable better bad value support, including the new experimental per-PDL bad values. These changes will require a re-build/install of any external modules using the C interface of PDL. See perldl.conf to configure the new bad value support.
- Several TriD graphics build problems have been resolved. The TriD rotation bug has been fixed.
- Many other bug fixes too numerous to mention. See the PDL/Changes file for details.
- Multiple fixes and additions to PDL documentation as well as the PDL::FAQ.

v2.4.2 2004-12-28 09:19:30-10:00

General Notes:

- Again, mainly a bugfix and polishing release.
- perl 5.6.x is now deprecated; 5.8.x is recommended. Support for 5.6.x may go away in future versions.
- A little too late for Christmas; but happy new year 2005!

Summary of Changes:

- Overhaul of FITS I/O. FITS binary tables are now supported, for both reading and writing.
- Many improvements to PLplot handling.
- New Graphics::Limits package determines display range for multiple concurrent data sets.
- Better PDL::reduce function.
- Improvements to PDL::Transform.
- pdl() constructor is more permissive -- you can feed it PDLs and it does the Right Thing most of the time.
- Cleaner handling of config files.
- Improvements to multi-line parsing in the perldl shell.
- New 'pdl' command-line entry to the perldl shell allows #!-style scripting (so you can re-use your journal files verbatim).
- Several fixes for Microsoft Windows operation.
- PDL::undefval works properly, also has warning tripwires.
- statsover finally seems to produce meaningful, consistent RMS values.
- Several 64-bit compatibility issues fixed (this work is probably not yet complete).
- Many small bug-fixes too numerous to list (see the Changes file).

v2.4.1 2004-01-05 12:27:18-10:00

General Notes:

- Mainly a bugfix and polishing release.

Summary of Changes:

- Fixed warnings with perl 5.8.2.
- Replaced the original m51.fits with a freely distributable image.
- Upgraded PLplot interface for plplot-5.2.1 and perl 5.8.2.
- Improvement to documentation of autoloaded routines.
- Added a more universal `whatis' function to perldl.
- Numerous small fixes/additions to docs/functions.
- Improved handling of empty piddles.
- Fixed most reported bugs.

v2.4.0 2003-05-22 12:09:26-10:00

General Notes:

- Perl 5.6.0 or later is now required, along with the modules Filter and Text::Balanced.
- After installing PDL 2.4.0, external PDL modules will need to be re-built. (Any such modules will refuse to load until they have been re-built.)
- New demos of the PDL::Transform and PDL::Transform::Cartography modules have been added to perldl. Type 'demo transform' or 'demo cartography' in the perldl shell. (Note that PGPLOT is required to run them.)

Summary of Changes:

- The NiceSlice syntax comes of age. (Nice slicing has been around a while, but really needs to be acknowledged as the main way of slicing PDLs...)
- New GSL functionality: greatly improved access to the Gnu Scientific Library, including interpolation, finite-difference, random variable, and other routines.
- New, very powerful indexing and slicing operators allow boundary conditions (range, indexND).
- N-dimensional indexing (indexND) and selection (whichND) methods.
- Powerful syntax for coordinate transformation and arbitrary image resampling -- including high powered spatially-variable filtering (PDL::Transform module).
- Support for major cartographic transformations (PDL::Transform::Cartography module).
- New PLplot graphics interface (cleaner and faster than PGPLOT).
- Many improvements to the PGPLOT interface:
  * Strong FITS support (easy display of images, vectors, & contours in science coordinates)
  * Better vector graphic support [including improvements to line() and a new routine, lines()]
  * Much cleanup of errors and bugs
  * Spinlocks prevent interrupt-related PGPLOT crashes (requires Perl 5.8)
  * RGB output to truecolor devices (requires PGPLOT-5.3devel)
- Improvements to the perldl shell:
  * Many bug fixes
  * Multi-line inputs to the perldl shell for easier cut-n-paste
  * ^D blocking on demand (controlled by perl variable or command-line switch)
  * Autoloading detects error conditions on compile
  * New demos
- Header copying is now explicit rather than by reference -- so that, e.g., FITS file manipulation is more intuitive and natural.
- Improved support for Astro::FITS::Header.
- Bad value support is improved.
- Several new utility routines, including glue(), zcheck(), and ndcoords().
- Better matrix operation support: matrix operations are now in PDL::MatrixOps, and are all threadable. Singular value decomposition, determinant, linear equation solving, matrix inversion, eigenvalue decomposition, and LU-decomposition.

v2.3.4 2002-09-23 15:50:06-10:00

- Now should compile using perl 5.8.0.
- Improved speed for generating PDLs from a perl array ref.
- Added PDL::IO::Storable, which enables PDL storage/retrieval using the Storable package.
- Added PDL::GSL::SF (Gnu Scientific Library, Special Functions) hierarchy.
- New % operator follows (mathematically correct) perl % operator behavior.
- Numerous bug fixes.

See the Changes file for a detailed list of changes.

v2.3.3 2002-05-22 03:16:29-10:00

Mainly a bugfix release with some nice little additions:

- PDL::IO::Dumper: Cleanly save and restore complex data structures including PDLs.
- Support for the new Astro::FITS::Header module (available on CPAN).

See the Changes file for a detailed list of changes.

v2.3.2 2001-12-18 22:20:31-10:00

- A pure bugfix release to fix compilation problems with gimp-perl (part of the gimp distribution). The following notes from 2.3 and 2.3.1 still apply:

v2.3.1 2001-11-21 14:38:32-10:00

- A bugfix release to fix some compilation problems seen with 2.3. The following notes from 2.3 still apply:

v2.3 2001-11-16 05:12:41-10:00

Summary of Changes:

- A nicer slicing syntax for PDL added via the new PDL::NiceSlice module.
- Inline::Pdlpp module added, which enables in-line PDL::PP definitions (i.e. no Makefiles or building hassles for creating PP code).
- A multitude of bug fixes, doc updates, and other changes.

Note: Support for perl version 5.005 and previous is limited in this release. Perl 5.6.0 or greater is recommended for full PDL functionality.

See the Changes file for a detailed list of changes.

v2.2.1 2001-04-25 03:05:46-10:00

Summary of Changes:

Bugs Fixed:
- 'pow' function fixed in math.pd.
- Misc memory leaks fixed.
- PGPLOT 'undefined window size' bug fixed.
- Test failures with opengl.t fixed.
- Error in output of 'minimum_n_ind' function fixed.

Misc Changes:
- Documentation updates.
- Updates to work with perl 5.6.1.

See the Changes file for a detailed list of changes.

v2.2 2000-12-21 03:25:36-10:00

Major Changes:

- 'Bad' value support added. With this option compiled in, certain values in a PDL can be designated as 'Bad' (i.e. missing, empty, etc). With this designation, most PDL functions will properly ignore the 'Bad' values. See PDL::BadValues for details.
- PGPLOT interface rewritten. New features:
  * Interactive cursors (cursor)
  * Text on plots (text)
  * Legends (legend)
  * Circles, Rectangles, Ellipses
  * Multiple plot windows; one can jump from panel to panel when the window is divided into several panels
  * More control over options - see PDL::Graphics::PGPLOTOptions for details
  * New examples in Example/PGPLOT
- Major updates to the Tri-D code. Now requires perl 5.6 for TriD.
- 'Reduce' function added. This provides a consistent interface to the projection routines (sumover, average, etc). See PDL::Reduce.
- Improved OpenGL detection during 'perl Makefile.PL'.
- pdldoc command added. This allows you to look up PDL documentation similar to the perldoc command.
- Perl 5.6 is now recommended for PDL 2.2. It will still work with perl 5.005, but some of the extra libs won't be compiled (like Graphics/TriD).

Many other changes.
See the Changes file for a detailed list of changes.

v2.1 2000-06-07 22:23:47+00:00

Major Changes:

- Speed increase. Most PDL functions are now done totally in C code, without any perl wrapper functions as was done with previous versions. The speedup will be most noticeable for operations on many small PDL arrays.
- Memory leaks fixed.
- Added a consistent, object-oriented interface to the various interpolation functions in PDL (PDL::Func; see Lib/Func.pm).

See the Changes file for a detailed list of changes.

v2.005 2000-04-05 22:30:35+00:00

Major Changes:

- A bugfix release to fix 2.004 problems with PGPLOT changes and perl 5.6.0. The following notes from 2.004 still apply:
- *IMPORTANT NOTE*: Due to changes to the PGPLOT module, 'use PDL::Graphics::PGPLOT' has been removed from PDL.pm (i.e. in scripts and within perldl you now need to explicitly say 'use PDL::Graphics::PGPLOT'). Additionally, it needs Karl's new 2.16 release of the PGPLOT module (available from CPAN).
- Notable additions are a module for operations on complex piddles (PDL::Complex), a subtype of PDL which allows manipulation of byte type PDLs as N-dimensional arrays of fixed length strings (PDL::Char), and a Levenberg-Marquardt fitting module (PDL::Fit::LM).
- Bug reports and patches to the relevant places on sourceforge, please.
https://metacpan.org/changes/distribution/PDL
This blog will discuss features about .NET, both Windows and web development.

You may have already noticed that it's harder to debug errors that occur within a Lambda expression. For instance, if you have this Lambda expression:

  var entries = collection.Where(i => i.EndDate.Value >= new DateTime(2008, 2, 1));

Notice the type that EndDate represents: naturally it's a date, but it's also a nullable one, so it's Nullable<DateTime> or DateTime?. When Value is called, a value has to exist for EndDate; otherwise, an exception gets thrown because you can't call Value when the value is null (HasValue can be used to check for null). But the issue with Lambdas is that they don't throw an exception immediately. Rather, they throw an exception whenever you do anything with the entries variable. As soon as you do "var entryCount = entries.Count();" an exception will be thrown, seemingly an issue with this line but ultimately pointing to the lambda. (A sketch of this, and of the HasValue guard, appears at the end of this post.)

Now, to debug, if it's your method being called, the debugger steps into it. Otherwise, for framework code, it doesn't (except possibly if you enabled CLR debugging, which I didn't).

You probably know that C# supports inheritance. With inheritance, C# supports constructor inheritance of sorts. This means that you can call a base class's constructors in the derived class. If you had the following class:

public class DerivedObject : BaseObject
{
    public DerivedObject()
    {
    }
}

This, by nature, doesn't support calling the base class's constructor. Instead, you would do:

public class DerivedObject : BaseObject
{
    public DerivedObject() : base()
    {
    }
}

The base() statement calls the base class constructor. If the constructor has any parameters, these can be passed along too. You may not want to call the base class; there isn't any point in calling the base constructor unless the base constructor does something (like pass data to local variables or execute some sort of logic, though the latter isn't recommended).

ASP.NET AJAX has this too; to define a constructor for a class that inherits from another class, you do:

DerivedJSObject = function() {
}
DerivedJSObject.registerClass("DerivedJSObject", BaseJSObject);

Now, this scenario involves not calling the base class constructor. Remember I said in C# it's OK to not call the base class constructor. In ASP.NET AJAX, it IS NOT OK to omit this call. This is because ASP.NET AJAX does some important stuff in the base class call, through a special method. This method is initializeBase, as in:

DerivedJSObject = function() {
    DerivedJSObject.initializeBase(this);
}

The initializeBase method takes the instance of the class, followed by any other parameters to pass to the constructor (in an array form of []). This method is important because JavaScript doesn't support inheritance. Because it doesn't, there are a couple of ways to set up inheritance in JavaScript. I'm not going to go too in-depth into this because these references do a great job of explaining the concepts.

I found out yesterday that I was renewed as a Microsoft MVP for another year! I look forward to tweeting, blogging, and writing about Microsoft .NET technologies. You can connect to me with the following:

Twitter: @brianmains
Linked In:

I created a TWTPOLL for this very question. Please submit your opinion. Thanks for your participation in advance.
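Returning to the Lambda-debugging post above, here is a hedged sketch of the deferred-execution behavior. The Entry class and the sample list are illustrative assumptions, not from the original post; the point is that building the query throws nothing, while enumerating it does, and a HasValue check inside the predicate keeps it safe:

using System;
using System.Collections.Generic;
using System.Linq;

class Entry { public DateTime? EndDate { get; set; } }

class Program
{
    static void Main()
    {
        var collection = new List<Entry> { new Entry { EndDate = null } };

        // Deferred execution: building the query throws nothing...
        var entries = collection.Where(i => i.EndDate.Value >= new DateTime(2008, 2, 1));

        // ...the InvalidOperationException only surfaces when the query is enumerated:
        // var entryCount = entries.Count();

        // Guarding with HasValue inside the predicate avoids the exception entirely:
        var safe = collection.Where(i => i.EndDate.HasValue &&
                                         i.EndDate.Value >= new DateTime(2008, 2, 1));
        Console.WriteLine(safe.Count()); // prints 0
    }
}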
http://dotnetslackers.com/Community/blogs/bmains/archive/2009/07.aspx
This blog post is about the ways of getting data from an API in React. Before you read this post, you should be familiar with the React library and Application Programming Interfaces (APIs).

React is a wonderful tool for building rich and highly scalable user interfaces. One of its powerful features is the possibility to fetch data for the web application from the outside and interact with it.

Why Fetch Data?

When you are just starting to develop web applications with React, you probably won't need any external data at the beginning. You will build simple applications like a ToDo app or Counter and add your data to the state object of your application. And that is totally fine. However, at some point you will want to request real-world data from your own or a third-party API. For example, if you want to build a book store or weather application, it is faster and more convenient to use one of those free data sources available on the Internet.

Where to Do Data Fetching

Now that we have decided that we want to fetch data from an external source, here comes the question - where exactly in our application should we do that? The answer depends on the following criteria:

- who is interested in the data?
- who will show the loading indicator in case data is pending?
- where to show an optional error message when the request fails?

Usually it is a common parent component in the components tree that will do this job. It will fetch the data, store it in its local state and distribute it to the children.

1. On the first mounting of the component

We use this way when we want the data to be accessible when we first start the application. It means we need to perform data fetching when our parent component is being mounted. In class-based components the perfect place for data fetching is the componentDidMount() lifecycle method. In functional components it is the useEffect() hook with an empty dependency array, because we need the data to be fetched once.

2. On an event being triggered

We can fetch data when an event is triggered (for example a button click) by creating a function which will do the data fetching and then binding that function to the event (a sketch of this appears just before the Axios setup below).

Ways of Fetching Data

There are many ways to extract data from an API in React:

- using Fetch API
- using Axios library
- using async-await syntax
- using custom hooks
- using React Query library
- using GraphQL API

We will explore these ways now in detail.

1. Fetching Data with Fetch API

Fetch API is built into most modern browsers on the window object (window.fetch) and enables us to make HTTP requests very easily using JavaScript promises. In our CRA we can use the fetch() method to get the data. This method accepts just a URL to the data. To do so, we will create a method called fetchData(). It will call the fetch() method with the provided URL, then convert the result to a JSON object and print it to the console:

const fetchData = () => {
  return fetch("")
    .then((response) => response.json())
    .then((data) => console.log(data));
}

We can use this method now anywhere in the application. Here is an example of how to use it in the useEffect() hook:

import { useEffect } from "react";

useEffect(() => {
  fetchData();
}, []);

2. Fetching Data with Axios library

It does the same job as Fetch, but the main difference is that it already returns the result as a JSON object, so we don't need to convert it.
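Before the Axios walkthrough, here is the sketch promised in section 2 above for event-triggered fetching. It is a hedged example - the component name, state hook, and empty URL placeholder are illustrative assumptions rather than code from the original post:

import { useState } from "react";

function FetchOnClick() {
  const [data, setData] = useState(null);

  // bound to the click event, so fetching happens only when the user asks for it
  const handleClick = () => {
    fetch("")
      .then((response) => response.json())
      .then((json) => setData(json));
  };

  return <button onClick={handleClick}>Load data</button>;
}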
First we need to install it using npm:

npm install axios

Then we need to import it into our project, and we can use it in the same fetchData() function instead of the fetch() method:

import axios from "axios";

const fetchData = () => {
  return axios.get("")
    .then((response) => console.log(response.data));
}

What's convenient about using Axios is that it has a much shorter syntax that allows us to cut down on our code, and it includes a lot of tools and features which Fetch does not have in its API.

3. Fetching Data with Async-Await syntax

In ES7, it became possible to resolve promises using the async-await syntax. If you are not familiar with such functions, check here. The benefit of this is that it enables us to remove our .then() callbacks and simply get back our asynchronously resolved data. Let's re-write our fetchData() function using this syntax:

async function fetchData() {
  try {
    const result = await axios.get("");
    console.log(result.data);
  } catch (error) {
    console.error(error);
  }
}

4. Fetching Data with a Custom Hook

We can use the library React-Fetch-Hook to extract the data from an API. It already includes several properties we can use: data, error for error handling, and isLoading for loading issues.

First it should be installed:

npm install react-fetch-hook

Then it should be imported and used at the top of the common parent component:

import useFetch from "react-fetch-hook";

const { data } = useFetch("");
console.log(data);

There are other ways of data fetching, such as the React Query library and the GraphQL API, but this blog post does not cover them in depth - you are free to explore those :)

Happy Fetching!!!

Thank you for reading my blog. Feel free to connect on LinkedIn or Twitter :)

Discussion (31)

Here is a playlist of videos about React that I made for my students - hope it helps you learn faster. I started posting a new YouTube playlist last week about React in 2021 that includes all the newer features of React.

Great article! If any reader is interested in a more in-depth video guide that covers each of these data-fetching strategies, you can check out my post here: reedbarger.com/fetch-data-in-react

Great post! I discovered another very interesting library, SWR by Vercel :)

Is the try/catch useful for an async API call? If the promise rejects, the catch will not be triggered.

There are actually situations where using axios, try/catch & async/await can cause unwanted behaviour. At least for me it did. E.g. when using interceptors a failed request might not end up in catch.

Nevermind, just read it does actually throw :D

My personal favourite way to handle web requests (post and put) is definitely axios due to it feeling far more intuitive than the others. What's a bit missing in this post is: in the topic index you mention GraphQL (6.) but you only mention it as a side note together with the react-query library. "Ways of fetching data" is a bit confusing to me. Axios and fetch-api are libraries for sending requests. Everything else is ways to structure/time sending requests. A bit confusing from my perspective.

How do you cancel the request once the component is unmounted?

Why would you need to cancel it? It is performed just one time.

Imagine you are setting state on a successful response. If the component is unmounted before the response is back, you will get an error, something like "Unable to set state on unmounted component". In this case you can introduce a variable to track if the component is unmounted or not.
If you use a functional component, you can write something like this:

useEffect(() => {
  let isMounted = true; // track whether component is mounted
  // fetchData/setData are whatever your component already uses
  fetchData().then((result) => {
    if (isMounted) setData(result); // only touch state while still mounted
  });
  return () => {
    isMounted = false; // flipped by the cleanup on unmount
  };
}, []); // only on "didMount"

Replace "reach" with "rich" in the second paragraph. Nice article 😊

Another nice article! I'd never heard of the fetch-hook before, pretty cool :)

Nice post, thanks!

I have read this blog and enjoyed it a lot, but I have a question: which way is the best way to fetch data from a backend?

I use Axios for this.

There is really no better way :) use what is suitable for your needs. If you ask me, I use Axios just to avoid that extra line of code to transform the data to a JSON object, but it requires an additional import... most people prefer fetch() because it is already a built-in method.

Thanks, this post was needed. I never thought about making a custom hook for API calls oO I feel kinda stupid now haha. Thanks for that realization!

You are welcome :)

If you need to quickly create a custom REST API for testing the fetch process, I would recommend you have a look at the restapify library.

I love articles that present implementations with clear examples and options like you have. Nice one!

This post is exactly what I needed for my application. Thank you!

Great explanation. Thank you!

How do I fetch a fixed number of items from jsonplaceholder? For example, I want to fetch only 10 elements for my page.

First learn why we are using React.js, and learn how to create components in React.js, like class and functional components.

Fantastic post.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/olenadrugalya/ways-of-getting-data-from-api-in-react-2kpf
This is a Java Program to Find Prime Numbers Within a Range of n1 and n2.

Enter the upper and lower limits as input. We then use the modulus operation along with nested for loops and if conditions to get the output.

Here is the source code of the Java Program to Find Prime Numbers Within a Range of n1 and n2. The Java program is successfully compiled and run on a Windows system. The program output is also shown below.

import java.util.Scanner;

public class Prime
{
    public static void main(String args[])
    {
        int s1, s2, flag, i, j;
        Scanner s = new Scanner(System.in);

        System.out.println("Enter the lower limit :");
        s1 = s.nextInt();
        System.out.println("Enter the upper limit :");
        s2 = s.nextInt();
        System.out.println("The prime numbers in between the entered limits are :");
        for (i = s1; i <= s2; i++)
        {
            if (i < 2)          // 0 and 1 are not prime
                continue;
            flag = 1;           // assume i is prime until a divisor is found
            for (j = 2; j < i; j++)
            {
                if (i % j == 0)
                {
                    flag = 0;
                    break;
                }
            }
            if (flag == 1)
            {
                System.out.println(i);
            }
        }
    }
}

Output:

$ javac Prime.java
$ java Prime
Enter the lower limit :
2
Enter the upper limit :
20
The prime numbers in between the entered limits are :
2
3
5
7
11
13
17
19

Sanfoundry Global Education & Learning Series – 1000 Java Programs. Here's the list of Best Reference Books in Java Programming, Data Structures and Algorithms.
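A common refinement of the program above - shown here as a hedged sketch, not part of the original listing - is to stop trial division at the square root of the candidate, which matters once the upper limit grows large:

    // replacement inner loop: j * j <= i bounds trial division at sqrt(i)
    for (j = 2; j * j <= i; j++)
    {
        if (i % j == 0)
        {
            flag = 0;
            break;
        }
    }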
https://www.sanfoundry.com/java-program-find-prime-numbers-within-range-n1-n2/
In this article, we will learn about the solution and approach to solve the given problem statement.

Problem statement: Given an array as input, we need to find the largest element in the array.

Let's see the implementation below -

def largest(arr, n):
    # assume the first element is the maximal element
    max = arr[0]
    for i in range(1, n):
        if arr[i] > max:
            max = arr[i]
    return max

# main
arr = [10, 24, 45, 90, 98]
n = len(arr)
ans = largest(arr, n)
print("Largest in the given array is", ans)

Output:

Largest in the given array is 98

All the variables and functions are declared in global scope.

In this article, we learned about the approach to find the largest element in an array.
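As an aside - not part of the original tutorial - the same result can be obtained with Python's built-in max(), which avoids the manual loop entirely:

arr = [10, 24, 45, 90, 98]
print("Largest in the given array is", max(arr))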
https://www.tutorialspoint.com/python-program-to-find-largest-element-in-an-array
Spring is in the air here in sunny NYC! Well, it was sunny for a bit. Now it's turned into that classic film-noir drizzle. That's authentic New York weather outside our window as we do the Round Up!

Ask for Help

I asked, "In Jasmine, how do you spy on a constructor?"

Suppose you have a constructor called Widget. Saying spyOn(window, "Widget") swaps out the real Widget function with a spy. The real Widget implementation takes its prototype with it, which means that Widgets created while the constructor is spied on don't get the methods a Widget would. And in Javascript, new is an operator, not a class method, so there is no obvious place to intercept it.

Interesting Stuff

nilemail?

Regarding Jasmine and stubbing constructors, you can do this:

Since the Widget ctor is just a function, new calls the function and gets your fake widget, and then it does its extra magic with the prototype and such on your fake widget. Should work for most things you'd want to do, I believe.

March 12, 2010 at 8:21 pm

The problem is that constructors don't return their new objects; they just do stuff to them. So even if I do that, `new Widget()` won't return `fakeWidget`, it'll return `{}` (since we've stubbed out the constructor implementation, and so no properties are added to the new object). Just saying `Widget()` *will* return `fakeWidget`, but that's not how `Widget` is used.

March 13, 2010 at 7:54 am

Aaah, I see the problem you're having with stubbing the constructor. It's not that the function is stubbed, it's that the function's prototype is not on the stub. Calling new will return the object in the andReturn parameter, but it won't attach the prototype (or, really, it attaches the prototype associated with the stub to the new object). Hm. What happens if you do this:

March 13, 2010 at 10:41 am

That was the workaround we went with for the prototype problem that doesn't solve the fact that `andReturn` doesn't work on constructors...

...which turns out to be a complete lie. I really thought we saw that fail, but an isolated test case shows that it works. Which is curious to me: how is that possible? My understanding is that `spyOn` replaces the spied function with a spy. That spy can be told what to return with `.andReturn`. But the return value of a constructor is discarded! Saying `new Foo()` *always* creates and returns a new object, doesn't it? So how does this pass?

var namespace = {};
namespace.Constructor = function() {
  this.wasMadeWithRealConstructor = true;
};
var myFakeObject = { wasMadeWithRealConstructor: false };
spyOn(namespace, "Constructor").andReturn(myFakeObject);
expect(new namespace.Constructor()).toEqual(myFakeObject);

March 13, 2010 at 11:25 am

OK, who let their pair stick a knife in the toaster? Bad pair...

March 13, 2010 at 11:22 pm

fwiw, I believe the following will generally get you the behavior you need, but it requires a fixed set of arguments. We'll look at supporting functions with properties better in jasmine; hopefully in the future this sort of thing will be transparent.

March 14, 2010 at 1:27 pm
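A hedged sketch of the prototype-preserving workaround discussed above, assuming the same-era Jasmine 1.x spy API (andCallFake, like the andReturn used in the thread); the RealWidget name is illustrative:

var RealWidget = window.Widget;            // keep a reference before the spy replaces it
spyOn(window, "Widget").andCallFake(function() {
  // a constructor that returns an object makes `new` return that object,
  // so delegating to the real constructor preserves the prototype methods
  // while the spy still records the call
  return new RealWidget();
});

var w = new Widget();                       // w keeps Widget.prototype's methods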
http://pivotallabs.com/nyc-standup-round-up-for-mar-8th-mar-12th/?tag=incubation
RCMDSH(3)                 BSD Programmer's Manual                 RCMDSH(3)

NAME
     rcmdsh - return a stream to a remote command without superuser

SYNOPSIS
     #include <unistd.h>

     int
     rcmdsh(char **ahost, int inport, const char *locuser,
         const char *remuser, const char *cmd, char *rshprog);

DESCRIPTION
     The rcmdsh() function executes a command on a remote machine using
     rsh(1) or the value of rshprog (if non-null).  rshprog may be a
     fully-qualified path, a non-qualified command, or a command containing
     space-separated command line arguments.

     The rcmdsh() function looks up the host *ahost using gethostbyname(3),
     returning -1 if the host does not exist.  Otherwise *ahost is set to
     the standard name of the host and a connection is established.

RETURN VALUES
     On success, a file descriptor for reading from and writing to the
     remote command is returned; on error, -1 is returned and a diagnostic
     message is printed on standard error.

SEE ALSO
     rsh(1), socketpair(2), rcmd(3), rshd(8)

HISTORY
     The rcmdsh() function first appeared in OpenBSD 2.0.

BUGS
     If rsh(1) encounters an error, a file descriptor is still returned
     instead of -1.

MirOS BSD #10-current May.
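A hedged usage sketch of the prototype given in the SYNOPSIS above (the host name, user names, and command are illustrative, and error handling is minimal):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    char *host = strdup("remotehost");  /* writable: rcmdsh() may reset *ahost */
    char buf[256];
    ssize_t n;

    /* NULL rshprog falls back to rsh(1) */
    int fd = rcmdsh(&host, 0, "localuser", "remoteuser", "uptime", NULL);
    if (fd < 0) {
        fprintf(stderr, "rcmdsh failed\n");
        return 1;
    }

    /* the descriptor is connected to the remote command's stdio */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);
    close(fd);
    return 0;
}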
http://mirbsd.mirsolutions.de/htman/sparc/man3/rcmdsh.htm
How to get the tax_id from a sale order? (Customize Quotation template)

Hey everybody

Today I'm trying to build my own Odoo templates but I'm stuck on one thing. So far I managed to get the fields name, product_uom, product_uom_quantity, price_unit etc. from the model sale.order. But now I need to get the tax amount from the database, which is stored in the model sale.order.line under the field tax_id, which is a many2many field. How exactly can I get the tax_id from the model sale.order.line when I'm using the model sale.order?

So far I have the following code:

class sale_order(osv.Model):
    _inherit = 'sale.order'
    _columns = {
        'is_template': fields.boolean('Template'),
        'template_id': fields.many2one('sale.order', 'Offer', domain=[('is_template', '=', True)]),
    }

    def onchange_template(self, cr, uid, ids, template=False, partner_id=False, pricelist_id=False, fiscal_position=False):
        line_obj = self.pool.get('sale.order.line')
        result = {'order_line': []}
        lines = []
        if not template:
            return {'value': result}
        if not partner_id:
            raise osv.except_osv(_('No Customer Defined!'), _('Before choosing a template,\n select a customer in the template form.'))
        template = self.browse(cr, uid, template)
        order_lines = template.order_line
        for line in order_lines:
            vals = line_obj.product_id_change(cr, uid, [],
                pricelist=pricelist_id,
                product=line.product_id and line.product_id.id or False,
                qty=0.0,
                uom=False,
                qty_uos=0.0,
                uos=False,
                name='',
                partner_id=partner_id,
                lang=False,
                update_tax=True,
                date_order=False,
                packaging=False,
                fiscal_position=fiscal_position,
                flag=False)
            vals['value']['discount'] = line.discount
            vals['value']['product_id'] = line.product_id and line.product_id.id or False
            vals['value']['name'] = line.name
            vals['value']['state'] = 'draft'
            vals['value']['product_uom_qty'] = line.product_uom_qty
            vals['value']['product_uom'] = line.product_uom and line.product_uom.id or False
            vals['value']['price_unit'] = line.price_unit
            #vals['value']['tax_id'] = line.tax_id
            lines.append(vals['value'])
        result['order_line'] = lines

Any information or help is much appreciated!

Yenthe

Answer: You can get the many2many value using the code below:

vals['value']['tax_id'] = [(6, 0, [x.id for x in line.tax_id])]

Awesome! This does exactly what I wanted it to do! Could you please explain to me what every value after the = does? I understand that the for gets every id for every record, but what about the 6 and the 0? Where does this come from?

Please see this document:

For a many2many field, a list of tuples is expected. Here is the list of tuples that are accepted, with the corresponding semantics:

(6, 0, [IDs]) -- replace the list of linked IDs (like using (5) then (4, ID) for each ID in the list of IDs)

Example: [(6, 0, [8, 5, 6, 4])] sets the many2many to ids [8, 5, 6, 4]
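For completeness, the other tuple commands accepted for one2many/many2many fields - this list comes from the standard OpenERP/Odoo ORM write conventions rather than from the answer above, so treat it as a hedged reference:

# (0, 0, {values})  create a new linked record from {values}
# (1, ID, {values}) update the linked record with id = ID
# (2, ID)           remove and delete the linked record with id = ID
# (3, ID)           cut the link to record ID (does not delete it)
# (4, ID)           link to existing record ID
# (5)               unlink all (like using (3, ID) for every linked record)
# (6, 0, [IDs])     replace the list of linked IDs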
https://www.odoo.com/forum/help-1/question/how-to-get-the-tax-id-from-a-sale-order-customize-quotation-template-62714
OFFSETOF(3)                Linux Programmer's Manual                OFFSETOF(3)

NAME
       offsetof - offset of a structure member

SYNOPSIS
       #include <stddef.h>

       size_t offsetof(type, member);

DESCRIPTION
       The macro offsetof() returns the offset of the field member from the
       start of the structure type.

       This macro is useful because the sizes of the fields that compose a
       structure can vary across implementations, and compilers may insert
       different numbers of padding bytes between fields. Consequently, an
       element's offset is not necessarily given by the sum of the sizes of
       the previous elements.

RETURN VALUE
       offsetof() returns the offset of the given member within the given
       type, in units of bytes.

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008, C89, C99.

EXAMPLES
       The program below prints the offsets of the fields of a structure
       and the structure's total size; the output is compiler dependent.

           #include <stddef.h>
           #include <stdio.h>
           #include <stdlib.h>

           int
           main(void)
           {
               struct s {
                   int i;
                   char c;
                   double d;
                   char a[];
               };

               /* Output is compiler dependent */
               printf("offsets: i=%zu; c=%zu; d=%zu a=%zu\n",
                       offsetof(struct s, i), offsetof(struct s, c),
                       offsetof(struct s, d), offsetof(struct s, a));
               printf("sizeof(struct s)=%zu\n", sizeof(struct s));

               exit(EXIT_SUCCESS);
           }

COLOPHON
       This page is part of release 5.10 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and
       the latest version of this page, can be found at.

GNU                               2020-11-01                      OFFSETOF(3)

Pages that refer to this page: readdir(3), system_data_types(7)
https://man7.org/linux/man-pages/man3/offsetof.3.html
Can't get the WebView delegate methods to work (SOLVED)

I'm trying to do something when a webpage has finished loading, but it doesn't seem to work.

import ui, sound

class MyWebViewDelegate (object):
    def webview_did_finish_load(webview):
        sound.play_effect('Ding_3')

w = ui.WebView()
w.frame = (0, 0, ui.get_screen_size()[0], ui.get_screen_size()[1])
w.delegate = MyWebViewDelegate()
w.load_url('')
w.present()

Any ideas?

Sorry about this, the documentation is wrong. The first parameter of every method should be self. So your delegate class should look like this:

class MyWebViewDelegate (object):
    def webview_did_finish_load(self, webview):
        sound.play_effect('Ding_3')

Just curious... I was playing with this snippet and I find that some webpages call webview_did_finish_load more than once. How does this method determine that the load is finished, and why do some webpages (e.g.,) trigger it more than once?

@ihf This is related to embedded iframes. Unfortunately, I forgot to implement a loading attribute that would allow you to check whether the view is still loading, so there's currently no way to work around this...

Noted that it is a year later, and the documentation is still wrong.
https://forum.omz-software.com/topic/872/can-t-get-the-webview-delegate-methods-to-work-solved
Question:

I've been busy working on a program that can solve Minesweeper puzzles (for no other reason than I think it's fun). When it comes down to the UI, though, I greatly dislike the idea of instantiating over a hundred of the same control, one per cell. Should I create a custom control that handles all of its drawing and input itself? What approach do you guys suggest? I'm using WPF, and I'm pretty new. Any pointers would be awesome.

Solution 1:

Yes, a custom control would be a good idea. Also, M-V-VM is a must in a situation like this; it will reduce the complexity of your app greatly. I'd take a UniformGrid and use Buttons as the squares. You'd have to create a tri-state custom button if you want to add in the "?" intermediate state. The model for the button would look like:

public class MineSquare : INotifyPropertyChanged
{
    // exploded, number, or nothing
    public ImageSource ButtonImage { get; private set; }

    // true, then goes to false when clicked
    public bool CanClick { get; private set; }

    // bound to the Command of the button
    public ICommand Click { get; private set; }
}

You deal with the model in code rather than the controls. Drop nine MineSquares into an ObservableCollection on your ViewModel bound to your UniformGrid and you have a 3x3 minesweeper game. Each button handles its own logic. You can hook into the models via the view model to iterate over all squares and determine if everybody has been clicked. (A sketch of the binding appears below.)

Solution 2:

I think you should create a single, owner-drawn control. WPF is cool, but a WPF app still has the same limitations regarding the total number of controls on a form, so having a separate control for each cell in Minesweeper would still be problematic.

Minesweeper is pretty played out, though, as much as I love it myself. Maybe you could have more fun with it by making the cells hexagonal instead of rectangular, and arranging the mines so that they spell out dirty words or something.
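For the data-binding approach in Solution 1, here is a hedged XAML sketch (the Squares collection name is an illustrative assumption; the ItemsControl swaps its panel for a UniformGrid and builds one button per MineSquare in the bound collection):

<ItemsControl ItemsSource="{Binding Squares}">
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <!-- lays the generated buttons out as the minefield -->
            <UniformGrid Rows="3" Columns="3"/>
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <!-- each cell binds straight to the MineSquare model -->
            <Button Command="{Binding Click}" IsEnabled="{Binding CanClick}">
                <Image Source="{Binding ButtonImage}"/>
            </Button>
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>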
http://www.toontricks.com/2018/11/tutorial-creating-minesweeper-ui-control.html
On Tuesday 14 February 2006 5:40 am, Olivier Galibert wrote:
> > > 4- sysfs has all the information you need, just read it

There's no ownership or permissions information in sysfs. Even busybox's eight kilobyte micro-udev replacement has the option for an /etc/mdev.conf to specify permissions and ownership on device nodes.

> > That mapping should not live in sysfs,
> > /dev is none of the kernel's business and sysfs is the kernel's
> > playground.
>
> Why not have udev and whatever comes after tell the kernel so that a
> symlink is done in sysfs? The kernel not deciding policy does not
> prevent it from storing and giving back userland-provided information.

That wouldn't help us. If userspace generates the info, then userspace can drop a note in /dev or something to keep it there.

> I guess you didn't bother to read the "answer 3" paragraph of my
> email. Do you trust udev to still exist two years from now, given
> that hotplug died in less than that? Do you trust udevinfo to have
> the same interface two years from now given that the current interface
> is already incompatible with a not even two-years old one (udev 039,
> 15-Oct-2004 according to kernel.org) which is widely deployed as part
> of fedora core 3?

You want something simple and stable?

Busybox's mdev should still be there, and have the same interface, two years from now. (We may have to fix it between now and then if the kernel keeps moving out from under us, but that shouldn't change how you set it up and use it.)

If you call it without -s, it assumes it was called from /sbin/hotplug and looks at its environment variables to figure out what device node to create/delete.

That's it. That's all we do. No persistent naming, no device renaming, /dev is a flat namespace with no subdirectories, mounting tmpfs on it before calling us is your problem, as is putting /dev/pts and /dev/shm in there.

echo "CONFIG_MDEV=y" >> .config
echo "CONFIG_FEATURE_MDEV_CONF=y" >> .config
make
mv busybox mdev

There you go, standalone 8k binary. It'll come standard in the busybox 1.1.1 release. (It was in 1.1.0, but had a bug.)

Rob
--
Never bet against the cheap plastic solution.
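For reference, a minimal /etc/mdev.conf sketch in the format busybox documents (device-name regex, user:group, octal mode); the entries here are illustrative, not from Rob's mail:

# device regex    user:group   mode
null              root:root    666
console           root:tty     600
sd[a-z][0-9]*     root:disk    660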
https://lkml.org/lkml/2006/2/14/478
Well I'm stuck on this problem for my class, I mean I'm drawing a complete blank here. Here's the crux of the problem:

This program has a console-based user interface. After printing a brief introduction to the user, your program will prompt for a file name. Your program will examine the data and find the longest sequence of decreasing numbers. The program will display the numbers in that sequence, how many there are, and the position (starting from 0) where the sequence starts. The program should repeat this process, allowing the user to analyze other files, until the user signals they are done (using an exit sentinel of some sort).

Here's one of the methods I can't quite get:

public static double[] getDecSeq(double[] data) -- returns the longest decreasing sequence found in the data. If more than one decreasing sequence is present with the "longest" length, return the one that appears closest to the beginning of the array. Therefore, if the entire set of data is non-decreasing, return an array of capacity one holding the first element. The returned array should be completely filled (i.e. its length should match the length of the sequence).

A push in the right direction would be great. I keep trying to use a for loop to go through the array and find the descending sequence, but I'm not sure how to recognize it and how to store it so I have proper access to it later. Here's what I have so far.

import java.util.*;
import java.io.*;

public class Sequence{
    public static void main(String[] args) throws FileNotFoundException {
        Scanner console = new Scanner(System.in);
        System.out.print("Please enter the name of the file to be processed: ");
        File f = new File(console.nextLine());
        System.out.println(Arrays.toString(getDecSeq((getData(f)))));
    }

    public static double[] getData(File source) throws FileNotFoundException{
        Scanner input = new Scanner(source);
        double[] listA = new double [70];
        int count = 0;
        while (input.hasNextDouble()){
            listA[count] = input.nextDouble();
            count++;
        }
        double[] listB = new double[count];
        for (int i = 0; i < listB.length; i++)
            listB[i] = listA[i];
        return listB;
    }

    //public static int findSequence(double[] data, double[] seq){
    //}

    public static double[] getDecSeq(double[] data){
        //System.out.println("hello" + Arrays.toString(data));
        double[] seq = new double [data.length];
        double temp = 0;
        int j = 0;
        for (int i = 1; i < data.length; i++){  //This is where I start going
            if (data[i] < data[i-1])            //off the rails!!
                seq[i] = data[i];
        }
        return seq;
    }

    //public static String toString(double[] data){
    //}
}
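One hedged sketch of an approach to getDecSeq - not the poster's code, just an illustration of the technique: track the current decreasing run against the best run seen so far, keep the earliest run on ties by using a strict comparison, and size the result exactly with java.util.Arrays.copyOfRange:

public static double[] getDecSeq(double[] data) {
    int bestStart = 0, bestLen = 1;   // a single element is the fallback result
    int curStart = 0;                 // start index of the current decreasing run
    for (int i = 1; i < data.length; i++) {
        if (data[i] < data[i - 1]) {
            int curLen = i - curStart + 1;
            if (curLen > bestLen) {   // strictly '>' keeps the earliest longest run
                bestLen = curLen;
                bestStart = curStart;
            }
        } else {
            curStart = i;             // run broken; a new run starts here
        }
    }
    return Arrays.copyOfRange(data, bestStart, bestStart + bestLen);
}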
http://www.dreamincode.net/forums/topic/198727-help-with-arrays-methods/
So I have a specific requirement that a list be stored as an array, but the professor suggested it would be easier to start it off as an ArrayList and then use the .toArray method to copy it to a new array. The thing is, the array is to be declared as a field called "songs", and at that point the program won't know how many objects are going to be in the list, and the number of songs is an important part for later sections of the program.

Basically what's happening is I'm supposed to take a text file with around 10,000 song lyrics and organize them so they'll be searchable. The text file is organized well, so I just read through the file and make each one a Song object, with String parameters containing title, artist, and lyrics. As they're read in, I add them to an ArrayList, songsAL. After all that is done, I try to copy songsAL to the existing (but uninitialized) array songs, and I get a NullPointerException. If I just keep it as an ArrayList and change any place that requires an array to an ArrayList, it works fine. But he gave us a GUI we're supposed to use it with, and I doubt that will work if I keep it as an ArrayList.

It's been over a year since I've done any programming work, and before that I had only done it for a year, so I'm pretty rusty. The existing comments were meant for the professor, but they might help you understand my thinking. Here's my code:

Code :

import java.io.*;
import java.util.*;

/*
 * SongCollection.java
 * Read the specified data file and build an array of songs.
 *
 * Initial code by Bob Boothe September 2008
 */
public class CopyOfSongCollection {

    private Song[] songs;
    /* I can't figure out a way to copy the ArrayList I have to the existing
       array songs. I could make a new array inside the constructor, but then
       I wouldn't be able to access it. I tried actually building the array
       during the file scanning process, but without initializing it first I
       can't. The only way I can think of would be to initialize it up here,
       but that would make it a lot harder to get the number of songs, since
       I would have to initialize it before knowing the number of songs in
       the text file.
    */

    // so I just used an arrayList for the whole thing
    private ArrayList<Song> songsAL = new ArrayList<Song>();

    public CopyOfSongCollection(String filename) throws IOException {
        // read in the file and build songs array
        try {
            FileReader reader = new FileReader(filename);
            Scanner fileScan = new Scanner(reader);
            while (fileScan.hasNext()) {                // as long as there is something in the next spot of the file
                String a = fileScan.nextLine();         // first line will be artist
                a = a.substring(8, a.length() - 1);     // remove prefix 'ARTIST="'
                String t = fileScan.nextLine();         // next line is title of song
                t = t.substring(7, t.length() - 1);     // again, remove prefix
                String l = "";                          // initialize string for lyrics
                while (!fileScan.hasNext("\"")) {       // while the next character is NOT a double quote
                    l = l + "\n" + fileScan.nextLine(); // add the next line to the lyrics string
                }
                l = l.substring(9, l.length());         // remove prefix
                fileScan.nextLine();                    // necessary to skip the line with just quotations
                Song s = new Song(a, t, l);             // build song object out of previously obtained info
                this.songsAL.add(s);                    // add song object to arraylist
            }
        } catch (FileNotFoundException e) {
            System.err.println("File not found");
            System.exit(1);
        }

        // sort the songs array
        Collections.sort(songsAL);
        for (int i = 0; i < songsAL.size(); i++) {
            songs[i] = songsAL.get(i);
        }

        // print statistics:
        System.out.println("Number of songs: " + songsAL.size());             // the number of songs
        System.out.println("Number of comparisons used: " + Song.getCount()); // the number of comparisons used to sort it
    }

    // return the songs array
    // this is used as the data source for building other data structures
    public Song[] getAllSongs() {
        return songs; // I'm assuming this is where the importance of using an array instead of
                      // an arraylist comes into play. I just changed it to an arraylist.
    }

    // testing method
    public static void main(String[] args) throws IOException {
        // todo: show first 10 songs
        CopyOfSongCollection sc = new CopyOfSongCollection("allSongs.txt");
        try {
            for (int i = 0; i < 10; i++) {
                System.out.println("Artist:" + sc.songsAL.get(i).getArtist());
                System.out.println("Title: " + sc.songsAL.get(i).getTitle());
            }
        } catch (IndexOutOfBoundsException e) {
            System.err.println("Array doesn't have that index!");
        }

        if (args.length == 0) { // not entirely sure what this code is for
            System.err.println("usage: prog songfile");
            return;
        }
    }
}
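A hedged sketch of the missing step: the NullPointerException comes from writing into the never-initialized songs field. After the sort, sizing the array from the list (using the toArray overload the professor suggested) replaces the manual copy loop:

    // inside the constructor, after Collections.sort(songsAL):
    songs = songsAL.toArray(new Song[songsAL.size()]);

This initializes the songs field to exactly the right length, so the later getAllSongs() call can keep returning a plain Song[] for the GUI.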
http://www.javaprogrammingforums.com/%20collections-generics/11002-problem-copying-arraylist-array-printingthethread.html
During one of my last projects I needed to test some webservices. I was wondering: if I can do it with Burp or by manual testing, maybe I can also write some quick code in python... And that's how I wrote soapee.py:

---<code>---
root@kali:~/code/soapee-v3# cat soapee3.py
#!/usr/bin/env python
# -------------------------------------
# soapee.py - SOAP fuzz - v0.2
# -------------------------------------
# 16.10.2015

import urllib2
import sys
import re
from bs4 import BeautifulSoup
import httplib
from urlparse import urlparse

target = sys.argv[1]


def sendNewReq(method):
    global soap_header
    print '[+] Sending new request to webapp...'
    toSend = open('./logs/clear-method-' + str(method) + '.txt', 'r').read()
    parsed = urlparse(target)
    server_addr = parsed.netloc
    service_action = parsed.path
    body = toSend
    print '[+] Sending:'
    print '[+] Response:'
    headers = {"Content-type": "text/xml; charset=utf-8",
               "Accept": "text/plain",
               "SOAPAction": '"' + str(soap_header) + '"'}
    # print '***********************************'
    # print 'headers: ', headers
    # print '***********************************'
    conn = httplib.HTTPConnection(server_addr)
    conn.request("POST", parsed.path, body, headers)
    # print body
    response = conn.getresponse()
    print '[+] Server said: ', response.status, response.reason
    data = response.read()
    logresp = open('./logs/resp-method-' + method + '.txt', 'w')
    logresp.write(data)
    logresp.close()
    print '............start-resp...........................................'
    print data
    print '............stop-resp...........................................\n'
    print '[+] Finished. Next step...'
    print '[.] -----------------------------------------\n'


##
def prepareNewReq(method):
    print '[+] Preparing new request for method: ' + str(method)
    fp = open('./logs/method-' + str(method) + '.txt', 'r')
    fp2 = open('./logs/fuzz-method-' + str(method) + '.txt', 'w')
    for line in fp:
        if line.find('SOAPAction') != -1:
            global soap_header
            soap_header = line
            soap_header = soap_header.split(" ")
            soap_header = soap_header[1].replace('"', '')
            soap_header = soap_header.replace('\r\n', '')
            # print soap_header
        newline = line.replace('<font class="value">', '')
        newline2 = newline.replace('</font>', '')
        newline3 = newline2.replace('string', '";\'>')
        newline4 = newline3.replace('int', '111111111*11111')
        newline5 = newline4.replace('length', '1337')
        newline6 = newline5.replace('<soap:', '<soap:')
        newline7 = newline6.replace('</soap:', '</soap:')
        newline8 = newline7.replace(' or ', 'or')
        fp2.write(newline8)
    print '[+] New request prepared.'
    fp2.close()
    print '[+] Clearing file...'
    linez = open('./logs/fuzz-method-' + str(method) + '.txt').readlines()
    open('./logs/clear-method-' + str(method) + '.txt', 'w').writelines(linez[6:])
    fp.close()
    fp2.close()
    sendNewReq(method)


##
# compose_link(method), get it, and save new req to file
def compose_link(method):
    methodLink = target + '?op=' + method
    print '[+] Getting: ', method
    fp = open('./logs/method-' + str(method) + '.txt', 'w')
    req = urllib2.urlopen(methodLink)
    page = req.read()
    soup = BeautifulSoup(page)
    for pre in soup.find('pre'):
        fp.write(str(pre))
    print '[+] Method body is saved to file for future analysis.'
    fp.close()
    prepareNewReq(method)


##
## main
def main():
    print ' _________________'
    print ' (*(( soapee ))*)'
    print ' ^^^^^^\n'
    url1 = urllib2.urlopen(target)
    page1 = url1.readlines()
    # get_links_to_methods
    print '[+] Looking for methods:\n------------------------'
    for href in page1:
        hr = re.compile('<a href="(.*)\.asmx\?op=(.*?)">')  # InfoExpert.asmx?op=GetBodyList">GetBodyList</a>
        found = re.search(hr, href)
        if found:
            # at this stage we need to create working link for each found method
            method = found.group(2)  # found method, used as URL for pre content in next request
            compose_link(method)
            # ...
            # ... get example of each req
            # ... change each str/int to fuzzval
            # ... send modified req
    print '---------------------------\ndone.'


##
try:
    main()
except IndexError, e:
    # sys.argv[0] is always present; argv[1] may be the thing that's missing here
    print 'usage: ' + str(sys.argv[0]) + ' <target URL>\n'
root@kali:~/code/soapee-v3#
---</code>---

Also @pastebin ;)

As you can see it's just a proof of concept (mostly to find some useful information disclosure bugs) but the skeleton can be used to prepare more advanced tools. Maybe you will find it useful.

Enjoy ;)

Haunt IT
HauntIT Blog - security testing & exploit development

Saturday, 24 October 2015

Friday, 2 October 2015

My Java SIGSEGV's

During a couple of the last days I was checking lcamtuf's American Fuzzy Lop against some ("non-instrumented") binaries. I was looking for some sources, but unfortunately I wasn't able to find any. The next thing was checking where I have Java installed (so I would know what/where I can check). The 'test lab' was: Ubuntu 12, Kali Linux, WinXP, Win7. (Exact versions of Java installed on those OS's you will find below.)

After 2 days there were approx. 170 different samples. After the first check, we can see that java (7) will end up with SIGSEGV (with SSP enabled – Kali Linux):

The same sample with Java 6 will produce:

Next thing I saw was:

During the analysis of the crash file in gdb I found some "new" function names. I decided to find them in IDA Pro as well and check what their purpose is:

(As you can see, some core files were generated by valgrind.)

Below, the "find_file" function (from IDA Pro):

You can easily see that we have malloc() here. The next thing I decided to check was the JLI_ParseManifest() function:

After checking those functions, we can see that JLI_ParseManifest() will iterate through each character in the Manifest file. Correct me if I'm wrong, but I think that find_file() is the place where the SIGSEGV occurs. The Manifest file is parsed here:

When we set up "Windbg" (in IDA Pro) to run Java with our sample.jar file (generated by afl), we will see that the crash occurs in a similar region:

After this warning, IDA will jump to the next location:

In text mode view, we can see more instructions:

Let's see if we can find any hint in the pseudocode (Tab/F5):

We see the memcpy() function with 3 arguments: v4, v3 and v5. Details about those variables can be found at the beginning of the pseudocode:

Now we know that v3 is the value from esi, v4 is the value from edi and v5 is the value from ecx. The next thing is generating and saving a dump file. We will open it in Windbg later:

Now, open the java.dmp file in Windbg and observe the results:

We can see that the SIGSEGV occurs when the program is using the EDI and ESI registers. Let's check what's at that location using the "dc" command:

In case you ask what !exploitable will tell you about it, screen below:

Short summary: this sample file will crash Java on all mentioned systems. If you think that this is exploitable... well, let me know what you think (comments or emails). Any ideas are welcome.
;)

Posted by Haunt IT at 02:05
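A note on the fuzzing setup mentioned at the top of the SIGSEGV post: running afl against a non-instrumented binary like java uses afl-fuzz's "dumb" mode. A hedged sketch (the directory names are illustrative; -n disables instrumentation feedback and @@ is replaced by the path of the current sample file):

root@kali:~# afl-fuzz -n -i testcases/ -o findings/ -- /usr/bin/java -jar @@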
http://hauntit.blogspot.com/
Message Passing Basics
John Urbanic
urbanic@psc.edu

Introduction

What is MPI? The Message-Passing Interface Standard (MPI) is a library that allows you to do problems in parallel using message-passing to communicate between processes.

In order to do parallel programming, you require some basic functionality, namely, the ability to: start processes, send messages, receive messages, and synchronize. With these four capabilities, you can construct any program. We will look at the basic versions of the MPI routines that implement this. Of course, MPI offers over 125 functions. Many of these are more convenient and efficient for certain tasks. However, with what we learn here, we will be able to implement just about any algorithm. Moreover, the vast majority of MPI codes are built using primarily these routines.

On the T3E or TCS, the fundamental control of processes is fairly simple. There is always one process for each PE that your code is running on. At run time, you specify how many PEs you require and then your code is copied to each PE and run simultaneously. In other words, a 512 PE T3E or TCS code has 512 copies of the same code running on it from start to finish.

At first the idea that the same code must run on every node seems very limiting. We'll see in a bit that this is not at all the case. The easiest way to see exactly how a parallel code is put together and run is to write the classic "Hello World" program in parallel. In this case it simply means that every PE will say hello to us. Let's take a look at the code to do this.

Hello World C Code:

#include <stdio.h>
#include "mpi.h"

main(int argc, char** argv){
    int my_PE_num;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_PE_num);
    printf("Hello from %d.\n", my_PE_num);
    MPI_Finalize();
}

Hello World Fortran Code:

      program shifter
      include 'mpif.h'

      integer my_pe_num, errcode

      call MPI_INIT(errcode)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_pe_num, errcode)
      print *, 'Hello from ', my_pe_num, '.'
      call MPI_FINALIZE(errcode)
      end

Output:

Hello from 5.
Hello from 3.
Hello from 1.
Hello from 2.
Hello from 7.
Hello from 0.
Hello from 6.
Hello from 4.

There are two issues here that may not have been expected. The most obvious is that the output might seem out of order. The response to that is "what order were you expecting?" Remember, the code was started on all nodes practically simultaneously. There was no reason to expect one node to finish before another. Indeed, if we rerun the code we will probably get a different order. Sometimes it may seem that there is a very repeatable order. But, one important rule of parallel computing is: don't assume that there is any particular order to events unless there is something to guarantee it. Later on we will see how we could force a particular order on this output.

The first thing to notice about these, or any, MPI codes is that the MPI header files (in C: "mpi.h", in Fortran: 'mpif.h') must be included. These contain all the MPI definitions you will ever need.

The next thing to note is the format of MPI calls: Call MPI_XXXXX(parameter, ..., ierror). Upon success, ierror is set to MPI_SUCCESS.

All MPI codes must start with MPI_Init before doing any MPI work. Likewise, they should all issue an MPI_Finalize when they are done. Besides these most basic of MPI routines, you will also always wish to use the MPI_Comm_rank routine to determine the number of the PE the routine is running on.
Remember, this exact same code is running on each of the PEs. Unless you want the same code to use the same data in exactly the same manner and generate exactly the same results on each node (which is kind of pointless), you will want to have the PEs vary their behavior based upon their PE number. In this case, the number is merely used to have each PE print a slightly different message. In general, though, the PE number will be used to load different data files or take different branches in the code. The extreme case of this is to have different PEs execute entirely different sections of code based upon their PE number:

    if (my_PE_num == 0)
        Routine1
    else if (my_PE_num == 1)
        Routine2
    else if (my_PE_num == 2)
        Routine3
    . . .

So, we can see that even though we have a logical limitation of having each PE execute the same program, for all practical purposes we can really have each PE running an entirely unrelated program by bundling them all into one executable and then calling them as separate routines based upon PE number. The much more common case is to have a single PE that is used for some sort of coordination purpose, while the other PEs run code that is the same, although the data will be different. This is how one would implement a master/slave or host/node paradigm:

    if (my_PE_num == 0)
        MasterCodeRoutine
    else
        SlaveCodeRoutine

Of course, the above code is the trivial case of EveryBodyRunThisRoutine, and consequently the only difference will be in the output, as it actually uses the PE number.

In the Hello World program, we see that the first parameter in MPI_Comm_rank(MPI_COMM_WORLD, &my_PE_num) is MPI_COMM_WORLD. MPI_COMM_WORLD is known as the "communicator" and can be found in many of the MPI routines. In general, it is used so that one can divide up the PEs into subsets for various algorithmic purposes. For example, if we had an array that we wished to find the determinant of distributed across the PEs, we might wish to define some subset of the PEs that holds a certain column of the array, so that we could address only those PEs conveniently. However, this is a convenience that can often be dispensed with. As such, one will often see the value MPI_COMM_WORLD used anywhere that a communicator is required. It is simply the global set that states we don't really care to deal with any particular subset here.

Well, now that we may have some idea how the above code will perform, let's compile it and run it to see if it meets our expectations. We compile using a normal ANSI C or Fortran 90 compiler (C++ is also available). While logged in to the T3E (jaromir.psc.edu):

    For C codes:       cc -lmpi hello.c
    For Fortran codes: f90 -lmpi hello.f90

We now have an executable. To run it we must tell the machine how many copies we wish to run. On the T3E, you can choose any number. We'll try 8:

    On the T3E we use: mpprun -n8 a.out
    On the TCS we use: prun -n8 a.out
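For reference, the rank-based branching shown above carries over directly to the mpi4py sketches used earlier. The routine bodies here are placeholders invented for the illustration:

    # master_worker.py - rank-based branching, assuming mpi4py is installed
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    my_pe_num = comm.Get_rank()

    def master_code_routine():
        # hypothetical coordination work done only by PE 0
        print("PE 0 coordinating")

    def slave_code_routine(rank):
        # hypothetical work done by every other PE
        print("PE %d working" % rank)

    if my_pe_num == 0:
        master_code_routine()
    else:
        slave_code_routine(my_pe_num)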
The second issue, although you may have taken it for granted, is "where will the output go?" This is another question that MPI dodges, because it is so implementation dependent. On the T3E, the I/O is structured in about the simplest way possible: all PEs can read and write (files as well as console I/O) through the standard channels. This is very convenient, and in our case results in all of the "standard output" going back to your terminal window on the T3E. The TCS is very similar. In general, it can be much more complex. For instance, suppose you were running this on a cluster of 8 workstations. Would the output go to eight separate consoles? Or, in a more typical situation, suppose you wished to write results out to a file. With the workstations, you would probably end up with eight separate files on eight separate disks. With the T3E, they can all access the same file simultaneously. There are some good reasons why you would want to exercise some restraint even on the T3E: 512 PEs accessing the same file would be extremely inefficient.

Hello World might be illustrative, but we haven't really done any message passing yet. Let's write the simplest possible message passing program. It will run on 2 PEs and will send a simple message (the number 42) from PE 1 to PE 0. PE 0 will then print this out.

Sending a message is a simple procedure. In our case the routine will look like this in C (the standard man pages are in C, so you should get used to seeing this format):

    MPI_Send(&numbertosend, 1, MPI_INT, 0, 10, MPI_COMM_WORLD)

Let's look at the parameters individually:

&numbertosend
    A pointer to whatever we wish to send. In this case it is simply an integer. It could be anything from a character string to a column of an array or a structure. It is even possible to pack several different data types in one message.

1
    The number of items we wish to send. If we were sending a vector of 10 ints, we would point to the first one in the above parameter and set this to the size of the array.

MPI_INT
    The type of object we are sending. Possible values are: MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_UNSIGNED_CHAR, MPI_UNSIGNED_SHORT, MPI_UNSIGNED, MPI_UNSIGNED_LONG, MPI_FLOAT, MPI_DOUBLE, MPI_LONG_DOUBLE, MPI_BYTE, MPI_PACKED. Most of these are obvious in use. MPI_BYTE will send raw bytes (on a heterogeneous workstation cluster this will suppress any data conversion). MPI_PACKED can be used to pack multiple data types in one message, but it does require a few additional routines we won't go into (those of you familiar with PVM will recognize this).

0
    Destination of the message. In this case PE 0.

10
    Message tag. All messages have a tag attached to them that can be useful for sorting messages. For example, one could give high-priority control messages a different tag than data messages. When receiving, the program would check for messages that use the control tag first. We just picked 10 at random.

MPI_COMM_WORLD
    We don't really care about any subsets of PEs here. So, we just chose this "default".

Receiving a message is equally simple. Let's look at the parameters individually:

    MPI_Recv(&numbertoreceive, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status)

&numbertoreceive
    A pointer to the variable that will receive the item. In our case it is simply an integer that has some undefined value until now.

1
    Number of items to receive. Just 1 here.

MPI_INT
    Datatype. Better be an int, since that's what we sent.

MPI_ANY_SOURCE
    The node to receive from. We could use 1 here since the message is coming from there, but we'll illustrate the "wild card" method of receiving a message from anywhere.

MPI_ANY_TAG
    We could use a value of 10 here to filter out any other messages (there aren't any) but, again, this was a convenient place to show how to receive any tag.

MPI_COMM_WORLD
    Just using the default set of all PEs.

&status
    A structure that receives the status data, which includes the source and tag of the message.
In our case the complete program will look like this in C:

    #include <stdio.h>
    #include "mpi.h"

    main(int argc, char** argv){
        int my_PE_num, numbertoreceive, numbertosend=42;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_PE_num);

        if (my_PE_num==0){
            MPI_Recv( &numbertoreceive, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            printf("Number received is: %d\n", numbertoreceive);
        }
        else
            MPI_Send( &numbertosend, 1, MPI_INT, 0, 10, MPI_COMM_WORLD);

        MPI_Finalize();
    }

And in Fortran:

    program shifter
    implicit none
    include 'mpif.h'

    integer my_pe_num, errcode, numbertoreceive, numbertosend
    integer status(MPI_STATUS_SIZE)

    call MPI_INIT(errcode)
    call MPI_COMM_RANK(MPI_COMM_WORLD, my_pe_num, errcode)

    numbertosend = 42

    if (my_pe_num.EQ.0) then
        call MPI_Recv( numbertoreceive, 1, MPI_INTEGER, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, status, errcode)
        print *, 'Number received is: ', numbertoreceive
    endif

    if (my_pe_num.EQ.1) then
        call MPI_Send( numbertosend, 1, MPI_INTEGER, 0, 10, MPI_COMM_WORLD, errcode)
    endif

    call MPI_FINALIZE(errcode)
    end

All of the receives that we will use are blocking. This means that they will wait until a message matching their requirements for source and tag has been received. It is possible to use non-blocking communications. This means a receive will return immediately, and it is up to the code to determine when the data actually arrives, using additional routines. In most cases this additional coding is not worth it in terms of performance and code robustness. However, for certain algorithms this can be useful to keep in mind.

There are four possible modes (with slightly differently named MPI_xSEND routines) for buffering and sending messages in MPI. We use the standard mode here, and you may find this sufficient for the majority of your needs. However, these other modes can allow for substantial optimization in the right circumstances:

Standard mode
    Will usually not block, even if a receive for that message has not occurred. The exception is if there are resource limitations (buffer space).

Buffered mode
    Similar to the above, but will never block (it just returns an error).

Synchronous mode
    Will only return when a matching receive has started.

Ready mode
    Will only work if a matching receive is already waiting.

We are going to write one more code which will employ the remaining tool that we need for general parallel programming: synchronization. Many algorithms require that you be able to get all of the nodes into some controlled state before proceeding to the next stage. This is usually done with a synchronization point that requires all of the nodes (or some specified subset, at the least) to reach a certain point before proceeding. Sometimes the manner in which messages block will achieve this same result implicitly, but it is often necessary to do this explicitly, and debugging is often greatly aided by the insertion of synchronization points, which are later removed for the sake of efficiency.
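For comparison, the send/receive program above can be sketched in Python with mpi4py (again an illustrative aside, assuming mpi4py is installed; the lowercase send/recv methods transfer arbitrary Python objects):

    # sendrecv.py - blocking send/receive of the number 42
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    my_pe_num = comm.Get_rank()

    if my_pe_num == 0:
        # Blocking receive; ANY_SOURCE and ANY_TAG mirror the wildcards above.
        number = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)
        print("Number received is: %d" % number)
    elif my_pe_num == 1:
        comm.send(42, dest=0, tag=10)   # blocking send to PE 0 with tag 10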
Our code will perform the rather pointless operation of having PE 0 send a number to the other 3 PEs and have them multiply that number by their own PE number. They will then print the results out in order (remember the Hello World program?) and send them back to PE 0, which will print out the sum.

In C:

    #include <stdio.h>
    #include "mpi.h"

    main(int argc, char** argv){
        int my_PE_num, numbertoreceive, numbertosend=4, index, result=0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_PE_num);

        if (my_PE_num==0)
            for (index=1; index<4; index++)
                MPI_Send( &numbertosend, 1, MPI_INT, index, 10, MPI_COMM_WORLD);
        else{
            MPI_Recv( &numbertoreceive, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status);
            result = numbertoreceive * my_PE_num;
        }

        for (index=1; index<4; index++){
            MPI_Barrier(MPI_COMM_WORLD);
            if (index==my_PE_num)
                printf("PE %d's result is %d.\n", my_PE_num, result);
        }

        if (my_PE_num==0){
            for (index=1; index<4; index++){
                MPI_Recv( &numbertoreceive, 1, MPI_INT, index, 10, MPI_COMM_WORLD, &status);
                result += numbertoreceive;
            }
            printf("Total is %d.\n", result);
        }
        else
            MPI_Send( &result, 1, MPI_INT, 0, 10, MPI_COMM_WORLD);

        MPI_Finalize();
    }

In Fortran:

    program shifter
    implicit none
    include 'mpif.h'

    integer my_pe_num, errcode, numbertoreceive, numbertosend
    integer index, result
    integer status(MPI_STATUS_SIZE)

    call MPI_INIT(errcode)
    call MPI_COMM_RANK(MPI_COMM_WORLD, my_pe_num, errcode)

    numbertosend = 4
    result = 0

    if (my_pe_num.EQ.0) then
        do index=1,3
            call MPI_Send( numbertosend, 1, MPI_INTEGER, index, 10, MPI_COMM_WORLD, errcode)
        enddo
    else
        call MPI_Recv( numbertoreceive, 1, MPI_INTEGER, 0, 10, MPI_COMM_WORLD, status, errcode)
        result = numbertoreceive * my_pe_num
    endif

    do index=1,3
        call MPI_Barrier(MPI_COMM_WORLD, errcode)
        if (my_pe_num.EQ.index) then
            print *, 'PE ', my_pe_num, 's result is ', result, '.'
        endif
    enddo

    if (my_pe_num.EQ.0) then
        do index=1,3
            call MPI_Recv( numbertoreceive, 1, MPI_INTEGER, index, 10, MPI_COMM_WORLD, status, errcode)
            result = result + numbertoreceive
        enddo
        print *, 'Total is ', result, '.'
    else
        call MPI_Send( result, 1, MPI_INTEGER, 0, 10, MPI_COMM_WORLD, errcode)
    endif

    call MPI_FINALIZE(errcode)
    end

The output you get when running this code with 4 PEs (what will happen if you run with more or fewer?) is the following:

    PE 1's result is 4.
    PE 2's result is 8.
    PE 3's result is 12.
    Total is 24.

The best way to make sure that you understand what is happening in the code above is to look at things from the perspective of each PE in turn. THIS IS THE WAY TO DEBUG ANY MESSAGE-PASSING (or MIMD) CODE. Follow from the top to the bottom of the code as PE 0, and do likewise for PE 1. See exactly where one PE is dependent on another to proceed. Look at each PE's progress as though it is 100 times faster or slower than the other nodes. Would this affect the final program flow? It shouldn't, unless you made assumptions that are not always valid.

MPI_Reduce reduces values from all processes to a single value. Its parameters are:

Input parameters:
    sendbuf    address of send buffer
    count      number of elements in send buffer (integer)
    datatype   data type of elements of send buffer (handle)
    op         reduce operation (handle)
    root       rank of root process (integer)
    comm       communicator (handle)

Output parameter:
    recvbuf    address of receive buffer (choice, significant only at root)

Algorithm: this implementation currently uses a simple tree algorithm.
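In mpi4py terms, a reduction might be sketched as follows (illustrative only; op=MPI.SUM plays the role of the op handle above):

    # reduce.py - summing one value per PE onto PE 0, assuming mpi4py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    myvalue = rank + 1                              # each PE contributes a value
    total = comm.reduce(myvalue, op=MPI.SUM, root=0)

    if rank == 0:
        print("Total is %d." % total)               # significant only at the root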
Our last example will find the value of pi by integrating 4/(1 + x*x) from 0 to 1; this integral evaluates to pi. The master process (0) will query for a number of intervals to use, and then broadcast this number to all of the other processors. Using n intervals of width h = 1/n, each processor will then add up every numprocs-th interval, evaluating the integrand at the interval midpoints x = h*(rank + 0.5), h*(rank + numprocs + 0.5), and so on. Finally, the sums computed by each processor are added together using a new type of MPI operation, a reduction.

    program FindPI
    implicit none
    include 'mpif.h'

    integer n, my_pe_num, numprocs, index, errcode
    real mypi, pi, h, sum, x

    call MPI_Init(errcode)
    call MPI_Comm_size(MPI_COMM_WORLD, numprocs, errcode)
    call MPI_Comm_rank(MPI_COMM_WORLD, my_pe_num, errcode)

    if (my_pe_num.EQ.0) then
        print *, 'How many intervals?:'
        read *, n
    endif

    call MPI_Bcast(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, errcode)

    h = 1.0 / n
    sum = 0.0
    do index = my_pe_num+1, n, numprocs
        x = h * (index - 0.5)
        sum = sum + 4.0 / (1.0 + x*x)
    enddo
    mypi = h * sum

    call MPI_Reduce(mypi, pi, 1, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, errcode)

    if (my_pe_num.EQ.0) then
        print *, 'pi is approximately ', pi
        print *, 'Error is ', pi - 3.14159265358979323846
    endif

    call MPI_Finalize(errcode)
    end

Do not make any assumptions about the mechanics of the actual message passing. Remember that MPI is designed to operate not only on fast MPP networks, but also on Internet-size meta-computers. As such, the order and timing of messages may be considerably skewed. MPI makes only one guarantee: two messages sent from one process to another process will arrive in that relative order. However, a message sent later from another process may arrive before, or between, those two messages.

Obviously, we have only touched upon the 120+ MPI routines. Still, you should now have a solid understanding of what message passing is all about, and (with manual in hand) you will have no problem reading the majority of well-written codes. The best way to gain a more complete knowledge of what is available is to leaf through the manual. Some of the more useful functionalities that we have just barely touched upon are:

There is a wide variety of material available on the Web, some of which is intended to be used as hardcopy manuals and tutorials. Besides our own local docs, you may wish to start at one of the MPI home pages, from which you can find a lot of useful information without traveling too far. To view a list of all MPI calls, with syntax and descriptions, access the Message Passing Interface Standard.

Exercise 1: Write a code that runs on 8 PEs and does a "circular shift." This means that every PE sends some data to its nearest neighbor, either "up" (one PE higher) or "down." To make it circular, PE 7 and PE 0 are treated as neighbors. Make sure that whatever data you send is received.

Exercise 2: Write, using only the routines that we have covered in the first three examples (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Barrier, MPI_Finalize), a program that determines how many PEs it is running on. It should perform as the following:

    mpprun -n4 exercise
    I am running on 4 PEs.
    mpprun -n16 exercise
    I am running on 16 PEs.

The solution may not be as simple as it first seems. Remember, make no assumptions about when any given message may be received. You would normally obtain this information with the simple MPI_Comm_size() routine.
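For reference, the MPI_Comm_size call mentioned at the end maps onto mpi4py as follows (a minimal sketch, assuming mpi4py is installed):

    # size.py - querying the number of PEs
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        print("I am running on %d PEs." % comm.Get_size())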
http://www.slideserve.com/emily/mpi-basics
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- ALGORITHM
- SYNTAX
  - Sub-Paths
  - Steps
  - Separators
  - Selectors
  - Axes
  - Predicates
  - Attributes
  - Variables
  - Special Selectors
  - Grouping and Repetition
  - Hiding Nodes
  - Potentially Confusing Dissimilarities Between TPath and XPath
  - String Concatenation
  - Grammar
  - Escape Sequences in String Literals
- HISTORY
- SEE ALSO
- ACKNOWLEDGEMENTS
- AUTHOR

NAME

TPath - general purpose path languages for trees

VERSION

version 1.006

SYNOPSIS

    # we define our trees
    package MyTree;

    use overload '""' => sub {
        my $self     = shift;
        my $tag      = $self->{tag};
        my @children = @{ $self->{children} };
        return "<$tag/>" unless @children;
        local $" = '';
        "<$tag>@children</$tag>";
    };

    sub new {
        my ( $class, %opts ) = @_;
        die 'tag required' unless $opts{tag};
        bless { tag => $opts{tag}, children => $opts{children} // [] }, $class;
    }

    sub add {
        my ( $self, @children ) = @_;
        push @{ $self->{children} }, @children;
    }

    # teach TPath::Forester how to get the information it needs
    package MyForester;
    use Moose;
    use MooseX::MethodAttributes;    # needed for @tag attribute below
    with 'TPath::Forester';

    # implement required methods
    sub children {
        my ( $self, $n ) = @_;
        @{ $n->{children} };
    }

    sub tag {
        my ( $self, $n ) = @_;
        $n->{tag};
    }

    sub tag_at : Attr(tag) {
        # tags receive the selection context, not the bare node
        my ( $self, $context ) = @_;
        $context->n->{tag};
    }

    package main;

    # make the tree
    #        a
    #       /|\
    #      / | \
    #     b  c  \
    #    /\  |   d
    #   e  f |  /|\
    #      h   / | \
    #     /|  i  j  \
    #    l |  |  |\  \
    #      m  n  o p  \
    #     /|    /|\    \
    #    s t   u v w    k
    #                  / \
    #                 q   r
    #                    / \
    #                   x   y
    #                       |
    #                       z
    my %nodes = map { $_ => MyTree->new( tag => $_ ) } 'a' .. 'z';
    $nodes{a}->add( @nodes{qw(b c d)} );
    $nodes{b}->add( @nodes{qw(e f)} );
    $nodes{c}->add( $nodes{h} );
    $nodes{d}->add( @nodes{qw(i j k)} );
    $nodes{h}->add( @nodes{qw(l m)} );
    $nodes{i}->add( $nodes{n} );
    $nodes{j}->add( @nodes{qw(o p)} );
    $nodes{k}->add( @nodes{qw(q r)} );
    $nodes{m}->add( @nodes{qw(s t)} );
    $nodes{p}->add( @nodes{qw(u v w)} );
    $nodes{r}->add( @nodes{qw(x y)} );
    $nodes{y}->add( $nodes{z} );
    my $root = $nodes{a};

    # make our forester
    my $rhood = MyForester->new;

    # index our tree (not necessary, but efficient)
    my $index = $rhood->index($root);

    # try out some paths
    my @nodes = $rhood->path('//r')->select( $root, $index );
    print scalar @nodes, "\n";    # 1
    print $nodes[0], "\n";        # <r><x/><y><z/></y></r>
    print $_
      for $rhood->path('leaf::*[@tag > "o"]')->select( $root, $index );
    # <s/><t/><u/><v/><w/><q/><x/><z/>
    print "\n";
    print $_->{tag}
      for $rhood->path('//*[@tsize = 3]')->select( $root, $index );    # bm
    print "\n";
    @nodes = $rhood->path('/>~[bh-z]~')->select( $root, $index );
    print $_->{tag} for @nodes;    # bhijk
    print "\n";

    # we can map nodes back to their parents
    @nodes = $rhood->path('//*[parent::~[adr]~]')->select( $root, $index );
    print $_->{tag} for @nodes;    # bcdijkxy
    print "\n";

DESCRIPTION

TPath provides an xpath-like language for arbitrary trees. You implement a minimum of two methods -- children and tag -- and then you can explore your trees via concise, declarative paths. In tpath, "attributes" are node attributes of any sort and are implemented as methods that return these attributes, or undef if the attribute is undefined for the node. The object in which the two required methods are implemented is a "forester" (TPath::Forester), something that understands your trees. In general, to use tpath you instantiate a forester and then call the forester's methods.
Forester objects make use of an index (TPath::Index), which caches information not present in, or not cheaply extracted from, the nodes themselves. If no index is explicitly provided, it is created, but one can gain some efficiency by reusing an index when selecting paths from a tree. One can use a forester's index method to produce a TPath::Index.

The paths themselves are compiled into reusable TPath::Expression objects that can be applied to multiple trees. One uses a forester's path method to produce a TPath::Expression.

ALGORITHM

TPath works by representing an expression as a pipeline of selectors and filters. Each pair of a selector and some set of filters is called a "step". At each step one has a set of context nodes. One applies the selectors to each context node, returning a candidate node set, and then one passes these candidates through the filtering predicates. The remainder becomes the context node set for the next step. If this is the last step, the surviving candidates are the nodes selected by the expression. A node will only occur once among those returned, and the order of their return will be the order of their discovery. Search is depth-first pre-ordered: parents are returned before children.

CAVEAT

The tpath algorithm presupposes that the tree it is used against is static, at least for the life of the index it is using. If the tree is mutating, you must at least ensure that it does not mutate during the functional life of any index. The consequence of not doing so may be inaccurate queries.

SYNTAX

Sub-Paths

A tpath expression has one or more sub-paths:

    //a/b | preceding::d/*

Sub-paths are separated by the pipe symbol | and optional space. The nodes selected by a path are the union of the nodes selected by each sub-path, in the order of their discovery. The search is left-to-right and depth first. If a node and its descendants are both selected, the node will be listed first.

Steps

    //a/b[0]/>c[@d]

Each step consists of a separator (optional on the first step), a tag selector, and optionally some number of predicates.

Separators

    a/b/c/>d      null separator before the first step
    /a/b//c/>d    initial / separator
    //a/b//c/>d   initial // separator
    />a/b//c/>d   initial /> separator

null separator

The null separator is simply the absence of a separator and can only occur before the first step. It means "relative to the context node". Thus it is essentially the same as the file path formalism, where /a means the file a in the root directory and a means the file a in the current directory.

Note, here and in the following discussion we speak of a "root" node, but in reality the node in question is not the tree root but the node to which the expression is applied. This may be a bit confusing, but it simplifies the interpretation of expressions. If you genuinely want to begin at the root node, use the :root selector, described below. Since in general one will apply an expression to a tree's root node, in general this confusion of terminology is harmless. But know that if you pick a node at random from a tree and apply an expression to it, this will become the "root" as far as the various separator definitions here and below are concerned.

/ select among children

    /a/b//c/>d

The single slash separator means "search among the context node's children", or if it precedes the first step, it means that the context node is the root node.

// select among descendants

    //a/b//c/>d

The double slash separator means "search among the descendants of the context node" or, if the context node is the root, "search among the root node and its descendants".
/> select closest

    />a/b//c/>d

The /> separator means "search among the descendants of the context node (or the context node and its descendants if the context node is root), but omit from consideration any node dominated by a node matching the selector". Written out like this, it may be confusing, but it is a surprisingly useful separator. Consider the following tree:

      a
     / \
    b   a
    |   | \
    a   b  a
    |   |
    b   b

The expression />b, when applied to the root node, will select all the b nodes except the leftmost leaf b, which is screened from the root by its grandparent b node. That is, going down any path from the context node, />b will match the first node it finds matching the selector: the matching node closest to the context node.

Selectors

Selectors select a candidate set for later filtering by predicates.

literal

    a

A literal selector selects the nodes whose tag matches, in a tree-appropriate sense of "match", a literal expression. Any string may be used to represent a literal selector, but certain characters may have to be escaped with a backslash. The expectation is that the literal will begin with a word character, _, or $, and any subsequent character is either one of these characters, a number character, or a hyphen or colon followed by one of these or a number character. The escape character, as usual, is a backslash. Any unexpected character must be escaped. So

    a\\b

represents the literal a\b. There is also a quoting convention that one can use to avoid many escapes inside a tag name:

    /:"a tag name you otherwise would have to put a lot of escapes in"

See the Grammar section below for details.

~a~ regex

    ~a~

A regex selector selects the nodes whose tag matches a regular expression delimited by tildes. Within the regular expression a tilde must be escaped, of course. A tilde within a regular expression is represented as a pair of tildes. The backslash, on the other hand, behaves as it normally does within a regular expression.

@a attribute

Any attribute may be used as a selector so long as it is preceded by something other than the null separator; in other words, @ cannot be the first character in a path. This is because attributes may take arguments, and among other things these arguments can be both expressions and other attributes. If @foo were a legitimate path expression, it would be ambiguous how to compile @bar(@foo). Is the argument an attribute or a path with an attribute selector? You can produce the effect of an attribute selector with the null separator, however, in two ways:

    child::@foo
    ./@foo

The second of these will be normalized in parsing to precisely what one would expect with a @foo path. The attribute naming conventions are the same as those of tags, with the exception that attributes are always preceded by @.

complement selectors

The ^ character before a literal, regex, or attribute selector will convert it into a complement selector:

    //^foo
    //^~foo~
    //^@foo

Complement selectors select nodes not selected by the unmodified selector: //^foo will select any node without the foo tag; //^~a~, any node whose tag does not contain the a character; and so forth.

* wildcard

The wildcard selector selects all the nodes on the relevant axis. The default axis is child, so //b/* will select all the children of b nodes.

case sensitivity

If you construct a forester with the case_insensitive parameter set to true

    my $f = MyForester->new( case_insensitive => 1 );

the tag selectors in all expressions compiled by this forester will be case insensitive. So then //INPUT will match INPUT and input and InPuT and so forth. The same is true for //input and //~input~ and //^INPUT etc.
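One way to think about the "select closest" separator is as a pruned depth-first search: descend from the context node and stop descending along a branch as soon as a match is found. A rough Python sketch of this idea follows; it assumes node objects with tag and children attributes, and is an illustration of the semantics rather than TPath's actual implementation (among other simplifications, it ignores the rule that a matching context node selects itself):

    # A sketch of the "/>" (select closest) semantics.
    def select_closest(node, tag):
        """Return the nodes matching `tag` that are closest to `node`,
        searching each downward path and stopping at the first match."""
        matches = []
        for child in node.children:
            if child.tag == tag:
                matches.append(child)   # a match screens its own descendants
            else:
                matches.extend(select_closest(child, tag))
        return matches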
If your Perl version is 5.16 or higher, the native fc function will be used for case normalization. Otherwise, if Unicode::CaseFolding is available, its fc function will be used. If no fc function is available, lc will be used for case folding.

Axes

To illustrate the nodes on various axes, consider the following tree. Each axis below is illustrated with an example path applied relative to the d node; the nodes the path selects are listed (in the original diagrams these were shown in capital letters).

           root
            |
          __a__
         / /|\ \
        / / | \ \
       / /  |  \ \
      / /   |   \ \
     / /    |    \ \
    b c     d     e f
   /|\     /|\     /|\
  g h i   j k l   m n o
    |       |       |
    p       q       r

adjacent
    //d/adjacent::* selects c and e. The adjacent axis might be called the adjacent-sibling axis. It selects the nearest siblings of the context node passing the test. //foo/adjacent::p will select the immediately preceding and following p siblings of foo nodes.

ancestor
    //d/ancestor::* selects root and a.

ancestor-or-self
    //d/ancestor-or-self::* selects root, a, and d.

child
    //d/child::* selects j, k, and l.

descendant
    //d/descendant::* selects j, k, l, and q.

descendant-or-self
    //d/descendant-or-self::* selects d, j, k, l, and q.

following
    //d/following::* selects e, f, m, n, o, and r.

following-sibling
    //d/following-sibling::* selects e and f.

leaf
    //d/leaf::* selects j, l, and q.

parent
    //d/parent::* selects a.

preceding
    //d/preceding::* selects b, c, g, h, i, and p.

preceding-sibling
    //d/preceding-sibling::* selects b and c.

previous
    //d/previous::* selects root. The previous axis is a bit different from the others. It doesn't concern the structure of the tree but the history of node selection. The root node is always included because it is always the initial selection context. By itself the previous axis is not terribly useful, as it is silly in general to select a node, then other nodes, then backtrack. It is useful, however, when one wants to compare properties of different nodes in the selection history. See also the :p selector, which selects the immediately preceding node in the selection history.
self
    //d/self::* selects d.

sibling
    //d/sibling::* selects b, c, e, and f.

sibling-or-self
    //d/sibling-or-self::* selects b, c, d, e, and f.

Predicates

    //a/b[0]/>c[@d][@e < 'string'][@f or @g]

Predicates are the sub-expressions in square brackets after selectors. They represent tests that filter the candidate nodes selected by the selectors.

Index Predicates

    //foo/bar[0]

An index predicate simply selects the indexed item out of a list of candidates. By default, the first index is 0, unlike in XML, so the expression above selects the first bar under every foo. This is configurable during the construction of your forester, however. If you pass in the one_based property

    my $f = MyForester->new( one_based => 1 );

the forester will use one-based indexing, so its index predicates will work identically to index predicates in xpath. (This also affects the @index and @pick attributes. See below.) The index rules are the same as those for Perl arrays: 0 is the first item; negative indices count from the end, so -1 retrieves the last item. Negative indices behave the same regardless of whether the one_based property has been set to true.

outer versus inner index predicates

In general, an index indicates the location of a node among its siblings which have survived any preceding filters. For example, //*[0] picks all nodes that are the first child of their parent (also the root, which has no parent). This is distinct from /descendant-or-self::*[0], which will simply pick the root. //a[0] picks all nodes that are the first child of an a node. This is distinct from /descendant-or-self::a[0], where the only node returned will simply be the first node picked on this axis. //a[@foo][0] picks all nodes that are the first child of an a node having the property @foo. This is distinct from /descendant-or-self::a[@foo][0], which will pick only the first a node with the @foo property.

These predicates are all "inner" predicates. It is also possible to specify "outer" predicates, like so:

    (//*)[0]
    (//a)[0]
    (//a[@foo])[0]

In this case, the index is for the collection of all nodes selected up to this point, not relative to a node's similar siblings. So the first expression picks the first node which is the first child of its parent; the second picks the first node anywhere that is the first child of an a node; the third picks the first node anywhere that is the first @foo child node of an a node. Any predicate may be either inner or outer, but the distinction is most relevant to index predicates for steps with the // separator.

Path Predicates

    a[b]

A path predicate is true if the node set it selects starting at the context node is not empty. Given the tree

      a
     / \
    b   a
    |   |
    a   b

the path //a[b] would select only the two non-leaf a nodes.

Attribute Predicates

    a[@leaf]

An attribute predicate is true if its context node bears the given attribute. (For the definition of attributes, see below.) Given the tree

      a
     / \
    b   a
    |   |
    a   b

the path //a[@leaf] would select only the leaf a node.

Attribute Tests

Attribute tests are predicates which compare two values.
The values may be attributes, expressions, other attribute tests, or literals, either strings or numbers.

    //a[@b = 1]       # simple equality test
    //a[* % 2 == 0]   # mathematical expressions
    //a[@foo > :pi]   # comparison to a named constant
    //a[@foo > "pi"]  # alphabetical sort order comparison

The name "attribute test" is actually a misnomer: originally one of the two items compared had to be an attribute. Now any two values from the list above are acceptable. If both values are constant, the test is evaluated during compilation. Analytically true tests are discarded and analytically false ones cause an error to be thrown.

math in attribute tests

    //a[ (@foo + 1) ** 2 > @bar ]
    //a[ :sqrt(@foo) == 1.414 ]

Basic mathematical expressions are acceptable in attribute tests. The standard precedence relations among operators are preserved, the operators all have the same representation as in Perl, and one may group expressions with parentheses to make precedence explicit. There are two named constants, :pi and :e, the circle constant and the base of the natural logarithm. These are preceded by colons to distinguish them from path expressions. The operators +, -, *, /, %, and ** may all also be preceded by a colon when necessary to distinguish them from repetition characters or the wildcard character. This will rarely be necessary, but consider

    //* [ a * /b = 3 ]

Without the colon, this is taken to be an assertion about the cardinality of the set of nodes selected by the expression a*/b. If one wants this to be interpreted as concerning the product of the cardinalities of two sets of nodes, one should write it as

    //* [ a :* /b = 3 ]

The colon also must precede the various unary mathematical functions tpath understands:

    :abs :acos :asin :atan :ceil :cos :exp :floor :int :log :log10 :sin :sqrt :tan

These are all either the functions provided by Perl itself or those provided by the POSIX module.

equality and inequality

    a[@b = 1]
    a[@b = "c"]
    a[@b = @c]
    a[@b == @c]
    a[@b != @c]
    ...

The equality and inequality attribute tests, as you would expect, determine whether the left argument is equal to the right by some definition of equality. If one operand is a number and the other a collection, it's equality of cardinality. If one is a string, it is whether their printed forms are identical. If they are both objects or collections, either referential or semantic identity is measured. Referential identity means the collections or objects must be the same individual, i.e. must be stored at the same memory address. This is the meaning of the double equals sign. The single equals sign designates semantic identity, meaning, in the case of collections, that they are deeply equal: the same values stored under the same indices or keys. If one of the items compared is an object and it has an equals method, this method is invoked as a semantic equality test (this is the Java convention). Otherwise, referential identity (==, which may be overloaded) is required. Objects are not treated as containers. Finally, if an object is compared to a string, it will be the stringification of the former that is compared to the latter using the eq operator when semantic identity is required.

The != comparator behaves as you would expect so long as one or the other of the two operands is either a string or a number. That is, it is the negation of = or ==. Otherwise, collections are converted to cardinalities and objects to strings, with string comparison being used if either argument is an object.
If you wish the negation of = or == with collections or objects, you must negate the positive form:

    a[!(@b = @c)]
    a[!(@b == @c)]

ranking

    a[@b < 1]
    a[@b < "c"]
    a[@b < @c]
    a[@b > 1]
    a[@b <= 1]
    ...

The ranking operators require some partial order of the operands. If both evaluate to numbers or strings, the respective orders of these are used. If one is a string, string sort order dominates. If both are collections, numeric sorting by cardinality is used. Objects are sorted by string comparison.

matching

The matching operators look for character patterns within strings. They fall into two groups: the regex matchers and the index matchers.

    a[@b =~ '(?<!c)d']   # regex matching
    a[@b !~ '(?<!c)d']
    a[@b =~ @c]
    ...
    a[@b |= 'c']         # index matching
    a[@b =|= 'c']
    a[@b =| 'c']
    a[@b |= @c]
    ...

regex matching

The two regex matching operators, =~ and !~, function as you would expect: the right operand is stringified and compiled into a regular expression and matched against the left operand. If the right operand is constant (a string or a number) this regex compilation occurs at compile time. Otherwise, it must be performed for every match, at some cost to efficiency.

index matching

Index matching uses the string index function, so it only finds whether one literal string occurs as a substring of another: the right as a substring of the left. There are three variants for the three most common uses of index matching:

|= prefix
    True if the left operand starts with the right operand.

=|= infix (anywhere)
    True if the right operand occurs anywhere in the left.

=| suffix
    True if the right operand ends the left operand.

Boolean Predicates

Boolean predicates combine various terms (attributes, attribute tests, or tpath expressions) via boolean operators:

! or not
    True iff the attribute is undefined, the attribute test returns false, the expression returns no nodes, or the boolean expression is false.

& or and
    True iff all conjoined operands are true.

|| or or
    True iff any of the conjoined operands is true. Note that boolean or is two pipe characters. This is to disambiguate the path expression a|b from the boolean expression a||b.

; or one
    True if one and only one of the conjoined operands is true. The expression @a ; @b behaves like ordinary exclusive or. But if more than two operands are conjoined this way, the entire expression is a uniqueness test.

( ... )
    Groups the contained boolean operations. True iff they evaluate to true.

The normal precedence rules of logical operators apply to these:

    () < ! < & < ; < ||

Attributes

    //foo[@bar]
    //foo[@bar(1, 'string', path, @attribute, @attribute = 'test')]

Attributes identify callbacks that evaluate a TPath::Context to see whether the respective attribute is defined for it. If the callback returns a defined value, the predicate is true and the candidate is accepted; otherwise, it is rejected. As the second example above demonstrates, attributes may take arguments, and these arguments may be numbers, strings, paths, other attributes, or attribute tests. Paths are evaluated relative to the candidate node being tested, as are attributes and attribute tests. A path argument is evaluated to the TPath::Context objects selected by this path relative to the candidate node. Attribute parameters are enclosed within parentheses. Within these parentheses, they are delimited by commas. Space is optional around parameters. For the standard attribute set available to all expressions, see TPath::Attributes::Standard.
For the extended set that can be composed in, see TPath::Attributes::Extended.

Ad Hoc Attributes

There are various ways one can add bespoke attributes, but the easiest is to add them to an individual forester via the add_attribute method:

    my $forester = MyForester->new;
    $forester->add_attribute(
        'foo' => sub {
            my ( $self, $context, @params ) = @_;
            ...
        }
    );

Another method is to define attributes as annotated methods of the forester:

    sub foo : Attr {
        my ( $self, $context, @params ) = @_;
        ...
    }

If this would cause a namespace collision or is not a possible method name, you can provide the attribute name as a parameter of the method attribute:

    sub foo : Attr(problem:name) {
        my ( $self, $context, @params ) = @_;
        ...
    }

Defining attributes as annotated methods is particularly useful if you wish to create an attribute library that you can mix into various foresters. In this case you define the attributes within a role instead of the forester itself:

    package PimpedForester;
    use Moose;
    extends 'TPath::Forester';
    with qw(TheseAttributes ThoseAttributes YonderAttributes Etc);

    sub tag      { ... }
    sub children { ... }

Auto-loaded Attributes

Some trees, like HTML and XML parse trees, may have ad hoc attributes. Foresters for this sort of tree should override the default autoload_attribute method. This method expects an attribute name and an optional list of arguments and returns a code reference. The code reference in turn, when applied to a context and a list of context-specific arguments, must return the value of the given attribute in that context. For instance, the following implements HTML attribute autoloading, provided these nodes have an attribute method that returns the value of a particular attribute at a given node, or undef when the attribute is undefined:

    sub autoload_attribute {
        my ( $self, $name ) = @_;
        return sub {
            my ( $self, $ctx ) = @_;
            return $ctx->n->attribute($name);
        };
    }

With this one could write expressions such as

    //div[@:style =|= 'width']

which auto-load the style attribute. Note the expression syntax: attributes whose names are preceded by an unescaped colon are supplied by the autoload_attribute method. One could make this HTML implementation more efficient by memoizing autoload_attribute. For HTML attributes it doesn't make sense to further parameterize attribute generation -- all you need is the name -- so any attribute arguments are ignored during auto-loading.

Variables

There are three special attributes among the standard attributes that facilitate using variables in tpath expressions: @var, @v, and @clear_var. The first two are synonyms, so there are really only two functionally distinct variable attributes. The first two allow one to set or check the value of a particular variable. The last clears a variable, returning whatever value it had before clearing. The variables themselves live in a hash belonging to a particular expression. One can use variables to obtain information from a selection other than a list of nodes. For example:

    my $exp = $forester->path(
        '/*[@v( "size", @tsize )][@v( "leaves", @size(leaf::*) )]');
    $exp->select($tree);
    say 'number of nodes in the tree: ' . $exp->vars->{size};
    say 'number of leaf nodes: ' . $exp->vars->{leaves};

One may also use variables to make later selections in an expression dependent on earlier selections.
    my $exp = $forester->path(
        '//foo[ @v( "bar", @quux ) ]//baz[ @quux = @v("bar") ]');

Finally, one may use variables to parameterize an expression:

    for my $fruit (qw(apple orange kumquat quince)) {
        $exp->vars->{fruit} = $fruit;
        my @harvest = $exp->select($tree);
        deliver( $recipients->{$fruit}, @harvest );
    }

Special Selectors

There are four special selectors that cannot occur with predicates and may only be preceded by the / or null separators.

. : Select Self

This is an abbreviation for self::*.

.. : Select Parent

This is an abbreviation for parent::*.

:id(foo) : Select By Id

This selector selects the node, if any, with the given id. This same node can also be selected by //*[@id = 'foo'], but this is much less efficient.

:root : Select Root

This expression selects the root of the tree. It doesn't make much sense except as the first step in an expression.

:p : Select the Previously Selected Node

This expression selects the node from which the current node was selected. For example, /a/b/:p will select the a node selected before the b node. How is this ever useful? Well, it lets one write expressions like

    //a//b[@height = @at(/:p, 'depth')]

This selects all b nodes descended from a nodes where some a node the b node is descended from has the same depth as the b node's height. One can iterate the :p selector to move different distances up the selection path, and one can impose predicates on the selector to filter the selection:

    //a//b//c//d[@height = @at(/:p+, 'depth')]

See also the previous:: axis.

Grouping and Repetition

TPath expressions may contain sub-paths consisting of grouped alternates, and steps or sub-paths may be quantified as in regular expressions:

    //a(/b|/c)/d
    //a?/b*/c+
    //a(/b/c)+/d
    //a(/b/c){3}/d
    //a{3,}
    //a{0,3}
    //a{,3}

The last quantifier, {,3}, is one you do not see in regular expressions. It is the short form of {0,3}. Despite this similarity, it should be remembered that tpath expressions differ from regular expressions in that they always return all possible matches, not just the first match discovered or, for those regular expression engines that provide longest token matching or other optimality criteria, the optimal match. On the other hand, the first node selected will correspond to the first match using greedy repetition. And if you have optimality criteria, you are free to re-rank the nodes selected and pick the first node by this ranking.

Hiding Nodes

In some cases there may be nodes (spaces, comments, hidden directories and files) that you want your expressions to treat as invisible. To do this you add invisibility tests to the forester object that generates expressions:

    my $forester = MyForester->new;
    $forester->add_test(
        sub {
            my ( $forester, $node, $index ) = @_;
            ...    # return true if the node should be invisible
        }
    );

One can put this in the forester's BUILD method to make such nodes invisible to all instances of the class.

Potentially Confusing Dissimilarities Between TPath and XPath

For most uses, where tpath and xpath provide similar functionality, they will behave identically. Where you may be led astray is in the semantics of separators beginning paths:

    /foo/foo
    //foo//foo

In both tpath and xpath, when applied to the root of a tree, the first expression will select the root itself if this root has the tag foo, and the second will select all foo nodes, including the root if it bears this tag. This is notably different from the behavior of the second step in each path.
The second /foo will select a foo child of the root node, not the root node itself, and the second //foo will select foo descendants of other foo nodes, not the nodes themselves. Where the two formalisms may differ is in the nodes they return when these paths are applied to some sub-node. In xpath, /foo always refers to the root node, provided this is a foo node. In tpath it always refers to the node the path is applied to, provided it is a foo node. In tpath, if you require that the first step refer to the root node, you must use the root selector :root. If you also require that this node bear the tag foo, you must combine the root selector with the self:: axis:

    :root/self::foo

This is verbose, but then this is not likely to be a common requirement. The tpath semantics facilitate the implementation of repetition, which is absent from xpath.

String Concatenation

Where you may use a string literal ('foo', "foo", q("fo'o"), etc.) you may also use a string concatenation. The string concatenation operator is ~. The arguments it may separate are string literals, numbers, mathematical expressions, attributes, or path expressions. Constants will be concatenated during compilation, so

    //foo('a' ~ 1)

will compile to

    //foo('a1')

The spaces are optional.

Grammar

The following is a BNF-style grammar of the tpath expression language. It is the actual parsing code, in the Regexp::Grammars formalism, used to parse expressions, minus the bits that improve efficiency and adjust the construction of the abstract syntax tree produced by the parser.

    \A <treepath> \Z

    <rule: treepath>        <path> ( \| <path> )*
    <token: path>           (?|\\.)* | <literal> | <qname>
    <token: qname>          : [[:punct:]].+[[:punct:]]
    <rule: attribute>       <aname> <args>?
    <rule: args>            \( <arg> ( , <arg> )* \)
    <token: arg>            <literal> | <num> | <concat> | <attribute> | <treepath> | <attribute_test> | <condition>
    <rule: concat>          <carg> ( ~ <carg> )+
    <token: carg>           <literal> | <num> | <attribute> | <treepath> | <math>
    <token: num>            <signed_int> | <float>
    <token: signed_int>     [+-]? <int>
    <token: float>          [+-]? <int>? \.\d+ ( [Ee][+-]? <int> )?
    <token: literal>        <squote> | <dquote>
    <token: squote>         ' ( [^'\\] | \\. )* '
    <token: dquote>         " ( [^"\\] | \\. )* "
    <rule: predicate>       \[ ( <signed_int> | <condition> ) \]
    <token: int>            \b ( 0 | [1-9] [0-9]* ) \b
    <rule: condition>       <not>? <item> ( <operator> <not>? <item> )*
    <token: not>            ! | not
    <token: operator>       <or> | <xor> | <and>
    <token: xor>            ; | one
    <token: and>            & | and
    <token: or>             \|{2} | or
    <token: term>           <attribute> | <attribute_test> | <treepath>
    <rule: attribute_test>  <value> <cmp> <value>
    <token: cmp>            [<>=]=? | ![=~] | =~ | =?\|= | =\|
    <token: value>          <literal> | <num> | <concat> | <attribute> | <treepath> | <math>
    <rule: math>            <function> | <operand> ( <mop> <operand> )*
    <token: function>       :? <%FUNCTIONS> \( <math> \)
    <token: mop>            :? <%MATH_OPERATORS>
    <token: operand>        <num> | -? ( <mconst> | <attribute> | <treepath> | <mgroup> | <function> )
    <token: mconst>         : <%MATH_CONSTANTS>
    <rule: mgroup>          \( <math> \)
    <rule: group>           \( <condition> \)
    <token: item>           <term> | <group>

The crucial part, most likely, is the definition of the <name> rule, which governs what you can put in tags and attribute names without escaping. The rule, copied from above, is

    (\\.|[\p{L}\$_])(?>[\p{L}\$\p{N}_]|[-.:](?=[\p{L}_\$\p{N}])|\\.)*+ | <qname>

This means a tag or attribute name begins with a letter, the dollar sign, or an underscore, and is followed by these characters or numbers, or dashes, dots, or colons followed by these characters.
And at any time one can violate this basic rule by escaping a character that would put one in violation with the backslash character, which thus cannot itself appear except when escaped. One can also use a quoted expression, with either single or double quotes. The usual escaping convention holds, so "a\"a" would represent two a's with a " between them. However, neither single nor double quotes may begin a path, as this would make certain expressions ambiguous: is a[@b = 'c'] comparing @b to a path or a literal?

Finally, one can "quote" the entire expression following the qname convention:

    : [[:punct:]].+?[[:punct:]]

A quoted name begins with a colon followed by some delimiter character, which must be a POSIX punctuation mark. These are the symbols

    <>[](){}\/!"#$%&'*+,-.:;=?@^_`|~

If the character after the colon is the first of one of the bracket pairs, the trailing delimiter must be the other member of the pair, so

    :<a> :[a] :(a) :{a}

are correct but :<a< and so forth are bad. However,

    :>a> :]a] :)a) :}a}

are all fine, as are

    :;a; ::a: :-a-

and so forth. The qname convention is a solution for where you want to avoid the unreadability of escapes but have to do this at the beginning of a path, or your tag name contains both sorts of ordinary quote characters. And again, one may use the backslash to escape characters within the expression. If you use the backslash itself as the delimiter, you do not need to escape it:

    :\a\      # good!
    :\a\\a\   # also good! equivalent to a\\a

Since the qname convention commits you to 3 extra characters before any escapes, it is generally not advisable unless you otherwise would have to escape more than 3 characters, or you feel that whatever escaping you would have to do would mar legibility. Double and single quotes make particularly legible qname delimiters if it comes to that. Compare

    file\ name\ with\ spaces
    :"file name with spaces"

One uses the same number of characters in each case, but the second is clearly easier on the eye. In this case the colon is necessary because " cannot begin a path expression.

Before or after most elements of tpath expressions one may put arbitrary whitespace or #-style comments:

    # a path made more complicated than necessary
    //a      # first look for a elements
    /*/*     # find the grandchildren of these
    [0]      # select the first-born grandchildren
    [        # and log their foo properties
      @log( @foo )
    ]

There are some places where one cannot put whitespace or a comment: between a separator and a selector

    // a    # bad!

between an @ and an attribute name

    @ foo   # bad!

and between a repetition suffix and the element repeated

    //a +   # bad!

Escape Sequences in String Literals

All the places where one may use the \ escape character to protect a special character in a string, one may also use one of the escape sequences understood by tpath, which are just those understood by JSON. These are:

- \t The tab character.
- \n The ASCII newline character -- decimal character 10 in the basic ASCII set. Note that this isn't the magic newline character in Perl that adapts to the operating system it finds itself on. This is just the 10th character in the ASCII set (excluding the null character).
- \r The ASCII carriage return character, decimal character 13.
- \f The ASCII form feed character.
- \b The backspace character.
- \v The vertical tab character. Why \v? Well, I figure it's important enough to somebody to be included in the JSON spec, so it's here too. This is character 11 in ASCII's decimal set.
HISTORY

I wrote tpath initially in Java () because I wanted a more convenient way to select nodes from parse trees. I've re-written it in Perl because I figured it might be handy, and why not? Since I've been working on the Perl version I've added lots of features. Eventually I'll back-port these to the Java version, but I haven't yet.

SEE ALSO

Tree::XPathEngine and Class::XPath provide similar functionality, though their aim is not to provide a generic tree path language but rather to provide a means of adapting XPath, designed with XML in mind, to non-XML trees. I have not actually used these modules, but if you are already familiar with XPath and your node names and whatnot comport with those of XPath, then these may better suit your needs. If what you really want is to use XPath on XML in Perl, consider XML::XPath or XML::LibXML. If speed is your concern and you are able to use the latter, it's probably what you want. TPath is fast enough as pure Perl tree path libraries go, but it has Moose's startup lag and its own conventions.

ACKNOWLEDGEMENTS

Thanks to Damian Conway for Regexp::Grammars, which makes it pleasant to write complicated parsers. Thanks to the Moose Cabal, who make it pleasant to write elaborate object oriented Perl. Without the use of roles I don't think I would have tried this. Thanks to Jon Rubin, who made me aware that tpath's index predicates weren't working like xpath's (since fixed). And thanks to my wife Paula, who has heard a lot more about tpath than is useful to her.

AUTHOR

David F. Houghton <dfhoughton@gmail.com>

COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by David F. Houghton. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
https://metacpan.org/pod/release/DFH/TPath-1.006/lib/TPath.pm
setting up google OAuth in phoenix

- It's been almost a week or so learning the Phoenix framework and Elixir. I've gotten a bit of an understanding of how the framework uses the MVC architecture to make building web applications easier, and how Elixir's functional programming capabilities make functional programming great again 😜
- This post will focus on how to set up Google OAuth as a third-party login option for user authentication into an app built with the Phoenix web framework. [The assumption is that one already has a decent understanding of how both the language (Elixir) and the framework (Phoenix) work.]
- For every Phoenix project created, a mix.exs file is generated. Inside this file is a function, deps. This function contains all the dependencies that come already installed in the Phoenix framework.

Step 1️⃣ Add Ueberauth and Google Dependencies

- We want to add a new dependency that allows for 3rd party authentication. For our app we will use ueberauth; here is the GitHub repo. We also add the specific 3rd party auth source to use, in this case Google OAuth. The deps function now has two dependencies added, as shown below in lines 14 and 15.
- Also, inside the same mix.exs file is another function, application, that executes every time the Phoenix app starts. We have to add the two dependencies there as well.
- The application function now looks like this, below.

Step 2️⃣ Run command to install the dependencies

- After adding these dependencies, we then run the command in our terminal that executes and installs these dependencies in our app:

    $ mix deps.get

which should show the entire list of dependencies, including the ones we just added.

Step 3️⃣ Update config.exs file

- Now, we navigate to the config.exs file, where we need to add a line of code at the end to configure the "providers". Providers are the 3rd party login apps, like Google, GitHub or Facebook, etc.

Step 4️⃣ Create a google client ID

- Use this link to create a Google API project, which will generate two values [client_id and client_secret] that we need for our Phoenix app.
- Add those two values to the config.exs file as below.
- Once the above configuration steps are all done, we can add code that shows how to create a new controller that handles authentication, and also appropriate routes for signing in and out of the application.

Next Steps

Step 5️⃣ Add routes for signing in.

- Inside router.ex, define a new "scope" that handles requests starting with the path /auth. Add the routes for sign in as below.
- line 6. specifies a GET request; /:provider will be google, or any other 3rd party login option. AuthController is the name of the controller we will create. :request is a pre-defined function.
- line 7. specifies a GET request; /:provider/callback is for redirecting back from the Google API with the user profile. :callback is a function that we will implement.

Step 6️⃣ Create user model and add database migrations.

- The assumption is that PostgreSQL is already installed as the database of use. Phoenix uses Ecto for migrations and the Repo namespace for communication with the PostgreSQL database.
- Add user.ex as a model in the models folder. The code below defines a User module with a schema that specifies the type of each field in the database.
- Run this command to add a migration file to your project:

    $ mix ecto.gen.migration add_users

- This generates a migration file under the /priv/repo/migrations folder. This file has an empty function, change(). Add code to that empty function to create a table with its columns specified as in the schema.
- Finally, run this command to create the actual table in the Postgres database:

```
$ mix ecto.migrate
```

Step 7️⃣ Create Controller for handling Authentication.

- Inside the controllers folder, create a new controller file, for example auth_controller.ex, and add logic for the callback function and the sign-in functionality. The sketch after this section handles a user logging in with a 3rd-party option, redirects to the index page of our app, and adds the user info pulled from the Google API into our DB.
- In the callback, we use pattern matching to extract the auth object from the conn parameter, and pattern matching again to extract the user_params from the auth object. The user_params include the user token, email, and provider.
- We then create a changeset object that contains the values we need, and pass that changeset object to the signin function.
- signin() checks to see if that user exists in the database; if not, it inserts the new user into the database, or else it just logs them back into the app.

Step 8️⃣ Create plug that checks user session:

- Once the user has been signed in, we need to find a way to keep track of the user session. We accomplish this by creating a plug. A plug is basically a function or module that gets a conn object, makes a tiny modification to this object, and then returns a "modified" conn object.
- In our case, we will create a module plug, since it will be used in different sections of our code where we would prefer to know if a user is logged in. Create a new folder plugs inside of the controllers directory, then create a file inside the plugs folder; our module plug is defined in the sketch below (check the # comments in the code for each line's explanation).
- Now, inside the router.ex file, add this module plug to the list of pipeline plugs that get executed whenever a request is sent by a user, so it runs for every browser request.

Step 9️⃣ Add sign-in button on layout_view page.

- Since the sign-in and sign-out features have to be on every page, we should put the HTML/CSS design in the generic layout page. The layout code adds the link for sign-in and also shows which controller function to access whenever a user clicks the sign-in link.
- One branch checks whether the conn object contains the user details (@conn.assigns[:user]), and if it exists, displays the option for them to sign out.
- The other branch is for when the user is not logged in (meaning @conn.assigns[:user] returns nil); then we show them the sign-in button.

Step 🔟 Add functionality for signing out

- This should be easier since we already have the auth_controller and the logic for handling sessions for a signed-in user.
- Just add a sign-out function to the auth_controller (included in the controller sketch below): it sets the drop key to true, thereby closing the session for the user, and then redirects the user to the index function of the controller with developer_path.

🙌 END 🙌
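A sketch reconstructing the auth controller described in Steps 7 and 10. Module names, the session key, and the redirect paths are assumptions; the Ueberauth plug supplies the :request phase and puts the auth result into conn.assigns.ueberauth_auth:

```elixir
# web/controllers/auth_controller.ex (sketch)
defmodule MyApp.AuthController do
  use MyApp.Web, :controller
  plug Ueberauth

  alias MyApp.{Repo, User}

  # Google redirects here; Ueberauth has placed the result in conn.assigns
  def callback(%{assigns: %{ueberauth_auth: auth}} = conn, _params) do
    user_params = %{token: auth.credentials.token,
                    email: auth.info.email,
                    provider: "google"}
    changeset = User.changeset(%User{}, user_params)
    signin(conn, changeset)
  end

  def signout(conn, _params) do
    conn
    |> configure_session(drop: true)  # close the user's session
    |> redirect(to: "/")
  end

  defp signin(conn, changeset) do
    case insert_or_sign_in(changeset) do
      {:ok, user} ->
        conn
        |> put_session(:user_id, user.id)
        |> redirect(to: "/")
      {:error, _reason} ->
        conn
        |> put_flash(:error, "Error signing in")
        |> redirect(to: "/")
    end
  end

  # insert the user if new, otherwise just log them back in
  defp insert_or_sign_in(changeset) do
    case Repo.get_by(User, email: changeset.changes.email) do
      nil  -> Repo.insert(changeset)
      user -> {:ok, user}
    end
  end
end
```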
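And a sketch of the module plug from Step 8; the module and folder names are assumptions:

```elixir
# web/controllers/plugs/set_user.ex (sketch)
defmodule MyApp.Plugs.SetUser do
  import Plug.Conn

  alias MyApp.{Repo, User}

  # no compile-time options needed
  def init(_opts), do: nil

  def call(conn, _opts) do
    # read the id stored by the auth controller at sign-in
    user_id = get_session(conn, :user_id)

    cond do
      # logged in: expose the user as @conn.assigns[:user]
      user = user_id && Repo.get(User, user_id) ->
        assign(conn, :user, user)
      # not logged in: templates see nil and show the sign-in button
      true ->
        assign(conn, :user, nil)
    end
  end
end
```

In the router, this plug is then appended to the :browser pipeline so it runs on every request.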
https://medium.com/@rrugamba/setting-up-google-oauth-in-phoenix-9167595f5fb7
CC-MAIN-2020-45
refinedweb
1,079
64.1
This page should cover pros and cons of the current stdlib logging package.

Pros

- Almost every feature you could want from a logging package is there.

Cons

- See below. Vinay made sections out of each of the points in the original bulleted list for discussion.

Docs are not complete

False. There is pretty much everything in the docs. If you think something is missing, please be more specific about the missing stuff.

Docs are too long

True. -- techtonik

The docs are now rearranged into reference API, tutorials (basic and advanced) and cookbook, so this complaint is not really valid. Earlier it was all in one page, so the complaint was more justified. -- VinaySajip

I agree that it became much better structured than before, but the amount of text that an average user needs to read to completely understand how logging works didn't change. Neither part gives a summary about current problems with logging like the one about libraries below. For that part you really need to read everything, but the chances to find this are low. -- techtonik

Config files are a little bit hard to comprehend

An alternate configuration mechanism is provided by the ZConfig package (PyPI). (Note ZConfig is not a small package to pull in.) Another alternative is provided by the config package, which is a single module and easy to incorporate into logging. However, what do people say to the question of backward compatibility? -- VinaySajip

So the answer is True, but it can't be changed, because of backward compatibility. -- techtonik

Since dictionary-based configuration was added (Python 2.7/3.2, available in older Python versions via dictconfig on PyPI), this is not an issue. You can e.g. use YAML or JSON files for configuration. -- VinaySajip

If you want to suggest an alternative mechanism which uses ConfigParser, please suggest alternatives to the format. -- VinaySajip

I'd rather avoid using config files (for logging) entirely in almost every project I work on. If logging has its own configuration files, that's a smell. If I need config files for my project, I'll choose the format I need, and expose the necessary settings to the user coherently, including logging settings. That it has its own configuration file to me is a smell; it suggests configuring in code is too hard, and too necessary. -- JoshuaRodman

It's not a smell - it's not necessary to use, but allows for easier configuration in some scenarios. You can certainly put other settings in the same file, or have separate files for other settings. I don't know of any convention that limits the number of configuration files in an application to zero or one. -- VinaySajip
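To make the dictionary-based configuration mentioned above concrete, a minimal sketch; the dict could just as well be loaded from a JSON or YAML file instead of being written inline:

```python
import logging.config

config = {
    "version": 1,
    "formatters": {
        "brief": {"format": "%(levelname)s:%(name)s:%(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "brief",
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(config)
logging.getLogger("demo").info("configured without an INI file")
```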
API uses camelCase (goes against PEP8 recommendation and most of the stdlib)

PEP8 says: consistency with this style guide is important. Consistency within a project is more important. Consistency within one module or function is most important. So True, but it can't be changed, because of backward compatibility. logging2 maybe. -- techtonik

It's a low priority right now, unless there's an initiative to ensure the rest of the stdlib is made to conform to PEP8. -- VinaySajip

Rather slow considering the large number of function calls performed internally to check which handler to use

There is no official confirmation, but users report a 25% boost in performance after commenting out logging stuff. So, this stays True until somebody proves otherwise.

Unfortunately that Stanford post doesn't give any indication of how they used logging calls or how they configured it, so their bare statement does not really give any useful information about logging performance. -- VinaySajip

If you doubt that Stanford was right in calling logging slow, then perhaps a Stack Overflow page will be more convincing. In short - for inner-loop-like scenarios (the 90% code) hotshot indicated logging was one of the biggest bottlenecks. -- techtonik

Did you notice the accepted answer to that Stack Overflow question? If you have specific performance problems, please post some code and some numbers. -- VinaySajip

Why not include a performance-related chapter in the logging docs, with measurement instructions? That would be extremely handy. -- techtonik

In my view, there's no need to do this. Logging calls are of the order of 10s of microseconds, as indicated in figures on this page. -- VinaySajip

Vinay argues that 'logging can be too slow in some specialised scenarios', and asks people to provide more objective metrics than "lots of function calls". Although we don't speak about specialized scenarios, it would be nice if at least these function calls were counted. There are (very simple) timing test results below.

Is a pain in the arse for working with libraries

Libraries should not output any logging information by default unless explicitly asked to do so. That's why they should not try to configure logging. But on the first call to any log() function, logging configures itself automatically. This is only documented in the thread safety note for the logging.log() function and in logging.basicConfig() (one of the examples why the "Docs are not complete" speculation is not correct - the docs are there, they are just hard to find). So, if any used library uses logging, you have to configure the root logger in your application even if it uses other logging means. -- techtonik

An application developer who uses libraries needs to configure logging in order to debug library behaviour. The alternative would be for libraries to output potentially copious debug logging information and needing to be explicitly silenced. -- VinaySajip

- Ok, I removed the rant about logging being too obtrusive for debugging libraries. I don't want to debug these 3rd-party libraries. Do I have an option to skip logging configuration in the application? I don't even know if those libs are using logging at all. -- techtonik
- A second question - I am currently debugging the Spyder IDE. The IDE is just a set of widgets initiated by a central window, but these are also used independently. How can I set up widget logging to be silent if used standalone and not affect the logging configuration of Spyder if used from the IDE? -- techtonik

Please don't post such questions here, try Stack Overflow or comp.lang.python. -- VinaySajip
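A sketch of the convention that addresses the library complaint above: the library logs as usual but attaches a do-nothing handler, so nothing is emitted (and no "no handlers could be found" warning appears) unless the application configures logging itself. NullHandler is in the stdlib from Python 2.7/3.1; older code defined an equivalent no-op Handler subclass.

```python
import logging

# at the top of the library module
logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())

def library_function():
    # silent unless the application opts in by configuring logging
    logger.debug("entered library_function")
```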
Doesn't have runtime scoping (i.e., log messages handled based on the call stack)

Please give some more details on what you mean here. How exactly would you want to log messages based on the call stack? -- VinaySajip

Difficult to extend log records

LogRecords need to be pickleable to be sent across the wire. You can add arbitrary attributes to LogRecords by using the "extra" keyword parameter to logging calls. Exactly how would you like to extend LogRecords in a way which is difficult at the moment, and why? -- VinaySajip

Changes in Python 3.2 allow more control over LogRecord creation. -- VinaySajip

I don't really know what this offers, but I usually prefer to simply add attributes to the Python objects if I need to decorate them. Perhaps unconventional, but it feels pythonic to me. -- JoshuaRodman

Yes, but you don't normally have access to a LogRecord when logging an event, so you can't simply add attributes to it. -- VinaySajip

Difficult to add general context to log messages (e.g., add the request URL to all logging messages during the request)

There's information on this very topic at Adding contextual information to your logging output. Can you provide more details on how the mechanism provided fails to meet your needs? -- VinaySajip

I'd prefer some log-invocation magic for the common cases, like the function name I'm in as a token to be expanded. -- JoshuaRodman

The function name is already available without magic; have you checked the documentation? -- VinaySajip

By default it does nothing; basicConfig makes it do something but makes it hard to tweak logging.

How would you want to tweak logging in a way which basicConfig doesn't make easy? Suggest improvements which can be made to basicConfig, I'm receptive. Remember to consider backward compatibility. -- VinaySajip

A logger with no handlers should default to sending to stderr or something like this. Perhaps it already has a handler. Perhaps there is a default handler set by default. I shouldn't have to configure anything for the simplest case of a script. Further, this means that I can test modules independently from my larger project and get a sane behavior. -- JoshuaRodman

Don't agree with your suggestions - a third-party library which uses logging should not spew logging output by default which may not be wanted by a developer/user of an application which uses it. For the simplest case of a script, just use logging.info() etc. and it invokes basicConfig() under the covers. Have you checked the documentation? -- VinaySajip

Filters are an abstract class instead of callables

Filter is not an abstract class - it's a concrete class with a reasonable default implementation. Is this really a major design flaw? It would be easy to modify the package to add callables to the filter list, and the system could expect either a Filter instance or a callable. -- VinaySajip

Thread-local handlers aren't easy to set up

Please suggest how you would like these to work, and why you need thread-local handlers. You already have the ability to insert thread-local context information into messages using non-thread-local handlers. -- VinaySajip

Nothing like keywords or tags for doing multi-dimensional categorization or grouping of messages

How many use cases need this? How would you see this being implemented? Buffering of messages is already possible. Talk is cheap, so please describe your use cases in more detail and how you think an "ideal" API for "multi-dimensional categorization" or "grouping" would look. -- VinaySajip

No clear way to introduce HTML-formatted messages (or an HTML formatting option)

You can post messages to web sites, and present messages in HTML format in numerous ways; is this a common use case, and would we get common agreement on how this would work (i.e. a specific HTML representation)? -- VinaySajip

Not fast enough to do pervasive logging in libraries (we've encountered this with Paste/Pylons, where we'd like to put lots of logging in that people could turn on, but it becomes a notable performance hit).
Please clarify - how much of a performance hit is it if logging is turned off for particular modules? When you encountered performance issues, what mitigating strategies did you try? (e.g. isEnabledFor) -- VinaySajip

I set up a very simple script to time logging calls, available here. Output looks like this (Python 2.6 on Windows, Core Duo E8400):

```
log_noop       0.13 microseconds
log_simple    57.36 microseconds
log_filtered   4.19 microseconds
log_mitigated  3.78 microseconds
log_disabled   0.98 microseconds

No caller, thread, process info...
log_simple    49.89 microseconds
log_filtered   4.19 microseconds
log_mitigated  3.79 microseconds
```

which means that a simple logging call takes around 57 microseconds, reduced to around 50 microseconds if you opt to not collect caller, thread or process info. That's for logging to a file. So what are your expectations for performance? These numbers don't look too shabby to me. Of course, it's a simplistic test, but at least there are some actual numbers there. -- VinaySajip

Too complicated, with logging levels, namespaces, handlers, and other things -- they took all the terrible things from java.util.logging and log4j, which are incredibly slow (but powerful) in python

Good ideas don't only come from Python people, folks. These ideas were proven in log4j and other packages and they are based on the ideas of "what happened? where did it happen? how important is it? who wants to know?" and if you think about it, these ideas are hardly Java-specific. OTOH, they are pretty central to the problem domain addressed by logging. So - "what happened?" is the details of the logging call, "where did it happen?" is the namespace, "how important is it?" is the level, and "who wants to know?" is the handler. Hardly that complicated, and AFAICT pretty much a minimum requirement for any logging package that aspires to the name. And, "incredibly slow" is pretty emotive. Care to back that up with some hard data? See the numbers above for some ideas. Also, anyone who bothers to look at log4j in detail will see that Python logging is not a mindless translation - it's fairly Pythonic. Beyond the basic abstractions of "What? Where? How Important? Who Needs To Know?", there's no real correspondence between the Java artifacts and the Python ones. Using David A. Wheeler's SLOCCount, log4j 1.2.15 = 168 source files, around 16K SLOC; Python 2.6 logging = 3 source files, < 1.5K SLOC. To me the Java connection and inferences that people draw from the "Java heritage" is bordering on FUD a lot of the time, I have to say. But feel free to put me right with specific comments rather than vague arm-waving, and you'll find me receptive. -- VinaySajip

It is too complicated. More defaulting is needed, and simpler construction. I should be able to get a logger talking to a rolling logfile in a single line, and that line should not be overly verbose. Sure, it might not be what I'll eventually want, but I need it to start working right away, so I can continue to figure out what I need. -- JoshuaRodman

Then write your own utility function in a utility library of your own to make this happen. Not everybody thinks the same way, so it's better if people write their own simple-to-use (for them) wrappers on top of the existing functionality. Once written, you can import from your utility package into other applications you write. -- VinaySajip

The appropriate roles for args and kwargs in LoggerAdapter are ambiguous. I'm finding the paradigms for augmenting log arguments with relevant context info confusing.
The docs and general web info are helpful but slim on the topic so far. Adding contextual info via the 'extra' parameter with LoggerAdapter(..., extra=myClassObj | extra=my_dict) is slick for some use cases, not so much for others. It seems that the extra= is set in stone when the LoggerAdapter object is instantiated, then passed to the individual LogRecords as needed (true?/false?). Context specific to the location of the log() call is superseded by the context of the LoggerAdapter instantiation. Any kwargs['extra'] in the log() calls are overwritten silently. Perhaps that's a feature, and context info specific to the log() call location should be incorporated using log(...args) and having Formatters manage and replace the actual args? At present, the use of the kw extra is handled by logging in the reverse behavior of most inherited object properties, where lower-level objects supersede the higher-level class values. Fortunately, the logging mechanisms are so flexible that it's not difficult to use args in place of kwargs (even for class arguments), or to alter the default behavior of logging for the extra kw. Still, it's confusing as-is when the use case is not to pass down the context of the logger instantiation to all subsequent log() calls. I have code for both solutions if there is interest. (This comment seems long and detailed relative to the intended context of this page. Is there another place for similar discussion? Also: where to post demo code that is neither patch nor bug?)

Please give details of the use case which is giving you problems, on comp.lang.python rather than here: that's a better platform for receiving support. The "extra= is set in stone" only applies to the default implementation of LoggerAdapter - you're free to override this behaviour by creating a subclass of LoggerAdapter. The "overwritten silently" behaviour is clearly documented. You should only really be using LoggerAdapter for specialised cases, e.g. when you are logging information relating to database or network connections and need to log the context of individual connections as well as the specifics of an individual event. For most other uses, just using a %-format string and arguments should suffice. Without knowing the details of your use case, it's hard for me to know whether there is a real problem with LoggerAdapter functionality or just with your understanding of it; your comments above indicate to me that it could well be the latter. So, please post what you are trying to do on comp.lang.python, and what problems you are having, with "logging" in the title/subject of your post. You can post short snippets of code there directly; if you want to post longer pieces you can use any public pastebin (e.g. dpaste, LodgeIt, gist.github.com, ...) and link to the posted snippets from your mailing-list post. You can either mail into comp.lang.python using the email address python-list@python.org, or use e.g. Google Groups' web interface. -- VinaySajip
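To ground the discussion above, a minimal sketch contrasting the two mechanisms: the per-call 'extra' dict, and a LoggerAdapter whose instantiation-time extra is, in the default implementation, substituted for any per-call extra:

```python
import logging

logging.basicConfig(format="%(levelname)s %(clientip)s %(message)s")
logger = logging.getLogger("app")

# 1. Per-call context: keys in 'extra' become attributes of the LogRecord,
#    so the formatter can reference %(clientip)s.
logger.warning("login failed", extra={"clientip": "192.168.0.7"})

# 2. Instantiation-time context: the default LoggerAdapter supplies its own
#    dict as 'extra' on every call -- silently replacing any per-call extra,
#    which is exactly the behaviour debated above.
adapter = logging.LoggerAdapter(logger, {"clientip": "10.0.0.1"})
adapter.warning("slow response")
```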
https://wiki.python.org/moin/LoggingPackage?highlight=JoshuaRodman
CC-MAIN-2016-50
refinedweb
2,828
65.32
A value class which describes a monetary value.

#include <Wt/Payment/Money.h>

The money is stored in cents as a 64-bit number, which allows for accurate representation of values up to 2^61 - 1, but has as a consequence that division will round to the nearest cent. Math operations on monetary values of different currencies are not allowed and will result in an exception.

Constructor: creates a monetary value. The value is the integer part, cents is the fractional part (up to 2 digits) expressed in cents (0 - 100), and the currency is a string which indicates the currency.

Cents accessor: returns the cents - the last 2 digits of the value in cents.

Currency accessor: returns the currency.

Addition operator: adding money of different currencies is not allowed.

Subtraction operator: subtracting money of different currencies is not allowed.

Text representation: returns a text representation in the format "value.cents".

Value accessor: returns the int part of the money (money with no cents).
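A short usage sketch based on the descriptions above; the member names (value(), cents(), currency(), toString()) follow the Wt reference and should be checked against your Wt version:

```cpp
#include <Wt/Payment/Money.h>
#include <iostream>

int main() {
    Wt::Payment::Money price(19, 99, "EUR");    // 19.99 EUR
    Wt::Payment::Money shipping(4, 50, "EUR");  //  4.50 EUR

    // Same currency, so addition is allowed.
    Wt::Payment::Money total = price + shipping;

    std::cout << total.toString() << " "    // "24.49"
              << total.currency() << "\n";  // "EUR"

    // Mixing currencies would throw an exception:
    // total + Wt::Payment::Money(1, 0, "USD");
    return 0;
}
```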
https://www.webtoolkit.eu/wt/doc/reference/html/classWt_1_1Payment_1_1Money.html
CC-MAIN-2018-09
refinedweb
169
51.95
AOP stands for Aspect Oriented Programming. I guess every reader is familiar with the OP part of the acronym, so we will have to clarify what Aspect means - and don't worry, we'll come to that later in the article. I'll (try to) keep this article at beginner level. Knowledge of Object Oriented Programming concepts is the only requirement to read further! In my opinion, understanding a concept properly makes you a better consumer of its implementation. But I do understand that the article is a bit long, so if you feel bored or discouraged I still encourage you to jump to the implementation part. You'll still be able to come back to the theory later.

If you are familiar with AOP then don't leave yet! Let me disclose straight away what this article has to offer: I will introduce an interception technique for constructors, methods, properties and events - a pure managed-code technique which could run on .Net 1.1 (actually I use bits of Linq, but you can change that easily) and which allows you to intercept almost anything you might think of. Let's be more clear. With the following technique you can intercept stuff such as System.DateTime.Now or System.IO.File operations, and you don't have the usual limitations of the most popular interception libraries. Are you guys doubting? Then read further!!

Some might think they have not yet made their full journey into Object Oriented Programming, so why would they switch from OOP to AOP and abandon all the concepts they have worked so hard to learn over the years? The answer is simple: there is no switch. There is no "OOP vs AOP"! AOP is one of these concepts whose name, in my opinion, is misleading. Embracing the AOP principles lets you deal with classes, objects, interfaces, inheritance, polymorphism, abstraction etc… so there is no way you get lost, as you are still fully immersed in the OOP world. When applying AOP concepts in your code you are attempting to relax one particular, and not the least, principle of OOP - encapsulation - to address cross-cutting concerns (we'll come back to that later on).

Back in the old days, when the internet was only yellow pages, bulletin boards and usenet, you'd better read books if you wanted to learn anything (comment for the Y generation: a book is a thing with written sheets of paper stuffed in it). And all these books would approximately remind you of this regarding the OOP subject: encapsulation has been the ultimate goal for introducing OOP concepts in 3rd-generation languages (3GL). AOP, in a way, is claiming that it should sometimes be possible to use: a language construct that facilitates the bundling of methods (or other functions) operating with encapsulated data without the data.

Got that? Then you know all there is to know about the theory. Good job! Now we are naturally led to the two following questions: why (and when) would we need it, and why should it be possible?

Let's take a look at the following situations:

Scenario A: You are a software developer in a bank. The bank has a pretty well-working operational system. The business is running smoothly. The government issues a policy which forces banks to commit to some sort of transparency: whenever money goes in or out of the bank, it should be logged. The government publicly said that this is a first measure towards transparency but that more is to be expected.

Scenario B: Your web application has been released to the test team. All functional tests passed, but the application failed at load testing.
A non-functional requirement stated that no page should take more than 500 ms to process on the server. After analysis, there are dozens of queries made to the database that could be avoided by caching the results.

Scenario C: You have spent the last 2 years modeling your domain model in a perfect library consisting of 200+ classes. Lately you've been told that a new application front-end will be written, and this guy needs to bind your objects to the UI. But to facilitate that task, all your classes should now implement INotifyPropertyChanged.

These examples are valid in terms of the why and when AOP could come to the rescue, and these scenarii have something in common.

When is "sometimes"?

Some classes (the Bank class, data access services, domain model classes, etc…) designed to achieve a given functionality have to be modified to handle a requirement which is basically "not their own business". It is whenever you have to write some code over different classes to fulfill an interest external to these classes. In AOP dialect, it is whenever you have a cross-cutting concern. The notion of cross-cutting concern is central to AOP. No cross-cutting concern = no need for AOP!

Why should it be possible?

Let's take a closer look at Scenario C. Your problem is that you have an average of 5 properties exposed by each class in your domain model. With 200+ classes you will have to implement (copy/paste) more than 1,000 times some boilerplate code to transform something which looks like this:

```csharp
public class Customer
{
    public string Name { get; set; }
}
```

Into something like that:

```csharp
public class Customer : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _Name;
    public string Name
    {
        get { return _Name; }
        set
        {
            if (value != _Name)
            {
                _Name = value;
                SignalPropertyChanged("Name");
            }
        }
    }

    void SignalPropertyChanged(string propertyName)
    {
        var pcEvent = this.PropertyChanged;
        if (pcEvent != null)
            pcEvent(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Uh! And that is for one property only! BTW, did you know the intern left 2 days ago? You surely get the why without further explanations. It should be possible "to bundle the methods operating with encapsulated data without the data" - in other words: externalise the implementation of the cross-cutting INotifyPropertyChanged concern with no or minimum impact on domain model classes like Customer. If we can achieve that, then we will spare ourselves those 1,000+ copy/pastes and keep the domain classes focused on their own business.

Yeahh... ok, that's all nice and fancy and stuff, but how can we do it? We have a cross-cutting concern which requires some code to be executed in several classes (targets from now on). The implementation (the code which implements Logging, or Cache, or whatever) is simply called a concern in the AOP world. We should then be able to attach (inject, introduce, etc... choose your word) our concern (I repeat because it is important: the concern is the implementation of your cross-cutting concern) at any chosen place of the target. And we should be able to choose any of the following places of the target to attach our concern: constructors, methods, properties, events... In a perfect AOP world, we should be able to attach our concern at any line of code of the target.

Fine, but if we want to attach a concern we need a hook in the target, don't we? Yes captain! In AOP, the notion of that hook (the place where your concern is going to be attached for execution) has a name: it is a pointcut. And the place from where you actually attach the code also has a name: it is a joinpoint. Clear enough? Maybe not....
Here is some pseudo-code that hopefully demonstrates the idea:

```csharp
// Target class
class BankAccount
{
    public string AccountNumber {get;}
    public int Balance {get; set;}

    void Withdraw(int AmountToWithdraw)
    {
        :public pointcut1; // a pointcut (imagine something like a label for a
                           // goto statement in BASIC or T-SQL etc...)
        Balance -= AmountToWithdraw;
    }
}

// Concern
concern LoggingConcern
{
    void LogWithdraw(int AmountToWithdraw)
    {
        // Here you have to imagine that some kind of 'magic' has happened
        // and 'this' is an instance of the BankAccount class.
        Console.WriteLine(this.AccountNumber + " withdrawal on-going...");
    }
}

class Program
{
    void Main()
    {
        // get a reference to the pointcut marker through reflection
        pointcut = typeof(Bank).GetPointcut("pointcut1");

        // this is the joinpoint
        LoggingConcern.Join(pointcut, LogWithdraw);

        // After the joinpoint the runtime should have (in a sort of registry)
        // a record which tells it to execute our LoggingConcern at pointcut1
        // of the Bank class
    }
}
```

Wouldn't it be great to have such a mechanism available out of the C# box??? Before we move on to our actual implementation, let's introduce a few more definitions...

What is an Aspect? It is the association of a concern, a pointcut and a joinpoint. Think of it for a second and I hope it will be crystal clear: the fact that I have a Logging mechanism (concern), that I register its log method to be executed (joinpoint) at a given place of my application code (pointcut), is one aspect of my application.

But wait a minute... What could/should a concern be allowed to do once injected? Concerns are categorized into 2 categories: those that observe the target and run alongside it, like a LoggingConcern.LogWithdraw(int Amount) attached to Bank.Withdraw(int Amount), and those that may take over the call entirely, like a CachingConcern.TryGetCustomerById(int Id) attached to CustomerService.GetById(int Id), which can short-circuit the database query.

At that point, if you are still reading, then congratulations! Bravo! Parce que vous le valez bien (because you're worth it)... We are done with the general concepts and ideas of AOP. Let's move forward and see how we can get close to that with C#...

The concern should have a magic this behavior which is of our target type. That's no problemo!

```csharp
public interface IConcern<T>
{
    T This { get; } // ok, that's a bit of cheating but who cares?
}
```

There is no easy way to get pointcuts for every single line of code. But we can get one at each method call, and that's fairly easy by using the System.Reflection.MethodBase class. MSDN is not really verbose about it: "Provides information about methods and constructors." [sic]

Between you and me, using MethodBase for getting references to pointcuts is the most powerful possibility at your disposal. You can get references to pointcuts for Constructors, Methods, Properties and Events, as in .Net almost anything you declare in your code (apart from fields) ends up being a method...
See for yourselves:

```csharp
public class Customer
{
    public event EventHandler<EventArgs> NameChanged;

    public string Name { get; private set; }

    public void ChangeName(string newName)
    {
        Name = newName;
        NameChanged(this, EventArgs.Empty);
    }
}

class Program
{
    static void Main(string[] args)
    {
        var t = typeof(Customer);

        // Constructor (not limited to parameterless)
        var pointcut1 = t.GetConstructor(new Type[] { });

        // The ChangeName method
        var pointcut2 = t.GetMethod("ChangeName");

        // the Name property
        var nameProperty = t.GetProperty("Name");
        var pointcut3 = nameProperty.GetGetMethod();
        var pointcut4 = nameProperty.GetSetMethod();

        // Everything about the NameChanged event
        var NameChangedEvent = t.GetEvent("NameChanged");
        var pointcut5 = NameChangedEvent.GetRaiseMethod();
        var pointcut6 = NameChangedEvent.GetAddMethod();
        var pointcut7 = NameChangedEvent.GetRemoveMethod();
    }
}
```

Writing the code for joining is fairly easy as well. Look at the signature of the method below:

```csharp
void Join(System.Reflection.MethodBase pointcutMethod, System.Reflection.MethodBase concernMethod);
```

We can add that signature to a sort of registry that we will provide later on, and we can already imagine writing code like this one!!!

```csharp
public class Customer
{
    public string Name { get; set; }

    public void DoYourOwnBusiness()
    {
        System.Diagnostics.Trace.WriteLine(Name + " is doing his own business");
    }
}

public class LoggingConcern : IConcern<Customer>
{
    public Customer This { get; set; }

    public void DoSomething()
    {
        System.Diagnostics.Trace.WriteLine(This.Name + " is going to do his own business");
        This.DoYourOwnBusiness();
        System.Diagnostics.Trace.WriteLine(This.Name + " has finished doing his own business");
    }
}

class Program
{
    static void Main(string[] args)
    {
        // Get a pointcut for Customer.DoYourOwnBusiness()
        var pointcut1 = typeof(Customer).GetMethod("DoYourOwnBusiness");
        var concernMethod = typeof(LoggingConcern).GetMethod("DoSomething");

        // Join them
        AOP.Registry.Join(pointcut1, concernMethod);
    }
}
```

How far are we from our pseudo-code? Personally, I would say not much... What's next then? That's where problems and fun start at the same time! But let's start simple, with the registry.

The registry will keep records about joinpoints. It's a singleton list of joinpoint items. A joinpoint is a simple struct:

```csharp
public struct Joinpoint
{
    internal MethodBase PointcutMethod;
    internal MethodBase ConcernMethod;

    private Joinpoint(MethodBase pointcutMethod, MethodBase concernMethod)
    {
        PointcutMethod = pointcutMethod;
        ConcernMethod = concernMethod;
    }

    // Utility method to create joinpoints
    public static Joinpoint Create(MethodBase pointcutMethod, MethodBase concernMethod)
    {
        return new Joinpoint(pointcutMethod, concernMethod);
    }
}
```

Nothing fancy... It should implement IEquatable<Joinpoint> as well, but to keep the code shorter here I have intentionally removed it.

And the registry: the class is called AOP and implements the singleton pattern.
It exposes its unique instance through a public static property named Registry.

```csharp
public class AOP : List<Joinpoint>
{
    static readonly AOP _registry;

    static AOP()
    {
        _registry = new AOP();
    }

    private AOP() { }

    public static AOP Registry
    {
        get { return _registry; }
    }

    [MethodImpl(MethodImplOptions.Synchronized)]
    public void Join(MethodBase pointcutMethod, MethodBase concernMethod)
    {
        var joinPoint = Joinpoint.Create(pointcutMethod, concernMethod);
        if (!this.Contains(joinPoint))
            this.Add(joinPoint);
    }
}
```

With the AOP class we can now write constructs like:

```csharp
AOP.Registry.Join(pointcut, concernMethod);
```

We've got an obvious and serious problem now to cope with. If a developer writes code like...

```csharp
var customer = new Customer { Name = "test" };
customer.DoYourOwnBusiness();
```

...there is just no reason why our registry would be consulted, so there is no way that our LoggingConcern.DoSomething() method is executed... Our problem is that .Net does not provide us with a simple way to intercept such calls out of the box. As there is no native way, some workaround must be implemented, and the capabilities of your workaround are going to drive the capabilities of your AOP implementation. The goal of this article is not to discuss all possible interception techniques, but take note that the interception model is the key differentiator between all AOP implementations. The SharpCrafters website (owners of PostSharp) provides some clear information on the 2 major techniques.

There is not much of a secret: if you want to intercept all calls made to a class, you have 3 choices - rewrite the source code before it is compiled, rewrite the IL after compilation (weaving), or slip a proxy between the caller and the target at runtime.

For advanced guys: I voluntarily don't mention the Debugger API and Profiler API possibilities, which are not viable for production scenarii. For the very advanced ones: a hybrid of solutions 1 and 2 using the Roslyn API should be feasible and, as far as I know, it is still to be invented. A bon entendeur (a word to the wise)...

Unless you need to provide pointcuts at any single line of code, it seems that the first 2 solutions are a bit of over-engineering. We'll go for the 3rd solution. Take note that the usage of the proxying technique comes with good news and bad news.

The bad news is that your target object must be swapped at runtime with a proxy object instance, implying that if you want to intercept things such as constructors, you'll have to delegate construction of your target class instances to a factory (that's a cross-cutting concern that this implementation won't solve). If you already have an instance of the target class, then you will have to explicitly ask for the swap to happen. For the IOC and Dependency Injection ninjas, the delegation of object creation will be less than an issue. For others, it means they'll have to use a factory if they want to use our interception technique to its full extent. But don't worry, we are going to implement that factory.

The good news is that we have nothing to do to implement a proxy. The class System.Runtime.Remoting.Proxies.RealProxy will build it for us in a highly optimized way. In my opinion, the class name does not reflect its use: this class is not a Proxy, it is an Interceptor. But anyway, that class will provide us with a Proxy by calling its method GetTransparentProxy(), and that's the only thing we need.
GetTransparentProxy() So the skeleton for our interceptor is : public class Interceptor : RealProxy, IRemotingTypeInfo { object theTarget { get; set; } public Interceptor(object target) : base(typeof(MarshalByRefObject)) { theTarget = target; } public override System.Runtime.Remoting.Messaging.IMessage Invoke(System.Runtime.Remoting.Messaging.IMessage msg) { IMethodCallMessage methodMessage = (IMethodCallMessage) msg; MethodBase method = methodMessage.MethodBase; object[] arguments = methodMessage.Args; object returnValue = null; // TODO: // here goes the implementation details for method swapping in case the AOP.Registry // has an existing joinpoint for the MethodBase which is hold in the "method" variable... // if the Registry has no joinpoint then simply search for the corresponding method // on the "theTarget" object and simply invoke it... ;-) return new ReturnMessage(returnValue, methodMessage.Args, methodMessage.ArgCount, methodMessage.LogicalCallContext, methodMessage); } #region IRemotingTypeInfo public string TypeName { get; set; } public bool CanCastTo(Type fromType, object o) { return true; } #endregion } Some explanations are required here as we are now touching the heart of the implementation.... The RealProxy class exists to serve the purpose of intercepting calls from remote objects and marshal a targeted object. By remote here you must understand really remote like : objects living in another application, another AppDomain, another server, etc...). I am not going to go too much in details but there were 2 ways to marshal objects in the .net Remoting infrastructure : by reference or by value. Basically it means you can only marshal remote objects if they are inheriting MarshalByRef or if they implement ISerializable. Our plan is not to use the remoting capabilities at all but we still need to let the RealProxy class think our target is acceptable for remoting. That's why we pass typeof(MarshalByRef) to the RealProxy base constructor. RealProxy MarshalByRef ISerializable typeof(MarshalByRef) The RealProxy class is receiving all calls made on the transparent proxy via the System.Runtime.Remoting.Messaging.IMessage Invoke(System.Runtime.Remoting.Messaging.IMessage msg) method. That's where we will implement the details about method swapping. Read the comments in the code above. System.Runtime.Remoting.Messaging.IMessage Invoke(System.Runtime.Remoting.Messaging.IMessage msg) About the implementation of IRemotingTypeInfo: In a true remoting environment the client side would request an object to the server. The client application runtime might not know anything about the type of the marshalled remote object. So when the client app makes a call to the method public object GetTransparentProxy() the runtime must decide if the returned object (the transparent proxy) is boxable to the client application expected contract. By implementing IRemotingTypeInfo you give a hint to the client runtime telling if casting to a specified type is allowed or not. And guess what the trick is there, in front of your astonished gaze, right here... IRemotingTypeInfo public object GetTransparentProxy() public bool CanCastTo(Type fromType, object o) { return true; } All our AOP implementation is only possible due to the possibility offered by remoting to write these 2 words: return true; Passed that point we can cast the object returned by GetTransparentProxy() to whatever interface without any runtime check!!!. return true; The runtime just purely and simply gave us a "yes card" to play with! 
We might want to revisit that code to return something more appropriate than true for any type... But we could also imagine making use of this behaviour to provide a Missing Method implementation or a catch-all interface... There is a lot of room for your creativity to express itself here!

At that point we have a decent interception mechanism for our target instance. We are still missing interception of constructors and creation of the transparent proxy. That's a job for a factory... Not much to say about that one. Here is the skeleton of the class:

```csharp
public static class Factory
{
    public static object Create<T>(params object[] constructorArgs)
    {
        T target;

        // TODO:
        // Based on typeof(T) and the list of constructorArgs (count and their Type)
        // we can ask our Registry if it has a constructor method joinpoint and invoke it.
        // If the Registry has no constructor joinpoint then simply search for the
        // corresponding one on typeof(T) and invoke it...
        // Assign the result of construction to the "target" variable
        // and pass it to the GetProxy method.

        return GetProxyFor<T>(target);
    }

    public static object GetProxyFor<T>(object target = null)
    {
        // Here we are asked to intercept calls on an existing object instance
        // (maybe we constructed it, but not necessarily).
        // Simply create the interceptor and return the transparent proxy.
        return new Interceptor(target).GetTransparentProxy();
    }
}
```

Note that the Factory class always returns an object of type object. We can't return an object of type T because the transparent proxy is simply not of type T; it is of type System.Runtime.Remoting.Proxies.__TransparentProxy. But, remember the "Yes card": we can cast the returned object to whatever interface without any runtime checking! We will nest the Factory class in the AOP class, hoping to give a neat programming experience to our consumers. But you'll see that in the Usage section below.

If you have read the whole article up to that point, I must recognize you are almost a hero! Bravissimo! Kudos! For the sake of brevity and clarity of this article (damned... why are you smiling?), I am not going to discuss the boring implementation details of method retrieval and switching. There is actually not much fun in it. But if you are interested in that piece, then you can download the code and browse it: it is fully functional! The classes and method signatures might be a bit different, as I am coding while writing, but no major change is to be expected.

Warning: Before deciding to use this code in your project, please read carefully the paenultimus section. And if you don't know the word paenultimus, then I guess you have to click the link first!

I have been writing a lot but did not give you, yet, a proper hint on how we can actually use all of this. And finally here we are: the moment of truth! The attached zip file includes a project with 5 examples for the sake of demonstration, showing several ways of injecting aspects. Now I am going to show two out of these five: the most and the least obvious.

First we need a domain model... Nothing fancy:

```csharp
public interface IActor
{
    string Name { get; set; }
    void Act();
}

public class Actor : IActor
{
    public string Name { get; set; }

    public void Act()
    {
        Console.WriteLine("My name is '{0}'. I am such a good actor!", Name);
    }
}
```

Then we need a concern:

```csharp
public class TheConcern : IConcern<Actor>
{
    public Actor This { get; set; }

    public string Name
    {
        set { This.Name = value + ". Hi, " + value + " you've been hacked"; }
    }

    public void Act()
    {
        This.Act();
        Console.WriteLine("You think so...!");
    }
}
```
Hi, " + value + " you've been hacked"; } } public void Act() { This.Act(); Console.WriteLine("You think so...!"); } } At application initialization we tell the Registry about our joinpoints // Weave the Name property setter AOP.Registry.Join ( typeof(Actor).GetProperty("Name").GetSetMethod(), typeof(TheConcern).GetProperty("Name").GetSetMethod() ); // Weave the Act method AOP.Registry.Join ( typeof(Actor).GetMethod("Act"), typeof(TheConcern).GetMethod("Act") ); And finally we create an object via the Factory var actor1 = (IActor) AOP.Factory.Create<Actor>(); actor1.Name = "the Dude"; actor1.Act(); Note that we requested the creation of an Actor class but we can cast the result to an interface so let's use IActor as the class is implementing it. Actor IActor If you run that in a Console application the output will be the following: My name is 'the Dude. Hi, the Dude you've been hacked'. I am such a good actor! You think so...! Here we have 2 slight issues: File That's where we benefit from the "Yes card"! Remember? There is no runtime type checking between the returned proxy and the interface. Which means we can create any kind of interface... no one as to implement it anyway: neither the target nor the concern. Basically we are only using the interface as a contract... Let's demonstrate that by creating a fake interface to mimic the static File class public interface IFile { string[] ReadAllLines(string path); } Our concern public class TheConcern { public static string[] ReadAllLines(string path) { return File.ReadAllLines(path).Select(x => x + " hacked...").ToArray(); } } The registering of joinpoints AOP.Registry.Join ( typeof(File).GetMethods().Where(x => x.Name == "ReadAllLines" && x.GetParameters().Count() == 1).First(), typeof(TheConcern).GetMethod("ReadAllLines") ); And finally execution of the program var path = Path.Combine(Environment.CurrentDirectory, "Examples", "data.txt"); var file = (IFile) AOP.Factory.Create(typeof(File)); foreach (string s in file.ReadAllLines(path)) Console.WriteLine(s); In this case please note that we can not use the Factory.Create<T> method as static types cannot be used as generic arguments. Factory.Create<T> In no particular order: So far we have been able to achieve the primary goal of AOP : implement an aspect and register it for execution. TinyAOP is born. But your journey in AOP land is not finished yet and you might want to dig further: Conclusion: We have a nice and tiny prototype which demonstrates the technical feasibility of doing AOP purely with managed, non-dynamic, code without weaving, etc... You have learnt AOP : you can start from here and roll your own implementation! No secret that I am french... Nobody's perfect! While writing this article I was googling for a place where the expression "A boire, ou je tue le chien!" would be explained and I found that page called "French expressions you won't learn at school". I am sure you might find some of these expressions pretty funny so I am sharing the link : A website such as Codeproject is only working because some guys are writing and publishing articles. Whatever the reasons why these guys are doing it, they are! And that takes a non-negligible amount of work and time. Please do not neglect that time and work: If you don't like the article please refrain to give your vote of 1 without further explanations... I might have wrote a statement which is wrong or false, my english surely needs rephrasing, maybe you are expecting more or less explanations, I don't know... 
It is as simple as that: I don't know if you don't tell! Your justified bad ratings are welcome; I am not gonna be hurt (ok, maybe a bit) and it will allow me to revise my judgment, make any necessary adjustments or corrections to the article, and also improve myself for future ones. Now if you liked the article, or you are using the code, or if you have learned something today, then let me know as well: leave a comment, give me your vote (of 5), drop me an email, connect on LinkedIn... Whatever form of feedback is much appreciated!!! Thanks for reading!

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
http://www.codeproject.com/Articles/479302/Aspect-Oriented-Programming-learn-step-by-step-and?msg=4446483
CC-MAIN-2015-18
refinedweb
4,456
55.54
ANTLR rules

Previously I showed how to use ANTLR to build an AST from a molecular formula, then evaluate that AST to calculate the molecular weight. For complex grammars it's often useful to work with and transform parse trees, which I'll probably talk about when I get into developing a SMARTS grammar. For doing molecular weight calculations though, there's no reason to generate an intermediate AST. I can calculate the weight during the parsing by using action rules. Here's an example of using actions in lexer and parser rules to print something out.

```
grammar MolecularFormulaWithPrint;

options {
    language=Python;
}

parse_formula : species* EOF;

species : ATOM DIGITS?
    { print "Species defined", $ATOM.text,
      # // My first use of Python's new (in 2.5) ternary operator
      print $DIGITS.text if $DIGITS else "default=1"
    }
    ;

ATOM : 'H' { print "H = 1.00794" }
     | 'C' { print "C = 12.001" }
     // Added 'Cl' to see how that interacts with 'C'
     | 'Cl' { print "Cl = 35.453" }
     | 'O' { print "O = 15.999" }
     | 'S' { print "S = 32.06" }
     ;

// I need a local variable name so the rule can refer to the match
DIGITS : count='0' .. '9'+ {print " repeat", $count};
```

I generated the lexer and the grammar as normal:

```
java -cp /Users/dalke/Downloads/ANTLRWorks.app/Contents/Resources/Java/antlrworks.jar \
    org.antlr.Tool MolecularFormulaWithPrint.g
```

Some notes about this grammar. ANTLR does some parsing of the code inside of an action block, so while you can use '#' for a Python comment, it interpreted the apostrophe in "Python's" as the start of a string. To work around that I added the leading '//' so ANTLR really thought it was a comment.

I added "Cl" as a possible atom type (it wasn't in the previous code) because I wanted to see how the lexer handles terms with a common prefix. You can see how in the syntax diagram and in the generated lexer:

```python
LA1 = self.input.LA(1)
if LA1 == u'H':
    alt1 = 1
elif LA1 == u'C':
    LA1_2 = self.input.LA(2)
    if (LA1_2 == u'l') :
        alt1 = 3
    else:
        alt1 = 2
elif LA1 == u'O':
    alt1 = 4
elif LA1 == u'S':
    alt1 = 5
else:
    nvae = NoViableAltException("16:1: ATOM : ( 'H' | 'C' | 'Cl' | 'O' | 'S' );", 1, 0, self.input)
```

Man! That's going to be some slow code when I get around to doing timings.

I'm also showing off the new ternary operator in Python 2.5. For the record, I'm against it, but because it's present I need to learn when it's appropriate to use, and I think this is one such case.

```
print $DIGITS.text if $DIGITS else "default=1" }
```

is the same as

```python
if $DIGITS:
    print $DIGITS.text
else:
    print "default=1"
```

The DIGITS term is optional, and if it's not present then that associated variable in Python is None. What this test does is print the count number if it's present, otherwise print "default=1", because 1 is the default count if not explicitly given.

Continuing on to using the new grammar, my driver code is pretty simple, because I'm not really doing anything except setup and requesting the parse:

```python
import sys
import antlr3
from MolecularFormulaWithPrintParser import MolecularFormulaWithPrintParser
from MolecularFormulaWithPrintLexer import MolecularFormulaWithPrintLexer

formula = "CH3COOH"
if len(sys.argv) > 1:
    formula = sys.argv[1]

char_stream = antlr3.ANTLRStringStream(formula)
lexer = MolecularFormulaWithPrintLexer(char_stream)
tokens = antlr3.CommonTokenStream(lexer)
parser = MolecularFormulaWithPrintParser(tokens)
parser.parse_formula()
```

which with the formula "H2SO4" gives:
```
H = 1.00794
 repeat 3
S = 32.06
O = 15.999
 repeat 4
Species defined H 3
Species defined S default=1
Species defined O 4
```

You can see that the lexer actions are executed, at least for this case, before the parser actions.

Parser rules can return something

A lexer rule always returns a Token. A parser rule by default returns a Tree but I can have it return something else. In this I want the atom parser to return the molecular weight rather than the atomic symbol. (I don't need to do that. I could use a table lookup on the symbol to get the molecular weight. But the parser already knows which atom it parsed so it feels needless to do that lookup again. As a consequence, the parser loses track of the token location, but there are ways to handle that if needed.)

I need to turn the "ATOM" lexer rule into an "atom" parser rule. In ANTLR, lexer rules are in uppercase and parser rules are lower case, so the conversion is pretty easy in this case - change the case of the name. It works here because the pattern in the rule is a string. In general that doesn't work. For example, I changed DIGITS to a parser rule and got these warning messages:

```
warning(200): MWGrammar.g:10:9: Decision can match input such as "'C'" using multiple alternatives: 1, 2
As a result, alternative(s) 2 were disabled for that input
```

I don't know what that means, but I decided not to worry much about it. My general rule will be to keep things in the lexer, because I understand lexers a lot better than grammars. With the change in place, the grammar is

```
grammar MWGrammar;

options {
    language=Python;
}

parse_formula : species* EOF;

species : atom DIGITS?
    { print "Species defined", $atom.weight,
      print $DIGITS.text if $DIGITS else "default=1"
    }
    ;

atom returns [float weight]
    : 'H' { $weight = 1.00794 }
    | 'C' { $weight = 12.001 }
    | 'Cl' { $weight = 35.453 }
    | 'O' { $weight = 15.999 }
    | 'S' { $weight = 32.06 }
    ;

DIGITS : count='0' .. '9'+ ;
```

I declared that the 'atom' rule sets a 'weight'. The 'float' is needed because ANTLR supports languages like Java and C++ which need to know the data type of the value returned. The 'weight' is how other rules, like 'species', can get the new value, in this case via $atom.weight. In general an ANTLR rule can declare that it returns multiple values.

Using return values from a parser rule

Computing the total molecular weight for a species is very simple. The only difference in the following is the 'species' rule:

```
grammar MWGrammar;

options {
    language=Python;
}

parse_formula : species* EOF;

species : atom DIGITS?
    { count = int($DIGITS.text) if $DIGITS else 1
      species_weight = $atom.weight * count
      print "Species weight", species_weight
    }
    ;

atom returns [float weight]
    : 'H' { $weight = 1.00794 }
    | 'C' { $weight = 12.001 }
    | 'Cl' { $weight = 35.453 }
    | 'O' { $weight = 15.999 }
    | 'S' { $weight = 32.06 }
    ;

DIGITS : count='0' .. '9'+ ;
```

Using an @init action

Next I'll make "species" return a value, a float named "species_weight". But how do I access it inside of parse_formula? The definition is

```
parse_formula : species* EOF;
```

so how do I get a rule executed once for every time it matches? The answer is very elegant. I can have rules attached to part of the expression like this:

```
parse_formula : (species { print "species", $species.species_weight})* EOF;
```

which will execute the action for each 'species' that matches. That action is included in the "*" so the match and action are done 0 or more times.
The new grammar is:

```
grammar MWGrammar;

options {
    language=Python;
}

parse_formula : (species { print "species", $species.species_weight})* EOF;

species returns [float species_weight]
    : atom DIGITS?
      { count = int($DIGITS.text) if $DIGITS else 1
        $species_weight = $atom.weight * count
      }
    ;

atom returns [float weight]
    : 'H' { $weight = 1.00794 }
    | 'C' { $weight = 12.001 }
    | 'Cl' { $weight = 35.453 }
    | 'O' { $weight = 15.999 }
    | 'S' { $weight = 32.06 }
    ;

DIGITS : count='0' .. '9'+ ;
```

The last step is to sum each of the species weights into a total molecular weight and return that sum. I'm going to rename "parse_formula" into "calculate_mw" and have it return a "mw", so the rule becomes

```
calculate_mw returns [float mw]
    : (species { $mw += $species.species_weight})* EOF
    ;
```

Don't forget to change the driver code! My new driver ends:

```python
...
tokens = antlr3.CommonTokenStream(lexer)
parser = MWGrammarParser(tokens)
print "MW is", parser.calculate_mw()
```

Okay, does it work? Err, ummm, no.

```
Traceback (most recent call last):
  File "compute_mw2.py", line 14, in <module>
    print "MW is", parser.calculate_mw()
  File "/Users/dalke/src/dayparsers/MWGrammarParser.py", line 65, in calculate_mw
    mw += species1
TypeError: unsupported operand type(s) for +=: 'NoneType' and 'float'
```

Taking a look at MWGrammarParser:

```python
def calculate_mw(self, ):
    mw = None

    species1 = None
```

Ahh, the default value of 'mw' is None, and I want it to be 0.0. I want to set the value before any of the other actions run, which I can do with an "@init" action. That's a special directive to ANTLR. (There's also '@after' for adding code after all of the rule code.) With the @init in place, here's the code:

```
calculate_mw returns [float mw]
@init {
    $mw = 0.0
}
    : (species { $mw += $species.species_weight})* EOF
    ;
```

and the driver code, which includes some self-tests. (I didn't quite feel like making it work under unittest or py.test or similar code.)

```python
import sys
import antlr3
from MWGrammarParser import MWGrammarParser
from MWGrammarLexer import MWGrammarLexer

formula = "H2SO4"
if len(sys.argv) > 1:
    formula = sys.argv[1]

def calculate_mw(formula):
    char_stream = antlr3.ANTLRStringStream(formula)
    lexer = MWGrammarLexer(char_stream)
    tokens = antlr3.CommonTokenStream(lexer)
    parser = MWGrammarParser(tokens)
    return parser.calculate_mw()

print "MW is", calculate_mw(formula)

print "Running self-tests"

# Run random tests to validate the parser and results
_mw_table = {
    'H': 1.00794,
    'C': 12.001,
    'Cl': 35.453,
    'O': 15.999,
    'S': 32.06,
}

# Generate a random molecular formula and calculate
# its molecular weight.  yield the weight and formula
def _generate_random_formulas():
    import random
    # Using semi-random values so I can check a wide space
    # Possible number of terms in the formula
    _possible_lengths = (1, 2, 3, 4, 5, 10, 53, 104)
    # Possible repeat count for each formula
    _possible_counts = tuple(range(12)) + (88, 91, 106, 107, 200, 1234)
    # The available element names
    _element_names = _mw_table.keys()
    for i in range(1000):
        terms = []
        total_mw = 0.0
        # Use a variety of lengths
        for j in range(random.choice(_possible_lengths)):
            symbol = random.choice(_element_names)
            terms.append(symbol)
            count = random.choice(_possible_counts)
            if count == 1 and random.randint(0, 2) == 1:
                pass
            else:
                terms.append(str(count))
            total_mw += _mw_table[symbol] * count
        yield total_mw, "".join(terms)

_selected_formulas = [
    (0.0, ""),
    (1.00794, "H"),
    (1.00794, "H1"),
    (32.06, "S"),
    (12.001+1.00794*4, "CH4"),
]

for expected_mw, formula in (_selected_formulas +
                             list(_generate_random_formulas())):
    got_mw = calculate_mw(formula)
    if expected_mw != got_mw:
        raise AssertionError("%r expected %r got %r" %
                             (formula, expected_mw, got_mw))
```

```
% python calculate_mw.py H2O
MW is 18.01488
Running self-tests
```

Whaddya know, it works!

Comments?
Need contract programming, help, or training? Contact me
http://www.dalkescientific.com/writings/diary/archive/2007/11/01/antlr_rules.html
CC-MAIN-2018-34
refinedweb
1,684
59.7
I learned about a new FOSS project a few weeks ago: apio. Its stated mission is:

Experimental open source micro-ecosystem for open FPGAs. Based on platformio. Apio is a multiplatform toolbox, with static pre-built packages, project configuration tools and easy commands to verify, synthesize, simulate and upload your verilog designs.

That sounded pretty good to me. It was easy to install on my RPi3:

pip install apio

That just installs the apio "shell". In order to get all the synthesis, simulation, and board tools, I used the command:

apio install --all

That automatically downloads and installs some system utilities, the Icestorm FPGA toolchain, the iverilog simulator, and some design examples for several types of iCE40 FPGA boards. Once that was completed, I could check to see what boards were supported. What!?!? No CAT Board in the list!?! I needed to fix that situation.

I poked around in the apio Python code and found a likely file at apio/resources/boards.json. Inside that file, I pretty much copied the existing entry for the icoboard with a few modifications:

"Cat-board": {
    "fpga": "iCE40-HX8K-CT256",
    "prog": "litterbox",
    "check": {
        "arch": "linux_armv7l"
    }
}

The only thing I changed (besides the board identifier) was the "prog" entry to list the litterbox utility that uploads bitstreams from the RPi3 to the FPGA on the CAT Board. I also had to modify the following section of the SConstruct file in the same directory:

# -- Upload the bitstream into FPGA
upload_cmd = ''
if PROG == 'ftdi':
    upload_cmd = 'iceprog{0} -d i:0x0403:0x6010:{1} $SOURCE'.format(
        EXT, DEVICE)
elif PROG == 'gpio':  # Icoboard + RPI2: sram
    upload_cmd = 'export WIRINGPI_GPIOMEM=1; icoprog -p < $SOURCE'
elif PROG == 'litterbox':  # Cat Board + RPI2,3
    upload_cmd = 'sudo litterbox -c $SOURCE'

This just adds the command for uploading a bitstream using the litterbox utility. Finally, after a little more poking around, I found the apio/managers/scons.py file and added the following code to its upload function:

# -- Litterbox
elif programmer == 'litterbox':  # Cat Board + RPI2,3
    # Device argument is ignored
    if device and device != -1:
        click.secho(
            'Info: ignore device argument {0}'.format(device),
            fg='yellow')
    # Check architecture
    arch = self.resources.boards[board]['check']['arch']
    current_arch = util.get_systype()
    if arch != current_arch:
        # Incorrect architecture
        click.secho(
            'Error: incorrect architecture: RPI2 or RPI3 required',
            fg='red')
        return 1

This does a sanity check to make sure the upload process for the CAT Board is occurring on an RPi2 or RPi3. Now when I regenerate the board list, I can see the CAT Board:

Cat-board iCE40-HX8K-CT256 hx 8k ct256

Once my modifications were complete, I went into the directory for the LED blinker and used the following command to indicate that this design was intended for the CAT Board:

apio init --board Cat-board

This creates an apio.ini file with a single line of text indicating the board.

I've been away from this project for a few months (OK, four months) building things like a new tool for designing electronics. One of the things I haven't discussed here is the time it takes to download a bitstream to the FPGA on the CAT Board. As shown in previous logs, the FPGA is configured through one of the hardware SPI ports of the RPi. I've never considered SPI a very fast way of transferring data, so I initially set the port bit rate at 1 Mbps.
That was good enough to get the FPGA going within a couple of seconds and there was no reason to push it and possibly cause errors while I debugged the board. But once the board was working reliably, I revisited the SPI bit-rate setting. I figured there was no harm in upping it to 5 Mbps just to see what happens. I went into the litterbox.py script and changed it to:

self.spi.speed = 5000000

Then I ran the command to load the FPGA with the bitstream for the LED blinker:

sudo litterbox -c blinky.bin

The download to the FPGA completed more quickly than before and the LED started blinking. Success! Then I started pushing for more: 10 Mbps, 20 Mbps, 50 Mbps, no problem; 100 Mbps, 150 Mbps, still five-by-five; 200 Mbps, complete and utter failure. OK, I hadn't expected to get even close to 200 Mbps. With a little trial and error, I finally found the maximum speed I could use was 199,999,999 bps. The reason for that becomes clear later.

Now, was I actually transferring bits at 200 Mbps, or was the software making a promise that the hardware couldn't keep? To test that, I wrote some code to time the transmission of a 10 MByte payload and compute the effective bit-rate while I also observed the maximum SPI clock frequency and duty cycle with an oscilloscope.

As that comparison showed, the actual transmission speeds are quite a bit lower than the speed setting. The reason for that is the overhead in the python-spi module that copies and converts the individual 4096-byte packets of the payload before sending them to the SPI driver. Even though each packet gets transmitted at a high clock speed, there's a significant "dead time" (2.3 ms) while the software readies the next packet. As the raw speed increases, the packet transmission time decreases and the dead time (which stays constant) consumes a larger percentage of the time to send the full payload. That's why the duty cycle decreases as the speed setting increases.

To decrease the overhead, I modified the python-spi code in two places. After these two changes, setting spi.speed to 100 Mbps resulted in an actual transmission speed of 65 Mbps (an increase of 540%). There's no reason to set the spi.speed to a value greater than 100 Mbps. The measurements indicate the RPi is generating the SPI clock by dividing a master 200 MHz clock by an integer. Any setting between 100 and 199 Mbps will result in an SPI clock of 100 MHz, and going to 200 Mbps has already proven too fast for sending an FPGA configuration bitstream. (The iCE40HX datasheet also shows the SPI clock in slave mode should not exceed 25 MHz, so getting to 100 MHz is really pushing it already.)

A transfer rate of 65 Mbps opens up some interesting possibilities. That means there is an 8 MByte/second channel between the CAT Board FPGA and the RPi that uses only a few pins of the GPIO connector. I have some Xilinx-centric VHDL modules and a Python library that provide a printf-like debug interface for FPGA designs through the JTAG port. I can modify these to use the SPI port so the CAT Board + RPi will have the same capabilities. I'll be working on that next. I think. Maybe.

In my previous post, I showed how to blink an LED on the CAT Board using Verilog. Now I'll do the same thing using MyHDL, a hardware description language based on Python. I'll assume a starting point of a Raspberry Pi running the Raspbian OS with the yosys, arachne-pnr and icestorm FPGA tools installed. If you're following along and haven't already got that, read the previous post. The stuff I describe here won't work without it.
Before you can use MyHDL, you have to install it. Installing the latest release is as simple as:

sudo pip install myhdl

But I like to use the development version because that's where all the new features are. That's installed like this:

cd /opt
sudo git clone
cd myhdl
sudo python setup.py install

Blinking an LED with Verilog wasn't hard, and doing it in MyHDL isn't either. Here's the source code that's stored in a Python file called blinky.py:

from myhdl import *

# Define the Blinky module.
@block
def blinky(clk_i, led_o):

    cnt = Signal(intbv(0, 0, 50000000))  # Counter from 0 to 49999999.
    tgl = Signal(bool(0))                # Toggle flag drives the LED.

    # Sequential block triggered on every rising edge of the clock.
    @always_seq(clk_i.posedge, reset=None)
    def toggle_led():
        if cnt == cnt.max-1:   # When the counter reaches its max value...
            tgl.next = ~tgl    # Toggle the flag...
            led_o.next = tgl   # Output the flag to the LED...
            cnt.next = 0       # Reset the counter.
        else:
            # Counter hasn't reached max so just keep incrementing.
            cnt.next = cnt + 1

    # Return a reference to the Blinky logic.
    return toggle_led

# Define the connections to Blinky.
clk_i = Signal(bool(0))
led_o = Signal(bool(0))

# Create an instantiation of Blinky.
top = blinky(clk_i, led_o)

# Output Verilog code for Blinky.
top.convert(hdl='Verilog')

The blinky.py file containing the MyHDL code shown above is executed as follows:

python blinky.py

This translates the MyHDL code into Verilog that's stored in the blinky.v file. Then this Verilog file can be compiled into a bitstream using the yosys, arachne-pnr and icepack tools just like in the previous post. Once the bitstream is downloaded to the CAT Board, the LED will blink.

Why bother doing this in MyHDL? It just adds another step, but does it add any value? In this case, no, it doesn't. Because this design is so simple, the MyHDL code isn't any more compact or expressive than the original Verilog. The value of MyHDL is experienced when doing design exploration, i.e. trying various approaches to solving a problem. Then all the features of the Python ecosystem can be used with MyHDL to come up with a solution. I showed a few examples of this here and here.

A fundamental rite of passage is getting your embedded system to blink an LED. In this post, I'll show you how to take a Raspberry Pi 3 fresh out of the box and get an LED blinking on the CAT Board. (If you're already experienced with the RPi, then a lot of this post will be redundant for you. However, I've found it useful to document processes so I can repeat them reliably at a later date when the details have grown fuzzy.)

The first thing to do is install an operating system (OS) on the RPi. I'm going to use the RPi by connecting to it over a network SSH connection from my PC:

ip=169.254.68.2

You can do everything over the wired Ethernet, but I prefer the convenience of using my local wireless network:

network={
    ssid="XESS"
    psk="whatever_my_wireless_password_is"
}

Sharing files between my PC and the RPi lets me use a familiar editor to write source that can then be dropped onto the RPi.
$ sudo apt-get update
$ sudo apt-get install samba
$ sudo apt-get install samba-common-bin

[pi]
path=/home/pi
writeable = yes
browseable = yes
only guest = no
create mask = 0777
directory mask = 0777
public = yes

Also, change the name of the workgroup to whatever is being used for the Windows PC:

workgroup = XESS

sudo service smbd start

Before the RPi can be used to program the CAT Board FPGA, a few of its configuration settings need to be adjusted:

sudo raspi-config
sudo raspi-config --expand-rootfs

There are five software...

One of the reasons for respinning the CAT Board PCB was to get the SPI flash chip connected correctly to the Lattice FPGA. The flash+FPGA+Raspberry Pi interconnection is complicated because it has to operate in three different modes. All three modes have to share a single SPI bus between all the devices while using a minimum of RPi GPIO signals and extra circuitry.

Mode 3 is the easiest: when the FPGA powers up (or is reset), it checks to see if the SPI CS line is pulled high and, if so, becomes an SPI master and reads its configuration bitstream from the flash. The RPi GPIO signals are hidden behind the series resistors and can't interfere.

Mode 2 is slightly more difficult. When the RPi is storing a bitstream by sending it to the flash chip's SI input, the SPI CS line is pulled low. But that also enables the SPI interface of the FPGA, which could lead to interference if the FPGA's SDO output becomes active. To prevent this, the RPi asserts the reset pin of the FPGA so its SPI port can't turn on. Problem solved.

Mode 1 is the most difficult. The RPi pulls the SPI CS line low as it removes the reset from the FPGA. This places the FPGA in slave mode so the RPi can send a bitstream to the FPGA's SPI port. But this also enables the flash chip's SPI port, which means the flash's SO output could interfere with the bitstream data. Unfortunately, there's no reset pin on the flash to keep it quiet. However, the flash does have a deep power-down mode that is entered by sending a specific command to the flash. Once in this state, it will not respond to anything until it receives another specific command to wake up. The RPi can then transfer the bitstream to the FPGA. During the transfer, there's no chance the wake-up command will be sent accidentally to the flash's SI input because 1) neither the RPi nor the FPGA will be driving that signal line during the configuration process, and 2) the flash only executes a command once its CS input goes high, but the SPI CS line is held low for the entire duration of the bitstream transfer. So the flash will stay quiet and the RPi can send the bitstream to the FPGA in peace.

Another complication of the shared SPI bus is that the roles of the RPi's MOSI and MISO pins are reversed in modes 1 and 2. In mode 1, the RPi's MOSI output pin drives the SDI input of the FPGA and the FPGA's SDO output drives the RPi's MISO input. That allows the hardware SPI port of the RPi to be used for the SPI transactions. But in mode 2, the RPi's MOSI pin acts as an input to receive data from the flash's SO output and the RPi's MISO pin has to drive the SI input pin of the flash. That precludes the use of the RPi's SPI hardware, and the SPI transfers to/from the flash have to be done using bit banging. I searched for a ready-made SPI bit-banger program but nothing great popped up. So I just wrote one in Python using the RPi.GPIO library.
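What follows is a hedged, simplified sketch of that kind of bit-banger, not the author's actual code. The pin numbers you would pass in are placeholders that depend on the board wiring, and it only covers SPI mode 0:

import RPi.GPIO as GPIO

class BitBangSPI:
    """Minimal mode-0 SPI master on arbitrary GPIO pins (BCM numbering)."""
    def __init__(self, sck, mosi, miso, cs):
        self.sck, self.mosi, self.miso, self.cs = sck, mosi, miso, cs
        GPIO.setmode(GPIO.BCM)
        GPIO.setup([sck, mosi, cs], GPIO.OUT)
        GPIO.setup(miso, GPIO.IN)
        GPIO.output(cs, 1)   # chip select idles high
        GPIO.output(sck, 0)  # clock idles low in mode 0

    def transfer(self, tx_bytes):
        rx = bytearray()
        GPIO.output(self.cs, 0)             # assert chip select
        for byte in tx_bytes:
            r = 0
            for bit in range(7, -1, -1):    # MSB first
                GPIO.output(self.mosi, (byte >> bit) & 1)
                GPIO.output(self.sck, 1)    # clock the bit out/in
                r = (r << 1) | GPIO.input(self.miso)
                GPIO.output(self.sck, 0)
            rx.append(r)
        GPIO.output(self.cs, 1)             # release chip select
        return bytes(rx)

A flash read-ID command would then be a single transfer() call whose payload starts with the command byte, with the ID bytes coming back in the tail of the returned buffer.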
The program I wrote consists of a class for handling individual pin I/O, another class for SPI transactions, and a final class that handles most of the commands for the serial flash chip. To test the code, I first tried the command to read the device ID from the flash. The manufacturer and device IDs should have been 0x1F and 0x8401, respectively. Instead, I got 0xFF and 0xFFFF. That's OK; nothing ever works the first time. I probed with an oscilloscope to make sure the RPi was driving the correct pins of the flash. No problem there, except that the SO pin was always high (naturally). Next, I pulled out my old HP LogicDart and sampled the waveforms on the CS, SCK and SI pins. Once again, no problems...

No, I really don't have any plans for selling this. Lattice sells $22 iCE40 boards and Olimex has one for $25. Even though these use a small HX1K FPGA, they kind of set the price point of what people are expecting. Earlier this year, I calculated the part cost of the CAT Board with an HX8K at $17. Even if I could sell it at $50, I'd make almost nothing and still not many people would buy it. It's open source, though, so feel free to build your own.

Hi Dave, very cool project. I have actually been working on a similar project off and on for a while with a Xilinx FPGA. Glad to see that I am not the only one who had the idea of pairing the pi with an FPGA. I haven't posted anything yet but I might in the future. Just a quick note based on an earlier comment where you mention the SPI as being slow. I'm using this library () to control the SPI interface. My very rough calculations put the transfer rate at about 3 Mbit/s when I'm loading my bit file, and it can probably be pushed a bit faster. I can share some code if you're interested.

Hi, Mike. Thanks for the reply. I actually sell the StickIt!-MB () that mates one of my XuLA Xilinx FPGA boards to an RPi. The only problem with that is I can't run the Xilinx programming tools on the RPi. Have you been able to do that? In regards to the slow SPI, that's a problem only when I program the flash by bit-banging to arbitrary GPIO pins using Python. When I use the actual hardware SPI of the BCM2835 to program the FPGA directly, it's incredibly fast. I looked at the library you mentioned and it appears to only provide an interface to the hardware SPI port. Is the library also able to bit-bang to any set of GPIO pins?

Damn nice project! Funny name, just happen to have finished an etherCAT project :P (8 layer pcb from multi-circuit-boards.eu -> 5 boards 150 euro without stencil).

[this comment has been deleted]

Hi, Brosnan. I used a hot air gun (actually, it's a paint remover) because that's what I have. A few years ago, I made a video about how to solder BGAs. It's not that hard to do.

Hi, Sami. The CAT Board isn't available for sale at this time. I'm still considering whether that would be worthwhile. The main limitation of the CAT Board is the slow interface to the RPi through the GPIO connector.

Hi, I would like to get and test one of your CAT FPGA boards for Raspberry Pi 2 B. How can I order one from you? Have you found any limitations in your CAT board? Best regards, Sami

Do you have an update on whether you will be selling these and at what price point?
https://hackaday.io/project/7982-cat-board
CC-MAIN-2019-09
refinedweb
3,072
71.55
Getting this error when trying to use SDL_mixer... SDL_types.h file not found. I am including all the correct files as far as I can tell and have included the frameworks (both SDL and SDL_mixer) in my /System/Library/Frameworks folder, and it compiles well until I add SDL_mixer.h to my project. What should I do? Here is main.cpp:

#include <SDL/SDL.h>
#include "SDLMain.h"
#include "SDL_mixer.h"

int main(int argc, char ** argv)
{
    SDL_Init(SDL_INIT_EVERYTHING);

    Mix_Chunk *song;  // Mix_LoadWAV returns a Mix_Chunk pointer.

    // Initialize SDL_mixer
    if( Mix_OpenAudio( 22050, MIX_DEFAULT_FORMAT, 2, 4096 ) == -1 )
    {
        return false;
    }

    song = Mix_LoadWAV("song.wav");

    SDL_SetVideoMode(1000, 500, 32, SDL_SWSURFACE);

    SDL_Event event;
    bool done = false;
    while(!done)
    {
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
            {
                done = true;
            }
            if (event.type == SDL_KEYDOWN)
            {
                if (event.key.keysym.sym == SDLK_UP)
                {
                    Mix_PlayChannel( -1, song, 0 );
                }
            }
        }
    }

    SDL_Quit();
    return 0;
}

Thanks
http://www.gamedev.net/topic/638657-sdl-typesh-file-not-found-using-xcode/
CC-MAIN-2016-30
refinedweb
135
72.22
Used to pass options and map offset values during saving.

#include <GA_SaveMap.h>

Used to pass options and map offset values during saving. This class is a container which stores iterators that iterate over the points/primitives/vertices to be saved. It also stores keyword arguments in the form of a UT_Options.
Definition at line 48 of file GA_SaveMap.h.

Generic save map used when saving geometry.

Destructor.

Check the save options. Return defvalue if the token isn't defined.
Check the save options. Return defvalue if the token isn't defined.
Check the save options. Return defvalue if the token isn't defined.

Get the geometry being loaded.
Definition at line 84 of file GA_SaveMap.h.
Definition at line 81 of file GA_SaveMap.h.

Return the detail iterator.
Definition at line 100 of file GA_SaveMap.h.

Return the save index associated with an element.
Definition at line 118 of file GA_SaveMap.h.

The options in the save map are used to control behaviour during saving/loading of the geometry. The user can pass any arbitrary key/value pairs. Common options (used in the baseline GA library) are:
Definition at line 77 of file GA_SaveMap.h.

Takes a point offset and returns its offset (index) in the save file.

Return the iterator for the points to be saved.
Definition at line 87 of file GA_SaveMap.h.

Takes a primitive offset and returns its offset (index) in the save file.

Return the iterator for the primitives to be saved.
Definition at line 97 of file GA_SaveMap.h.

Return an arbitrary iterator based on the owner type.
Definition at line 103 of file GA_SaveMap.h.

Convert the secondary primitives offsets in the given lookup object into indices.

Takes a vertex offset and returns its offset (index) in the save file.

Return the iterator for the vertices to be saved.
Definition at line 90 of file GA_SaveMap.h.

Test if the data associated with the unique key has been saved. The key specified should be something which is guaranteed to be unique for the given data. However, it should be the same key for each shared instance. For example, given a shared pointer to data, you might consider:

"string info:artist" String containing the artist's name.
Definition at line 164 of file GA_SaveMap.h.

"string geo:attributesavemask" Specify the "mask" for attributes which should be saved. This mask is in the form used by UT_String::multiMatch().
Definition at line 262 of file GA_SaveMap.h.

"string info:date" Current date (default: Y-m-d T).
Definition at line 170 of file GA_SaveMap.h.

"string geo:groupsavemask" Specify the "mask" for groups which should be saved. This mask is in the form used by UT_String::multiMatch().
Definition at line 272 of file GA_SaveMap.h.

"string info:hostname" String containing the host name.
Definition at line 167 of file GA_SaveMap.h.

"bool geo:ignoreattribscope" When saving, private attributes are not typically saved. If this option is set, the scope of attributes should be ignored. The GA_OPTION_EXPORT_ON_SAVE option should override this option.
Definition at line 255 of file GA_SaveMap.h.

"bool info:saveattributesummary" Whether to save a summary of attributes in the info block.
Definition at line 193 of file GA_SaveMap.h.

"bool info:savebounds" Whether to compute and save the bounding box in the info block.
Definition at line 178 of file GA_SaveMap.h.

"bool geo:savebreakpointgroups" Whether to save breakpoint groups.
Definition at line 245 of file GA_SaveMap.h.

"bool geo:saveedgegroups" Whether to save edge groups.
Definition at line 238 of file GA_SaveMap.h.

"bool info:savegroupsummary" Whether to save a summary of groups in the info block.
Definition at line 212 of file GA_SaveMap.h.

"bool geo:saveinfo" Whether to save the info block.
Definition at line 160 of file GA_SaveMap.h.

"bool geo:savepointgroups" Whether to save point groups.
Definition at line 217 of file GA_SaveMap.h.

"bool info:saveprimcounts" Whether to compute and save the counts of each primitive type into the info block.
Definition at line 183 of file GA_SaveMap.h.

"bool geo:saveprimitivegroups" Whether to save primitive groups.
Definition at line 224 of file GA_SaveMap.h.

"bool info:saverenderattributeranges" Will save the ranges for velocity attributes ("v" as point/primitive) and the "width" attribute (point, primitive, detail) if they exist. These can be used by rendering procedurals to adjust the bounds to include velocity and width attributes. This is included in the attribute summary, so for this option to work, you need to have info:saveattributesummary enabled.
Definition at line 204 of file GA_SaveMap.h.

"bool geo:savevertexgroups" Whether to save vertex groups.
Definition at line 231 of file GA_SaveMap.h.

"bool info:savevolumesummary" Whether to compute and save the volume info into the info block.
Definition at line 188 of file GA_SaveMap.h.

"string info:software" Software (and version).
Definition at line 173 of file GA_SaveMap.h.

During the loading process, the file version may be set.
Definition at line 80 of file GA_SaveMap.h.

Indicate the given data has been shared.
https://www.sidefx.com/docs/hdk/class_g_a___save_map.html
CC-MAIN-2020-50
refinedweb
831
68.97
Issue with positions when live trading

I use a custom broker for bitfinex. I have an update-positions function. It creates positions on start if there are any, or it fixes a position if it is wrong for some reason. I have a problem during live trading: if a position is already open when I start live trading, then when the strategy closes that position I get a notify_trade call where trade.justopened is set instead of trade.isclosed. Please tell me what I'm doing wrong here?

def positions_update(self):
    positions = self.store.get_positions()
    for position in positions:
        symbol = position['symbol']
        amount = position['amount']
        base = position['base']
        if symbol in self.positions:
            self.positions[symbol].fix(amount, base)
        else:
            self.positions[symbol] = Position(amount, base)

But when the strategy has no open positions on start, and it opens and closes a position without restarts, everything works as expected.

- backtrader administrators last edited by

Your custom broker should retrieve any open position and notify that (as if the position had been just opened) to let the trade accounting know when a position is actually being opened. Simple logic. That mechanism will also include any position which you may have chosen to manually open, so take it into account.

Should it simulate trades? I saw simulated-orders code in the oanda broker, but I don't get how to access data from the broker to create simulated orders. Is it possible to assign the order time to be executed in the past, when the position was opened?

Up. Still need help.

- backtrader administrators last edited by

Sorry, I don't know how I can help. Let me quote from above:

@CooleRnax said in Issue with positions when live trading:

I use custom broker for bitfinex

That's the key. Your custom broker has to let the engine know there are open trades. Because trades start with an opening order, you need to create fake orders that will trigger the start of a trade if you want to restart with an open position.

@backtrader my main problem is that I don't get how to access data inside the broker to create fake orders.

@backtrader I know the dataname of the data that should be linked to the position.

Solved. It actually was not that hard.
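A hedged sketch of the fake-order idea described above — the _make_filled_order helper is hypothetical, not actual backtrader API, and a real broker would have to build a proper backtrader Order object for its data feed:

def positions_update(self):
    for position in self.store.get_positions():
        symbol = position['symbol']
        amount = position['amount']
        base = position['base']
        if symbol in self.positions:
            self.positions[symbol].fix(amount, base)
        else:
            self.positions[symbol] = Position(amount, base)
            # Hypothetical helper: fabricate an already-executed opening
            # order at the position's entry price, then notify it so the
            # trade accounting sees the trade as just opened.
            order = self._make_filled_order(symbol, size=amount, price=base)
            self.notify(order)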
https://community.backtrader.com/topic/2267/issue-with-positions-when-live-trading
CC-MAIN-2022-33
refinedweb
363
59.9
When responding to a HEAD request without streaming the entity and without setting the Content-Length, the Content-Length is incorrectly computed to be zero in org.apache.catalina.connector.OutputBuffer.close(). This is incorrect. The Content-Length header should be unset in this case. RFC 7230 doesn't require the Content-Length to be set on HEAD requests, but if it is set it must be the size of the corresponding GET. So "Content-Length: 0" violates the standard. Computing the real size would be excessively expensive in our use case because this would require transferring data from a backend system.

So a servlet like this will cause Tomcat to return "Content-Length: 0"?

public class TestServlet extends HttpServlet {
    public void doHead(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.getWriter().close();
    }
}

(In reply to Christopher Schultz from comment #1)
> So a servlet like this will cause Tomcat to return "Content-Length: 0"?

Yes, exactly. The same happens without the close() call.

Would you mind testing quickly with 8.0.33?

Hi,

If I have a servlet like this:

public class TestServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter();
    }

    protected void doHead(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter();
    }
}

Both GET and HEAD requests return "Content-Length: 0". So the size of the HEAD corresponds to the size of the GET. Do you observe something else?

Regards,
Violeta

(In reply to Violeta Georgieva from comment #4)
> If I have a servlet like this:

My servlet has a different doGet method: mine returns a response with unknown length. doHead is basically the same - but that results in an incorrect response: the response to the HEAD request includes a generated "Content-Length: 0". AFAIK, there is no reasonable way to make Tomcat not generate this header.

(In reply to Christopher Schultz from comment #3)
> Would you mind testing quickly with 8.0.33?

I've tried with 8.0.33, and the result is the same:

> HEAD /test-backend/raw/foo HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8080
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Content-Length: 0
< Date: Mon, 18 Apr 2016 10:59:20 GMT

You should be able to do the following in your servlet:

protected void doHead(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.setContentLength(-1);
}

(BTW, skipping a setContentLength() call in javax.servlet.http.HttpServlet will have the same effect. The contentLength is a numeric field that always has some value, with -1 being the default. The actual header is generated in o.a.coyote.http11.Http11Processor.prepareResponse().)

That won't work. The OutputBuffer will still set the content length to zero.

resp.flushBuffer(); sort of works but adds the Transfer-Encoding header.

I'm currently experimenting with a unit test to see if I can find a better solution, although using flushBuffer() is likely to be the best cross-container solution.

This has been fixed so Tomcat will not send a Content-Length header for a HEAD request unless the application explicitly specifies one. The fix has been made in:
- 9.0.x for 9.0.0.M5
- 8.5.x for 8.5.1
- 8.0.x for 8.0.34
- 7.0.x for 7.0.70
- 6.0.x for 6.0.46
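A hedged way to check the fixed behavior from outside the container — the path below is just the reporter's example URL, and the expected result is an assumption based on the fix description, not a captured log:

import http.client

# Issue a HEAD request against the servlet described above and inspect
# the Content-Length header (None means the header was not sent).
conn = http.client.HTTPConnection("localhost", 8080)
conn.request("HEAD", "/test-backend/raw/foo")
response = conn.getresponse()
print(response.status, response.getheader("Content-Length"))
conn.close()

On a fixed Tomcat (8.0.34 and later), the header lookup should return None unless the application set a Content-Length explicitly.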
https://bz.apache.org/bugzilla/show_bug.cgi?id=59310
CC-MAIN-2020-45
refinedweb
560
59.7
First Look: Text Analytics with InterSystems Products

This First Look guide introduces you to InterSystems IRIS™'s easy-to-use InterSystems IRIS Text Analytics. This simple procedure walks you through the basic steps of generating NLP metrics.

Preliminaries

You need to have an InterSystems IRIS instance that is up and running and has an active license key. (You can view the license key from the Management Portal: System Administration > Licensing.) This documentation uses the Aviation.Event SQL table, which is available on GitHub. (You do not need to know anything about GitHub or have a GitHub account.) To install these samples, InterSystems recommends that you create a dedicated namespace called (for example) TESTSAMPLES and then load the samples into that namespace (or you can use an existing namespace; however, you cannot use the %SYS namespace). To create a namespace, use the Management Portal options System Administration > Configuration > System Configuration > Namespaces. For the general process of downloading from GitHub, see Downloading Samples for Use with InterSystems IRIS. After you download a sample, be sure to open the README file and follow the setup instructions.

Enable the Namespace

You must enable each namespace that you wish to use for NLP. To enable the TESTSAMPLES namespace for NLP, access the Management Portal from the InterSystems IRIS launcher. Select System Administration > Security > Applications > Web Applications. This displays a list of web applications. Select /csp/testsamples from the list. This displays the Edit Web Application page. In the Enable section of the page, select the Analytics check box. Click the Save button.

Create a Domain

All NLP analysis occurs within a domain. You associate multiple texts with a domain. You then build the domain, creating indices that are used by NLP queries. A domain is created within a namespace. You can create multiple domains within a namespace. You can associate a text with multiple domains. There are several ways to create, populate, and build a domain. The following example uses the Domain Architect, which lists the available analytics namespaces. Select TESTSAMPLES from this list. This displays the NLP Domain Architect option. From the Domain Architect, press the New button to define a domain. You specify the domain values in the specified order.

Add Data Locations

Then build the NLP indices for the data sources by pressing the Build button.

Explore the Data

Select the Tools tab on the right side of the screen.

Add a Blacklist

Often the list of top concepts begins with concepts that are too common to be interesting. You can define a blacklist to prevent the display of these concepts. A blacklist only affects the display of concepts in certain query results; it has no effect on NLP indexing of concepts. In the Domain Architect, click the Open button and select Samples >> then MyTest to open the existing domain Samples.MyTest. Click the Blacklists expansion triangle. This displays the Add blacklist button in the Details tab on the right side of the screen. Click Add blacklist to display the Name and Entries fields. Accept the default name for the blacklist (Blacklist_1). In the Domain Explorer, click the sunglasses icon in the upper right corner. This displays a list of the blacklists defined for this domain that you can apply. Select Blacklist_1. Note that the Top Concepts listing no longer lists the blacklisted concepts.
Learn More About NLP Text Analytics

InterSystems has other resources to help you learn more about NLP Text Analytics, including: InterSystems IRIS Natural Language Processing (NLP) Guide
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_textanalytics
CC-MAIN-2018-43
refinedweb
611
58.79
#include <rtt/NameServer.hpp>

This class allows one to globally set up device driver objects and find them back in the same process by querying this class. Every Orocos device driver type has a Device::nameserver member which allows retrieval of the object:

// At application startup code:
Device* init_device = new Device("device_name");
// ...
// In other places:
Device* myDevice = Device::nameserver.getObject("device_name");

Definition at line 71 of file NameServer.hpp.

Determine if a given name is registered.
Definition at line 105 of file NameServer.hpp.
Referenced by NameServer< RTT::Event< SignatureT > * >::registerObject().

Determine if a given object is registered.
Definition at line 117 of file NameServer.hpp.

Get the object registered for a name.
Definition at line 136 of file NameServer.hpp.

Get the name registered for an object.
Definition at line 153 of file NameServer.hpp.

Register an object with a name. If an object with such a name already exists, the present one will not be overwritten, and this call is a no-op.
Definition at line 172 of file NameServer.hpp.

Remove an object from the nameserver registrations.
Definition at line 190 of file NameServer.hpp.

Remove a name from the nameserver registrations.
Definition at line 222 of file NameServer.hpp.
http://people.mech.kuleuven.be/~orocos/pub/stable/documentation/rtt/v1.6.x/api/html/classRTT_1_1NameServer.html
crawl-003
refinedweb
202
53.58
12-11-2018 03:55 PM

How do I change a user added/invited to a team as "Member" to instead be a "Guest" and vice versa? I'm using the microsoft account [msa, @outlook.com ] free setup and have tried finding the ability to make a team added/invited user permission setting change from "Member" to "Guest" and vice versa in the web app [ ] as well as the desktop app [ version 1.1.00.29068 ]. In both cases I'm not seeing any option to do this: <team> | . . . | manage team | <user> | role column, where all I see is an X to remove them and not a drop down to change the role setting like I see for my msa entry that owns the msft teams tenant setup. I've also tried deleting the user and re-adding/inviting them, but when I do that it only gives me the ability to assign them a permission setting that is the same or higher than the permission setting I initially set them up with, e.g. in the case of Member -- Member or Owner, no option to select Guest to downgrade them.

12-11-2018 04:55 PM - edited 12-11-2018 05:01 PM Solution

Hi Rob, I had to test this tonight in the free version of Teams as I was interested in the outcome. There is no way to do this through the Teams web app or desktop app. I tried several different ways and none worked. Once you have invited the person, Teams sets the user type, so if you remove and re-add them just in the Teams app then, as you say, it re-adds them as that user type. However, if you have invited a user in error where they are supposed to be a guest, or the other way around, then you can do it via the following:

1.) Log into the Azure portal
2.) Select Azure Active Directory
3.) On All Users, select and delete the user

Back in Teams, you now have the option to invite them as a member or a guest again. Please bear in mind that even in Azure AD it is impossible to change the user type from a member to a guest and vice versa - they need removal. In addition, if they have already started collaborating in your Teams environment then you may see some disjointed things (i.e. chat) by removing and re-adding them again. So always stick to the principle that members are co-workers within your Teams organisation and guests are members of other Teams organisations or people outside your organisation. Hope that helps clarify. If I have answered your question, please set a like on the post and then mark it as the solution. Hope to help you again at some point. Best, Chris

12-11-2018 08:52 PM - edited 12-11-2018 08:57 PM

Hi Christopher, Thanks for figuring this out. I do recall an unexpected azure active directory [azuread] entry showing up in my portal views recently and wondering why it was there; this now explains where that came from. Here are the exact steps I followed to carry out what you proposed, and it did result in me being able to re-add users as Guests vs Members this time. Seems like an area of the free offering in desperate need of teams web and/or desktop app UI to cover, especially given that when just starting out using the msa-associated free setup it wasn't readily clear that you don't want to make anyone outside of what you would consider a direct co-worker a member, i.e. default to the guest role if you have any concerns.

1.
or better still [ todo: msft rep update this to point at ] | signin using microsoft account [msa] used for msft teams free setup | profile [ in top right corner ] | switch directories | all directories | select <msa based msft teams free setup created directory name> | favorites [ navigation pane on left ] | azure active directory | users | all users | <select user> + delete user | deleted users | <select deleted user> + delete permanently

2. revisit aka.ms/teams??? -> | teams | <team you want to add user back to but as Guest vs Member> | manage team | add member | <enter user name> | select "add <username> as a guest" option | add | close

12-12-2018 11:55 AM

@adam deltinger . . . a couple of related questions on the azuread b2c tenant setup that the msa-based free teams setup created.

1. is there a way to attach that azuread b2c tenant to one's existing azure subscription so that you can create azure subscription setups that look for the associated azuread tenant to be bound to the subscription?

2. is there a way to change the <generated using spaces removed company name that was provided>.onmicrosoft.com part of the namespace associated with this free-teams-created azuread b2c tenant? Mine ended up with a <spaces removed company name>123.onmicrosoft.com format, presumably to make it unique given existing duplicates, but I'd like the option to define what that first part of the azuread user names I create under that tenant are, vs an auto-generated unique value.
https://techcommunity.microsoft.com/t5/Microsoft-Teams/how-to-change-a-user-added-invited-to-a-team-as-quot-Member-quot/m-p/299320
CC-MAIN-2019-30
refinedweb
937
63.83
.append() does not return anything, so the statement prints None.

I am stuck. And now I am worried programming is not for me, because this lesson is when it gets hard; so far our hand was held, as it said in the text. Can someone help me without providing the solution? In a way that I learn. For now I am stuck. I don't understand this:

For each student item in the class_list, calculate get_average(student) and then call results.append() with that result. Finally, return the result of calling average() with results.

There is no student item in class_list because the example only told me to write the function with it as an argument. This is my code.

def get_class_average(class_list):
    results = []
    for get_average(student) in class_list:
        results.append()
    return average(results)

The class_list is the global object, students, which contains references to three dictionaries. In a dictionary, a key-value pair is known as an item, but in this instance, item is referring to each referenced object, in turn. That would make it an element, but we get the idea. That is where we pass the students object as an argument of the call to get_class_average; as in,

print(get_class_average(students))

You should have a students object in with your global data…

students = [lloyd, alice, tyler]

That line would, one thinks, raise an exception.

for student in class_list:
    results.append(get_average(student))

Thanks. There is nowhere mentioned that I should have a line "students = [lloyd, alice, tyler]"

I think we do create it at some point earlier, but it may not be carrying over. Just insert that object underneath the three dictionaries so it is grouped with the data.

No, not even when fetching the code solution is it present.

I think you might be referring to a later lesson. Ok, I added it anyway.

Let's take this one issue at a time.

For each student item in the class_list, calculate get_average(student) and then call results.append() with that result.

Okay, I understand the bold one. It instructs me to write "for student in class_list:" But then I do not understand how to write the rest of the code. How do I calculate?

You should have a function already written for get_average. We call it from within the loop, on each student in turn…

for student in class_list:
    results.append(get_average(student))

That will populate the results list with three different average grades.

I don't quite understand what is happening here. Could you explain? What does results.append do? How can there be a "results" when that is not a variable that has been defined earlier? Calling implies a function… how can we call a function that has not been defined?

This is the code that the code solution provides:

for student in class_list:
    student_avg = get_average(student)
    results.append(student_avg)
return average(results)

Just inside the function, we declare

results = []

Then we proceed with the loop to iterate over the class list. Each calculated average is appended to the list.
I DID THIS:

def get_class_average(class_list):
    results = []
    for student in class_list:
        n = get_average(student)
        results.append(n)
    return average(results)

print ("Lloyd: grade = %s, percent = %s") % (get_letter_grade(get_average(lloyd)), get_average(lloyd))
print ("Alice: grade = %s, percent = %s") % (get_letter_grade(get_average(alice)), get_average(alice))
print ("Tyler: grade = %s, percent = %s") % (get_letter_grade(get_average(tyler)), get_average(tyler))
print ("Class average: %s") % round(get_class_average([lloyd, alice, tyler]), 1)

Which returns this:

Lloyd: grade = B, percent = 80.55
Alice: grade = A, percent = 91.15
Tyler: grade = C, percent = 79.9
Class average: 83.9

Would you provide a link to the exercise? Why did you use a for loop in the function, yet four separate print() statements?

This is a bit late, but you say to create the list students in the lesson after this one. It is the first step of the next lesson.

This did in fact work for me, but it did NOT return your below result. It actually returned only the letter "B". I'm also curious as to why 'n' is used instead of student. Could you please offer more insight? Thanks!

How do I print the result? These are the instructions:

Define a function called get_class_average that has one argument class_list. You can expect class_list to be a list containing your three students. First, make an empty list called results. For each student item in the class_list, calculate get_average(student) and then call results.append() with that result. Finally, return the result of calling average() with results.

As you can see from my code, I already completed the first two steps:

def get_class_average(class_list):
    results = []

Thanks so much !! Really appreciate the help !!
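For anyone who wants one consolidated picture, here is a hedged sketch of the whole exercise. The 10%/30%/60% weights and the helper functions are assumptions about how the earlier lessons set things up, not something spelled out in this thread:

def average(numbers):
    return float(sum(numbers)) / len(numbers)

def get_average(student):
    # Assumed weighting from earlier lessons:
    # 10% homework, 30% quizzes, 60% tests.
    return (0.1 * average(student["homework"]) +
            0.3 * average(student["quizzes"]) +
            0.6 * average(student["tests"]))

def get_class_average(class_list):
    results = []
    for student in class_list:
        results.append(get_average(student))
    return average(results)

students = [lloyd, alice, tyler]    # defined next to the dictionaries
print get_class_average(students)   # Python 2 print, as in this course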
https://discuss.codecademy.com/t/faq-learn-python-student-becomes-the-teacher-part-of-the-whole/330521/10
CC-MAIN-2020-34
refinedweb
801
67.86
Contents

- How to Pay for a War: Part 2
- An Application of Markov Jump Linear Quadratic Dynamic Programming
- Two example specifications
- One- and Two-period Bonds but No Restructuring
- Mapping into an LQ Markov Jump Problem
- Penalty on Different Issuance Across Maturities
- A Model with Restructuring
- Restructuring as a Markov Jump Linear Quadratic Control Problem

In addition to what's in Anaconda, this lecture deploys the quantecon library:

!pip install --upgrade quantecon

An Application of Markov Jump Linear Quadratic Dynamic Programming

This is a sequel to an earlier lecture. We use a method introduced in the lecture Markov Jump LQ dynamic programming to implement suggestions by Barro (1999 [Bar99], 2003 [BM03]) for extending his classic 1979 [Bar79] model of tax smoothing. Our generalizations will also look like souped-up consumption-smoothing models.

Wanting tractability induced Barro in 1979 [Bar79] to assume that

- the government trades only one-period risk-free debt, and
- the one-period risk-free interest rate is constant

In our earlier lecture, we relaxed the second of these assumptions but not the first. In particular, we used Markov jump linear quadratic dynamic programming to allow the exogenous interest rate to vary over time. In this lecture, we add a maturity composition decision to the government's problem by expanding the dimension of the state. We assume

- that the government borrows or saves in the form of risk-free bonds of maturities $ 1, 2, \ldots , H $.
- that interest rates on those bonds are time-varying and in particular are governed by a jointly stationary stochastic process.

Let's start with some standard imports:

import quantecon as qe
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Two example specifications

We'll describe two possible specifications:

- In one, each period the government issues zero-coupon bonds of one- and two-period maturities and redeems them only when they mature – in this version, the maturity structure of government debt at each date is partly inherited from the past.
- In the second, the government redesigns the maturity structure of the debt each period.

One- and Two-period Bonds but No Restructuring

Let $ T_t $ denote tax collections, $ \beta $ a discount factor, $ b_{t,t+1} $ time $ t+1 $ goods that the government promises to pay at $ t $, $ b_{t,t+2} $ time $ t+2 $ goods that the government promises to pay at time $ t $, $ G_t $ government purchases, $ p_{t,t+1} $ the number of time $ t $ goods received per time $ t+1 $ goods promised, and $ p_{t,t+2} $ the number of time $ t $ goods received per time $ t+2 $ goods promised. Evidently, $ p_{t, t+1}, p_{t,t+2} $ are inversely related to appropriate corresponding gross interest rates on government debt. In the spirit of Barro (1979) [Bar79], government expenditures are governed by an exogenous stochastic process.
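To make that inverse relation concrete (a hedged aside using yield notation that is not in the original lecture): if $ r_{t,t+j} $ denotes the net yield to maturity on a $ j $-period zero-coupon bond at time $ t $, then

$$ p_{t,t+j} = \frac{1}{(1 + r_{t,t+j})^j}, \qquad j = 1, 2, $$

so a higher yield on a given maturity means a lower price for the corresponding promise.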
Given initial conditions $ b_{-2,0}, b_{-1,0}, z_0, i_0 $, where $ i_0 $ is the initial Markov state, the government chooses a contingency plan for $ \{b_{t, t+1}, b_{t,t+2}, T_t\}_{t=0}^\infty $ to maximize

$$ - E_0 \sum_{t=0}^\infty \beta^t \left[ T_t^2 + c_1( b_{t,t+1} - b_{t,t+2})^2 \right] $$

subject to the constraints

$$ \begin{aligned} T_t & = G_t + b_{t-2,t} + b_{t-1,t} - p_{t,t+2} b_{t,t+2} - p_{t,t+1} b_{t,t+1} \cr G_t & = U_{g,s_t} z_t \cr z_{t+1} & = A_{22,s_t} z_t + C_{2,s_t} w_{t+1} \cr \begin{bmatrix} p_{t,t+1} \cr p_{t,t+2} \cr U_{g,s_t} \cr A_{22,s_t} \cr C_{2,s_t} \end{bmatrix} & \sim \textrm{functions of Markov state with transition matrix } \Pi \end{aligned} $$

Here $ w_{t+1} \sim {\cal N}(0,I) $ and $ \Pi_{ij} $ is the probability that the Markov state moves from state $ i $ to state $ j $ in one period. The variables $ T_t, b_{t, t+1}, b_{t,t+2} $ are control variables chosen at $ t $, while the variables $ b_{t-1,t}, b_{t-2,t} $ are endogenous state variables inherited from the past at time $ t $, and $ p_{t,t+1}, p_{t,t+2} $ are exogenous state variables at time $ t $.

The parameter $ c_1 $ imposes a penalty on the government's issuing different quantities of one- and two-period debt. This penalty deters the government from taking large "long-short" positions in debt of different maturities. An example below will show this in action.

As well as extending the model to allow for a maturity decision for government debt, we can also in principle allow the matrices $ U_{g,s_t}, A_{22,s_t}, C_{2,s_t} $ to depend on the Markov state $ s_t $. Below, we will often adopt the convention that for matrices appearing in a linear state space, $ A_t \equiv A_{s_t}, C_t \equiv C_{s_t} $ and so on, so that dependence on $ t $ is always intermediated through the Markov state $ s_t $.

Mapping into an LQ Markov Jump Problem

First, define

$$ \hat b_t = b_{t-1,t} + b_{t-2,t} , $$

which is debt due at time $ t $. Then define the endogenous part of the state:

$$ \bar b_t = \begin{bmatrix} \hat b_t \cr b_{t-1,t+1} \end{bmatrix} $$

and the complete state

$$ x_t = \begin{bmatrix} \bar b_t \cr z_t \end{bmatrix} $$

and the control vector

$$ u_{t} = \begin{bmatrix} b_{t,t+1} \cr b_{t,t+2} \end{bmatrix} $$

The endogenous part of the state vector follows the law of motion:

$$ \begin{bmatrix} \hat b_{t+1} \cr b_{t,t+2} \end{bmatrix} = \begin{bmatrix} 0 & 1 \cr 0 & 0 \end{bmatrix} \begin{bmatrix} \hat b_{t} \cr b_{t-1,t+1} \end{bmatrix} + \begin{bmatrix} 1 & 0 \cr 0 & 1 \end{bmatrix} \begin{bmatrix} b_{t,t+1} \cr b_{t,t+2} \end{bmatrix} $$

or

$$ \bar b_{t+1} = A_{11} \bar b_t + B_1 u_t $$

Define the following functions of the state

$$ G_t = S_{G,t} x_t, \quad \hat b_t = S_1 x_t $$

and

$$ M_t = \begin{bmatrix} - p_{t,t+1} & - p_{t,t+2} \end{bmatrix} $$

where $ p_{t,t+1} $ is the discount on one-period loans in the discrete Markov state at time $ t $ and $ p_{t,t+2} $ is the discount on two-period loans in the discrete Markov state. Define

$$ S_t = S_{G,t} + S_1 $$

Note that in discrete Markov state $ i $

$$ T_t = M_t u_t + S_t x_t $$

It follows that

$$ T_t^2 = x_t' S_t' S_t x_t + u_t' M_t' M_t u_t + 2 u_t' M_t' S_t x_t $$

or

$$ T_t^2 = x_t'R_t x_t + u_t' Q_t u_t + 2 u_t' W_t x_t $$

where

$$ R_t = S_t'S_t, \quad Q_t = M_t' M_t, \quad W_t = M_t' S_t $$

Because the payoff function also includes the penalty parameter on issuing debt of different maturities, we have:

$$ T_t^2 + c_1( b_{t,t+1} - b_{t,t+2})^2 = x_t'R_t x_t + u_t' Q_t u_t + 2 u_t' W_t x_t + c_1 u_t'Q^c u_t $$

where $ Q^c = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} $. Therefore, the overall $ Q $ matrix for the Markov jump LQ problem is:

$$ Q_t^c = Q_t + c_1 Q^c $$

The law of motion of the state in all discrete Markov states $ i $ is

$$ x_{t+1} = A_t x_t + B u_t + C_t w_{t+1} $$

where

$$ A_t = \begin{bmatrix} A_{11} & 0 \cr 0 & A_{22,t} \end{bmatrix}, \quad B = \begin{bmatrix} B_1 \cr 0 \end{bmatrix}, \quad C_t = \begin{bmatrix} 0 \cr C_{2,t} \end{bmatrix} $$

Thus, in this problem all the matrices apart from $ B $ may depend on the Markov state at time $ t $. As shown in the previous lecture, the LQMarkov class can solve Markov jump LQ problems when provided with the $ A, B, C, R, Q, W $ matrices for each Markov state. The function below maps the primitive matrices and parameters from the above two-period model into the matrices that the LQMarkov class requires:

def LQ_markov_mapping(A22, C2, Ug, p1, p2, c1=0):
    """
    Function which takes A22, C2, Ug, p_{t, t+1}, p_{t, t+2} and penalty
    parameter c1, and returns the required matrices for the LQMarkov
    model: A, B, C, R, Q, W.
    """

    # Make sure the price arguments can be treated as 2D arrays
    p1 = np.atleast_2d(p1)
    p2 = np.atleast_2d(p2)

    # Find the number of states (z) and shocks (w)
    nz, nw = C2.shape

    # Create A11, B1, S1, Sg, S matrices
    A11 = np.zeros((2, 2))
    A11[0, 1] = 1
    B1 = np.eye(2)
    S1 = np.hstack((np.eye(1), np.zeros((1, nz+1))))
    Sg = np.hstack((np.zeros((1, 2)), Ug))
    S = S1 + Sg

    # Create M matrix
    M = np.hstack((-p1, -p2))

    # Create A, B, C matrices
    A_T = np.hstack((A11, np.zeros((2, nz))))
    A_B = np.hstack((np.zeros((nz, 2)), A22))
    A = np.vstack((A_T, A_B))

    B = np.vstack((B1, np.zeros((nz, 2))))
    C = np.vstack((np.zeros((2, nw)), C2))

    # Create Q^c matrix
    Qc = np.array([[1, -1], [-1, 1]])

    # Create R, Q, W matrices
    R = S.T @ S
    Q = M.T @ M + c1 * Qc
    W = M.T @ S

    return A, B, C, R, Q, W

With the above function, we can proceed to solve the model in two steps:

1. Use LQ_markov_mapping to map $ U_{g,t}, A_{22,t}, C_{2,t}, p_{t,t+1}, p_{t,t+2} $ into the $ A, B, C, R, Q, W $ matrices for each of the $ n $ Markov states.
2. Use the LQMarkov class to solve the resulting n-state Markov jump LQ problem.
Define$$ S_t = S_{G,t} + S_1 $$ Note that in discrete Markov state $ i $$$ T_t = M_t u_t + S_t x_t $$ It follows that$$ T_t^2 = x_t' S_t' S_t x_t + u_t' M_t' M_t u_t + 2 u_t' M_t' S_t x_t $$ or$$ T_t^2 = x_t'R_t x_t + u_t' Q_t u_t + 2 u_t' W_t x_t $$ where$$ R_t = S_t'S_t, \quad Q_t = M_t' M_t, \quad W_t = M_t' S_t $$ Because the payoff function also includes the penalty parameter on issuing debt of different maturities, we have:$$ T_t^2 + c_1( b_{t,t+1} - b_{t,t+2})^2 = x_t'R_t x_t + u_t' Q_t u_t + 2 u_t' W_t x_t + c_1 u_t'Q^c u_t $$ where $ Q^c = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} $. Therefore, the overall $ Q $ matrix for the Markov jump LQ problem is:$$ Q_t^c = Q_t + c_1Q^c $$ The law of motion of the state in all discrete Markov states $ i $ is$$ x_{t+1} = A_t x_t + B u_t + C_t w_{t+1} $$ where$$ A_t = \begin{bmatrix} A_{11} & 0 \cr 0 & A_{22,t} \end{bmatrix}, \quad B = \begin{bmatrix} B_1 \cr 0 \end{bmatrix}, \quad C_t = \begin{bmatrix} 0 \cr C_{2,t} \end{bmatrix} $$ Thus, in this problem all the matrices apart from $ B $ may depend on the Markov state at time $ t $. As shown in the previous lecture, the LQMarkov class can solve Markov jump LQ problems when provided with the $ A, B, C, R, Q, W $ matrices for each Markov state. The function below maps the primitive matrices and parameters from the above two-period model into the matrices that the LQMarkov class requires: def LQ_markov_mapping(A22, C2, Ug, p1, p2, c1=0): """ Function which takes A22, C2, Ug, p_{t, t+1}, p_{t, t+2} and penalty parameter c1, and returns the required matrices for the LQMarkov model: A, B, C, R, Q,1 = np.atleast_2d(p1) p2 = np.atleast_2d(p2) # Find the number of states (z) and shocks (w) nz, nw = C2.shape # Create A11, B1, S1, S2, Sg, S matrices A11 = np.zeros((2, 2)) A11[0, 1] = 1 B1 = np.eye(2) S1 = np.hstack((np.eye(1), np.zeros((1, nz+1)))) Sg = np.hstack((np.zeros((1, 2)), Ug)) S = S1 + Sg # Create M matrix M = np.hstack((-p1, -p2)) # Create A, B, C matrices A_T = np.hstack((A11, np.zeros((2, nz)))) A_B = np.hstack((np.zeros((nz, 2)), A22)) A = np.vstack((A_T, A_B)) B = np.vstack((B1, np.zeros((nz, 2)))) C = np.vstack((np.zeros((2, nw)), C2)) # Create Q^c matrix Qc = np.array([[1, -1], [-1, 1]]) # Create R, Q, W matrices R = S.T @ S Q = M.T @ M + c1 * Qc W = M.T @ S return A, B, C, R, Q, W With the above function, we can proceed to solve the model in two steps: - Use LQ_markov_mappingto map $ U_{g,t}, A_{22,t}, C_{2,t}, p_{t,t+1}, p_{t,t+2} $ into the $ A, B, C, R, Q, W $ matrices for each of the $ n $ Markov states. - Use the LQMarkovclass to solve the resulting n-state Markov jump LQ problem. Penalty on Different Issuance Across Maturities¶ To implement a simple example of the two-period model, we assume} \hspace{2mm} , \hspace{2mm} U_g = \begin{bmatrix} 0 & 1 \end{bmatrix} $$ Therefore, in this example, $ A_{22}, C_2 $ and $ U_g $ are not time-varying. We will assume that there are two Markov states, one with a flatter yield curve, and one with a steeper yield curve. In state 1, prices are:$$ p^1_{t,t+1} = \beta \hspace{2mm} , \hspace{2mm} p^1_{t,t+2} = \beta^2 - 0.02 $$ and in state 2, prices are:$$ p^2_{t,t+1} = \beta \hspace{2mm} , \hspace{2mm} p^2_{t,t+2} = \beta^2 + 0.02 $$ We first solve the model with no penalty parameter on different issuance across maturities, i.e. $ c_1 = 0 $. 
We also need to specify a transition matrix for the Markov state, we use:$$ \Pi = \begin{bmatrix} 0.9 & 0.1 \\ 0.1 & 0.9 \end{bmatrix} $$ Thus, each Markov state is persistent, and there is an equal chance of moving from one to the other. # Model parameters β, Gbar, ρ, σ, c1 = 0.95, 5, 0.8, 1, 0 p1, p2, p3, p4 = β, β**2 - 0.02, β, β**2 + 0.02 # Basic model matrices A22 = np.array([[1, 0], [Gbar, ρ] ,]) C_2 = np.array([[0], [σ]]) Ug = np.array([[0, 1]]) correspond to each state As = [A1, A2] Bs = [B1, B2] Cs = [C1, C2] Rs = [R1, R2] Qs = [Q1, Q2] Ws = [W1, W2] Π = np.array([[0.9, 0.1], [0.1, 0.9]]) # Construct and solve the model using the LQMarkov class lqm = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β) lqm.stationary_values() # Simulate the model x0 = np.array([[100, 50, 1, 10]]) x, u, w, t = lq() The above simulations show that when no penalty is imposed on different issuances across maturities, the government has an incentive to take large “long-short” positions in debt of different maturities. To prevent such an outcome, we now set $ c_1 = 0.01 $. This penalty is enough to ensure that the government issues positive quantities of both one and two-period debt: # Put small penalty on different issuance across maturities c1 = 0.012 = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β) lqm2.stationary_values() # Simulate the model x, u, w, t = lqm() A Model with Restructuring¶ This model alters two features of the previous model: - The maximum horizon of government debt is now extended to a general H periods. - The government is able to redesign the maturity structure of debt every period. We impose a cost on adjusting issuance of each maturity by amending the payoff function to become:$$ T_t^2 + \sum_{j=0}^{H-1} c_2 (b_{t+j}^{t-1} - b_{t+j+1}^t)^2 $$ The government’s budget constraint is now:$$ T_t + \sum_{j=1}^Hp_{t,t+j} b_{t+j}^t = b_t^{t-1} + \sum_{j=1}^{H-1} p_{t,t+j} b_{t+j}^{t-1} + G_t $$ To map this into the Markov Jump LQ framework, we define state and control variables. Let:$$ \bar b_t = \begin{bmatrix} b^{t-1}_t \\ b^{t-1}_{t+1} \\ \vdots \\ b^{t-1}_{t+H-1} \end{bmatrix} \hspace{2mm} , \hspace{2mm} u_t = \begin{bmatrix} b^{t}_{t+1} \\ b^{t}_{t+2} \\ \vdots \\ b^{t}_{t+H} \end{bmatrix} $$ Thus, $ \bar b_t $ is the endogenous state (debt issued last period) and $ u_t $ is the control (debt issued today). As before, we will also have the exogenous state $ z_t $, which determines government spending. Therefore, the full state is:$$ x_t = \begin{bmatrix} \bar b_t \\ z_t \end{bmatrix} $$ We also define a vector $ p_t $ that contains the time $ t $ price of goods in period $ t + j $:$$ p_t = \begin{bmatrix} p_{t,t+1} \\ p_{t,t+2} \\ \vdots \\ p_{t,t+H} \end{bmatrix} $$ Finally, we define three useful matrices $ S_s, S_x, \tilde S_x $:$$ \begin{bmatrix} p_{t,t+1} \\ p_{t,t+2} \\ \vdots \\ p_{t,t+H-1} \end{bmatrix} = S_s p_t \text{ where } S_s = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} $$$$ \begin{bmatrix} b^{t-1}_{t+1} \\ b^{t-1}_{t+2} \\ \vdots \\ b^{t-1}_{t+T-1} \end{bmatrix} = S_x \bar b_t \text{ where } S_x = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix} $$$$ b^{t-1}_t = \tilde S_x \bar b_t \text{ where } \tilde S_x = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix} $$ In terms of dimensions, the first two matrices defined above are $ (H-1) \times H $. 
We can now write the government’s budget constraint in matrix notation. Rearranging the government budget constraint gives:

$$ T_t = b_t^{t-1} + \sum_{j=1}^{H-1} p_{t,t+j} b_{t+j}^{t-1} + G_t - \sum_{j=1}^H p_{t,t+j} b_{t+j}^t $$

or

$$ T_t = \tilde S_x \bar b_t + (S_s p_t) \cdot (S_x \bar b_t) + U_g z_t - p_t \cdot u_t $$

If we want to write this in terms of the full state, we have:

$$ T_t = \begin{bmatrix} (\tilde S_x + p_t' S_s' S_x) & U_g \end{bmatrix} x_t - p_t' u_t $$

To simplify the notation, let $ S_t = \begin{bmatrix} (\tilde S_x + p_t' S_s' S_x) & U_g \end{bmatrix} $. Then

$$ T_t = S_t x_t - p_t' u_t $$

Therefore

$$ T_t^2 = x_t' R_t x_t + u_t' Q_t u_t + 2 u_t' W_t x_t $$

where

$$ R_t = S_t' S_t, \quad Q_t = p_t p_t', \quad W_t = -p_t S_t $$

To economize on notation, we adopt the convention that for the state-dependent matrices $ R_t \equiv R_{s_t}, Q_t \equiv Q_{s_t} $ and so on. We’ll continue to use this convention for the matrices $ A, B, W $ and so on below.

Because the payoff function also includes the penalty parameter for rescheduling, we have:

$$ T_t^2 + \sum_{j=0}^{H-1} c_2 (b_{t+j}^{t-1} - b_{t+j+1}^t)^2 = T_t^2 + c_2 (\bar b_t - u_t)'(\bar b_t - u_t) $$

Because the complete state is $ x_t $ and not $ \bar b_t $, we rewrite this as:

$$ T_t^2 + c_2 (S_c x_t - u_t)'(S_c x_t - u_t) $$

where $ S_c = \begin{bmatrix} I & 0 \end{bmatrix} $.

Multiplying this out gives:

$$ T_t^2 + c_2 x_t' S_c' S_c x_t - 2 c_2 u_t' S_c x_t + c_2 u_t' u_t $$

Therefore, with the cost term, we must amend our $ R, Q, W $ matrices as follows:

$$ R^c_t = R_t + c_2 S_c' S_c $$

$$ Q^c_t = Q_t + c_2 I $$

$$ W^c_t = W_t - c_2 S_c $$

To finish mapping into the Markov jump LQ setup, we need to construct the law of motion for the full state. This is simpler than in the previous setup, as we now have $ \bar b_{t+1} = u_t $. Therefore:

$$ x_{t+1} \equiv \begin{bmatrix} \bar b_{t+1} \\ z_{t+1} \end{bmatrix} = A_t x_t + B u_t + C_t w_{t+1} $$

where

$$ A_t = \begin{bmatrix} 0 & 0 \\ 0 & A_{22,t} \end{bmatrix}, \quad B = \begin{bmatrix} I \\ 0 \end{bmatrix}, \quad C_t = \begin{bmatrix} 0 \\ C_{2,t} \end{bmatrix} $$

This completes the mapping into a Markov jump LQ problem. The function below implements it:

```python
def LQ_markov_mapping_restruct(A22, C2, Ug, H, p_t, c=0):
    """
    Function which takes A22, C2, Ug, H, p_t and the rescheduling cost c
    (a scalar), and returns the required matrices for the LQMarkov
    model: A, B, C, R, Q, W.

    Note: p_t should be an H x 1 matrix.
    """

    # Make sure p_t can be treated as a 2D array
    p_t = np.atleast_2d(p_t)

    # Find the number of states (z) and shocks (w)
    nz, nw = C2.shape

    # Create Ss, Sx, tSx matrices (tSx stands for \tilde S_x)
    Ss = np.hstack((np.eye(H-1), np.zeros((H-1, 1))))
    Sx = np.hstack((np.zeros((H-1, 1)), np.eye(H-1)))
    tSx = np.zeros((1, H))
    tSx[0, 0] = 1

    # Create the S_t matrix
    S_t = np.hstack((tSx + p_t.T @ Ss.T @ Sx, Ug))

    # Create A, B, C matrices
    A_T = np.hstack((np.zeros((H, H)), np.zeros((H, nz))))
    A_B = np.hstack((np.zeros((nz, H)), A22))
    A = np.vstack((A_T, A_B))

    B = np.vstack((np.eye(H), np.zeros((nz, H))))
    C = np.vstack((np.zeros((H, nw)), C2))

    # Create cost matrix Sc
    Sc = np.hstack((np.eye(H), np.zeros((H, nz))))

    # Create R^c, Q^c, W^c matrices
    R_c = S_t.T @ S_t + c * Sc.T @ Sc
    Q_c = p_t @ p_t.T + c * np.eye(H)
    W_c = -p_t @ S_t - c * Sc

    return A, B, C, R_c, Q_c, W_c
```
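As with the two-period mapping, a quick dimension check is useful. The sketch below is hypothetical: it reuses the $ A_{22}, C_2, U_g $ matrices defined earlier with a made-up price vector, and confirms that the full state has size $ H + n_z $ and the control has size $ H $:

```python
H_check = 3
p_check = np.array([[0.97], [0.94], [0.91]])  # arbitrary H x 1 price vector

A, B, C, R, Q, W = LQ_markov_mapping_restruct(A22, C_2, Ug, H_check,
                                              p_check, c=0.5)

# Full state is (bar b_t, z_t): dimension H + nz = 3 + 2 = 5
print(A.shape, B.shape, C.shape)   # (5, 5) (5, 3) (5, 1)
print(R.shape, Q.shape, W.shape)   # (5, 5) (3, 3) (3, 5)
```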
Example with Restructuring¶

As an example of the model with restructuring, consider a model with $ H = 3 $.

We will assume that there are two Markov states, one with a flatter yield curve, and one with a steeper yield curve. In state 1, prices are:

$$ p^1_{t,t+1} = 0.9695, \quad p^1_{t,t+2} = 0.902, \quad p^1_{t,t+3} = 0.8369 $$

and in state 2, prices are:

$$ p^2_{t,t+1} = 0.9295, \quad p^2_{t,t+2} = 0.902, \quad p^2_{t,t+3} = 0.8769 $$

We will assume the same transition matrix and $ G_t $ process as above.

```python
# New model parameters
H = 3
p1 = np.array([[0.9695], [0.902], [0.8369]])
p2 = np.array([[0.9295], [0.902], [0.8769]])
Π = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# Penalty on changing issuance of each maturity (rescheduling cost)
c2 = 0.5

# Use the mapping function for each Markov state
A1, B1, C1, R1, Q1, W1 = LQ_markov_mapping_restruct(A22, C_2, Ug, H, p1, c2)
A2, B2, C2, R2, Q2, W2 = LQ_markov_mapping_restruct(A22, C_2, Ug, H, p2, c2)

# Small penalties on debt required to implement the no-Ponzi scheme
R1[0, 0] = R1[0, 0] + 1e-9
R1[1, 1] = R1[1, 1] + 1e-9
R1[2, 2] = R1[2, 2] + 1e-9
R2[0, 0] = R2[0, 0] + 1e-9
R2[1, 1] = R2[1, 1] + 1e-9
R2[2, 2] = R2[2, 2] + 1e-9

# Construct lists of matrices that correspond to each state
As = [A1, A2]
Bs = [B1, B2]
Cs = [C1, C2]
Rs = [R1, R2]
Qs = [Q1, Q2]
Ws = [W1, W2]

# Construct and solve the model using the LQMarkov class
lqm3 = qe.LQMarkov(Π, Qs, Rs, As, Bs, Cs=Cs, Ns=Ws, beta=β)
lqm3.stationary_values()

x0 = np.array([[5000, 5000, 5000, 1, 10]])
x, u, w, t = lqm3.compute_sequence(x0, ts_length=300)

# Plots of issuance of debt of different maturities
fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(16, 4))
ax1.plot(u[0, :])
ax1.set_title('One-period debt issuance')
ax1.set_xlabel('Time')
ax2.plot(u[1, :])
ax2.set_title('Two-period debt issuance')
ax2.set_xlabel('Time')
ax3.plot(u[2, :])
ax3.set_title('Three-period debt issuance')
ax3.set_xlabel('Time')
ax4.plot(u[0, :] + u[1, :] + u[2, :])
ax4.set_title('Total debt issuance')
ax4.set_xlabel('Time')
plt.tight_layout()
plt.show()

# Plot the share of debt issuance that is short-term
fig, ax = plt.subplots()
ax.plot(u[0, :] / (u[0, :] + u[1, :] + u[2, :]))
ax.set_title('One-period debt issuance share')
ax.set_xlabel('Time')
plt.show()
```
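To connect the simulated issuance shares to the two yield-curve regimes, the hypothetical sketch below computes the average short-maturity share in each Markov state. It assumes, as in the QuantEcon `LQMarkov` API, that the fourth output of `compute_sequence` is the realized Markov chain path:

```python
# Average share of one-period issuance, conditional on the Markov state
share = u[0, :] / (u[0, :] + u[1, :] + u[2, :])
states = np.asarray(t).flatten()

# Guard against any off-by-one difference in path lengths
n = min(len(states), len(share))
for s in (0, 1):
    mask = states[:n] == s
    if mask.any():
        print(f"State {s}: mean short-term share = {share[:n][mask].mean():.3f}")
```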
I am trying to figure out how to do multipart uploads to AWS Glacier and found some Example Request on this documentation page. How do I implement this example in Python? I think I should use the 'requests' module but don't know exactly how to make it work. Here is what I have done:

```python
import requests

r = requests.post('/042415267352/vaults/history/multipart-uploads')
```

which raises:

```
MissingSchema: Invalid URL '/042415267352/vaults/history/multipart-uploads': No schema supplied. Perhaps you meant?
```

You do not need to implement the low-level HTTP requests yourself; this is what the boto3 module is for in Python. You can do all of this via boto3, which abstracts the low-level requests for you. For documentation and examples, see the Boto3 Glacier docs, which contain lots of examples.
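For instance, here is a minimal sketch using boto3. It assumes your AWS credentials are already configured (e.g. in ~/.aws/credentials); the vault name 'history' is taken from the question, and the part size is chosen arbitrarily:

```python
import boto3

glacier = boto3.client('glacier')

# Initiate a multipart upload; accountId='-' means "the account that
# owns the supplied credentials"
response = glacier.initiate_multipart_upload(
    accountId='-',
    vaultName='history',
    archiveDescription='example multipart upload',
    partSize='4194304',  # 4 MiB parts; the API expects a string
)

# Keep this id for the subsequent upload_multipart_part and
# complete_multipart_upload calls
print(response['uploadId'])
```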