You want a unix-like tool for manipulating CSV data from the command line. The standard tools cut and awk aren't always suitable, as they don't handle the quoting and escaping that are common in CSVs.

Solution

Use csvfilter, a simple Python CSV-manipulation tool I've put together. Install with:

    pip install csvfilter

Sample usage:

    # Pluck columns 2, 5 and 6
    cat in.csv | csvfilter -f 2,5,6 > out.csv

    # Pluck all columns except 4
    cat in.csv | csvfilter -f 4 -i > out.csv

    # Skip the header row
    cat in.csv | csvfilter -s 1 > out.csv

    # Work with pipe-separated data
    cat in.psv | csvfilter -d '|' > out.csv

The above examples show csvfilter processing sys.stdin, but it can also act directly on a file:

    csvfilter -f 2,5,6 in.csv > out.csv

    $ csvfilter --help
    Usage: csvfilter [options]

    Options:
      -h, --help            show this help message and exit
      -f FIELDS, --fields=FIELDS
                            Specify which fields to pluck
      -s SKIP, --skip=SKIP  Number of rows to skip
      -d DELIMITER, --delimiter=DELIMITER
                            Delimiter of incoming CSV data
      -i, --inverse         Invert the filter - ie drop the selected fields
      --out-delimiter=OUT_DELIMITER
                            Delimiter to use for output
      --out-quotechar=OUT_QUOTECHAR
                            Quote character to use for output

There is also a simple Python API that allows you to add validators to determine which rows are filtered out:

    import sys

    from csvfilter import Processor

    def contains_cheese(row):
        return 'cheese' in row

    processor = Processor(fields=[1, 2, 3])
    processor.add_validator(contains_cheese)
    generator = processor.process(sys.stdin)
    for cheesy_row in generator:
        do_something(cheesy_row)

Discussion

It's possible to do basic CSV manipulation from the command line using cut or awk - for example:

    cat in.csv | cut -d "," -f 1,2,3 > out.csv

or:

    cat in.csv | awk 'BEGIN {FS=","} {print $1,$2,$3}' > out.csv

However, neither cut nor awk makes it easy to handle CSVs with escaped characters - hence the motivation for this tool.
I'm not the first to write such a utility - there are several others out there (although none had quite the API I was looking for). Source available on GitHub.
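The quoting problem described in the Discussion is easy to demonstrate with Python's standard csv module (used here only to illustrate the point; this is not csvfilter's internal code):

```python
import csv
import io

# A row whose second field contains a quoted, embedded comma.
raw = 'id,name,city\n1,"Smith, John",London\n'

# Naive comma-splitting (effectively what cut/awk do) breaks the quoted field:
naive = raw.splitlines()[1].split(",")
assert naive == ['1', '"Smith', ' John"', 'London']  # four pieces, not three

# A CSV-aware parser handles the quoting correctly:
rows = list(csv.reader(io.StringIO(raw)))
assert rows[1] == ['1', 'Smith, John', 'London']  # three fields, comma intact
```

This is exactly why a tool that wraps a real CSV parser beats field-splitting on a delimiter.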
http://css.dzone.com/articles/csvfilter-python-command-line?mz=55985-python
Just returned from our 7 night holiday and will not be returning. Not because of the hotel or the staff but due to the number of Russians. We stayed here three years ago and enjoyed our holiday so much we vowed to return (wish we hadn't). The hotel is showing its age but the rooms are clean and spacious, with English channels on the television. All the staff are very helpful and are happy to do anything you ask; just be careful when you decide to tip - this could go either way, ie they can change as fast as the weather. Food is boring and the hotel caters for the Russians. Breakfast is the same every day; lunch is a mix of chicken, burgers or hot dogs every day with either rice or chips. Dinner is again either chicken or grilled fish. The chicken is varied - either in a stew, breaded or battered - but it is still chicken, and believe me, after 7 days of chicken for lunch and dinner, it doesn't matter how it's cooked, it gets boring!!! The Russians are very rude and ignorant. They never smile or respond if you say hello, sorry etc. They push in when you are waiting to be served, either at the bars or in the restaurant, and if you say anything they look at you as if they have something on the bottom of their shoe. This hotel has changed over the last three years and we will not be returning again.
- Official Description (provided by the hotel): The Festival Le Jardin Resort is a 4-star-plus resort that satisfies the needs of an enjoyable holiday.
- Also Known As: Sunrise Le Jardin Hotel Hurghada, Festival Le Jardin Hurghada, Festival Le Jardin Hotel Hurghada
http://www.tripadvisor.com/ShowUserReviews-g297549-d1092538-r149571670-Festival_Le_Jardin-Hurghada_Red_Sea_and_Sinai.html
Maypole is a Perl framework for MVC-oriented web applications, similar to Jakarta's Struts. Maypole is designed to minimize coding requirements for creating simple web interfaces to databases, while remaining flexible enough to support enterprise web applications. Simon Cozens

It is a fact universally accepted that a database with contents must be in search of a presentation. Jane Austen, «Pride and Prejudice Reloaded»

When I first caught sight of Maypole, I was startled by the shining beauty and the effortless grace with which it seemed to produce shiny web applications. I envied those who had a firm grasp of it and strived to employ Maypole to collect and publish the data available to me. But at that time, Maypole was a very immature prima donna with very specific needs that were only faintly hinted at on websites known only to a few select people already in the clique. Armed with my knowledge of Class::DBI and frugal knowledge of mod_perl, I found myself rejected by Maypole and its requirements, often for no reason apparent to me, and loudly complained about this in the chatterbox, to the bemusement and annoyance of the other regulars there, I assume. After all, Maypole had been touted as the web application framework, being hawked through articles like flowers on the 14th of February, and had received real money from the Perl Foundation. I felt I had a right to it. Maypole aims to bring the virtues of the traditional MVC separation to web applications by putting your data model into the database via Class::DBI, representing the data view as templates rendered via Template Toolkit, and modifying your data through actions controlled by Apache::MVC, the heart of Maypole itself. As much as the documentation touted these virtues, and as much as I acknowledge that having these virtues is a goal to strive for, not being able to get into closer contact with Maypole was something of a turn-off for me.
A few months have passed since, and I have expanded my understanding of the nature of Maypole, and my love/hate relationship has come to some fruition in the form of a small application working and online. While there are idiosyncrasies to it, there are also ways to wield Maypole as an effective tool to create web applications - if you set your aim right and avoid some pitfalls. Many of the things I will list are actually documented somewhere on the Maypole wiki[1] but haven't found their way into the Perl code distributed with Maypole itself yet.

The road to hell is paved with good intentions. Proverb

To get a start on the good side of Maypole, you need to know, understand and not entirely dislike Class::DBI. The whole object structure that Maypole operates on corresponds to your Class::DBI object and table layout. If you do not know Class::DBI or get easily lost with the setup of has_many() relationships and their ilk, start by getting familiar with Class::DBI without having Maypole in the way. Fighting a battle on two fronts is inadvisable, so make sure this side is covered. If you already have an existing database not specifically tailored with Class::DBI in mind, forget it. People have tried to use Class::DBI with existing databases, but I found them to be far more unhappy with what Class::DBI offered them than I ever was. I see Class::DBI as a cheap object persistence solution with a good querying and batch manipulation language, and in that regard it has seldom let me down. Secondly, Maypole requires Apache1 and mod_perl. If your version of Perl and the mod_perl delivered with your version of Apache1 match, you are well off. There is Maypole::CGI, but you will still need Apache::Request. chromatic has adapted Maypole to Jellybean[2], so that might be a good alternative approach to avoid another variable in your setup.
If you decide to keep Maypole in its own Apache1 playpen because you want to avoid ruining your existing Perl installation by cluttering it with lots of obscure modules you will never look at again, you are setting yourself up for much trouble in return. You will then have to create a separate instance of Apache1 with mod_perl, matching your perl executable, and fiddle with the include paths until it all works properly. If you decide to actually deploy a Maypole-based application, you will have to use a dedicated Apache1/mod_perl server anyway, as the namespace clashes will force you to do so. Although I have not personally tried it, chromatic's solution of using Jellybean with Maypole::Container::Jellybean has the appeal of not requiring Apache1 and just requiring a working Perl installation, something you should have. The third overtly documented prerequisite component is Template Toolkit - liked by many, even bestowed with a documenting book. You will not need to really know Template Toolkit, but basic knowledge about how a(ny) templating toolkit works and the common unifying abstractions will be required knowledge. If you know any of Mason, Petal or Template Toolkit, you will be well off, and I think that even with only knowledge of HTML::Template, you will have enough to find your way around in Template Toolkit. In theory, it is easy to replace the templating backend for Maypole, and there already are the required modules and templates for HTML::Mason, but none have surfaced for Petal or HTML::Template yet.

No battle plan ever survives contact with the enemy. Feldmarschall von Moltke

The installation process should be as easy as perl -MCPAN -e "install Maypole", but, alas, it is not. The prerequisite listing of Maypole 1.7 gets you almost there, though. The missing modules are: Best install these three modules in that order and then install Maypole.
Template::Plugin::Class is the only module explicitly missing from the prerequisites in Maypole's Makefile.PL, and the Maypole wiki will guide you towards the other missing modules. On that occasion, the authors and users of Module::Build earn bad karma for their future life, as Module::Build does not support the PREFIX= parameter of ExtUtils::MakeMaker, and thus any local installation of Template::Plugin::Class will require manual intervention instead of simply working through CPAN.pm. The rest of the modules depends on your choice of database and cannot be automated or detected much more sensibly.

It might seem easy enough, but [it] is just like a stroll in the park. Jurassic Park, that is. Larry Wall, <1994Jun15.074039.2654@netlabs.com>

After some rounds of forced installs (in my case because mod_perl was not in @INC of plain perl), you will believe you have everything together and plan for the first ride. You modify your webserver configuration, you set a location, dress up and navigate to the URL, only to find yourself alone on a blank page, or at best with a 500 HTTP error for company. You decide to wait around, but as you grow weary after 30 seconds and look into the error log, you find that your date has not shown up and has left you without a message. Error reporting, if present at all, is abysmal with Maypole. A syntax error in one of your code modules aborts the loading of that module at that place, so you will end up with one half of a module loaded into your webserver when an error stopped parsing of that module. There is no facility for tracing progress, so you will have to put calls to warn all over the place to glimpse a hint of where in Maypole the cause of the error could possibly lie. This also holds true for errors in your templates - they vanish secretly and silently as well. Maypole itself generates some "redefinition" and "undefined" warnings, but you will only see all of these warnings if you write a command line script to exercise your code.
The only good advice I can offer here is to make your modules emit a message to the log as they are parsed, and to manually load all your class modules from within your main module in an eval loop, to catch and log all occurring errors yourself. All of the "Model" part of Maypole is based around the idea that you want to perform an action on a row in a table through a request. To that effect, Maypole maps all requests of the form table/action/row_id to your Class::DBI subclass wrapping table, retrieves the data corresponding to row_id into it, and then calls the method indicated by action on that object. The whole association of table and implementing class happens without any additional declaration, just by matching table name to class name. Only those methods declared as :Exported can be called in such a manner, so there is no way of reaching internal methods from the outside through Maypole and creating unsuspected security holes. The idea of wrapping the table/class, the action and the row identifier in the URL is nothing new - it appears, for example, in the (Python-based) Zope-based Plone[3] framework. Before Maypole will execute such a request, it allows you to perform authentication and authorization to make sure that only those actions get performed that the user is allowed to perform. The downside of this approach is that actions acting on more than one object have no easy match, and the syntax deviates far enough from the common CGI parameter syntax that it does not scale well to more than one object being acted upon in a request. The situation where you want an action performed on more than one object in one request does not only arise when you want the same action performed on a list of objects of the same class, like ordering several different kinds of flowers for your bouquet; it also arises when you want to display two objects together that can be formed into a new product, like a diamond and a ring, which together form a present.
The present has no representation in the database though, as it consists simply of the two components and should not need its own database table.

Any sufficiently advanced technology is indistinguishable from magic. Sir A. C. Clarke

Hokey religions and ancient weapons are no match for a good blaster at your side. George Lucas, «Star Wars»

After you have managed to get the supplied example, beerdb, working, you will be feverish to embark on some ventures of your own devising instead of remaining on the known shores that Simon Cozens outlined for you. Many times have I complained about how Maypole stacks more magic upon the piles of magic that Class::DBI heaps upon the database. My advice is to stay away from all additional magic like Class::DBI::Loader::Relationship, which purports to give you a concise, natural (English) language way of declaring the relationships in your database. All it will give you, though, are headaches after headaches as it misinterprets your instructions in the most innovative yet useless ways. Even Simon recommends staying away from it, and he is its author. A place where the magic comes in handy is when you declare which methods your classes should make visible to the outside. You simply declare a subroutine in your class as :Exported, and it will be reachable via a request to table/action/row_id. This declaration mechanism also makes it very easy to add introspection to any application. A list of all valid external requests can easily be created, allowing you to add a detailed permission scheme to any application. I wrote such a scheme in one day, and at the price of 7 additional tables and a bit of self-contained code, Maypole allows easy restriction of any action to specific usergroups. Maypole does not enforce or supply any user management or permissions in its bare form - it is just a web display for database rows. This leaves you with the freedom to keep your application as simple as possible.
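The table/action/row_id mapping and the :Exported allowlist described above are easy to see in a language-agnostic sketch. This is illustrative Python, not Maypole's Perl internals; every name here (exported, Beer, dispatch, TABLES) is invented for the illustration:

```python
# Sketch of Maypole-style dispatch: a URL path of the form table/action/row_id
# is mapped to a wrapping class, a row, and an explicitly exported method.

def exported(fn):
    """Mark a method as callable from the outside (the role of :Exported)."""
    fn.exported = True
    return fn

class Beer:
    rows = {1: "Organic Best Bitter"}   # stand-in for a database table

    def __init__(self, row_id):
        self.row = self.rows[row_id]    # "retrieve" the row into the object

    @exported
    def view(self):
        return f"viewing {self.row}"

    def _delete_all(self):              # NOT exported: unreachable via a URL
        self.rows.clear()

TABLES = {"beer": Beer}                 # table name -> implementing class

def dispatch(path):
    table, action, row_id = path.strip("/").split("/")
    cls = TABLES[table]
    method = getattr(cls, action, None)
    if method is None or not getattr(method, "exported", False):
        raise PermissionError(f"action {action!r} is not exported")
    return method(cls(int(row_id)))
```

So dispatch("/beer/view/1") succeeds, while dispatch("/beer/_delete_all/1") is refused: only methods opted in via the decorator are reachable, which is the security property the :Exported attribute provides.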
There is no common way yet to add plugins with their own database schema easily and without much thought, but including that would increase the overall complexity to something comparable to Plone and thus raise the barrier to entry quite a bit. Update: Changed link to the Maypole Wiki

woo hoo. good news indeed. a maintainer is just what is needed. i'm pretty sure Maypole will work with apache2 as a mod_fastcgi script. i know it works fine that way on apache1. the fastcgi FAQ and examples have the code; it's only a few lines added to CGI::Maypole to make an FCGI::Maypole. it's also easy enough to get around the Apache::* dependencies; i believe A::Request is just used for cookies. imho, the Apache::* dependencies should be removed as they are mod_perl related and not merely for apache. i hacked my FCGI Maypole enough to support Cookies and Sessions using the same interface that the Maypole modules use (Maypole::UserSessionCookie or somesuch) but just using the functions provided by CGI::Simple. just a little work would let Maypole support request headers without relying on A::Request. i have patches for most of these things, but since nobody was maintaining, i didn't send them in. the original project i chose Maypole for is about ready for its second incarnation and i've been wondering if i should continue with Maypole or try another path. i'm on the mailing lists; i'll pay attention now that there's somebody at the helm, and i hope Maypole picks up some momentum.

It would be nice if someone started a patches section on the Maypole wiki to keep that kind of stuff on. It would be an easy way for you to show your patches and hopefully get them worked into development. Just my 2 cents! :)

Hey, thanks for the great review. Having beaten Maypole with a stick myself, I appreciate seeing this info out there for newcomers. What I hated most was that none of its documentation even seems to address the problems it has.
Of course, once you find the problem it is normally easy to find the answers in the documentation; unfortunately, 90% of my time was spent trying to figure out what the problem was. On the plus side, once I got it working it seems pretty nifty; as it matures and gets some better examples, documentation, and some of the patches integrated with it, it should be an amazing framework. With some effort (read: lots and lots of effort) Maypole can even be convinced to run on Apache2 mod_perl 1.99. Thanks for the great review; perhaps you could make some of your Maypole accomplishments available for others to try out. PS: one of my biggest problems turned out to be an error in the TT macro file provided with it. So if you have problems with relative links, look there.

Update: Now that I'm peeking over the Maypole maillist archives, I see people saying they've gotten it working under Apache2. Might you look at a couple of the msgs, like msg332 and thence to msg248, to see how they compare with your experiences?

Those are indeed the two posts I eventually found. I admit I didn't search the mailing lists as early as I should have. That was definitely an error on my part. After working my way through those two patches, which are a good starting block, I then managed to get Apache::Request to install, which involved some hand installing of modules (instead of CPAN) and some reinstalling of core modules. Two modules (which I can't currently remember) had a recursive dependency (they required each other) such that both needed reinstalling before libapreq2 would install. These may not be issues everyone will have, but they bewildered me for quite some time. Note that I had never touched Apache (1 or 2) or any form of mod_perl, so these might all be issues that more experienced users fix without even blinking. :) Best of luck, and from now on I know to search mailing lists first!

You explicitly said Apache1/mod_perl; is there any issue with Apache2/mod_perl2? Thanks.
Updated: For example, Struts has a very strong presentation layer in the form of many JSP tag libs. Template Toolkit doesn't seem to have that. This is of course more a TT issue than a Maypole one. How about server side validation in Maypole? The weakness of Struts is that it doesn't really do much about the model layer; it seems Maypole does more, with Class::DBI.

Yes, there is an issue with Apache2/mod_perl2, mainly that it doesn't work. Maypole works with Apache2 as CGI (Maypole::CGI), and eric256 mentioned much pain to get it to play nice with Apache2/mod_perl2. The main problem is that Maypole relies on Apache::Request, which is, I hear, not supported anymore under mod_perl2. Maypole uses CGI::Untaint for all its validation, and it is pretty strong on that side. I switched taint protection on fairly late in the process and did not need to change any line of code in either Maypole or my additional code, as Maypole educates the user in a fairly controlled manner to always properly untaint the data. Personally, I'm no big fan of TT, as it allows you to do far too much in the custom language instead of pushing complex things back to the Model layer. I prefer the Petal approach, where simple things, such as method calls, are possible, and everything harder must be implemented in Perl code in the relevant class.

There is a compatibility layer in mod_perl for the easy transfer from Apache1 to Apache2, but somehow I could not get it to work with Maypole. And as the error-messaging system is less than perfect, I could not find a solution and gave up on it. Pity, it seemed an interesting project. CountZero "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law

++ for an informative and insightful meditation. I've been curious about Maypole for a while, and this answered a good number of my questions about it. Thanks!
The introductory documentation and articles are all good, but I found that when I started to try to actually adapt and extend Maypole for my needs, there were gaps that could only be filled by reading the code. And then, when I tried to deviate a bit from the table/action/id model or add better support for many-to-many relationships, things got ugly and I almost had to override entire classes (Class::DBI::AsForm was problematic). But it worked out in the end. My only concern is that it tends to be slow on my poor old hardware. Using Maypole was definitely a great learning experience. I still use it for the application I developed at that time. When later I had to write a simpler application, I found that it was much easier to write it from scratch thanks to the experience I gained. That doesn't mean that I won't use it again; it depends on what I want to do.

Could you describe the problem with the module loading and the other error and warning hiding? Did you see the code for this? Is it because of $SIG{__WARN__} and $SIG{__DIE__} or because of unchecked eval-block expressions?

I haven't delved into the depths of Apache::Registry and Maypole to determine where the errors get eaten and not reraised. I believe that the reason is unchecked eval-block expressions, but I can't offer any hard facts for that assumption.

Would you care to write a Devel::UncheckedEvalBlock a la Initial Devel::UncheckedOps, a macro for perl and <rant>CPAN modules failing to check for write errors</rant>? It'd just be the example, assuming there was some sort of magic in those hashes to generate some regexp code over a hypothetical stringification of an optree, and that changing the stringified optree would change the actual optree...

    $optree =~ s(
        (?<= $STATEMENT{ $CONTAINS{ $OP_ENTERTRY } } )
        (?! $OP_NEXTSTATEMENT $STATEMENT{ $CONTAINS{ $GVSV{ '*@' } } } )
    ) {
        warn $@ if $@;
    }x
http://www.perlmonks.org/?node_id=388607
Hi, I started this tutorial (-> highly recommended tut for newBs) yesterday and finished it today, but I have got some probs. Please be kind and look past any STUPID questions that I may ask, and plz help me... I use Dev-C++.

1) I can't get resource files to work (not even RC files that were defined by a built-in wizard). This is Dev's default new RC file:

    #include <windows.h>

When I use the wizard, or if I add the code manually, it looks like this (check for errors, plz):

    #include <windows.h>

    20 MENU
    BEGIN
      POPUP "&File"
      BEGIN
        MENUITEM "E&xit", 201
      END
      POPUP "&Tools"
      BEGIN
        MENUITEM "&Eneable Toolbar", 202
        MENUITEM "&Disable Toolbar", 203
      END
    END

If I build the resource, no problems are found, but as soon as I include it in ANY other code/file it gives this error:

    2 c:\mydocu~1\2.cpp c:\mydocu~1\winprog.rc:3: parse error before `20'

No matter what I use (menus/dialogs/icons, etc.) it gives this same error. Does anyone know why?

2) How do you know what (data) to put in the parameters WPARAM and LPARAM? Can anyone maybe show me how to pass a simple "message" (of type char, within the SendMessage() function) as an argument to wparam/lparam? For example, so that when the user clicks on a submenu item, which then triggers WM_COMMAND, the handler could pass a string as an argument to another WM, to do something like changing a button caption to that string. I tried doing this using a global variable, but it (I) messed up big time.

3) What does it mean when the linker gives me an error like: udefined reference to initcommoncontrols@4 (some other functions do this too)?

4) Where should I go from here (after I've swatted this tut from head to toe)? Is API better than MFC?

If you read "these words" then I thank you for reading up to this point..... Cheers

Beware, you who seek first and final principles, for you are trampling the garden of an angry God and he awaits you just beyond the last theorem.
http://www.antionline.com/showthread.php?250219-API-plz-help-newB-with-some-problems-(RC-files...etc)
This is a C program that finds the square root of a number. The program declares a variable to hold the result and includes the math.h header file, which declares the math functions needed for this operation, including sqrt(). It passes the value to the sqrt() function and then prints the result as output on the screen.

Problem Statement: This is a C program to find the square root of a number. Here is the C source code for finding the square root. The output of this program is shown below.

    #include <stdio.h>
    #include <math.h>   /* needed by sqrt() */

    int main(void)
    {
        double answer;

        answer = sqrt(20.0);
        printf("Square Root of a Given Value : %f\n", answer);

        return 0;
    }
https://doitdesi.com/c-program/find-the-square-root-of-a-give-value
rednoah42 Wrote:
1. Yep, only OpenSubtitles and Sublight support hash lookups. Without --q there's only hash lookups.
2. I know podnapisi; as far as I know there is no XML/JSON/etc API, only people scraping the page, like what I do with subscene right now.
PS: Worked the whole day on super-charging the -get-subtitles CLI. FileBot 2.3 will auto-detect the query (like with -rename /tvshow), then match subtitle names with filenames (like matching episodes with files) and finally fetch subtitles for you, all auto-magically.

wally007 Wrote:
If you need to test some stuff with a newer version let me know.

wally007 Wrote:
1, thanks for clarifying. Thanks for the roadmap. I really like the video file hash idea of matching subtitles and I hope it will prove sufficient. Otherwise a generic search (for example "Dexter - S01E01.mkv") will get multiple hits (and many of those for the .avi (SD) version), which will not help things much, methinks.

    args.eachMediaFolder { dir ->
      // select videos without subtitles
      def videos = dir.listFiles().findAll{ video ->
        video.isVideo() && !dir.listFiles().find{ sub -> sub.isSubtitle() && sub.isDerived(video) }
      }
      // fetch subtitles by hash only
      getSubtitles(file:videos, strict:true)
    }

    /Users/vladik1/Downloads/FileBot.app/Contents/MacOS/JavaApplicationStub -get-subtitles --lang en --format srt PATH_TO_VIDEO_FILE.MKV
    Looking up subtitles by filehash via OpenSubtitles
    Looking up subtitles by filehash via Sublight
    IllegalArgumentException: Unable to auto-select query: []
    Failure (?_?)

wally007 Wrote:
1, I actually like the fact that it's rewriting the existing subtitle file. The reason is that if the first match is based on file name, it has a high probability of having sync problems. But if within 1-2 days video-hash-tagged subs are uploaded, I'll have proper subtitles. = win
2, Tried running the 2.3 version but ran into an error that does not happen with the 2.2 version.
I guess 2.2 is only looking for a hash-tagged file (which does not exist, so nothing gets downloaded) and 2.3 is trying to get subs matching the file name, which gives me the following error: ... FileBot_2.3.app.tar.gz 25-Nov-2011 05:12 14M

    /Users/vladik1/Downloads/FileBot.app/Contents/MacOS/JavaApplicationStub -get-subtitles --lang en --format srt -non-strict /Volumes/Volume15TB/mpg_foreign2/\!Movies\!/Defendor\ \(2009\)/Defendor.mkv
    Looking up subtitles by filehash via OpenSubtitles
    Looking up subtitles by filehash via Sublight
    IllegalArgumentException: Unable to auto-select query: []
    Failure (?_?)

rednoah42 Wrote:
Sorry, didn't bundle the new revisions. Just download the .jar file (r700) and replace the one in your app bundle [FileBot.app/Contents/Resources/Java/FileBot.jar].
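The Groovy snippet quoted earlier in the thread selects videos that have no derived subtitle file. The same selection idea can be sketched in Python; the extension sets and the name-prefix rule below are assumptions for illustration, not FileBot's actual isVideo()/isSubtitle()/isDerived() logic:

```python
from pathlib import Path

VIDEO_EXTS = {".mkv", ".avi", ".mp4"}   # assumed extension lists, not FileBot's
SUB_EXTS = {".srt", ".sub", ".ass"}

def is_derived(sub: Path, video: Path) -> bool:
    # Crude stand-in for isDerived(): the subtitle name matches the video's
    # name, possibly with a language tag (e.g. "Movie.eng.srt" for "Movie.mkv").
    return sub.stem == video.stem or sub.stem.startswith(video.stem + ".")

def videos_without_subtitles(files):
    videos = [f for f in files if f.suffix.lower() in VIDEO_EXTS]
    subs = [f for f in files if f.suffix.lower() in SUB_EXTS]
    return [v for v in videos if not any(is_derived(s, v) for s in subs)]
```

Given [Defendor.mkv, Defendor.eng.srt, Other.avi], only Other.avi would be selected for a subtitle lookup, which mirrors what the Groovy script feeds into getSubtitles().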
http://forum.kodi.tv/showthread.php?tid=110302&pid=945950
Revision 02-06-2013Derived from Google C++ Style Guide Stanford Network Analysis Platform (SNAP) is a general purpose network analysis and graph mining library. It easily scales to massive networks with hundreds of millions of nodes, and billions of edges. SNAP is written in the C++ programming language. This programming guide describes a set of conventions for the SNAP C++ code as well as the most important constructs that are used in the code. To see an example of SNAP programming style, see file graph.h. C++ has many powerful features, but this power brings with it complexity, which can make code more error-prone and harder to read and maintain. The goal of this guide is to manage this complexity by describing the rules of writing SNAP code. These rules exist to keep the code base consistent and easier to manage while still allowing productive use of C++ features. Code consistency is important to keep the code base manageable. It is very important that any programmer be able to look at another's code and quickly understand it. Maintaining a uniform style and following conventions means that we can more easily use "pattern-matching" to infer what various symbols are and what they do, which. Note that this guide is not a C++ tutorial. We assume that you are familiar with the language. Coding style and formatting can be pretty arbitrary, but code is much easier to follow and learn if everyone uses the same style. Not everyone may agree with every aspect of the formatting rules, but it is important that all SNAP contributors follow the style rules so that we can all read and understand everyone's code easily. We use spaces for indentation. Do not use tabs in your code. You should set your editor to emit 2 spaces when you hit the tab key. elsekeyword belongs on a new line. if (condition) { // no spaces inside parentheses ... // 2 space indent. } else if (...) { // The else goes on the same line as the closing brace. ... } else { ... 
} You must have a space between the if and the open parenthesis. You must also have a space between the close parenthesis and the curly brace. if(condition) // Bad - space missing after IF. if (condition) { // Good - proper space after IF and before {. Short conditional statements may be written on one line if this enhances readability. You may use this only when the line is brief and the statement does not use the else clause. Always use the curly brace: if (x == kFoo) { return new Foo(); } if (x == kBar) { return new Bar(); } Single-line statements without curly braces are prohibited: if (condition) DoSomething(); In most cases, conditional or loop statements with complex conditions or statements are more readable with curly braces. if (condition) { DoSomething(); // 2 space indent. } while (condition) { ... // 2 space indent } for (int i = 0; i < Num; i++) { ... // 2 space indent } while (condition); // Bad - looks like part of do/while loop. case blocks in switch statements can have curly braces or not, depending on your preference. If you do include curly braces they should be placed as shown below. If the condition is not an enumerated value, switch statements should always have a default case: switch (var) { case 0: { // 2 space indent ... // 4 space indent break; } case 1: { ... break; } default: ... } } The following are examples of correctly-formatted pointer and reference expressions: x = *p; p = &x; x = r.y; x = r->y; Note that: *or &. When declaring a pointer variable or argument, place the asterisk * adjacent to the variable name and the ampersand & adjacent to the type: char* C; const int& P; if ((ThisOneThing > ThisOtherThing) && (AThirdThing == AFourthThing) && YetAnother && LastOne) { ... } Function calls have the following format: bool RetVal = DoSomething(Arg1, Arg2, Arg3); If the arguments do not all fit on one line, they should be broken up onto multiple lines. 
Do not add spaces after the open paren or before the close paren:

DoSomethingThatRequiresALongFunctionName(Argument1, Argument2,
                                         Argument3, Argument4);

If the parameter names are very long and there is not much space left due to line indentation, you may place all arguments on subsequent lines:

DoSomethingElseThatRequiresAEvenLongerFunctionName(
  Argument1, Argument2, Argument3, Argument4);

Functions look like this:

ReturnType ClassName::FunctionName(Type ParName1, Type ParName2) {
  DoSomething();
  ...
}

If you have too much text to fit on one line, split the code over several lines:

ReturnType ClassName::ReallyLongFunctionName(Type ParName1, Type ParName2,
                                             Type ParName3) {
  DoSomething();
  ...
}

One point to note: do not surround the return expression with parentheses in the simple case. Parentheses are ok to make a complex expression more readable:

return Result;                // No parentheses in the simple case.
return (SomeLongCondition &&  // Parentheses ok to make a complex
        AnotherCondition);    // expression more readable.

A class definition has the following layout:

class TMyClass : public TOtherClass {
public:
  typedef TMyClass TDef;                       // typedefs
  typedef enum { meOne, meTwo, ... } TMyEnum;  // enums
public:
  class TPubClass1 {                           // public subclasses
    ...
  }
private:
  class TPriClass2 {                           // private subclasses
    ...
  }
private:
  TInt Var;                                    // private data
  ...
private:
  TInt GetTmpData();                           // private methods
  ...
public:
  TMyClass();                                  // constructors
  ...
  int SetStats(const int N);                   // public methods
  ...
  friend class TMyOtherClass;                  // friends
};

Each public SNAP class must define the following methods: a default constructor, a copy constructor, a TSIn constructor, a Load() method, a Save() method and an assignment operator =:

class TMyClass : public TOtherClass {
  ...
public: TMyClass(); // default constructor explicit TMyClass(int Var); // an explicit constructor (optional) TMyClass(const TMyClass& MCVar); // copy constructor TMyClass(TSIn& SIn); // TSIn constructor void Load(TSIn& SIn); // Load() method void Save(TSOut& SOut) const; // Save() method TMyClass& operator = (const TMyClass& MCVar); // '=' operator ... int GetVar() const; // get value of Var int SetVar(const int N); // set value of Var ... }; Make data members private and provide access to them through Get...() and Set...() methods. More complex classes with support for "smart" pointers have additional requirements. See Smart Pointers for details. For a class format example in the SNAP code, see file graph.h:TUNGraph. MyClass::MyClass(int Var) : SomeVar(Var), SomeOtherVar(Var + 1) { } Make sure that the values in the list are listed in the same order in which they are declared. A list order that is different than the declaration order can produce errors that are hard to find. An example of template forward definition (see alg.h for more): template <class PGraph> int GetMxDegNId(const PGraph& Graph); The corresponding template implementation is: template <class PGraph> int GetMxDegNId(const PGraph& Graph) { ... } This is more a principle than a rule: don't use blank lines when you don't have to. In particular, don't put more than one blank line. In certain cases it is appropriate to include non-ASCII characters in your code. For example, if your code parses data files from foreign sources, it may be appropriate to hard-code the non-ASCII string(s) used in those data files as delimiters. In such cases, you should use UTF-8, since this encoding is understood by most tools able to handle more than just ASCII. Hex encoding is also OK, and encouraged where it enhances readability — for example, "\xEF\xBB\xBF" is the Unicode zero-width no-break space character, which would be invisible if included in the source as straight UTF-8. 
SNAP code uses a range of conventions to name entities. It is important to follow these conventions in your code to keep the code compact and consistent. Type and variable names should typically be nouns, e.g. ErrCnt. Function names should typically be "command" verbs, e.g. OpenFile(). SNAP code uses an extensive list of abbreviations, which make the code easy to understand once you get familiar with them:

T...: a type (TInt).
P...: a smart pointer (PUNGraph).
...V: a vector (variable InNIdV).
...VV: a matrix (variable FltVV, type TFltVV with floating point elements).
...H: a hash table (variable NodeH, type TIntStrH with Int keys, Str values).
...HH: a hash of hashes (variable NodeHH, type TIntIntHH with Int key 1 and Int key 2).
...I: an iterator (NodeI).
...Pt: an address pointer, used rarely (NetPt).
Get...: an access method (GetDeg()).
Set...: a set method (SetXYLabel()).
...Int: an integer operation (GetValInt()).
...Flt: a floating point operation (GetValFlt()).
...Str: a string operation (DateStr()).
Id: an identifier (GetUId()).
NId: a node identifier (GetNIdV()).
EId: an edge identifier (GetEIdV()).
Nbr: a neighbour (GetNbrNId()).
Deg: a node degree (GetOutDeg()).
Src: a source node (GetSrcNId()).
Dst: a destination node (GetDstNId()).
Err: an error (AvgAbsErr).
Cnt: a counter (LinksCnt).
Mx: a maximum (GetMxWcc()).
Mn: a minimum (MnWrdLen).
NonZ: a non-zero (NonZNodes).

Filenames should be all lowercase and can include underscores (_) or dashes (-). C++ files should end in .cpp and header files should end in .h. Examples of acceptable file names:

graph.cpp
bignet.h

Type names start with "T" and have a capital letter for each new word, with no underscores: TUNGraph. Variable names have a capital letter for each new word, with no underscores: NIdV. An exception for lowercase names is the use of short index names for loop iterations, such as i, j, k. Variable names should typically be nouns, e.g. ErrCnt. Function names have a capital letter for each new word, with no underscores: GetInNId(). Function names should typically be "command" verbs, e.g. OpenFile(). Enumerator values use a short lowercase prefix derived from the type name:

typedef enum { srUndef, srOk, srFlood, srTimeLimit } TStopReason;

See Namespaces for a discussion about the SNAP namespaces.
Macros, if you must use them, are named with all capitals and underscores:

#define ROUND(x) ...
#define PI_ROUNDED 3.14

Comments are absolutely vital to keeping the code readable. But remember: while comments are very important, the best code is self-documenting. Giving sensible names to types and variables is much better than using obscure names that you must then explain through comments. Comments in the source code are also used to generate reference documentation for SNAP automatically. A few simple guidelines below show how you can write comments that result in high quality reference documentation. When writing your comments, write for your audience: the next contributor who will need to understand your code. Be generous — the next one may be you in a few months!

A brief description consists of ///, followed by one line of text:

/// Returns ID of the current node.

A detailed description consists of a brief description, followed by ##<tag_name>:

/// Returns ID of NodeN-th neighboring node. ##TNodeI::GetNbrNId

Text for <tag_name> from file <source_file> is placed in file doc/<source_file>.txt. Tag format is:

/// <tag_name> ...<detailed description>
///

For example, a detailed description for ##TNodeI::GetNbrNId from file snap-core/graph.h is in file snap-core/doc/graph.h.txt (see these files for more examples):

/// TNodeI::GetNbrNId Range of NodeN: 0 <= NodeN < GetNbrDeg(). Since the graph is undirected GetInNId(), GetOutNId() and GetNbrNId() all give the same output.
///

Additional Documentation Commands

SNAP documentation also uses the following Doxygen commands:

///<: for comments associated with variables.
@param: for comments associated with function parameters.
\c: specifies a typewriter font for the next word.
<tt>: specifies a typewriter font for the enclosed text.

More details on how to use these commands are provided in specific sections below.

//#///////////////////////////////////////////////
/// Undirected graph. ##Undirected_graph
class TUNGraph { ... };

Class declarations in a *.h file include a 1 line, 1 sentence long description.
The description should explain the use of the function:

/// Deletes node of ID \c NId from the graph. ##TUNGraph::DelNode
void DelNode(const int& NId);

Use \c to specify typewriter font when you refer to variables or functions. If the description requires more than one sentence, which should happen often, then create a tag ##<class>::<function> at the end of the line and put the remainder of the description in the doc/*.h.txt file.

Function Declarations

Every function declaration should have a description immediately preceding it that says what the function does and how to use it. In general, the description does not say how the function performs its task. That should be left to comments in the function definition.

Function Definitions

Each function definition should have a comment describing what the function does if there's anything tricky about how it does its job. For example, in the definition comment you might describe any coding tricks you use, give an overview of the steps you go through, or explain why you chose to implement the function in the way you did rather than using a viable alternative. If you implemented an algorithm from the literature, this is a good place to provide a reference. Note that you should not just repeat the comments given with the function declaration, in the .h file or wherever. It's okay to recapitulate briefly what the function does, but the focus of the comments should be on how it does it.

Function Parameters

It is important that you comment the meaning of input parameters. Use the @param construct in the comments to do that. For the example function void DelNode(const int& NId); above, its parameter is documented in file doc/*.h.txt as follows:

/// TUNGraph::DelNode @param NId Node Id to be deleted.
///

To associate a comment with a variable, start the comment with ///<:

TInt NFeatures; ///< Number of features per node.
Class Data Members

Each class data member (also called an instance variable or member variable) should have a comment describing what it is used for.

Use TODO comments for code that is temporary or a short-term solution. The main purpose is to have a consistent TODO format that can be searched to find the person who can provide more details upon request.

Use the // syntax to document the code, wherever possible:

// This line illustrates a code comment.

Smart pointers are objects that act like pointers, but automate management of the underlying memory. They are extremely useful for preventing memory leaks, and are essential for writing exception-safe code. By convention, class names in SNAP start with the letter "T" and their corresponding smart pointer types have "T" replaced with "P". In the following example, variable Graph is defined as an undirected graph. TUNGraph is the base type and PUNGraph is its corresponding smart pointer type:

PUNGraph Graph = TUNGraph::New();

The following example shows how an undirected graph is loaded from a file:

{
  TFIn FIn("input.graph");
  PUNGraph Graph2 = TUNGraph::Load(FIn);
}

To implement smart pointers for a new class, only a few lines need to be added to the class definition. The original class definition:

class TUNGraph {
  ...
};

The class definition after smart pointers are added:

class TUNGraph;
typedef TPt<TUNGraph> PUNGraph;
class TUNGraph {
  ...
private:
  TCRef CRef;
  ...
public:
  ...
  static PUNGraph New();            // New() method
  static PUNGraph Load(TSIn& SIn);  // Load() method
  ...
  friend class TPt<TUNGraph>;
};

The new code declares PUNGraph, a smart pointer type for the original class TUNGraph. A few new definitions have been added to TUNGraph: CRef, a reference counter for garbage collection; New(), a method to create an instance; and a friend declaration for TPt<TUNGraph>. The Load() method for a smart pointer class returns a pointer to a new object instance rather than no result, which is the case for regular classes.
An example of definitions for the New() and Load() methods for TUNGraph is shown here:

static PUNGraph New() { return new TUNGraph(); }
static PUNGraph Load(TSIn& SIn) { return PUNGraph(new TUNGraph(SIn)); }

For streams, SNAP uses its own classes rather than the C++ iostreams cin, cout, cerr. For console output, use printf(). SNAP defined streams are:

TSIn is an input stream.
TSOut is an output stream.
TSInOut is an input/output stream.
TStdIn is the standard input stream.
TStdOut is the standard output stream.
TFIn is a file input stream.
TFOut is a file output stream.
TFInOut is a file input/output stream.
TZipIn is a compressed file input.
TZipOut is a compressed file output.

Assertion names in SNAP use the following convention for the first letter:

Assert: compiled only in the debug mode, aborts when the assertion is false.
IAssert: always compiled, aborts if the assertion is false.
EAssert: always compiled, does not abort, but throws an exception.

Some common SNAP assertions are:

Assert verifies the condition. This is the basic assertion.
AssertR verifies the condition, provides a reason when the condition fails.

SNAP also implements assertions that always fail. These are used when the program identifies a critical error, such as being out of memory. Fail assertions are:

EFailR throws an exception with a reason.
FailR prints the reason and terminates the program.
Fail terminates the program.

Examples of assertion usage:

AssertR(IsNode(NId), TStr::Fmt("NodeId %d does not exist", NId));
EFailR(TStr::Fmt("JSON Error: Unknown escape sequence: '%s'", Beg).CStr());

Use int in your code. If a program needs a variable of a different size, use one of these precise-width integer types:

int8, uint8: signed, unsigned 8-bit integers.
int16, uint16: signed, unsigned 16-bit integers.
int32, uint32: signed, unsigned 32-bit integers.
int64, uint64: signed, unsigned 64-bit integers.

Use int for integers that are not going to be too large, e.g., loop counters. You can assume that an int is at least 32 bits, but don't assume that it has more than 32 bits.
Functions should not return values of type TSize (size_t). Instead, use fixed size types for function return values, like int32, int64.

Do not use the unsigned integer types, unless the quantity you are representing is really a bit pattern rather than a number, or unless you need defined twos-complement overflow. In particular, do not use unsigned types to say a number will never be negative. Instead, use assertions for this purpose.

To print 64-bit integers, use TUInt64::GetStr() for conversion to string and the %s print formatting conversion:

int64 Val = 123456789012345;
TStr Note = TStr::Fmt("64-bit integer value is %s", TUInt64::GetStr(Val).CStr());

SNAP exceptions are implemented with TExcept::Throw and PExcept. TExcept::Throw throws an exception:

TExcept::Throw("Empty blog url");

PExcept catches an exception:

try {
  ...
} catch (PExcept Except) {
  SaveToErrLog(Except->GetStr().CStr());
}

Use const whenever it makes sense to do so. const variables, data members, methods and arguments add a level of compile-time type checking. It is better to detect errors as soon as possible. const can also significantly reduce execution time. Therefore we strongly recommend that you use const whenever it makes sense to do so. Declare methods const whenever possible. Accessors should almost always be const. Other methods should be const if they do not modify any data members, do not call any non-const methods, and do not return a non-const pointer or non-const reference to a data member. Declare data members const whenever they do not need to be modified after construction.

Put const at the beginning of a definition as in const int* Foo, not in the middle as in int const *Foo. Note that const is viral: if you pass a const variable to a function, that function must have const in its prototype (or the variable will need a const_cast).

Prefer const variables to macros. Use 0 for integers, 0.0 for reals, NULL for pointers, and '\0' for chars.

Use namespace TSnap to encapsulate global functions.
Define all SNAP global functions within that namespace. Use namespace TSnapDetail to encapsulate local functions. Do not define any new namespaces; use TSnap for global functions and TSnapDetail for local functions. Do not use a using-directive to make all names from a namespace available:

// Forbidden -- This pollutes the namespace.
using namespace Foo;

See file alg.h for an example. If you must define a nonmember function and it is only needed locally in its .cpp file, use static linkage: static int Foo() {...}, or namespace TSnapDetail to limit its scope:

namespace TSnapDetail {
// This is in a .cpp file.
// The content of a namespace is not indented
enum { kUnused, kEOF, kError };  // Commonly used tokens.
bool AtEof() { return pos_ == kEOF; }  // Uses our namespace's EOF.
}  // namespace

Group function declarations in *.h files so that functions relating to common functionality are grouped together. Function definitions in the corresponding *.cpp file should be in the same order as function declarations.

double GetDegreeCentr(const PUNGraph& Graph, const int& NId);

When defining a function, parameter order is: inputs, then outputs. Input parameters are usually values or const references, while output and input/output parameters are non-const references. In the following example, Graph is an input parameter, and InDegV and OutDegV are output parameters:

void GetDegSeqV(const PGraph& Graph, TIntV& InDegV, TIntV& OutDegV);

Each class must define a default constructor, a copy constructor, a TSIn constructor, an assignment operator =, and Save() and Load() methods. Classes that support "smart" pointers must also define a New() method. See Class Format for additional details on class formatting.

Within a class, declare members in this order: typedefs and enums, constants (static const data members), constructors, destructor, methods, and data members. Group the methods so that methods relating to common functionality are grouped together. Method definitions in the corresponding .cpp file should follow the declaration order, as much as possible.
If your object requires non-trivial initialization, consider having an explicit Init() method. In particular, constructors should not call virtual functions, attempt to raise errors, access potentially uninitialized global variables, etc.

Make data members private, and provide access to them through accessor functions as needed. Typically a variable would be called Foo and the accessor function GetFoo(). You may also want a mutator function SetFoo().

Use a struct only for passive objects that carry data; everything else is a class. The struct and class keywords behave almost identically in C++. We add our own semantic meanings to each keyword, so you should use the appropriate keyword for the data-type you're defining. If in doubt, make it a class.

In general, every .cpp file should have an associated .h file. There are some common exceptions, such as unittests and small .cpp files containing just a main() function. Correct use of header files can make a huge difference to the readability, size and performance of your code.

All header files should have #define guards to prevent multiple inclusion. The format of the symbol name should be snap_<file>_h:

#ifndef snap_agm_h
#define snap_agm_h
...
#endif  // snap_agm_h

All of a project's header files should be listed as descendants of the project's source directory without use of UNIX directory shortcuts . (the current directory) or .. (the parent directory). For example, snap-awesome-algorithm/src/base/logging.h should be included as:

#include "base/logging.h"

Within each section it is nice to order the includes alphabetically.

The coding conventions described above are mandatory. However, like all good rules, these sometimes have exceptions, which we discuss here. If you need to change such code, the best option is to rewrite it, so that it conforms to the guide. If that is not possible, because rewriting would require more time and effort than you have available, then stay consistent with the local conventions in that code.
SNAP does not contain any Windows specific code. All such code is encapsulated within the GLIB library, which SNAP uses. If you must implement some Windows specific functionality, contact SNAP maintainers. Use common sense and BE CONSISTENT. If you are writing new code, follow this style guide. To see an example of SNAP programming style, see file graph.h. If you are editing existing code, take a few minutes to look at it and determine its style. If your code looks drastically different from the existing code around it, the discontinuity makes it harder for others to understand it. Try to avoid this. OK, enough writing about writing code; the code itself is much more interesting. Have fun!
http://snap.stanford.edu/snap/doc/snapdev-guide/
Good afternoon. I have the following problem: when I want to load the image mk.png the program crashes, and it's not the program, because it can load other images, and it even loads the same image with a changed background (in this case mkasd.png). In version 5.0.10 there was no problem. I'm using the current versions: vstudio2015 and allegro5_1_12. Do I need to do something else before loading the image? Sorry for the bad English. This is my code:

#include "allegro5\allegro5.h"
#include "allegro5\allegro_image.h"
#include <iostream>

int main()
{
    al_init();
    al_init_image_addon();

    ALLEGRO_DISPLAY *display;
    display = al_create_display(640, 480);
    if (display == nullptr) {
        return EXIT_FAILURE;
    }

    al_clear_to_color(al_map_rgb(0, 0, 0));
    ALLEGRO_BITMAP *img = al_load_bitmap("mk.png");
    al_draw_bitmap(img, 0.0, 0.0, 0);
    al_flip_display();
    al_rest(5.0);
    return 0;
}

I just tried your code with MSVC 2015 and the 5.1.12 32 and 64 bit binaries, and it worked ok (at least with my bitmap). Could you make sure that your bitmap is where you expect it to be (also perhaps try using an absolute path, just to be sure). "For in much wisdom is much grief: and he that increases knowledge increases sorrow." -Ecclesiastes 1:18 [SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

Yes, the code works, but when I tried this bitmap (mk.png) it breaks.

This .png is missing its CRC value, so it is basically invalid. We could make Allegro read it anyway (we can instruct libpng to read on instead of erroring out), but the downside is that, if we do that, we are likely to read in actually broken png files as well. The best way is to simply open your .png in an image editor and save it again; then it will be fine.
elias@T440s:/tmp$ pngcheck -v mk.png
File: mk.png (8826 bytes)
  chunk IHDR at offset 0x0000c, length 13
    363 x 475 image, 8-bit palette, non-interlaced
  chunk PLTE at offset 0x00025, length 768: 256 palette entries
  chunk tRNS at offset 0x00331, length 256: 256 transparency entries
  chunk IDAT at offset 0x0043d, length 7725
    zlib: deflated, 32K window, default compression
  chunk IEND at offset 0x02276, length 0
: EOF while reading CRC value
ERRORS DETECTED in mk.png

[edit:] Actually, you can specify special handling for CRC errors; this patch would make Allegro ignore them: --

"Either help out or stop whining" - Evert

a ok thanx
https://www.allegro.cc/forums/thread/615905
Here’s my sample code for a tool to catch blog plagiarism that I described earlier. In retrospect, it was pretty easy to write (under 400 lines!). And edit-and-continue in C# and interceptable exceptions made my development time a lot faster! The tool currently only works on a single entry, but it could be expanded to iterate over all entries in an entire blog. It’s a simple C# console app which: 1. Takes in a source blog entry (via the clipboard) and a set of “author keywords” (via the command line). The keywords are things like the author’s name or homepage that would indicate somebody is crediting the author. Ideally, the tool would read in the all the entries from some blog RSS feed instead of doing just 1 entry at a time. 2. breaks the entry up into excerpts (since it’s unlikely somebody would plagiarize the entire entry). This also finds matches if they’ve changed a few characters. For example, perhaps their editing tool automatically changed something such as transforming unicode characters to ansi or transforming emoticons. 3. programmatically uses MSN Search to search for other URLs that includes those excerpts. 4. For each resulting URL, scans that page for any indication of crediting the original author. One simple way is to search the page for a set of “author keywords” provided in step 1. If no such keywords are found, then assume the page is not crediting the author. Offloading even more work to the Search Engine. In retrospect, I realized after I wrote the tool that step #4 (filtering out urls that credit the author) can be folded into the search step in #3 (finding urls that copy content from the author) by using a sufficiently intelligent query with the NOT and AND keywords. Consider a query like: (NOT link:<author homepage>) AND (NOT <author name>) AND “<some excerpt>“. This also has a great advantage in that both step #3 + #4 are using the same copy of the page. 
This avoids step #3 using a cached version of the page, and then step #4 looking at a totally different version of the page. As an example, this MSN search query: “So a debugger would skip over the region between the” looks for all pages that contain that excerpt from this blog post of mine. I tried this MSN search query: (NOT Link:blogs.msdn.com/jmstall) AND (NOT “Mike Stall”) AND “So a debugger would skip over the region between the” which filters out anybody that mentions my name or links back to me. Clearly, query engines with useful search keywords can become extremely powerful. This tradeoff reminds me of similar tradeoffs with SQL queries where it’s possible to offload client side work onto the server with a more intelligent query. Results: I ran the tool with the full contents from my blog on 0xFeeFee Sequence points. The keywords were my name (‘Mike Stall’) and part of my blog URL (“jmstall”). Here’s the output. It’s pretty verbose because it prints all the excerpts that match, but the interesting part is the summary at the end. [update] The tool originally found 1 candidate (with a 75% match) – but the candidate then went and added a reference back to me. I reran the tool and now that candidate doesn’t show up. That’s exactly how it should work! The original output is here. The tool now prints: Test for plagiarism. Getting entry data from clipboard contents. Entry:#line hidden and 0xFeeFee sequence points So Keywords (if a target URL doesn’t have any of these words, it may be plagiarizing): ‘mike stall’ ‘jmstall’. Doing Search. This could take a few minutes. Search broke entry into 37 excerpt(s). Found 1 URL(s) that contain excerpts from the entry w/o ref back to the author. Url has 1/37 matches: ———————— (0/1) use the `#line hidden’ directive to mark a region of code as not exposed to the debugger.Eg: ———————— Summary: (sorted least to most matches) (1/37) 2%: [update] It found 1 candidate. 
It turns out that this is a false positive: the search engine cache found the copied content, but then the page had completely changed since then (it’s an “Under Construction page” now). [update] Just to be clear, Tagcloud is not plagiarising. Rather this is a false positive and shows a shortcoming with the tool. The candidates were found using MSN search, which uses a cached copy of the web pages. However, the author crediting uses the live copy of the webpage. So if the page has changed (perhaps it’s a blog and the original entry is no longer on the homepage; or perhaps the site is down), then you can get false positives. This could be fixed by having both the candidate search and crediting use the same copy of the webpage. The easiest way to do this is to follow the suggestion above and have the search query use the AND,NOT, LINK keywords to do both candidate search and crediting. Here’s the code: Note you need to compile it with winforms and system.web like so: csc t.cs /debug+ /r:System.Web.dll /r:System.Windows.Forms.dll //—————————————————————————– // Test harness to use MSN search to search for plagiarism. // Author: Mike Stall. //—————————————————————————– using System; using System.Collections.Generic; using System.Text; using System.Net; using System.IO; using System.Text.RegularExpressions; using System.Diagnostics; // Include Windows.Forms because we use the clipboard.. // See for details on this class. class MsnSearch { // Helper to get the Search result for an exact string. // This will escape the string results. This is very fragile and reverse engineered based off my // observations about how MSN Search encodes searches into the query string. // This also does not account for search keywords (like “AND”). // If there’s a spec for the query string, we should find and use it. 
static Uri GetMSNSearchURL(string input) { // The ‘FORMAT raw = response.GetResponseStream(); StreamReader s = new StreamReader(raw); string x = s.ReadToEnd(); List<Uri> list = new List<Uri>();List<Uri> list = new List<Uri>(); // In the XML format, the URLs are conveniently in URL tags. We could use a full XmlReader / XPathQuery//())for (Match m = r.Match(x); m.Success; m = m.NextMatch()) { list.Add(new Uri(m.Groups[1].Value)); } return list;return list; } } class Programclass Program { // Provide easy way to get data from clipboard.// Provide easy way to get data from clipboard. // On the down side, this pulls in winforms. 🙁 // And requires that we’re an STA thread. static string GetDataFromClipboard() { System.Windows.Forms.IDataObject iData = System.Windows.Forms.Clipboard.GetDataObject(); string[] f = iData.GetFormats(); return (string)iData.GetData(System.Windows.Forms.DataFormats.Text); } [STAThread] static void Main(string[] args) { Console.WriteLine(“Test for plagiarism.”); // 1.) Get the data to check for plagiarism. This includes the author’s content// 1.) Get the data to check for plagiarism. This includes the author’s content // and “author keywords” that check if a target URL is crediting the author. // It would be nice to pull this from an RSS feed or something. For now, we pull the entry // from the clipboard since that’s an easy way to suck in a lot of data off a webpage. Console.WriteLine(“Getting entry data from clipboard contents.”); string entry = GetDataFromClipboard(); Console.WriteLine(“Entry:{0}”, (entry.Length < 50) ? (entry) : (entry.Substring(0, 45)));Console.WriteLine(“Entry:{0}”, (entry.Length < 50) ? (entry) : (entry.Substring(0, 45))); // Set keywords to search to determine if we describe the author. Grab the keywords from the command line.// Set keywords to search to determine if we describe the author. Grab the keywords from the command line. // If no keywords specified, default to my blog. 🙂 string [] keywords = (args != null) ? 
args : (new string[] { “mike stall”, “jmstall” }); Console.Write(“Keywords (if a target URL doesn’t have any of these words, it may be plagiarizing):”);Console.Write(“Keywords (if a target URL doesn’t have any of these words, it may be plagiarizing):”); foreach (string keyword in keywords) { Console.Write(” ‘{0}'”, keyword); } Console.WriteLine(“.”); // Do the search. This is an intensive operation.// Do the search. This is an intensive operation. Console.WriteLine(“Doing Search. This could take a few minutes.”); PlagiarismSearcher p = new PlagiarismSearcher(); p.Search(entry, keywords); // Now print the results (perhaps HTML spew would be prettier 😉 )// Now print the results (perhaps HTML spew would be prettier 😉 ) int total = p.Excerpts.Count; Console.WriteLine(“Search broke entry into {0} excerpt(s).”, total); ICollection<Uri> uris = p.Matches.Keys;ICollection<Uri> uris = p.Matches.Keys; if (uris.Count > 0) { int [] summaryCounts = new int [uris.Count]; Uri[] summaryUrls = new Uri [uris.Count]; int summaryIdx = 0; Console.WriteLine(“Found {0} URL(s) that contain excerpts from the entry w/o ref back to the author.”, uris.Count);Console.WriteLine(“Found {0} URL(s) that contain excerpts from the entry w/o ref back to the author.”, uris.Count); // Print all the excerpts for the match (this may be too verbose).// Print all the excerpts for the match (this may be too verbose). foreach (Uri url in uris) { int cMatches = p.Matches[url].Count; Console.WriteLine(“Url {0} has {1}/{2} matches:”, url, cMatches, total); summaryCounts[summaryIdx] = cMatches; summaryUrls[summaryIdx] = url; summaryIdx++; // Print the matches.// Print the matches. Console.WriteLine(“————————“); int j = 0; foreach (string excerpt in p.Matches[url]) { Console.Write(“({0}/{1})”, j, cMatches); Console.WriteLine(excerpt.Replace(“\r”, “”).Replace(“\n”, “”)); j++; } Console.WriteLine(“————————“); } // Print summary sorted by matches.// Print summary sorted by matches. 
            Console.WriteLine("Summary: (sorted least to most matches)");
            Array.Sort(summaryCounts, summaryUrls); // ascending
            for (int j = 0; j < summaryCounts.Length; j++)
            {
                Console.WriteLine("({0}/{1}) {2}%: {3}", summaryCounts[j], total, (int)(summaryCounts[j] * 100 / total), summaryUrls[j]);
            }
        }
        else
        {
            Console.WriteLine("No plagiarism matches found.");
        }
    }
} // program

// Helper to search for plagiarism of an author's article (online content that copies the article without referring back to it).
// Use by first calling the Search() method, and then using the Matches property to get the search result.
class PlagiarismSearcher
{
    // Get total excerpts the search was broken down into.
    // This is not valid until after the Search() method returns.
    public IList<string> Excerpts
    {
        get { return m_excerpts; }
    }

    // Get a list of matches.
    // Each Key is a plagiarism-candidate URL.
    // Each Value is the list of excerpts that the URL contains from the original doc.
    // If this is empty, there are no plagiarism candidates.
    // This is not valid until after the Search() method returns.
    public IDictionary<Uri, IList<string>> Matches
    {
        get { return m_matches; }
    }

    // Excerpts that the original author's entry is broken into.
    // We break into excerpts to catch if somebody plagiarises just a subsection of the author.
    IList<string> m_excerpts;

    // Keep a map of each URL + number of times it has a plagiarising chunk.
    // Then we can sort by # of chunks. Since the excerpts may be small, there's a chance
    // of false positives.
    // key = URL that contains a plagiarised chunk.
    // value = string list of chunks that the key URL contains.
    IDictionary<Uri, IList<string>> m_matches;

    // Keywords to search for in a target URL to determine if the URL refers to the author.
    // This will search both the raw HTML and the text with the markup removed.
    string[] m_keywords;

    // Given the source HTML and a set of keywords that refer back to the author, search
    // the internet for plagiarism. That means searching for other copies of the source where
    // the target does not ref back to the source (eg, have the author keywords).
    // The author keywords may include the author's name and URL.
    // These results are not necessarily 100% accurate. This just produces a list of likely plagiarism candidates.
    // @todo - pass in search engine too?
    //
    // This will break the incoming html into excerpts (in case only a paragraph was plagiarised) and
    // then search for matches against those excerpts.
    public void Search(string htmlFullEntry, string[] authorKeywords)
    {
        m_keywords = authorKeywords;

        // Given a text entry (such as a blog entry), search key strings to see if anything else on the web refers to this content.
        // Once it finds referring URLs, scan those URLs for references back to the original article.
        // If no references are found, the URL may be plagiarising the original article.

        // Break the entire entry up into sub strings to search.
        m_excerpts = GetExcerptsFromEntry(htmlFullEntry);

        m_matches = new Dictionary<Uri, IList<string>>();
        foreach (string e in m_excerpts)
        {
            IList<Uri> results = MsnSearch.SearchString('\"' + e + '\"'); // put in quotes for exact search.
            Debug.WriteLine("Searching excerpt:" + e);
            Debug.WriteLine(String.Format("Found {0} references.", results.Count));
            foreach (Uri url in results)
            {
                Debug.WriteLine("Checking URL: " + url);
                bool fGood = DoesURLReferBackToAuthor(url);
                if (!fGood)
                {
                    // Record that this URL has a matching chunk.
                    if (!m_matches.ContainsKey(url))
                    {
                        m_matches[url] = new List<string>();
                    }
                    m_matches[url].Add(e);

                    Debug.WriteLine("!!! Plagiarism!!!! " + url);
                    Debug.WriteLine("copies this excerpt:");
                    Debug.WriteLine(e);
                    Debug.WriteLine("----------------");
                }
            }
        }
    }

    // Given a full entry (which may contain HTML markup), generate a set of non-HTML string excerpts.
    // We can then query for the excerpts.
    // Our search algorithm owns producing the excerpts because the excerpt size may depend on the
    // underlying search engine's abilities.
    private IList<string> GetExcerptsFromEntry(string htmlFullEntry)
    {
        List<string> list = new List<string>();

        // Simple heuristic: just return chunks of 'size' characters.
        // Needs to be split on word boundaries or MSN search gets confused.
        string s = RemoveAllMarkup(htmlFullEntry);

        // I also notice that if the size chunks are too big, then the search fails!?!?! That
        // seems very counter intuitive. I did some manual tuning to arrive at the current size.
        int size = 100; // 40

        int idx = 0;
        while (idx + size < s.Length)
        {
            int pad = s.IndexOf(' ', idx + size) - idx;
            if (pad < 0) pad = size;
            list.Add(s.Substring(idx, pad));
            idx += pad;
        }

        // Only add the fragment if the list is empty. Else if the fragment is too small, it will match too much.
        if (list.Count == 0)
        {
            list.Add(s.Substring(idx));
        }
        return list;
    }

    // Do a check if the URL refers back to the author. This includes checking:
    // - does the URL mention the author's name?
    // - does the URL include a hyperlink back to the author's website?
    // May need to also compensate for the html markup in the URL (so a raw string search may be naive).
    // We can just do 2 searches on the content (1 on the raw content w/ markup, and 1 on the content w/o markup).
    bool DoesURLReferBackToAuthor(Uri url)
    {
        string htmlContent = GetWebPage(url);
        if (htmlContent == null) return true; // if link is broken, it's not plagiarising.
        htmlContent = htmlContent.ToLower();
        string textContent = RemoveAllMarkup(htmlContent);

        // Case-insensitive search for keywords.
        foreach (string s in m_keywords)
        {
            if (htmlContent.Contains(s))
            {
                return true;
            }
            if (textContent.Contains(s))
            {
                return true;
            }
        }
        return false;
    }

    // Utility function to remove all HTML markup.
    // Input: string that may contain html.
    // Returns: string stripped of all HTML markup.
    static string RemoveAllMarkup(string html)
    {
        Regex r = new Regex("<.+?>");
        string s = r.Replace(html, "");

        // Now decode escape characters. eg: '&lt;' --> '<'
        return System.Web.HttpUtility.HtmlDecode(s);
    }

    // Helper to get a WebPage as a string.
    // Returns null if the web page is not available.
    static string GetWebPage(Uri url)
    {
        try
        {
            WebRequest request = HttpWebRequest.Create(url);
            WebResponse response = request.GetResponse();
            Stream raw = response.GetResponseStream();
            StreamReader s = new StreamReader(raw);
            string x = s.ReadToEnd();
            return x;
        }
        catch
        {
            return null;
        }
    }
} // Plagiarism checker class
} // end namespace

Hi Mike,

Regarding your tool's findings for the URL above: if you take a look at the original post, you will see a white image (a box, representing the end of a quote). For some reason that I have only just realised, Blogger doesn't actually provide a link to the original, but instead it creates an image with an embedded reference to

Unfortunately I didn't realise this until now, when I was forced to open the HTML.
I generally post using the Blogger web client, and then leave it at that without checking the resultant view in a browser. So perhaps either a) Blogger needs to fix their stuff, or b) you should check for a reference to that URL as well in your app... (not that I know anything about what aggbug.aspx does...) Having said this, I will no longer rely on the Blogger web client, and will never post with it again. As a matter of fact, I will probably start hosting my own very soon.

I have fixed all references so that they are now direct links to your original. I apologise for not checking my posts more thoroughly... if you look at the post you will also see that I ended my part of the post with the colon: I did not try to take credit for this; it was made clear that it wasn't my post. I just wish Blogger had put a proper link in... gah.

Thanks, Matthew

- No problem! I'm glad you found the content useful, and I had a lot of fun writing the searcher tool. 🙂

Michael, once you have extracted the "raw" data from MSN, you can apply this () to get a better idea of the degree of plagiarism involved.

I reran the tool and Matthew's original page no longer shows up (since he added the credits). That's actually a great full-circle demo. I've updated the entry to reflect the new search.

Hi Mike, I was interested to see why your tool picked up the tagcloud.com URL for the tag "Readify". Tagcloud shows extracts of blogs for particular tags, not unlike technorati.com. Seeing as the site is down at the moment, this is an example of what your scanner would have seen (although it's been updated since). So your scanner would've seen the tagcloud of Readify, and of course Matthew's post. It looks as though the summary strips the HTML, so that's why there's no link back to your post. Just to be safe, I've removed the Readify tagcloud so that it doesn't get accused of plagiarising again.
Grant

I apologize; I didn't intend to accuse tagcloud of plagiarising - I'll update my original post to be very clear about that. I explicitly called out tagcloud as an example of a false positive: "It turns out the 2% one is a false positive: the search engine cache found the copied content, but then the page had completely changed since then (it's an "Under Construction page" now)."

The problem is that the tool uses MSN Search (which uses a cache) to find candidates, but then uses live page access to look for credits to the author. It will search both the raw HTML and the stripped HTML, so having credits inside of tags (such as an <A>) will show up. The problem is if the cached page has a credit, but the site is down (or replaced with an "I'm under construction" notice), the tool won't see it and will report a false positive. This would also be a problem for blog homepages where the entry that credits the author is no longer on the homepage. This is a shortcoming of the tool. The tool needs to find candidates and check for crediting using the same copy of the HTML.
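The chunking heuristic in GetExcerptsFromEntry above - fixed-size excerpts extended to the next word boundary - is compact enough to sketch in a few lines of Python (function name is mine; the behavior is modeled on the C# loop, including its quirks):

```python
def excerpts(text, size=100):
    """Split text into chunks of at least `size` characters,
    extending each chunk to the next space (word boundary)."""
    chunks = []
    idx = 0
    while idx + size < len(text):
        # Look for the next space at or after idx + size.
        space = text.find(' ', idx + size)
        pad = (space - idx) if space >= 0 else size
        chunks.append(text[idx:idx + pad])
        idx += pad
    if not chunks:
        # Entry shorter than one chunk: keep it whole.
        chunks.append(text[idx:])
    return chunks
```

Like the C# version, this deliberately drops a trailing fragment once at least one full chunk exists, on the theory that very short excerpts match too much.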
https://blogs.msdn.microsoft.com/jmstall/2005/08/22/sample-code-for-plagiarism-searcher-tool/
[4.1.1] Controller's modules load problem

If in a controller I set the model and store configs, I get a file load error with a strange path: app/controller/Main.js/model/file.js. I took a look at the controller's "onClassExtended" method, and it seems this line is the reason the load fails:

Code:
namespace = Ext.Loader.getPrefix(className) || match[1]; // = "AppNamespace.controller.Main" || "AppNamespace";

Last edited by vadimv; 1 Oct 2012 at 3:27 AM. Reason: sentence correction

Not really sure what the bug is here, can you elaborate a bit?
Evan Trimboli, Sencha Developer

Is there something wrong here? Is there something else to do after app generation with Sencha Cmd 181 on Mac OS X 10.6.8 (besides enabling the Loader) so as not to receive that loading error? Reproducing it is very simple: just generate the app, then generate a model, then try to build the app.

Vadimv, did you find any solution? I encountered the same problem. I am not 100% sure, but this may have something to do with where you put your models and views declarations. If you define them inside Ext.application everything seems to work OK, but if you define them in the controller then it fails like you said. For example, say you have controller Car and model Car. If you define models: ['Car'] inside controller Car, the Loader tries to load /app/controller/Car.js/model/Car.js, which naturally fails. But if you define models: ['Car'] inside Ext.application then it should work. Where should we define models and views? I would like to declare them in the corresponding controller, not in Ext.application.
If I want to define a store in the corresponding controller, I have to use the fully qualified name:

Code:
requires: ['MyCarApp.stores.Car'],
stores: ['MyCarApp.stores.Car']

I am using 4.1.1a and Sencha Cmd v3.0.0.190.

devnullable, yes I know about all these things; my temporary solution is to take the namespace from match[1]. I remember that in previous versions I didn't have such issues - I would just specify the app's path in the Loader and everything worked - but with the new Sencha Cmd something works differently, though I don't yet know what. The getPrefix function has the following code:

Code:
if (paths.hasOwnProperty(className)) return className; }

I don't know what the purpose of the shown code is, or why it returns a fully qualified path. Any update on this? Regards
https://www.sencha.com/forum/showthread.php?244464-4.1.1-Controller-s-modules-load-problem&p=893740&viewfull=1
I am designing a WinForms client that should get around 30 media files (mp3, mp4, etc.), each around 10 MB in size. Every X minutes the client will check for updates, and if it finds any it should download the new media files from the server.

What is the most efficient/fast way of transferring those files to the client?
1. FTP?
2. Binary format with Windows Communication Foundation? (and I can notify the client about changes, so there's no need to check for updates)
Any other options or considerations (security?)

That depends on whether your application should work behind firewalls, whether the server is supposed to operate in a LAN/WAN/the internet, and whether there are any restrictions on the protocol you use. Usually, using TCP gives the most efficient results, but it doesn't work well with most firewalls (most network administrators block things on their firewalls). HTTP/FTP will be your second choice. To use these protocols, you will need to take a look at the System.Net namespace; see the WebRequest.Create method, and take a look at the FileWebRequest, FtpWebRequest, and HttpWebRequest classes. You will need the Response companion classes of these (FileWebResponse, FtpWebResponse... and the abstract WebResponse class). If you decide to implement using raw TCP, you will need to look at the sockets infrastructure, and you will need access to the server application code. Again, it all depends on your application and how it is supposed to contact the server. Is the server a web server or a custom application you are working on?

Hmm... must you transfer the files if they're on your own network? Scanning every few minutes will be expensive if using TcpClient/sockets, since you have to establish a connection every time, and so on, and would not be efficient. The best way of doing this would be in raw binary format. But still - why do you have to transfer the files locally if they're on your own network?
Just curious to see if I can provide a better solution and architecture for your app. From what I understand, the media files are yours and are on your server, and the WinForms client will be used by your users to view the files and at the same time be updated on any changes to the files.

Now, let's suppose your users are located on your network at first. That leaves you with an open set of options, and your main concerns become performance and any security issues you may have with users modifying the files. In this scenario, I think the best approach will be to transfer them using mere IO through UNC paths. That will require you to modify permissions on the files to allow only reading for the public. That should be relatively easy and secure over your network using the NTFS and AD permissions infrastructure.

The next step will be assuming your users are all over the planet, with your files exposed merely through web technologies (FTP/TCP/HTTP, etc.). In this scenario, and if what I remember is correct, performance will be best through FTP, then TCP, then HTTP (in units of speed) because of the extra headers and sessions added by each protocol in the chain. However, a methodology that is gaining much popularity and praise is using web services for your purpose. This will be through creating a web service that exposes methods to download the files and others to query update information. This will give you the following extras:

1. You're in control of the security mechanism, and you can use layered security levels (with NTFS at the bottom, web forms authentication above it, and maybe any other security protocol above them all (session tickets, expiry policy, etc.)).
2. You can totally hide your files deep in your network, as the web service is the users' point of contact; the user will only have the address of the service and nothing more - no information on where your files are, what they are named, etc.
3.
You can later upgrade your security measures, enhance the implementation of the web service, and change the files' location and format with nearly no cost (if any at all) on the client side.
4. You can notify the users of updates not only to the media files, but also to the client program itself, and make it possible for them to update the client without requiring any experience in using computers.
5. You can support pausing/stopping/resuming download of media files in your client.

However, this comes at the cost of performance. Web services operate over HTTP, and that - many will argue - is slower than other protocols. I don't know about you, but for me, just the ability to give my users the feature of pausing/resuming their download can be enough. In my opinion, this might just be your best option. BUT, that's just my opinion. Hope this helped.
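Whichever transport is chosen, the "check every X minutes" part of the original question reduces to diffing a server-side manifest (file name to content hash) against what the client already has, then fetching only the differences. A transport-agnostic sketch in Python - the manifest shape and function name here are invented for illustration, not part of any API discussed above:

```python
def files_to_download(server_manifest, local_manifest):
    """Return the names of media files the client must fetch:
    anything the server has that is missing locally, or whose
    content hash differs (i.e., the file was updated)."""
    return sorted(
        name
        for name, digest in server_manifest.items()
        if local_manifest.get(name) != digest
    )

# A client would call this every X minutes with a freshly
# downloaded server manifest, then fetch only the returned files.
```

The same diff works whether the files are then pulled over FTP, HTTP, a web service call, or a UNC path.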
http://www.windowsdevelop.com/windows-forms-general/getting-media-files-from-a-winform-55317.shtml
Creates and manages "processes" (really coroutines). More...

#include <coroutines.h>

List of all members.

Creates and manages "processes" (really coroutines).
Definition at line 328 of file coroutines.h.

Pointer to a function of the form "void function(PPROCESS)".
Definition at line 331 of file coroutines.h.

[private] Constructor.
Definition at line 97 of file coroutines.cpp.

Destructor.
Definition at line 118 of file coroutines.cpp.

Destroys the given event.
Definition at line 697 of file coroutines.cpp.

Creates a new event (semaphore) object.
Definition at line 686 of file coroutines.cpp.

Creates a new process.
Definition at line 491 of file coroutines.cpp.

Creates a new process with an auto-incrementing Process Id.
Definition at line 555 of file coroutines.cpp.

Creates a new process with an auto-incrementing Process Id, and a single pointer parameter.
Definition at line 560 of file coroutines.cpp.

Returns the process identifier of the currently running process.
Definition at line 603 of file coroutines.cpp.

Returns a pointer to the currently running process.
Definition at line 599 of file coroutines.cpp.

Definition at line 674 of file coroutines.cpp.

Definition at line 666 of file coroutines.cpp.

Moves the specified process to the end of the dispatch queue, allowing it to run again within the current game cycle.
Definition at line 312 of file coroutines.cpp.

Kills any process matching the specified PID. The current process cannot be killed.
Definition at line 613 of file coroutines.cpp.

Kills the specified process.
Definition at line 564 of file coroutines.cpp.

Temporarily sets a given event to true, and then runs all waiting processes, allowing any processes waiting on the event to be fired. It then immediately resets the event again.
Definition at line 717 of file coroutines.cpp.

If the specified process has already run on this tick, make it run again on the current tick.
Definition at line 275 of file coroutines.cpp.
Reschedules all the processes to run again this tick.
Definition at line 260 of file coroutines.cpp.

Kills all processes and places them on the free list.
Definition at line 139 of file coroutines.cpp.

Resets the event.
Definition at line 711 of file coroutines.cpp.

Give all active processes a chance to run.
Definition at line 222 of file coroutines.cpp.

Sets the event.
Definition at line 705 of file coroutines.cpp.

Set pointer to a function to be called by killProcess(). May be called by a resource allocator; the function supplied is called by killProcess() to allow the resource allocator to free resources allocated to the dying process.
Definition at line 662 of file coroutines.cpp.

Make the active process sleep for the given duration in milliseconds.
Definition at line 468 of file coroutines.cpp.

Continuously makes a given process wait for given processes to finish or events to be set.
Definition at line 401 of file coroutines.cpp.

Continuously makes a given process wait for another process to finish or an event to signal.
Definition at line 345 of file coroutines.cpp.

[friend] Definition at line 334 of file coroutines.h.

Event list.
Definition at line 363 of file coroutines.h.

Active process list - also saves scheduler state.
Definition at line 351 of file coroutines.h.

The currently active process.
Definition at line 357 of file coroutines.h.

Pointer to free process list.
Definition at line 354 of file coroutines.h.

Auto-incrementing process Id.
Definition at line 360 of file coroutines.h.

Called from killProcess() to enable other resources a process may be allocated to be released.
Definition at line 381 of file coroutines.h.

List of all processes.
Definition at line 348 of file coroutines.h.
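The scheme these members describe - processes created as coroutines, each given a chance to run on every schedule() call, and dropped when they finish - can be mimicked with Python generators. This is only a loose, illustrative analogy (all names are mine), not ResidualVM's C++ implementation:

```python
from collections import deque

class TinyScheduler:
    """Minimal cooperative scheduler: each 'process' is a generator
    that yields whenever it wants to give up control."""
    def __init__(self):
        self.ready = deque()

    def create_process(self, gen):
        self.ready.append(gen)

    def schedule(self):
        """Give every active process a chance to run one step."""
        for _ in range(len(self.ready)):
            proc = self.ready.popleft()
            try:
                next(proc)               # run until the next yield
                self.ready.append(proc)  # still alive: requeue it
            except StopIteration:
                pass                     # process finished: drop it

log = []

def worker(name, steps):
    for i in range(steps):
        log.append((name, i))
        yield  # cooperative yield point

sched = TinyScheduler()
sched.create_process(worker("a", 2))
sched.create_process(worker("b", 1))
while sched.ready:
    sched.schedule()
```

Each schedule() call here corresponds to one "tick": every live process runs up to its next yield before the next tick starts.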
https://doxygen.residualvm.org/de/d14/classCommon_1_1CoroutineScheduler.html
Microsoft Azure Advisor Client Library for Python

Project description

Microsoft Azure SDK for Python

This is the Microsoft Azure Advisor Client Library. Azure Resource Manager (ARM) is the next generation of management APIs that replace the old Azure Service Management (ASM). This package has been tested with Python 2.7, 3.4, 3.5, 3.6 and 3

2.0.1 (2018-10-16)

Bugfix

- Fix sdist broken in 2.0.0. No code change.

2.0.0 (2018-10-15)

Features

- Model ResourceRecommendationBase has a new parameter extended_properties
- Client class can be used as a context manager to keep the underlying HTTP session open for performance

General Breaking changes

This version uses a next-generation code generator that might introduce breaking changes.

- Model signatures now use only keyword-argument syntax. All positional arguments must be re-written as keyword-arguments. To keep auto-completion in most cases, models are now generated for Python 2 and Python 3. Python 3 uses the "*" syntax for keyword-only arguments.
- Enum types now use the "str" mixin (class AzureEnum(str, Enum)) to improve the behavior when unrecognized enum values are encountered. While this is not a breaking change, the distinctions are important, and are documented here: At a glance:
  - "is" should not be used at all.
  - "format" will return the string value, where "%s" string formatting will return NameOfEnum.stringvalue. Format syntax should be preferred.
- New Long Running Operation:
  - Return type changes from msrestazure.azure_operation.AzureOperationPoller to msrest.polling.LROPoller. External API is the same.
  - Return type is now always a msrest.polling.LROPoller, regardless of the optional parameters used.
  - The behavior has changed when using raw=True. Instead of returning the initial call result as ClientRawResponse, without polling, this now returns an LROPoller. After polling, the final resource will be returned as a ClientRawResponse.
  - New polling.
Note

- azure-mgmt-nspkg is not installed anymore on Python 3 (PEP420-based namespace package)

1.0.1 (2018-02-13)

- Fix list_by_subscription return type
- Fix list_by_resource_group return type

1.0.0 (2018-01-16)

- GA Release

0.1.0 (2017-11-06)

- Initial Release
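The "str" mixin change described in the 2.0.0 notes is a general Python pattern rather than anything specific to this SDK. A small illustration - the enum name and values here are invented, not taken from the Advisor API:

```python
from enum import Enum

class Status(str, Enum):
    """Mixing in str means members compare equal to the plain
    strings a REST API actually sends back over the wire."""
    ACTIVE = "Active"
    SUSPENDED = "Suspended"

# A raw payload value matches the enum member directly:
assert Status.ACTIVE == "Active"
# .value still gives the wire string explicitly:
assert Status.SUSPENDED.value == "Suspended"
# Members can be used wherever a str is expected:
assert Status.ACTIVE.lower() == "active"
```

The format-vs-"%s" distinction the notes call out follows from this same mixin: the two formatting paths invoke different methods on the member, which is why the changelog recommends format syntax as the predictable one.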
https://pypi.org/project/azure-mgmt-advisor/
Component initialization order

Hi, I have these lines of code:

@
import Qt 4.7

ListView {
    id: listview
    width: 300
    height: 500

    MongoDB {
        id: db
        name: "testdb"
        host: "localhost"
        collections: [
            MongoCollection {
                id: mythings
                name: "things"
            }
        ]
    }

    delegate: mydelegate
    model: mythings.find({})

    Component {
        id: mydelegate
        Text { text: obj.toString() }
    }
}
@

My MongoDB is a QObject implemented in C++, implementing the QDeclarativeParserStatus interface so that it can wait until all properties of MongoDB are set, at which point I know which host to connect to. The problem is the line

@
model: mythings.find({})
@

This line is called before MongoDB's componentComplete() is called. That means the find({}) function is called before I can determine which host to connect to. Can anybody tell me how to do this in the correct order? Or am I generally not getting the point here?

Cheers!
Manuel

@Component.onCompleted: listview.model = mythings.find({})@

should set the model after MongoDB is set up. Another option might be to expose a MongoQuery object that provides the results of a search as a property (which could then notify once it had a result). One of the problems with using C++ functions from QML (in this case find()) is they don't always work well in bindings - there is no way to say "call this function again, because the results have changed". We're looking at adding something like this in future versions (similar to NOTIFY for properties), at which point your original code should work as well. Is your QML binding of MongoDB public? I'd love to have a look.

Regards,
Michael

Thanks, Michael. Yes, it's quite a nice idea. It's currently in a very early development state - as you can see - but I think I will publish a proof of concept this weekend. Once I've done that, I'll post it here.

Cheers,
Manuel

Hi Michael, have a look at "this thread": or "here for the plugin code":

Cheers,
Manuel
https://forum.qt.io/topic/4453/component-initializiation-order
On 17/03/07, Fawzi Mohamed <fmohamed at mac.com> wrote:
> * namespaces *
>
> First off something that disturbed me but does not seem to be discussed
> much are namespaces, or rather the lack of them.

I'm also in the middle of writing a medium-sized program in Haskell, but my experiences have been somewhat the opposite; I've found that although most people complain about the lack of namespaces, I haven't really missed them.

> Yes I know that haskell does have namespaces, but with object oriented
> languages you basically get a namespace for each class, in haskell you
> need modules, and I often like to have groups of related types in the
> same module.

Surely within a group of related types there'd be no overlapping names anyway?

> Records then also put everything in the module namespace, and it seems
> that this misfeature has already been discussed.

I like to prefix my record accessors with three letters that describe the type. For example, in my forum software, the author of a post can be pulled out of a Post value using pstAuthor. Although this is somewhat low-tech, it's a solution that works well and makes reading code easier, while of course solving the one-namespace problem. I don't really see why anything more complex would be needed.

--
-David House, dmhouse at gmail.com
http://www.haskell.org/pipermail/haskell-cafe/2007-March/023579.html
Hi,

Sorry for the previous message. It was sent by mistake while I was editing it.

My problem is that I am trying to use CLAPACK or LAPACK++ in a Visual C++ 6.0 environment on Windows, and I could not manage to get either to work. I installed both (CLAPACK and Lapackpp) on my machine, but I was not able to use either of them. The problems are:

- with Lapackpp: I was not able to compile the workspace that was enclosed in the source code
- with CLAPACK: I was able to compile the library, but when I tried to build a small test program I got some errors.

First I tried the following code:

#include "blaswrap.h"
#include "f2c.h"
...
dsygst(1, 'U', r, mat1, 1, mat2, 1, info);

and got the error:

error C2065: 'dsygst' : undeclared identifier

I tried using dsygst_ but got the same result. Then I tried including clapack.h, and got the following error:

clapack\clapack.h(7) : error C2061: syntax error : identifier 'real'

I even tried to include f2c.h, where real is declared, inside clapack.h, but it did not work. Now I am completely lost. I would appreciate any help you can give me.

Thanks,
Jose Bins
http://icl.cs.utk.edu/lapack-forum/archives/lapack/msg00287.html
Network Working Group                                   M. Mealling, Ed.
Request for Comments: 3305                            R. Denenberg, Ed.
Category: Informational                          W3C URI Interest Group
                                                             August 2002

       Report from the Joint W3C/IETF URI Planning Interest Group:
      Uniform Resource Identifiers (URIs), URLs, and Uniform Resource
           Names (URNs): Clarifications and Recommendations

Status of this Memo

   This memo provides information for the Internet community.  It does
   not specify an Internet standard of any kind.  Distribution of this
   memo is unlimited.

Abstract

   This document is a report from the joint W3C/IETF URI Planning
   Interest Group.  It addresses and attempts to clarify the terms
   "URI", "URL", and "URN" and the relationships among them, and makes
   recommendations for future work.

Mealling & Denenberg          Informational                     [Page 1]

RFC 3305            URIs, URLs, and URNs                     August 2002

Table of Contents

   1.       The W3C URI Interest Group . . . . . . . . . . . . . . .  2
   2.       URI Partitioning . . . . . . . . . . . . . . . . . . . .  2
   2.1      Classical View . . . . . . . . . . . . . . . . . . . . .  3
   2.2      Contemporary View  . . . . . . . . . . . . . . . . . . .  3
   2.3      Confusion  . . . . . . . . . . . . . . . . . . . . . . .  3
   3.       Registration . . . . . . . . . . . . . . . . . . . . . .  4
   3.1      URI Schemes  . . . . . . . . . . . . . . . . . . . . . .  4
   3.1.1    Registered URI schemes . . . . . . . . . . . . . . . . .  4
   3.1.2    Unregistered URI Schemes . . . . . . . . . . . . . . . .  4
   3.1.2.1  Public Unregistered Schemes  . . . . . . . . . . . . . .  4
   3.1.2.2  Private Schemes  . . . . . . . . . . . . . . . . . . . .  5
   3.1.3    Registration of URI Schemes  . . . . . . . . . . . . . .  5
   3.1.3.1  IETF Tree  . . . . . . . . . . . . . . . . . . . . . . .  5
   3.1.3.2  Other Trees  . . . . . . . . . . . . . . . . . . . . . .  5
   3.2      URN Namespaces . . . . . . . . . . . . . . . . . . . . .  5
   3.2.1    Registered URN NIDs  . . . . . . . . . . . . . . . . . .  5
   3.2.2    Pending URN NIDs . . . . . . . . . . . . . . . . . . . .  6
   3.2.3    Unregistered NIDs  . . . . . . . . . . . . . . . . . . .  7
   3.2.4    Registration Procedures for URN NIDs . . . . . . . . . .  7
   4.       Additional URI Issues  . . . . . . . . . . . . . . . . .  7
   5.
            Recommendations  . . . . . . . . . . . . . . . . . . . .  8
   6.       Security Considerations  . . . . . . . . . . . . . . . .  8
   7.       Acknowledgements . . . . . . . . . . . . . . . . . . . .  8
            References . . . . . . . . . . . . . . . . . . . . . . .  9
            Authors' Addresses . . . . . . . . . . . . . . . . . . . 10
            Full Copyright Statement . . . . . . . . . . . . . . . . 11

1. The W3C URI Interest Group

   In October, 2000 the W3C formed a planning group whose mission was to
   evaluate the opportunities for W3C work in the area of Uniform
   Resource Identifiers (URIs) and to develop a proposal for continued
   work in this area.  The Interest Group was composed of W3C members
   and invited experts from the IETF to participate as well.  This
   document is a set of recommendations from this group, to the W3C and
   the IETF, for work that can and should continue in this area.

2. URI Partitioning

2.1 Classical View

   During the early years of discussion of web identifiers (early to mid
   90s), people assumed that an identifier would be cast into one of two
   (or more) classes: an identifier might specify the location of a
   resource (a URL) or its name (a URN) independent of location, and
   there was discussion of generalizing this by the addition of a
   discrete number of additional classes; for example, a URI might point
   to metadata rather than the resource itself, in which case the URI
   would be a URC (citation).  URI space was thus viewed as partitioned
   into subspaces: URL, URN, and additional subspaces to be defined.
   The only such additional space ever proposed was Uniform Resource
   Characteristics (URCs), which was never developed, so in practice
   every URI was taken to be a member of one of these two classes.

2.2 Contemporary View

   Over time, the importance of this additional level of hierarchy
   seemed to lessen; the view became that an individual scheme did not
   need to be cast into one of a discrete set of URI types, such as
   "URL", "URN", "URC", etc.  Web-identifier schemes are, in general,
   URI schemes; a given URI scheme may define subspaces of its own.

2.3 Confusion

   There is still confusion in the community about these terms: what the
   difference is, and how they relate to one another.
   While RFC 2396, section 1.2, attempts to address the distinction
   between URIs, URLs and URNs, it has not been successful in clearing
   up the confusion.

3. Registration

   This section examines the state of registration of URI schemes and
   URN namespaces and the mechanisms by which registration currently
   occurs.

3.1 URI Schemes

3.1.1 Registered URI Schemes

   The official register of URI scheme names is maintained by IANA.  For
   each scheme, the RFC that defines the scheme is listed; for example,
   "http:" is defined by RFC 2616 [14].  The table lists 34 schemes (at
   time of publication of this RFC).  In addition, there are a few
   "reserved" scheme names; at one point in time, these were intended to
   become registered schemes but have since been dropped.

3.1.2 Unregistered URI Schemes

   We distinguish between public (unregistered) and private schemes.  A
   public scheme (registered or not) is one for which there is some
   public document describing it.

3.1.2.1 Public Unregistered Schemes

   Dan Connolly's paper provides a list of known public URI schemes,
   both registered and unregistered, a total of 85 schemes at time of
   publication of this RFC.  Fifty or so of these are unregistered (not
   listed in the IANA register).  Some of these URI schemes are obsolete
   (for example, "phone" is obsolete, superseded by "tel"), while some
   have an RFC but are not included in the IANA list.

3.1.2.2 Private Schemes

   A private scheme, by contrast, is one for which there is no public
   document describing it.

3.1.3 Registration of URI Schemes

   "Registration Procedures for URL Scheme Names" (RFC 2717) [1]
   specifies procedures for registering scheme names and points to
   "Guidelines for new URL Schemes" (RFC 2718) [2], which supplies
   guidelines.  RFC 2717 describes an organization of schemes into
   "trees".  It is important to note that these two documents use the
   historical term 'URL' when, in fact, they refer to URIs in general.
   Indeed, one of the recommended tasks in Section 5 is for these
   documents to be updated to use the term 'URI' instead of 'URL'.

3.1.3.1 IETF Tree

   The IETF tree is intended for schemes of general interest to the
   Internet community, and for those which require a substantive review
   and approval process.  Registration in the IETF tree requires
   publication of the scheme syntax and semantics in an RFC.

3.1.3.2 Other Trees

   Although RFC 2717 describes "alternative trees", no alternative trees
   have been registered to date, although a vendor-supplied tree ("vnd")
   is pending.  URI schemes in alternative trees will be distinguished
   by a "." in the scheme name.

3.2 URN Namespaces

   A URN namespace is identified by a "Namespace ID" (NID), which is
   registered with IANA (see Section 3.2.4).

3.2.1 Registered URN NIDs

   There are two categories of registered URN NIDs:

   o Informal: These are of the form "urn-<number>", where <number> is
     assigned by IANA.  There are four registered (at time of
     publication of this RFC) in this category (urn-1, urn-2, urn-3, and
     urn-4).

   o Formal: The official list of registered NIDs is kept by IANA.
   At the time of publication of this RFC it lists ten registered NIDs:

   * 'ietf', defined by "URN Namespace for IETF Documents" (RFC 2648)
     [3]

   * 'pin', defined by "The Network Solutions Personal Internet Name
     (PIN): A URN Namespace for People and Organizations" (RFC 3043) [4]

   * 'issn', defined by "Using The ISSN as URN within an ISSN-URN
     Namespace" (RFC 3044) [5]

   * 'oid', defined by "A URN Namespace of Object Identifiers" (RFC
     3061) [6]

   * 'newsml', defined by "URN Namespace for NewsML Resources" (RFC
     3085) [7]

   * 'oasis', defined by "A URN Namespace for OASIS" (RFC 3121) [8]

   * 'xmlorg', defined by "A URN Namespace for XML.org" (RFC 3120) [9]

   * 'publicid', defined by "A URN Namespace for Public Identifiers"
     (RFC 3151) [10]

   * 'isbn', defined by "Using International Standard Book Numbers as
     Uniform Resource Names" (RFC 3187) [15]

   * 'nbn', defined by "Using National Bibliography Numbers as Uniform
     Resource Names" (RFC 3188) [16]

3.2.2 Pending URN NIDs

   There are a number of pending URN NID registration requests, but
   there is no reliable way to discover them, or their status.  It would
   be helpful if there were some formal means to track the status of NID
   requests such as 'isbn'.

3.2.3 Unregistered NIDs

   In the "unregistered" category (besides the experimental case, not
   described in this paper), there are entities that maintain namespaces
   that, while completely appropriate as URNs, simply have not gone
   through the process of NID registration.

3.2.4 Registration Procedures for URN NIDs

   "URN Namespace Definition Mechanisms" (RFC 2611) [11] explains the
   procedures for registering URN namespace IDs.

4. Additional URI Issues

   There are additional unresolved URI issues not considered by this
   paper, which we hope will be addressed by a follow-on effort.
   We have not attempted to completely enumerate these issues; however,
   they include (but are not limited to) the following:

   o The use of URIs as identifiers that don't actually identify network
     resources (for example, they identify an abstract object, such as
     an XML namespace, or a physical object such as a book or even a
     person).

   o IRIs (International Resource Identifiers): the extension of URI
     syntax to non-ASCII characters.

5. Recommendations

   We recommend the following:

   1. The W3C and IETF should jointly develop and endorse a model for
      URIs, URLs, and URNs consistent with the "Contemporary View"
      described in section 2, and which considers the additional URI
      issues listed or alluded to in section 4.

   2. RFCs such as 2717 ("Registration Procedures for URL Scheme Names")
      and 2718 ("Guidelines for new URL Schemes") should both be
      generalized to refer to "URI schemes", rather than "URL schemes"
      and, after refinement, moved forward as Best Current Practices in
      the IETF.

   3. The registration procedures for alternative trees should be
      clarified in RFC 2717.

   4. Public, but unregistered, schemes should become registered where
      possible.  Obsolete schemes should be purged or clearly marked as
      obsolete.

   5.

6. Security Considerations

   This memo does not raise any known security threats.

7. Acknowledgements

   The participants in the URI Planning Interest Group are:

   o Tony Coates
   o Dan Connolly
   o Diana Dack
   o Leslie Daigle
   o Ray Denenberg
   o Martin Duerst
   o Paul Grosso
   o Sandro Hawke
   o Renato Iannella
   o Graham Klyne
   o Larry Masinter
   o Michael Mealling
   o Mark Needleman
   o Norman Walsh

References

   [1]  Petke, R. and I. King, "Registration Procedures for URL Scheme
        Names", BCP 35, RFC 2717, November 1999.

   [2]  Masinter, L., Alvestrand, H., Zigmond, D. and R. Petke,
        "Guidelines for new URL Schemes", RFC 2718, November 1999.
   [3]  Moats, R., "A URN Namespace for IETF Documents", RFC 2648,
        August 1999.

   [4]  Mealling, M., "The Network Solutions Personal Internet Name
        (PIN): A URN Namespace for People and Organizations", RFC 3043,
        January 2001.

   [5]  Rozenfeld, S., "Using The ISSN (International Serial Standard
        Number) as URN (Uniform Resource Names) within an ISSN-URN
        Namespace", RFC 3044, January 2001.

   [6]  Mealling, M., "A URN Namespace of Object Identifiers", RFC 3061,
        February 2001.

   [7]  Coates, A., Allen, D. and D. Rivers-Moore, "URN Namespace for
        NewsML Resources", RFC 3085, March 2001.

   [8]  Best, K. and N. Walsh, "A URN Namespace for OASIS", RFC 3121,
        June 2001.

   [9]  Best, K. and N. Walsh, "A URN Namespace for XML.org", RFC 3120,
        June 2001.

   [10] Walsh, N., Cowan, J. and P. Grosso, "A URN Namespace for Public
        Identifiers", RFC 3151, August 2001.

   [11] Daigle, L., van Gulik, D., Iannella, R. and P. Faltstrom, "URN
        Namespace Definition Mechanisms", BCP 33, RFC 2611, June 1999.

   [12] Berners-Lee, T., Fielding, R. and L. Masinter, "Uniform Resource
        Identifiers (URI): Generic Syntax", RFC 2396, August 1998.

   [13] Sollins, K., "Architectural Principles of Uniform Resource Name
        Resolution", RFC 2276, January 1998.

   [14] Fielding, R., Gettys, J., Mogul, J., Nielsen, H., Masinter, L.,
        Leach, P. and T. Berners-Lee, "Hypertext Transfer Protocol --
        HTTP/1.1", RFC 2616, June 1999.

   [15] Hakala, J. and H. Walravens, "Using International Standard Book
        Numbers as Uniform Resource Names", RFC 3187, October 2001.

   [16] Hakala, J., "Using National Bibliography Numbers as Uniform
        Resource Names", RFC 3188, October 2001.

Authors' Addresses

   Michael Mealling
   VeriSign, Inc.
   21345 Ridgetop Circle
   Sterling, VA 20166
   US

   Ray Denenberg
   Library of Congress
   Washington, DC 20540
   US
Every time I need to access another user's inbox in Outlook 2010, I need to go to File | Open | Other User's Folder. Obviously, this is very time consuming. If possible, I need a quicker way to access it, without adding the mailbox to the "open these additional mailboxes" list (the user doesn't have full access to the mailbox). Is there a quicker way?

This problem confounded me, too, but I've found the answer! Go to File > Account Settings > Account Settings. Select the Email tab and double click on your profile. Click the icon for More Settings. Go to the Advanced tab and click the Add... button. Type in the name of the user as it appears in your Exchange Global Address List. Click Apply, then OK. Close the remaining dialog box. The new inboxes will appear in your folder list. To then make one a favorite, right click the folder and select Show in Favorites, or drag the particular folder (Inbox, Sent Items, etc.) up to your Favorites and it will open every time.

From How-To Add Additional Mailbox to Outlook 2007: You first have to provide full access permissions on the Exchange server, then follow the instructions below. Just go to Tools » Account Settings. Click on your email and then click on Change. Click on More Settings » Advanced tab, and in the Mailboxes box type the name of the user mailbox and click on Add. After that, the user's mailbox will appear in your mail folders.

Sadly, it just isn't possible in Outlook 2010. Adding another mailbox is the only way:

- Add the mailbox as an additional mailbox.
- Set up Outlook delegation so that the users have read-only access.
- On the AD/Exchange mailbox rights, give the users read access.
- On Sent Items, and folders other than Inbox, set the permissions to Custom: Folder Visible, with Delete and Edit Items set to None.

Did a couple quick tests since I was looking for the same answer. Without adding as an additional mailbox, you cannot add it as a favorite.
Use "add additional mailbox" and you can add it to favorites, but if you then remove the additional mailbox, the favorite drops.

That being said, I was surprised to see all the answers of "you need full access to the mailbox." That's just not true. If you only want to see the inbox, grant the user whatever level of access you feel comfortable with to the inbox folder. Then, on the top level (where it has the email address or mailbox name), right click and select Data File Properties (2010) or Properties (2007 and earlier). Select the Permissions tab. Grant only Reviewer access to the "top" of the mailbox. Now, if you add using "Additional Mailboxes to open", you will only see the folders under the mailbox you have been granted access to.

You need to do the following: Of course, you will need permissions to the inbox first. If you need access to many inboxes, you probably want to set the inbox as a "Favorite". I was looking for the answer to this question and found the answer here:

While this will not add the folder to the favourites, you can get quick access to another user's folder by creating a new button in the menu bar pointing to this macro:

Sub OpenSharedInbox()
    Dim myOlApp As Outlook.Application
    Dim myNamespace As Outlook.NameSpace
    Dim myRecipient As Outlook.Recipient
    Dim myFolder As Outlook.MAPIFolder

    Set myOlApp = CreateObject("Outlook.Application")
    Set myNamespace = myOlApp.GetNamespace("MAPI")
    Set myRecipient = myNamespace.CreateRecipient("<username>")
    myRecipient.Resolve
    If myRecipient.Resolved Then
        Set myFolder = myNamespace.GetSharedDefaultFolder(myRecipient, olFolderInbox)
        ' This opens the folder in the current window
        Set myOlApp.ActiveExplorer.CurrentFolder = myFolder
        ' This opens the folder in a new window
        ' myFolder.Display
    End If
End Sub

Very simple! Just use:

add-mailboxfolderpermission username:\inbox -user mailbox -accessrights editor/owner/reviewer

(substituting any other folder you need for inbox). Then you'll need to do this again as username only.
This will make sure you give rights to username:\top of information store, which is what Outlook needs to be able to "pin" it to Outlook or make it sticky without having to add the folder all the time.

You can also add "Other User's Folder" to the Quick Access Toolbar.
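Spelled out as Exchange Management Shell commands, the two permission grants described above would look something like the following sketch. The mailbox name shared and the user jdoe are placeholders, and the exact syntax for the root-folder ("top of information store") identity can vary by Exchange version, so treat this as a starting point rather than a recipe:

```powershell
# Grant read access to the Inbox folder itself
Add-MailboxFolderPermission -Identity "shared:\Inbox" -User jdoe -AccessRights Reviewer

# Grant access at the top of the information store so Outlook
# can "pin" the mailbox without re-adding the folder each time
Add-MailboxFolderPermission -Identity "shared:\" -User jdoe -AccessRights Reviewer
```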
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

Hi Carlo,

I am sorry for the delay in reviewing this straightforward submission.

>> If you write a patch that improves the quality of the implementation
>> without hurting anything else, then I would tend to believe that your
>> patch would be acceptable.

> I made a patch that addresses the first of the mentioned problems.
> The patch is relative to a recent version of the 3.1 CVS source tree.

First observation: your patch neither improves nor decreases the QoI of libstdc++-v3, if adding a non-standard entry point (i.e. an extension known to users) is considered completely neutral. It is my observation that adding such extensions is rarely neutral. However, your request to add a properly-named method to merely report initialization status would appear on its face to be a minor issue for the library either way we slice it. On that analysis, I would have no problem with your patch being applied to mainline. The long-term effort to maintain this new non-standard entry point should be zero. The long-term C++ library maintainers would have to review it as well (since I have only ever considered patches that improve portability or internal implementation). I have included the part of your patch that adds the entry point and would ask them to provide the final approval/rejection.

I would still question how you intend to intermingle the implementation of the libraries (I seem to recall a thread where we discussed whether it was even legal for your library to override libstdc++-v3's operator new). But it is unclear that this patch should be judged by how you might use it, as long as your new library is taking the burden of future support of the entanglement (which you are). You have your reasons, and prospective users of your library will have to judge whether they accept a non-strict layering, which I would advocate against.

> [...]
> That means that instead of creating an ios_base::Init::Init
> object inside operator new (in order to use std streams), the
> operator new should test whether or not the std streams are
> already initialized and not use them until they are.

Index: include/bits/ios_base.h
===================================================================
RCS file: /cvs/gcc/gcc/libstdc++-v3/include/bits/ios_base.h,v
retrieving revision 1.13
diff -u -d -p -r1.13 ios_base.h
--- ios_base.h  2001/09/25 23:51:17     1.13
+++ ios_base.h  2001/10/17 14:10:37
@@ -301,6 +301,13 @@ namespace std
     static void
     _S_ios_destroy();

+    // _S_initialized() is an extension to allow debugger applications
+    // to use the standard streams from operator new.  _S_ios_base_init
+    // must be incremented in _S_ios_create _after_ initialization is
+    // completed.
+    static bool
+    _S_initialized() { return _S_ios_base_init; }
+
   private:
     static int _S_ios_base_init;
     static bool _S_synced_with_stdio;

Index: testsuite/27_io/ios_init_initialized.cc
[New test removed since no comment needed. Looks like you followed all library conventions for a new test.]

If this patch is installed, you should consider adding documentation to libstdc++-v3/docs/html/ext/howto.html (or is there somewhere better?).

Thank you for being patient while I thought about this.

Regards,
Loren
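For context, the intended use on the debugging-library side would look roughly like the sketch below. It assumes the proposed _S_initialized() extension is visible to the caller; since that member never shipped in standard libstdc++, treat this as pseudocode illustrating the motivation rather than working code:

```cpp
// Hypothetical debugging operator new: only touch std::cerr once the
// standard streams are known to be initialized.
#include <cstdlib>
#include <ios>
#include <iostream>

void* operator new(std::size_t size)
{
    void* p = std::malloc(size);
    if (std::ios_base::_S_initialized())   // the proposed extension
        std::cerr << "allocated " << size << " bytes\n";
    return p;
}
```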
Note: this is all for Xapian 1.0.18. Things (i.e., locations in files) might be in different locations in future versions.

If you're looking to build a search function into your website or application, there are a ton of choices out there. Xapian is one of those choices, and on the surface, it seems like a pretty good option as its feature list is appealing and complete. It also includes an indexer (omega) that can index, and add to the Xapian database, a long list of document formats, which is extremely appealing once you start diving into actually building an index and search component. However, all of the documentation and support seems to be built around various *nix platforms (either compiling or getting the library from precompiled packages, depending on the distribution). There are bindings for C#, and there are pre-compiled bindings for C# that you can download. This article discusses how to get started with these bindings, how to compile the rest of the Xapian package (omega), and the pitfalls of this library on the Windows environment.

Search technologies have three components (typically). The first component is typically referred to by the misnomer of document indexing. This is the process of actually extracting the text from documents (PDFs, web sites, Office documents, etc.). There are various technologies for doing this in the Windows space. The typical method is the use of IFilters (COM objects). There are also command line tools for most formats (more on this later).

The second component is the actual indexing of the text from those documents. There is a lot of theory involved on the best way to separate the words in a document, how to store them, and language tricks such as stemming a word. This is where a tool such as Xapian is very important, as the effort required to build an indexer on your own is very high.
The third component is the search component -- how to actually retrieve the stored information and documents from the created index. This component typically is tightly integrated with the indexing component, as it will be searching against the created index. Again, a tool such as Xapian is far preferable to something home grown in most cases, as the theory of querying is fairly complex.

The initial step is to download the bindings for C#. There are two important files: XapianCSharp.dll (the actual C# binding to Xapian's C++ DLL) and _XapianSharp.dll (the C++ Xapian core functionality). You will also need to download zlib; you'll need zlib1.dll from this download.

Create a new command line project in Visual Studio. Add a reference to XapianCSharp.dll. Add _XapianSharp.dll and zlib1.dll to the project and make sure that they are set to be copied to the output directory during compilation. Add a new class that will broker calls to Xapian (SearchManager.cs). Add an OpenWriteDatabase method to open a Xapian database in write mode, and an AddDocument method that will add a document to the index, storing some information about that document that can be used later when we search.

public class SearchManager
{
    private const string DB_PATH = @"c:\temp\xap.db";

    private static WritableDatabase OpenWriteDatabase()
    {
        return new WritableDatabase(DB_PATH, Xapian.Xapian.DB_CREATE_OR_OPEN);
    }

    /// <summary>
    /// Adds a document to the search index
    /// </summary>
    /// <param name="id">the application specific id for the particular item we're storing (ie. uploadId)</param>
    /// <param name="type">the type of object we're storing (upload, client, etc.)</param>
    /// <param name="body">the text to store</param>
    /// <returns>the index document id</returns>
    public static int AddDocument( int id, string type, string body )
    {
        // since the Xapian wrapper is PInvoking
        // into a C++ dll, we need to be pretty strict
        // on our memory management, so let's
        // "using" everything that it's touching.
        using( var db = OpenWriteDatabase() )
        using( var indexer = new TermGenerator() )
        using( var stemmer = new Stem("english") )
        using( var doc = new Document() )
        {
            // set the data on the document. Xapian ignores
            // this data, but you can use it when you get a
            // document returned to you from a search
            // to do something useful (like build a link)
            doc.SetData(string.Format( "{0}_{1}", type, id));

            // the indexer actually is what will build the terms
            // in the document so Xapian can search and find the document.
            indexer.SetStemmer(stemmer);
            indexer.SetDocument(doc);
            indexer.IndexText(body);

            // Add the document to the index
            return (int)db.AddDocument(doc);
        }
    }
}

Add another class to your project, SearchResult.cs, to handle the results of the queries.

public class SearchResult
{
    public int Id { get; set; }
    public string Type { get; set; }
    public int ResultRank { get; set; }
    public int ResultPercentage { get; set; }

    public SearchResult( string combinedId )
    {
        var parts = combinedId.Split('_');
        if ( parts.Length == 2 )
        {
            Type = parts[0];
            int i;
            if ( !int.TryParse( parts[1], out i ))
                throw new ApplicationException(string.Format(
                    "CombinedId ID part incorrectly formatted: {0}", combinedId));
            Id = i;
            return;
        }
        throw new ApplicationException( string.Format(
            "CombinedId incorrectly formatted: {0}", combinedId ));
    }
}

Now add a Search(string query) method to search the index.
private static Database OpenQueryDatabase()
{
    return new Database(DB_PATH);
}

/// <summary>
/// Search the index for the given querystring,
/// returning the set of results specified
/// </summary>
/// <param name="queryString">the user inputted string</param>
/// <param name="beginIndex">the zero indexed record to start from</param>
/// <param name="count">the number of results to return</param>
/// <returns>a list of SearchResult records</returns>
public static IEnumerable<SearchResult> Search( string queryString, int beginIndex, int count )
{
    var results = new List<SearchResult>();
    using( var db = OpenQueryDatabase() )
    using( var enquire = new Enquire( db ) )
    using( var qp = new QueryParser() )
    using( var stemmer = new Stem("english") )
    {
        qp.SetStemmer(stemmer);
        qp.SetDatabase(db);
        qp.SetStemmingStrategy(QueryParser.stem_strategy.STEM_SOME);
        var query = qp.ParseQuery(queryString);
        enquire.SetQuery(query);
        using (var matches = enquire.GetMSet((uint)beginIndex, (uint)count))
        {
            var m = matches.Begin();
            while (m != matches.End())
            {
                results.Add( new SearchResult(m.GetDocument().GetData())
                {
                    ResultPercentage = m.GetPercent(),
                    ResultRank = (int)m.GetRank()
                } );
                m++;
            }
        }
    }
    return results;
}

Edit the main function to add some data and then query it.
var docId = SearchManager.AddDocument(1, "upload", "this is my upload");
Console.WriteLine( "added: " + docId );

docId = SearchManager.AddDocument(2, "upload", "This will eventually be the contents of a PDF");
Console.WriteLine("added: " + docId);

docId = SearchManager.AddDocument(1, "client", "McAdams Enterprises");
Console.WriteLine("added: " + docId);

docId = SearchManager.AddDocument(1, "Message", "I think MSFT is wincakes!");
Console.WriteLine("added: " + docId);

var results = SearchManager.Search("upload", 0, 10);
foreach( var result in results )
    Console.WriteLine( result.Id + " " + result.Type);

results = SearchManager.Search("MSFT", 0, 10);
foreach (var result in results)
    Console.WriteLine(result.Id + " " + result.Type);

results = SearchManager.Search("PDF", 0, 10);
foreach (var result in results)
    Console.WriteLine(result.Id + " " + result.Type);

Compile the program, jump out to a shell, and try to run it. If you're lucky, it just works. If you're unlucky (like I was), then it just doesn't.
Unfortunately, the downloads that you got earlier don't have 64 bit bindings. To make matters worse, you need a 64 bit version of zlib1.dll as well (and they don't provide it). Running a 32 bit compiled ASP.NET application on a 64 bit server is a pain (it works, but you lose a lot of benefits due to running in WoW64 mode). So if you really want to use Xapian in your Windows 64 bit environment, you're going to have to get your hands dirty. And, I can't guarantee that there won't be any issues since you're going to get a lot of compiler warnings about lost precision. You're going to need Visual Studio .NET 2005 or 2008 with C++ installed (if you're like me, you never thought you'd need it, so you didn't install it). Go install it. Get the source code for zlib. Get the build files (one zip) and source code (three gzip-ed archives) from the Flax hosting site. Unzip the source code to a common location (I recommend c:\xapian to make your life easier). Under this directory, you should have three directories, one for xapian-bindings-x.x.x, one for xapian-core-x.x.x, and one for xapian-omega-x.x.x. Unzip the Win32 build scripts from Flax into the xapian-code directory (it should unzip into a win32 subdirectory). Install ActivePerl (32 bit is fine, use the MSI). C:\perl is a good place for it. Unzip the zlib source (to say c:\zlibsrc). Browse to the projects\visualc6 directory in the source. Open the zlib.dsw file. You'll likely be asked to convert the project, say Yes To All. Add an x64 build target (click the Win32 drop down, select Configuration Manager, under Active solution platform, click <new>, select x64, click OK, then Close). Select the LIB Release project from the dropdown and build it. Select the DLL Release project from the dropdown and build it. Create a zlib directory for use in building Xapian (say c:\zlib). Copy everything from the zlib source\projects\visualc6\win32_dll_release directory to the zlib directory. 
- Create an include folder in the zlib directory. Copy the zlib source to that include directory.
- Create a lib directory in the zlib directory. Copy everything from zlib source\projects\visualc6\win32_lib_release to that lib directory. In this directory, make a copy of the zlib.lib file and rename it to zdll.lib.
- Edit the xapian-core\win32\config.mak file (use Notepad). Edit the following lines:
- Edit the xapian-core\win32\makedepend\makedepend.mak file (use Notepad). Edit the following line:
- Edit xapian-core\common\utils.h and add the following lines:

/// Convert a 64 bit integer to a string
string om_tostring(unsigned __int64 a);

- Edit xapian-core\common\utils.cc and add the following lines:

string om_tostring(unsigned __int64 val)
{
    // Avoid a format string warning from GCC - mingw uses the MS C runtime DLL
    // which does understand "%I64d", but GCC doesn't know that.
    static const char fmt[] = { '%', 'I', '6', '4', 'd', 0 };
    CONVERT_TO_STRING(fmt)
}

- Open a "Visual Studio 2005 x64 Win64 Command Prompt" (it's under Visual Studio Tools on your Start menu).
- Change directories to c:\xapian\xapian-core.x.x.x\win32.
- Run "nmake". If everything goes according to plan, after a while, and a lot of compiler warnings about possible loss of data, it should be done compiling.
- Now run "nmake COPYMAKFILES" (this will copy the mak files to the appropriate places).
- There's a "bug" in the coding for the compilation of the bindings for x64. Run "set libpath", and if the result ends in a semicolon (;), then you need to reset the libpath in order to compile the bindings. In my case, I needed to run "set LIBPATH=C:\windows\Microsoft.NET\Framework64\v2.0.50727".
- Change directories to c:\xapian\xapian-bindings.x.x.x\csharp and run "nmake". This should compile the bindings. The bindings end up in c:\xapian\xapian-core-x.x.x\win32\Release\CSharp.
Copy those files to the project that you created above, and it should work on 64 bit computers (you'll have to remove/re-add the reference to XapianCSharp.dll since it will be signed differently, and recompile). Change directories to c:\xapian\xapian-omega.x.x.x and run "nmake". This should compile the omega component of Xapian.

The Xapian features page is a bit of a bait and switch. It says "The indexer supplied can index HTML, PHP, PDF, PostScript, OpenOffice/StarOffice, OpenDocument, Microsoft Word/Excel/PowerPoint/Works, Word Perfect, AbiWord, RTF, DVI, Perl POD documentation, and plain text." However, once you dig into the Omega documentation, it turns out it relies on other components in order to actually parse most other document types.

So for the vast majority of the documents you'll be interested in parsing, Omega will require other third party applications -- which may or may not be available for your use on the Windows environment. It also appears that Omega will call these external applications and read their output to parse the text of the document. While this isn't necessarily a bad way to accomplish the task of extracting text out of documents, it can potentially become a point of failure that will be extremely difficult to track down, because there will be little logging (if any) of the errors that occur when calling external applications.

In the C# world, Lucene.NET is the most common search "engine" used. It has its own issues (no recent official releases, poor documentation, performance concerns over large data sets since it's running in managed code, etc.), however it must be evaluated as well. It does not offer a tool such as Omega, so you will be responsible for extracting the data from documents (via IFilter or the same external programs as Omega).
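To illustrate the same shell-out approach Omega takes, here is a rough C# sketch of extracting text from a PDF by running an external converter and handing its output to the AddDocument method from earlier. The converter (pdftotext, from the Xpdf/Poppler projects) and the file paths are assumptions -- substitute whatever extraction tool you actually have installed:

```csharp
using System.Diagnostics;

public static class PdfTextExtractor
{
    // Runs `pdftotext <file> -` and captures the extracted text from stdout.
    public static string Extract(string pdfPath)
    {
        var psi = new ProcessStartInfo("pdftotext", "\"" + pdfPath + "\" -")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };
        using (var proc = Process.Start(psi))
        {
            // read before waiting so a large document can't deadlock the pipe
            string text = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();
            return text;
        }
    }
}

// hypothetical usage: index a PDF upload
// var body = PdfTextExtractor.Extract(@"c:\temp\upload2.pdf");
// SearchManager.AddDocument(2, "upload", body);
```

As the article warns, any failure of the external tool here surfaces only as empty output, so in real code you would also want to check proc.ExitCode and log it.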
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string character;
    cout << "capacity: " << character.capacity() << endl
         << "Input a sequence: \n";
    getline(cin, character);

    string reversestring;
    int counter = 0;
    for (int i = character.length() - 1; i >= 0; i--)
    {
        reversestring[counter++] = character[i];
    }
    cout << reversestring;
    return 0;
}

Messing around with a project above, and I've hit an error I didn't expect. The program compiles fine, but whenever the user puts in their input I get the error "Debug Assertion Failed". What can I do about that? (I know there are probably problems with counter++ where it's at, but I noticed a lot of C++ code puts increments within the variable and liked the room it saves, so I'm messing around with that. Also I need to learn to use pointers, but that's something I plan on doing after I get it working this way.)
Hello,

thanks a lot for all of your answers and explanations, they were really useful!

Besides using an import alias, I found the following "workaround": if I replace the static import in MainClass with a static method pointer like this

class MainClass {
    public static def myMethod = SomeOtherClass.&myBackingMethodThatsNotPublic
    // [...]
}

then it's working as expected. Might be useful if the names must be identical, e.g. in a DSL.

Regards
Johannes

From: MG <mgbiz@arscreat.com>
Sent: Saturday, 13 April 2019 00:22
To: users@groovy.apache.org
Subject: Re: Static imports seem to win over a method in closure's delegate

See

For Web GUI programming we use Groovy together with Vaadin, and I recently had some cases where e.g. an anonymous class method calls a Groovy closure which calls a Groovy closure which calls a method of the original containing class. In these cases it can become hard to have the expected method get called, and I also found that (depending on the situation) either using import aliasing or introducing a uniquely named helper method in the right class can clear things up quick & easy.

Cheers,
mg

On 12/04/2019 23:14, Paul King wrote:

Using import aliases can be a good workaround for such a case.

On Sat, Apr 13, 2019 at 4:58 AM Jochen Theodorou <blackdrag@gmx.org> wrote:
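For comparison, the import-alias alternative Paul suggested would look something like the sketch below, using the same hypothetical class and method names from the thread. Note that a static import, aliased or not, still requires the target method to be accessible from the importing class:

```groovy
// alias the static method under the name the calling code expects
import static SomeOtherClass.myBackingMethodThatsNotPublic as myMethod
```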
https://mail-archives.eu.apache.org/mod_mbox/groovy-users/201904.mbox/%3Ce4739555547b4406a1dc52c481ad3158@cursor.de%3E
popen, pclose - pipe stream to or from a process

Synopsis

#include <stdio.h>

FILE *popen(const char *command, const char *type);
int pclose(FILE *stream);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

popen(), pclose(): _POSIX_C_SOURCE >= 2 || _XOPEN_SOURCE

Errors

The pclose() function returns -1 if wait4(2) returns an error, or some other error is detected.

Conforming to

POSIX.1-2001. The 'e' value for type is a Linux extension.

See also

sh(1), fork(2), pipe(2), wait4(2), fclose(3), fflush(3), fopen(3), stdio(3), system(3)

Colophon

This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
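As a quick illustration of the "r" side of popen() (a hedged sketch, not part of the man page; the command strings are just examples), the caller reads the child's standard output through the returned FILE stream and then lets pclose() reap the child:

```cpp
#include <cstdio>
#include <string>

// Run `command` through the shell with popen() and collect its
// standard output into a string. Returns an empty string if the
// pipe could not be created.
std::string capture(const char* command) {
    std::string output;
    FILE* pipe = popen(command, "r");        // "r": read the child's stdout
    if (pipe == nullptr)
        return output;                       // fork()/pipe() failure
    char buf[256];
    while (fgets(buf, sizeof buf, pipe) != nullptr)
        output += buf;                       // accumulate line fragments
    pclose(pipe);                            // waits for the child to exit
    return output;
}
```

For example, capture("ls /tmp") would return the directory listing. The int returned by pclose() carries the child's wait status (to be examined with the wait-status macros) and would normally be checked as well.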
http://manpages.sgvulcan.com/pclose.3.php
A quick intro to HttpClient

The following is a techtip I wrote which wasn't used. Since it turned out pretty well I thought I'd post it here. Let me know what you think. Would you like more of these small self-contained tips?

A Quick Introduction to HttpClient

Java is great because it has classes for almost everything. For example, if you want to open a webpage you can do it with the java.net.URL class. But what if you want to use a POST instead of a GET request? What if you have a bunch of parameters that need to be properly parsed? What if you want to deal with cookies? The URL class simply isn't up to the task. However, in Java, you are always just a single jar away from the solution. In this case, you can use the HTTP Client library from the Apache commons project here. You will need to put the commons-httpclient.jar and commons-logging.jar files in your path. To use the HTTP Client, import the org.apache.commons.httpclient package and get started.

This is how to do a simple POST request:

import org.apache.commons.httpclient.*;
...
// initialize the POST method
PostMethod post = new PostMethod("");

// execute the POST
HttpClient client = new HttpClient();
int status = client.executeMethod(post);
String response = post.getResponseBodyAsString();

You can set up all of your parameters using the PostMethod class and then execute the POST with the executeMethod() on the post object. After the post has completed you can read the body of the response into a string or look at the HTTP status code. The classic 404 error is an example of an HTTP status code. The example above posted with a set of parameters.
If instead you want to just post an entire document, say an XML document or SOAP request, then you could do something like this:

StringBuffer content = new StringBuffer();
content.append(" ");
content.append(" information");
content.append(" ");
post.setRequestBody(content.toString());
client.executeMethod(post);

Now let's say you need to list the cookies set by a website. That's also very easy to do. The HttpClient object acts as a miniature browser. It will preserve all state about the current HTTP session, including cookies, in an HttpState object. After you have connected to the website through a get or post you can look at the cookies set by the webserver like this:

int status = client.executeMethod(get);
HttpState state = client.getState();
for (Cookie c : state.getCookies()) {
    System.out.println("cookie = " + c.getName() + "=" + c.getValue());
}

Since HttpClient maintains state about the HTTP session you can also use it to log into secure sites. For example, if I wanted to show the file listing of my secure webdav server, the same way a real browser would see it, I could log in like this:

// create credentials to log in
Credentials cred = new UsernamePasswordCredentials("myusername", "mypassword");
// set the realm to null, meaning use these credentials for all websites
client.getState().setCredentials(null, cred);
// get the html doc
GetMethod get = new GetMethod("");
int ret = client.executeMethod(get);
System.out.println("a browser sees = " + get.getResponseBodyAsString());

The HttpClient library is very powerful, enabling you to perform almost any HTTP task such as GETs, POSTs, getting and setting cookies, handling redirects, going through HTTP proxies, and even HTTPS authentication. And best of all, it is freely available under the Apache License. You can download the HttpClient library at

Comment by letakeda (2010-10-20 10:08):

Hi Josh, nice post! Can you give me some help, please? I'm trying to use these examples but when trying to run the app I'm receiving this... I know I will need to do something with the certificate, but what?

20/10/2010 12:09:07 org.apache.commons.httpclient.HttpMethodBase readResponseBody
INFO: Response content length is not known
    at org.apache.commons.httpclient.HttpConnection.flushRequestOutputStream(HttpConnection.java:828)
    at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2116)
    at Main.main(Main.java)
...
https://weblogs.java.net/blog/joshy/archive/2006/11/a_quick_intro_t.html
Description

Long quicklinks get truncated and are hardly readable, especially for sidebar themes.

Steps to reproduce

Add a page name with more than 25 chars to your quicklinks.

Details

This Wiki. All themes are affected.

Workaround

Patch MoinMoin/theme/__init__.py to use the shortenPagename method for links in the navibar (patch generated in a local svn repo: rev. 100 corresponds to the moin-1.5.4 release):

{{{
Index: C:/Sources/moin-aca-trunk/MoinMoin/theme/__init__.py
===================================================================
--- C:/Sources/moin-aca-trunk/MoinMoin/theme/__init__.py (revision 100)
+++ C:/Sources/moin-aca-trunk/MoinMoin/theme/__init__.py (revision 101)
@@ -278,7 +278,7 @@
-        title = wikiutil.escape(title)
+        title = self.shortenPagename(wikiutil.escape(title))
         link = '<a href="%s">%s</a>' % (pagename, title)
         return pagename, link
@@ -292,7 +292,7 @@
         thiswiki = request.cfg.interwikiname
         if interwiki == thiswiki:
             pagename = page
-            title = page
+            title = self.shortenPagename(page)
         else:
             return (pagename,
                     self.request.formatter.interwikilink(True, interwiki, page)
}}}

Another way to handle this "bug" (if moin itself will not be changed, which seems to be the current plan): override the method "splitNavilink" in your theme.

Discussion

What is the official opinion on this one? If the patch is not accepted because long names are OK for the modern theme, it would nevertheless be worthwhile adding an optional argument in order to reduce the length of the quicklinks for sidebar themes.

It is just incorrectly filed (rather a patch/feature request, not a hard bug). Besides the incorrect order of function calls in the first patch hunk (it needs to be escaped after shortening it), it looks ok. It does not meet our PatchPolicy, though. -- AlexanderSchremmer 2006-10-09 18:37:47

Patch was uploaded a while ago when 1.5.4 was current. ...if that was the point.

If "Long quicklinks get truncated and are hardly readable", then why do you want to truncate the title?

Because they don't fit into the sidebar of the right-sidebar theme, and it's even worse with left sidebar themes. This theme-specific problem can be solved in the theme by overriding the splitNavilink method (e.g. as done in DavidLinke/Sinorca4Moin). So it is not a real moin bug.

Plan

- Priority:
- Assigned to:
- Status: 1.6 shortens long quicklinks
http://www.moinmo.in/MoinMoinBugs/NavibarDoesNotShortenPagenamesForQuicklinks
Kevin Quick <quick <at> sparq.org> writes:
>
> On a bit more digging, I'm scaring myself.

These are both valid (H98):

    Data.Char.toUpper.Prelude.head.Prelude.tail $ "hello"    -- Strewth!
    "hello".$Prelude.tail.$Prelude.head.$Data.Char.toUpper   -- using (.$) = flip ($) as fake dot notation
    GHCi or Hugs ==> 'E'

The first example is good in that you can mix qualified names in with dot notation, and the lexer can bind the module name tighter than dot-as-function-composition. It's bad that not only are we proposing changing the meaning of dot, we're also changing the direction it binds. If you put in the parens:

    (Data.Char.toUpper.(Prelude.head.(Prelude.tail))) "hello"
    (("hello".$Prelude.tail).$Prelude.head).$Data.Char.toUpper

Or perhaps not so bad, left-to-right thinking?

Another syntax change about dot-notation is that it binds tighter **than even function application**:

    map toUpper customer.lastName

Desugars to:

    map toUpper (lastName customer)

Compare if that dot were function composition:

    (map toUpper customer) . lastName    -- of course this isn't type-valid

But wait! there's more! we can make it worse! A field selector is just a function, so I can select a field and apply a function all in one string of dots:

    customer.lastName.tail.head.toUpper    -- Yay!!

> I was trying to ... *but* also
> indicate that I specifically want the field selector rather than some
> arbitrary f. I wanted to extract the field f of every record in recs but
> clearly indicate that f was a field selector and not a free function.
>
> And this is finally our difference. I had wanted the no-space preceeding
> dot syntax (.f) to specifically indicate I was selecting a field. ...

You seem to be not alone in wanting some special syntax for applying field selectors (see other posts on this thread). H98 field selectors don't do this, they're just functions.
And there's me bending over backwards to make all Type-Directed overloaded-Name Resolution field selectors just functions, so you can mix field selectors and functions **without** special syntax. Example Yay!! above.

I'm puzzled why you want different syntax for field selectors. Can you give some intuition? Of course you can adopt a convention in your own code that dot-notation is for field selection only. (But you can't legislate for code you're importing.) (And Donn Cave wants to be able to ignore dot notation altogether.) AFAIC OO languages let you put all sorts of weird stuff together with dot notation. SPJ's got an example from Java in his TDNR.

I hope it's not because you name your fields and functions with brief, cryptic, one-letter codes!! You do have a coding convention in your production code to use long_and_meaningful_names, don't you?! So you can tell `customer' is a customer (record), and `lastName' is a last Name (field), etc.

> The issue can
> be resolved by explicit module namespace notation (ala. Prelude.map v.s.
> Data.List.map).

I want module namespace notation **as well as** dot notation. This is my import from a distant planet example. And it'll work, going by example Strewth! above.

> >))".

Yes, as per discussion above.

> With regards to module namespace notation, neither SORF nor DORF mentions
> anything that I found, but I'm assuming that the assertion is that it's
> not needed because of the type-directed resolution.

It's rather the other way round. We want to avoid qualified names, and type-directed resolution is the mechanism to achieve that ... Where this 'Records in Haskell' thread started is that currently if you want to have the same field name in different records, you have to declare the records in different modules, then import them to the same place, and still you can only refer to them by putting the module prefix.
(Unless you use the -XDisambiguateRecordFields flag, but this only works within the scope of pattern matches and explicit record/data constructors; it doesn't work for the free-floating selector functions.) And on balance, putting module prefixes everywhere is just too cumbersome.

So yes, the plan with SORF and DORF is that you can (mostly) use un-qualified names, and the resolution mechanism figures out which record type you're talking about. One difference between DORF and SORF is that I want the resolution mechanism to be exactly class/instance resolution. In contrast, both SORF and TDNR want some special syntax-level resolution for dot-notation, at the desugaring stage. I've re-read those sections in both proposals, and I still don't 'get' it. That's again what prompted me to try harder. I think I've ended up with an approach that's more 'Haskelly' in that the field selector is just an overloaded function, and we're familiar with them, and how they get resolved through type/instance inference. [I've just re-read that last sentence: I'm claiming to be more 'Haskelly' than SPJ!! The arrogance!]

There's one further difference between DORF and SORF/TDNR. I'm explicit about this, but I'm not sure what SORF's take is. I think SORF/TDNR keeps with current Haskell that you can't declare more than one record with the same field name in the same module. I want to declare many records in the same module with the same field name(s). This is my customer_id example: all three of the records for customer Name/Address, customer pricing, and customer orders have a customer_id field.

>").

This isn't really demonstrating the point. Both definitions of foo are monomorphic Rec -> String, there's no type-level difference. It's _not_ an overloaded definition of a single foo, it's a clash of names declared in different modules. So to tell them apart within the same scope, you always need the module qualifier.
The use of foo embedded in bar_pf is qualified, so bar_pf will always show the foo field within the record ("hi"). The foo in bar is not qualified. I'd expect the compiler to complain that it's ambiguous. (Looks like that's valid code today, if you change bar's RHS to x.$baz.$foo -- did you try it?)

And no, you can't concoct an example today that demonstrates DORF, because record Rec automatically declares function foo with a monomorphic type. You'll have to create some shadow field/functions (foo_, _foo, Proxy_foo, the Has instance and all the drama) as I did in the RHCT post.

> Apologies for putting you through the syntax grinder, and especially when
> I'm not really qualified to be operating said grinder. I know it's not
> the interesting part of the work, but it's still a part.
>
> Thanks, Anthony!
>
> -Kevin

Cheers
Anthony
http://www.haskell.org/pipermail/haskell-cafe/2012-February/099089.html
Wire to B-Spline (Contorno a BSpline)

Description

Convert a wire to a B-spline, and a closed B-spline to a closed wire.

Usage

- Select a wire or a B-spline.
- Press the Wire to B-Spline button.

A new object will be created; the original object will not be modified.

Note: if a closed wire with sharp edges is used to create a spline, the new object may have self intersecting curve segments, and may not be visible in the 3D view. If this is the case, manually set DataMake Face to False to see the new shape, or set DataClosed to False to create an open shape.

Options

- The original object will not be deleted after the operation; you must delete it manually if you want to.

Scripting

Not available as a dedicated function, but creating a new object from the points of another one is simple, for example:

The Points attribute of an object is a list with the points that comprise that object; this list can be passed to functions that build geometry. Each point is defined by its FreeCAD.Vector, with units in millimeters.

import FreeCAD, Draft

# Make a spline from the points of a wire
p1 = FreeCAD.Vector(1000, 1000, 0)
p2 = FreeCAD.Vector(2000, 1000, 0)
p3 = FreeCAD.Vector(2500, -1000, 0)
p4 = FreeCAD.Vector(3500, -500, 0)

base_wire = Draft.makeWire([p1, p2, p3, p4])
points1 = base_wire.Points
spline = Draft.makeBSpline(points1)

# Make a wire from the points of a spline
base_spline = Draft.makeBSpline([-p1, -1.3*p2, -1.2*p3, -2.1*p4])
points2 = base_spline.Points
wire = Draft.makeWire(points2)
https://wiki.freecadweb.org/Draft_WireToBSpline/es
I have this homework assignment that I need help with, and below is the code that I have so far. Any help would be appreciated. Thank you.

Write a program that tells what coins to give out for any amount of change from 1 cent to 99 cents. For example, if the amount is 86 cents, the output would be something like the following: 86 cents can be given as 3 quarter(s), 1 dime(s) and 1 penny(pennies). Use coin denominations of 25 cents (quarters), 10 cents (dimes) and 1 cent (pennies). Do not use nickel and half-dollar coins. Your program will use the following function (among others):

void computeCoin(int coinValue, int& number, int& amountLeft);

/* Preconditions: 0 < coinValue < 100; 0 <= amountLeft < 100.
Postconditions: number has been set equal to the maximum number of coins of denomination coinValue cents that can be obtained from amountLeft cents. amountLeft has been decreased by the value of the coins, that is, decreased by number*coinValue. For example, suppose the value of the variable amountLeft is 86. Then, after the following call, the value of number will be 3 and the value of amountLeft will be 11 (because if you take three quarters from 86 cents, that leaves 11 cents): computeCoins(25, number, amountLeft); */

Include a loop that lets the user repeat this computation for new input values until the user says he or she wants to end the program. (Hint: Use integer division and the % operator to implement this function.)

Here is what I have so far. I just need to implement this function in this program:

void computeCoin(int coinValue, int& number, int& amountLeft);

How do I do that?
Code:
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

void computeCoin(int coinValue, int& number, int& amountLeft);

// Main Program
int main( )
{
    // Variable Declarations
    int coin_value;
    int quarters;
    int dimes;
    int pennies;
    int total_amount;
    char yes = 'y';
    char Yes = 'Y';
    char again;

    // Enter code below
    do
    {
        quarters = 0;
        dimes = 0;
        pennies = 0;

        cout << " Enter the amount of change ";
        cin >> coin_value;
        cout << "\n";

        if (coin_value >= 25 && coin_value <= 100)
        {
            quarters = coin_value / 25;
            coin_value = coin_value % 25;
        }
        if (coin_value <= 24 && coin_value >= 10)
        {
            dimes = coin_value / 10;
            coin_value = coin_value % 10;
        }
        if (coin_value <= 4 && coin_value >= 1)
        {
            pennies = coin_value;
        }

        total_amount = (quarters * 25) + (dimes * 10) + pennies;
        cout << total_amount << " Cents Can be given as ";

        if (quarters == 1)
        {
            cout << quarters << " quarter, ";
        }
        if (quarters == 2 || quarters == 3)
        {
            cout << quarters << " quarters, ";
        }
        if (dimes == 1)
        {
            cout << dimes << " dime, ";
        }
        if (dimes == 2)
        {
            cout << dimes << " dimes, ";
        }
        if (pennies == 1)
        {
            cout << pennies << " penny. " << endl;
        }
        if (pennies < 4 && pennies > 2)
        {
            cout << pennies << " pennies. " << endl;
        }

        cout << "\n";
        cout << " Would you like to run the program again (Y or N)? ";
        cin >> again;
    } while (again == yes || again == Yes);

    cout << "\n\nEnd Program.\n";
    return 0;
}
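One way to sketch the requested function (not the original poster's code): as the hint says, integer division gives the coin count and % gives the remainder. Called once per denomination (25, then 10, then 1), it replaces the chain of range checks in the posted code:

```cpp
// Sketch of the assignment's helper: number becomes the maximum count
// of coinValue-cent coins in amountLeft, and amountLeft keeps the rest.
// Preconditions: 0 < coinValue < 100 and 0 <= amountLeft < 100.
void computeCoin(int coinValue, int& number, int& amountLeft)
{
    number = amountLeft / coinValue;      // integer division: how many coins fit
    amountLeft = amountLeft % coinValue;  // change still to be handed out
}
```

With amountLeft starting at 86, computeCoin(25, quarters, amountLeft) sets quarters to 3 and amountLeft to 11, matching the assignment's example; follow-up calls with 10 and 1 then yield 1 dime and 1 penny.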
http://cboard.cprogramming.com/cplusplus-programming/124065-cplusplus-homework-help.html
8 replies on 1 page. Most recent reply: Mar 31, 2006 3:11 PM by Eric Armstrong

It has taken me quite a while to decide to share these thoughts. I really like the folks who are implementing Groovy. I found them to be extremely helpful on the mailing list, and I like their basic concept. A truly portable scripting language is a terrific idea. Unfortunately, that portability seems to be acquired at the expense of the two features that make scripting languages so seductive: minimal coding and fast write/test/revise cycles.

On the other hand, even if Groovy eventually turns out to be terrific, it would still make sense to learn Ruby today--if only because so much has been written about it. That abundance makes it possible to become familiar with the strange new constructs like closures that are so unfamiliar to most Java hackers. So understanding both Java and Ruby should make it possible to easily transition to Groovy, when it makes sense.

I began looking at Groovy with high hopes and great expectations, but reluctantly had to conclude that it's going to be the technology of choice only in a very limited set of use cases--primarily ones in which the VM is already running. For general purpose scripting, the time it takes to crank up the VM and initialize the interpreter is a deal killer--as much as 30 or 40 seconds on my machine. I get better performance compiling and testing Java code in an IDE, especially if it supports incremental compilation in a background task. The start-up delay is sufficient by itself to rule out Groovy as a general purpose scripting language.

But unfortunately, it gets worse:

Things are still in flux, to the point that many discussions on the user list are aimed at deciding how the language should be implemented. That's terrific, from the agile methodology perspective. It means that user needs are feeding into the design. But it also means that the basic language design is not yet complete.

Lack of documentation.
It's to be expected at this stage of its evolution, but given that things haven't entirely settled down yet, it's pretty scary to contemplate using it for anything important.

Mostly unhelpful ANT integration. Ruby got this one right with Rake--super simple syntax plus the power of the language when you need it. Groovy has (so far) missed a serious opportunity here. You can call ANT tasks from a Groovy script, and you can invoke a Groovy shell from inside ANT, but neither one of those alternatives has anything like the expressive power or sheer convenience of Rake. (For more on Rake, see Martin Fowler's article at)

Dependence on the Java library. It provides a lot of power, but utilizing those libraries means the language hasn't provided the kinds of one-line operations that make it easy to do the simple things you can do with Ruby--for example I/O and XML parsing. (For more, see Make, Mailing Lists, and Ruby at)

Ruby has some really cool features like modules and structs. Maybe Groovy does, too. It's hard to say, given the documentation. But more likely the requirement that compiled Groovy scripts must function as JVM classes will be a limiting factor in this area.

Then there are features inspired by Unix command shells--the ability to expand variables inside of a double-quoted string, and the ability to execute a shell command inside of back-quotes and get back the return string. Of course, shell commands are platform-dependent. But in Ruby the very definition of a method is determined at runtime, so it's possible to code multiple-platform versions with no significant overhead--because the conditional branch that determines how the method operates only occurs once, when the method is first read.
Rake may well be taking advantage of the dynamic behavior when it implements the kind of cross-platform commands you typically need in a shell script: commands like cd, mkdir, and mkdir_p (to make a directory from a path, and all directories that need to be made to get to it). To tell you the truth, I don't know how any language intended for scripting can do without such things. (To my mind, those functions should be part of the standard library.)

And speaking of dynamic, runtime behavior, Ruby has lambda functions: the ability to generate code on the fly and execute it. Those turn out to be useful when defining Rake tasks, as Jim Weirich describes so nicely:

Ruby sure isn't perfect. There are many Perlisms I'd rather not have seen, and it has too many ways to do things, imo. That flexibility is great when you're writing code, but it makes things harder when you're reading it, because you have to master the idioms favored by whoever did the writing. (You can alias things in Ruby and you can even overload operators--so you could conceivably write something that /nobody/ could read.)

If you need a scripting language, Ruby is a great one. When it comes to building an application with multiple authors and long term maintainability, I'd say there's a lot of decision-making to do before going with Ruby. If you're going agile with a small team that can easily teach each other its favorite idioms, Ruby will probably work. If you adhere to a more structured methodology and have a large, long term project with many lines of code--so you can expect a fair amount of turnover among the people who will be maintaining it--then Ruby probably isn't a good fit.

Once the problems of documentation and open design issues are resolved, Groovy may yet find a home in that sort of application-building environment. It will provide dynamic flexibility, like Ruby, but with full JVM integration so you can build and use Java classes.
But when we begin to drag in all that machinery, we're not talking about something that feels like a "scripting" language anymore. We're talking about something that lies somewhere between a scripting language and an application programming language. So Groovy may yet find a home, but for the kind of fast development cycles I was looking for, Groovy simply isn't the language. At least not yet.

I still think that a truly portable dynamic scripting language is a great idea. I just don't see how it can be done in the JVM, except in some sort of server-side setting where the JVM is kept running and initialized, and there is some mechanism to feed it a script on the fly. But to rapidly develop those scripts, an "on the fly" mechanism is needed for testing, as well. In other words, the JVM has to be effectively part of the operating system--wholly integrated into it so the startup time disappears. Until that's done, I don't see how Groovy can provide the same kind of immediacy that Ruby provides. The other problems can eventually be resolved. But that startup hurdle seems insurmountable in theory, as well as in practice. Then again, maybe there are some viable usage scenarios I've overlooked. That would be nice.

Ruby:

def power(n)
  proc { |base| base**n }
end
square = power(2)
cube = power(3)
a = square(4)
puts a # => 16

Groovy:

def power(n) {
  { base -> base**n }
}
square = power(2)
cube = power(3)
a = square(4)
println a // => 16

Groovy:

add = { x -> { y -> x + y } }
println add(3)(4)

Ruby:

add = proc { |x| proc { |y| x + y } }
puts add.call(3).call(4)
http://www.artima.com/forums/flat.jsp?forum=106&thread=152728&start=0
C# - Break Statement

The break statement in C# has the following two usages:

When the break statement is encountered inside a loop, the loop is immediately terminated and program control resumes at the next statement following the loop.

It can be used to terminate a case in the switch statement.

If you are using nested loops (i.e., one loop inside another loop), the break statement will stop the execution of the innermost loop and start executing the next line of code after the block.

Syntax

The syntax for a break statement in C# is as follows:

break;

Example

using System;

namespace Loops
{
    class Program
    {
        static void Main(string[] args)
        {
            /* local variable definition */
            int a = 10;

            /* while loop execution */
            while (a < 20)
            {
                Console.WriteLine("value of a: {0}", a);
                a++;
                if (a > 15)
                {
                    /* terminate the loop using break statement */
                    break;
                }
            }
            Console.ReadLine();
        }
    }
}

When the above code is compiled and executed, it produces the following result:

value of a: 10
value of a: 11
value of a: 12
value of a: 13
value of a: 14
value of a: 15
http://www.tutorialspoint.com/csharp/csharp_break_statement.htm
Showcase your app to new users or explain functionality of new features.

It uses react-floater for positioning and styling. And you can use your own components too!

View the demo here (or the codesandbox examples)

Chat about it in our Spectrum community

npm i react-joyride

import Joyride from 'react-joyride';

export class App extends React.Component {
  state = {
    steps: [
      {
        target: '.my-first-step',
        content: 'This is my awesome feature!',
      },
      {
        target: '.my-other-step',
        content: 'This another awesome feature!',
      },
      ...
    ]
  };

  render () {
    const { steps } = this.state;

    return (
      <div className="app">
        <Joyride
          steps={steps}
          ...
        />
        ...
      </div>
    );
  }
}

If you need to support legacy browsers you need to include the scrollingelement polyfill.

Setting up a local development environment is easy! Clone (or fork) this repo on your machine, navigate to its location in the terminal and run:

npm install
npm link # link your local repo to your global packages
npm run watch # build the files and watch for changes

Now clone and run:

npm install
npm link react-joyride # just link your local copy into this project's node_modules
npm start

Start coding!
https://codeawesome.io/react.js/miscellaneous/react-joyride
Recent Notes

Displaying keyword search results 1 - 10

There are two ways to get the submit button:

Use the :submit selector (note the space between the form id and :submit):

$('#the-form-id :submit')

Use an attribute selector (also, space between the form id and the attribute selector):

$('#the-form-id [type="submit"]') // or $(...

jQuery recommends the latter if you are concerned about performance: "Because :submit is a jQuery extension and not part of the CSS specification, queries using :submit cannot take advantage of the performance boost provided by the native DOM querySelectorAll() method. For better performance in modern browsers, use [type="submit"]..."

... only explanation of why some Java EE API classes are stripped off methods implementations I can find is this JBoss forum post: What's the cause of this exception: java.lang.ClassFormatError: Absent Code? which also provides some workarounds for these crippled API classes. The explanation offered. Honestly, I don't see any logic in those statements. This is the only place any such explanation is offered. Yes, only from this JBoss forum post! There's no public...

..., "... You would never have guessed it. The test is: Modifier.isStatic(method.getModifiers()) ! Example code: import java.lang.reflect.Method; import java.la...; ...

... Use the unbind method to unbind an event. The following example unbinds three events: mouseenter, mouseleave and click. $('#control') .find('a') .css('cursor', 'def...
http://www.xinotes.net/notes/keywords/method/find/
Introduction

In this post, I will explain how to display a message from a Controller in a View, using a JavaScript alert MessageBox. The message sent from the Controller to the View will be displayed in a JavaScript alert MessageBox using the ViewBag object.

Controller

First, we create a new project using Visual Studio. Choose the MVC project template. Now, we create the Controller. The Controller has two action methods, mentioned below:

Action method for handling the GET operation. Inside this Action method, only the view is returned.

Action method for handling the POST operation. This Action method handles the form submission and it accepts the value of the form element as a parameter. The name value is fetched from the Form collection, and the current server date and time is set to a ViewBag object named "Message".

Controller Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace AlertMessge.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public ActionResult Index(string name)
        {
            ViewBag.Message = String.Format("Hello {0}.\ncurrent Date and time: {1}", name, DateTime.Now.ToString());
            return View();
        }
    }
}

VIEW

The View consists of an HTML form, which has been created using the Html.BeginForm method with the parameters mentioned below:

ActionName - Name of the Action. In this case, the name is Index.

ControllerName - Name of the Controller. In this case, the name is Home.

FormMethod - It specifies the Form Method, i.e. GET or POST. In this case, it will be set to POST.

The Form consists of two elements, i.e. a TextBox and a Submit Button.
The ViewBag property named Message is checked for NULL, and if it is not NULL its value is displayed using a JavaScript alert MessageBox.

View Code

@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width"/>
    <title>Index</title>
</head>
<body>
    @using (Html.BeginForm("Index", "Home", FormMethod.Post))
    {
        <input type="text" id="txtName" name="name"/>
        <input type="submit" id="btnSubmit" value="Get Current Time"/>
    }
    @if (ViewBag.Message != null)
    {
        <script type="text/javascript">
            window.onload = function () {
                alert("@ViewBag.Message");
            };
        </script>
    }
</body>
</html>

Output

Summary

In this post, I explained how a message sent from the Controller to the View is displayed in a JavaScript alert MessageBox using the ViewBag object. Thank you for reading!
https://morioh.com/p/dee41d0e40d6
CC-MAIN-2020-10
refinedweb
411
61.53
Yield to other ready threads at the same priority

#include <sched.h>

int sched_yield( void );

Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

The sched_yield() function checks to see if other threads, at the same priority as that of the calling thread, are READY to run. If so, the calling thread yields to them and places itself at the end of the READY thread queue.

The sched_yield() function never yields to a lower-priority thread. A higher-priority thread always forces a lower-priority thread to yield (that is, it preempts it) the instant the higher-priority thread becomes ready to run, without the need for the lower-priority thread to give up the processor by calling the sched_yield() or SchedYield() functions.

The sched_yield() function calls the kernel function SchedYield(), and may be more portable across realtime POSIX systems.

This function always succeeds and returns zero.

#include <stdio.h>
#include <stdlib.h>
#include <sched.h>

int fun( void );   /* forward declaration added so the example compiles cleanly */

int main( void )
{
    int i;

    for( ;; ) {
        /* Process something... */
        for( i = 0; i < 1000; ++i )
            fun();

        /* Yield to anyone else at the same priority */
        sched_yield();
    }

    return EXIT_SUCCESS; /* Never reached */
}

int fun( void )
{
    int i;

    for( i = 0; i < 10; ++i )
        i += i;

    return( i );
}
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/s/sched_yield.html
CC-MAIN-2022-27
refinedweb
206
72.97
This module implements rational numbers, consisting of a numerator and a denominator. The denominator can not be 0.

Example:

  import std/rationals
  let
    r1 = 1 // 2
    r2 = -3 // 4
  doAssert r1 + r2 == -1 // 4
  doAssert r1 - r2 == 5 // 4
  doAssert r1 * r2 == -3 // 8
  doAssert r1 / r2 == -2 // 3

Procs

func `$`[T](x: Rational[T]): string
  Turns a rational number into a string.
  Example:
    doAssert $(1 // 2) == "1/2"

func `*=`[T](x: var Rational[T]; y: Rational[T])
  Multiplies the rational x by y in-place.

func `*`[T](x: Rational[T]; y: T): Rational[T]
  Multiplies the rational x with the int y.

func `*`[T](x: T; y: Rational[T]): Rational[T]
  Multiplies the int x with the rational y.

func `+=`[T](x: var Rational[T]; y: Rational[T])
  Adds the rational y to the rational x in-place.

func `-=`[T](x: var Rational[T]; y: Rational[T])
  Subtracts the rational y from the rational x in-place.

func `-=`[T](x: var Rational[T]; y: T)
  Subtracts the int y from the rational x in-place.

func `//`[T](num, den: T): Rational[T]
  A friendlier version of initRational.
  Example:
    let x = 1 // 3 + 1 // 5
    doAssert x == 8 // 15

func `/=`[T](x: var Rational[T]; y: Rational[T])
  Divides the rational x by the rational y in-place.

func `div`[T: SomeInteger](x, y: Rational[T]): T
  Computes the rational truncated division.

func `mod`[T: SomeInteger](x, y: Rational[T]): Rational[T]
  Computes the rational modulo by truncated division (remainder). This is the same as x - (x div y) * y.

func abs[T](x: Rational[T]): Rational[T]
  Returns the absolute value of x.
  Example:
    doAssert abs(1 // 2) == 1 // 2
    doAssert abs(-1 // 2) == 1 // 2

func cmp(x, y: Rational): int
  Compares two rationals.
  Returns:
    - a value less than zero, if x < y
    - a value greater than zero, if x > y
    - zero, if x == y

func floorDiv[T: SomeInteger](x, y: Rational[T]): T
  Computes the rational floor division. Floor division is conceptually defined as floor(x / y). This is different from the div operator, which is defined as trunc(x / y). That is, div rounds towards 0 and floorDiv rounds down.

func floorMod[T: SomeInteger](x, y: Rational[T]): Rational[T]
  Computes the rational modulo by floor division (modulo). This is the same as x - floorDiv(x, y) * y. This func behaves the same as the % operator in Python.

func initRational[T: SomeInteger](num, den: T): Rational[T]
  Creates a new rational number with numerator num and denominator den. den must not be 0.
  Note: den != 0 is not checked when assertions are turned off.

func reciprocal[T](x: Rational[T]): Rational[T]
  Calculates the reciprocal of x (1/x). If x is 0, raises DivByZeroDefect.

func reduce[T: SomeInteger](x: var Rational[T])
  Reduces the rational number x, so that the numerator and denominator have no common divisors other than 1 (and -1). If x is 0, raises DivByZeroDefect.
  Note: This is called automatically by the various operations on rationals.
  Example:
    var r = Rational[int](num: 2, den: 4) # 1/2
    reduce(r)
    doAssert r.num == 1
    doAssert r.den == 2

func toInt[T](x: Rational[T]): int
  Converts a rational number x to an int. Conversion rounds towards 0 if x does not contain an integer value.

func toRational(x: float; n: int = high(int) shr 32): Rational[int] {.raises: [], tags: [].}
  Calculates the best rational approximation of x, where the denominator is smaller than n (default is the largest possible int for maximal resolution). The algorithm is based on the theory of continued fractions.
  Example:
    let x = 1.2
    doAssert x.toRational.toFloat == x

func toRational[T: SomeInteger](x: T): Rational[T]
  Converts some integer x to a rational number.
  Example:
    doAssert toRational(42) == 42 // 1
https://nim-lang.github.io/Nim/rationals.html
CC-MAIN-2021-39
refinedweb
665
57.57
30 October 2012 15:37 [Source: ICIS news]

SINGAPORE (ICIS)--PetroChina announced on Tuesday that its net profit in the first three quarters of 2012 decreased by 16% year on year to yuan (CNY) 87bn ($13.9bn, €10.8bn) on continuous refining losses and prolonged weakness in petrochemical markets.

The Chinese petrochemical major said that losses for its refining and chemicals division were broadly flat over the period, recording an operating loss of CNY37.4bn, compared with CNY38.4bn reported in the same period during 2011. The company lost CNY30.0bn from its refining business and CNY7.38bn from its chemicals operations during the nine-month period, attributing the shortfall to “prolonged weakness of the domestic petrochemicals market and the macroeconomic regulation and control over the prices of refined products.”

Total operating profit for PetroChina fell 8.7% to CNY130.0bn for the year to the end of September, despite an increase in the company’s oil exploration and production profits to CNY163.3bn from CNY160.8bn in the nine months ended 30 September 2011. Group turnover in the January-September period increased by 7.8% year on year to CNY1,598.3bn, the company said.

In the first three quarters, PetroChina processed a total of ... The company’s sales of oil products inched up 5.9% on year to ... Its oil and gas production increased 3.9% year on year to ...

China’s economy is set to continue expanding into the fourth quarter of 2012, according to PetroChina, but growth is likely to be slower than has been the norm in recent years because of Chinese government policy. “As a series of measures are being taken to maintain steady growth, the economy in China is expected to develop at a more stable pace, and the favourable policies for the petroleum industry will continue to be improved,” the company said.

($1 = CNY6.24, €1 = CNY8.06)
http://www.icis.com/Articles/2012/10/30/9608975/petrochina-net-profits-in-jan-sept-decrease-16-to-cny87bn.html
CC-MAIN-2014-42
refinedweb
316
58.08
Awesome video, thanks! Will the help still be extensible for 3rd parties? In the past you could provide installers that integrate into the VSIP help namespace for that purpose. In addition, is it possible to provide intranet help? For example, in our company we have many shared components that are updated quite frequently. We would like to centralize the help so that updates to the intranet help could be done by a nightly build. This way, nobody needs to install anything but just press F1.

The updating and downloading of new content looks better than VS2008. However, the lack of an index that you can drill down into as you type each character is a big negative. Why create a new system if right off the bat you don't have a feature that many people use? Just as intellisense provides discoverability by giving you an up-to-date list as you type each character, the help index provides the same analogue, rather than doing a search for most help lookups. I'd probably stay with VS2008 until this feature is available for VS2010.

Help in the web browser - seriously? Are we running Linux here? Who'd have thought regressing 15 years was the way forward.

I thought the same when I heard it the first time. Now I think it is actually not that bad. What are your concerns? In what way is help in the browser a step back?

Help is documentation, and that's EXACTLY what the browser was designed for (unlike web applications that are all the rage and yet rarely are worth a darn). The old help system was nothing more than an embedded browser... which buys you nothing, and costs you a lot (lack of plugins for FF users, for instance). It's actually about darn time they stopped using an embedded browser and started using open standards.

How horrible! 1. The browser is not a rich controls application. No tree-view control. No tabs and property sheets.
No easy-to-use keyboard shortcuts. (A) Can I use the table of contents to navigate the documentation using the cursors in the browser? Most probably not. It is all a set of links and links and more links. (B) Can I press ctrl+alt+f1, f2 and f3 to get the table of contents, the index and the search in your new browser-based viewer? Most probably not. By the way, in VS 2008, although it says that these shortcuts should work, they don't, so that is a bug in VS 2008 too. (C) Can I use alt+c and alt+n to get the table of contents and the index in the browser? (D) Can I browse the index using the cursors and tab, or is there no index in the browser? (E) Is search its own floating window, or does every use of it navigate to a new page in the browser? Of course the latter. But in your previous viewer I could keep the search results and view them one by one without having to go back and forward. (F) There is a reason why desktop applications are richer than the browser, even though everybody talks about the browser nowadays. 2. Can you at last decide on one help viewer for all your company? Windows has one, Office another, now you another. Where is the collaboration? 3. Your MSDN online search results page is not as simple as Google's and it still doesn't work as well, even after so many years. In particular, the interface has too many links. So now you are going to make that the default search experience? 4. You know how hard it is for me to navigate MSDN online? There is no Next and Previous link at the end of each help topic. And so every time I read a topic I have to go up to the table of contents again and again and find and click the next topic. It's hard, very hard. You go down to read, you finish, and you go up and up to find the title of what you are reading in the TOC and then find and click the next and the next topic. Over and over. Whilst in your previous help viewer, one can easily press alt+down and alt+up to navigate through the topics. How easy.
Can your "browser" do that? I agree with the first post regarding help on the intranet - that would be valuable when we've got 100+ developers to support. Or even just at home, if I have to rebuild my machine, so I don't have to download all the help again. Also, regarding the online/offline modes - is it possible to get the settings to check for content offline first and then online? It would be nice if this could be more seamless. When I use offline help and click on a link to something I haven't downloaded, I get a custom 404 screen asking if I want to go online, and there is a link. Once I click on the link a new browser window opens.

1. There's plenty of such "rich controls" on the web... most done through nothing but the DOM and JavaScript. Regardless, there's very little use for such controls when talking about reading documentation. (A) Keyboard navigation in the browser works just fine. If they had a TOC, you'd be able to navigate it with the keyboard. (B) Have you used GMail much? Keyboard shortcuts work just fine in the browser. You may not be able to keep the exact same shortcuts... but that shouldn't matter. (C) Rehash of (B). (D) Again, keyboard navigation would work fine. As for an index... watch the video. (E) Either could be done, but I'm sure the separate search window implementation wasn't done. I think most people wouldn't care, or might even prefer not to have a separate search window. Obviously you don't, and so you've got a legitimate complaint here, even if it's one you're likely to not get resolution to. (F) Agreed... but that richness is pointless for documentation. After all, that is PRECISELY what the browser was designed for. 2. Valid point... though I'm not sure how important. 3. The online MSDN search is no worse than the existing offline MSDN search, so comparing it to Google, in this context at least, is a bit pointless. 4. The browser CAN do that... but I'm willing to bet they didn't implement it.
Most people don't use the documentation that way either. It's a legitimate complaint, and one that should be considered, though. Just doesn't mean the browser isn't the right implementation.

Why should someone write the whole "xmlreader" to search for it... I really want the search index back... just type three or four letters and I am there... The current way of showing the table of contents confuses me, because there is no visual clue as to where I am in the documentation... the convention of peer, child and parents doesn't quite make the cut here... you should have some kind of visual indication.

I agree with the previous critics. The single most used feature of the current help explorer, for me at least, is that large index where dynamic search happens. The fact that classes and members appear next to other similar ones has helped me a lot to discover new things. Definitely need some kind of Index UI.

I spend all my time in dexplore on the Index tab because I usually know exactly what I am looking for, and things are indexed in such a way that it usually provides a great experience. In lieu of an index (for v1 at least) you could do some kind of type-ahead drop-down in the search box. That gives you both paradigms and lets us Index junkies tap into the power of a real search interface. It would also be nice to be able to pin the type-ahead box in place and navigate to search results and help content in a frame, so that I can quickly browse through related search results without the hassle of constantly going back (either via the back button or through tab management). If you implemented this in such a way that I don't lose my search result context and can quickly click through different results, it would be very much like the current Index UI in dexplore but miles better, because the search is much richer.
If you could get these requests in for 1.0 (or soon thereafter) I would absolutely love the new system, and this is coming from someone who really likes and gets a lot of use out of dexplore.

It's been said many times that MS changes things based on how they think they should work, but doesn't truly ask lots of users... and in most cases doesn't give you the option to do it differently. Herding everyone into searching for everything rather than using the index is stupid. Please give back the index. What good does it do to remove something that wasn't hurting anyone in the first place? The index is much easier for me, not only to logically find things, but to see where they fit in the parent topic. Why would a company that promotes thinking intelligently guide people into a lack of thinking by just searching for everything?

When I started using VS 2010 B2 and started using the "Help Viewer" (so-called), I thought I must have had something mis-configured. This can't be the real help system for VS 2010. Unfortunately, this Channel 9 video indicates it is. I'm still in shock to think this is what Microsoft intends to release w/VS 2010. This is garbage -- total, unbridled garbage, compared to what we had in VS 2008. I agree that an HTML-based "viewer" could be made functional with enough AJAX, but as it stands now, this "viewer" is unusable. I'll keep VS 2008 installed just for the better help system. I am so disappointed, and I believe 99% of the other developers will be too. I just feel like uninstalling VS 2010 now. This is so disappointing. (As an aside, if Microsoft intends to go with a web-based approach to the Help System, I think they should go with HTML 5 rather than XHTML, since no further work is being done on XHTML by the W3C, as far as I know.)
Couldn't agree more - MS help has been awful on VS and Excel for ever - far too many search items returned in VS, little intelligence used, search in Excel can't even find words that aren't keywords - entirely for users who already know what they're looking for. When you're in a hole, the best advice is to stop digging - just link F1 to do a Google search and join the real world - and focus on adding intelligent content that a decent search engine can find.

One feature of DExplore I'm missing in the new help system (and MSDN online) is the "Sync with Table of Contents" button. Many times I'll do a search (or use the index) to find a .NET type and then I want to see the type in the context of the namespace it was defined in. That way I can quickly surf related types.

tbaxter >> I absolutely agree! When I entered that "help system" for the first time I thought that there was something wrong with my VS installation, or that it was not present in Beta 2, so they temporarily redirected me to the internet (and incorrectly). Then I realized that the address in the browser is localhost! How dare you steal my port 80!? This will prevent me from using local help on my machine at work.

Help Library Manager >> This "application" looks like it was borrowed from some children-targeted application. It has an absolutely un-Visual-Studio look and user experience. For so few options, one form is perfectly enough (when used by a developer), not something between a wizard and a popup. Why is it not modal, and why does it constantly open in the background?

UI in general >> Since the release of Office 2007, every change Microsoft has made to the user interface has upset me. For everybody who wants to read more complaints or complain more: (16 votes on Connect is quite a high number).

After I saw the video... So the good news is that Help 3 has a rich API. If the Microsoft solution does not suit you, then you can always use a 3rd-party rich Win32 or .NET solution. Or roll your own.
H3Viewer is something I quickly knocked together; it shows a full TOC and a full Index. Microsoft also allow you to make any browser or 3rd-party viewer the main viewer for VS 2010 (this was announced by Charles at a public conference, but they haven't made the mechanism public yet). If you would like to try H3Viewer you can download from -- (remember, this is all Beta at the moment). Rob

Rob, you are my hero! I too want the index! And the Tree-View TOC! And Sync-with-TOC! They are far better in DExplore.exe than what you have in the browser! I really don't like the non-interactive browser thing you have. Dexplore was infinitely better.

I don't care whether the help is in the browser or not; however, the lack of a type-ahead index is a serious omission. When I use the MSDN library, it's because I know what I'm looking for and I just want to get the reference sheet. Usually I want to know the finer points of permissible arguments to a method. I use internet search when I don't know what I'm looking for - I may know what I want, but I have no idea how to achieve it. It's okay that the latter is online and hence relatively slow - because it'll take time to sort through the information in any case. But when I just want to look up a class or a method, search is not the answer.

The new Visual Studio 2010 help system is really horrible! No hierarchical TOC, no index, no contents search, no printing functionality. But luckily, there is a solution - the PackageThis program at CodePlex. It allows you to download any subtree of MSDN/TechNet (or the whole thing if you want) and save it to CHM or HXS. Enjoy!

Ok, so they already knew that everybody wanted their index back, but still they pushed people into searching... gr8... NOT. Searching is a waste of time when you already know what you are looking for. Why did they remove the most used feature of dexplore and replace it with a crappy guess function (searching is just guessing)? Luckily they can add it at a later stage...
but most of the time this means 'never', because of the idea that "people will get used to this mediocre interface and eventually will give up complaining".

Why is the Visual Studio 2010 help service tied to the PID? This creates problems with bookmarking items within the browser. Every time the Help Library Agent is restarted a new PID will be assigned, breaking all the links that have been saved under IE Favorites.

Dear Ryan, OMG, I just installed the VS 2010 help system and found NO INDEX. In response to your invitation at the end of the video, please please implement an index. Everyone knows that an Index and Search are two different things. Less important, but what I also find amusing, is that the TOC in the new system is fixed to the left, whereas I notice from the video that you (like me) prefer to have your solution explorer and suchlike on the right. So another step backwards, 'cos Dexplorer could put the TOC anywhere. This whole debacle breaks a lot of tenets of good customer-oriented software analysis and design IMO. Did someone say: "Wouldn't it be cool if..."? I do like the way you have implemented the web server app though, and wouldn't mind knowing how you did that. Cheers

Ego trip! Did you notice how many times he said "we"? Not one mention of what the users (developers) want. Then again... who cares.

Download the Help Viewer SDK. Tried to import the sample help on Win7... it said I needed Administrator rights... so I launched CSharp Express using Admin rights... because that's the only user-friendly way to access the Help Manager. Tried again; this time it errored... not sure why, but I had to go look in the Windows Log. There it said it couldn't find the .mshc file, which is in the same folder. The message starts with: An error occurred while fetching a list of available content from disk Microsoft.Help.CacheLib.CacheLibBadLinkException: The item at '' refers to an item at '' which cannot be found. I checked, the paths and file locations are correct...
so it must be a problem with the load-from-disk method in the Help Manager.

@wkempf: What you're doing here is a perfect example of two major problems when developers meet end users (even if those end users are other developers). First, the summation of your response is "you don't need all that fancy stuff, so stop asking for it.. learn to live with what you're getting" while the audience is saying "we like what we had - why did you take it away from us?". "You don't need it" doesn't cut it with the audience.

The second part is even more bizarre. The complaint is that browsers aren't useful because they don't have rich interface support, to which you argue they do. In the most technical sense, yes, they do - BUT - then you turn around and argue that you don't need it anyway (see previous point). In well designed and *user experience* oriented web pages, they do indeed exist - but not in the new document viewer, or at least not in any of the content we've seen so far... so your point is kind of irrelevant because whether or not it *can* exist, it doesn't in the content the audience is using. (Which brings us back to their point - why did Microsoft make such a huge change before they could implement all the stuff the existing audience has come to rely on?)

While we're at it - the original design of HTML (back in the early 1990s) was actually for minimalist document presentation, but more about content tagging for searching. That's why text-only browsers like Lynx existed (or even could exist). Welcome to 2010. People expect a web page to be more than simple text. In the case of help, we expect... well... that it *helps*. Case in point - I've just spent the entire afternoon trying to answer a very simple question: how do you determine if an Exchange contact is a person or a resource? I've still not found an answer. I've been sent to over two dozen websites. I've been shown more forum comments than I can shake a stick at, most dated 2007 or older.
I've been shown five different versions of Outlook API (oddly their help system never thought to point me at Exchange API documentation - which wouldn't have helped, but still). And I've spent a lot of time climbing up and down the hierarchy because I can't SEE it. Allow me to add my own complaints. I'm NOT online all of the time - and when I am, the cost and speed (not to mention reliability) of being online varies wildly. I'd prefer my help files to be here where I am. Try developing something on a 10 hour bus or plane flight. Here's a fun exercise - try resetting the help documentation source for Outlook 2010. Bon chance. Showing me where I am is good. Showing me where I am relative to everything else is MUCH better. Somewhere we lost the notion of 'minimal clicks'. I seem to have to click around a LOT more in this new system. Consistency isn't foolish. A foolish consistency is. So is a foolish inconsistency. So why do some class documents have a link for all members, methods, properties and fields at one level, then break out to show details - while OTHER pages list the methods in a massive clump of links formatted as a paragraph (which is essentially unreadable?) A job well done isn't done until it's done. Some of the documentation out there is still showing 'beta' or 'incomplete'. There's a lot of 'placeholder' documentation that reads like "AppointmentItem - an item that holds an appointment". Thank you for that tremendous insight. A picture is worth a thousand words. Here's an example of a really, really terrible way to explain something: The problem? This line "What all of this means is that when you click F1 to view help in a client application, you should make sure that Developer Reference is selected under Content from This Computer in the drop-down menu adjacent to Search." Try it. I dare you. It's not doable. Now.. if the author had just clipped a screen shot and shown us what he or she was up to, it would be simple. 
Similarly, a *good* code sample is good - but one that shows the easy stuff is generally bad. Most of the code examples I see show the perfectly obvious, but rarely show the more complicated or unexpected stuff.

What works should only be replaced when you have something better - *for the end user*. Clearly from the complaints here (and my own experience pretty much replicated their complaints), that's not the case here. The technology choice is irrelevant. A web browser is not a better solution than an application unless it provides some real benefit to someone other than the content creator. Supply-side economics doesn't work here (or anywhere) - the consumer should be the king.

Even then, many of the problems can be prevented by simply including a conversion tool (preferably in the reader) to automatically take old content (ie: chm files) and make it work. Ahhhh.. but see - the new document reader can't even read its OWN HxS file formats. For some reason, rather than just making it readable, it has to be *registered* with a special tool in a way that can only be done with an installer. Let me underline this: we had a file format that did not require anything but a reader and the file - and the reader was preinstalled for us anyway. No installation time requirements for the help file. No 'associations'. It just worked. And building these chm files in the free Help Workshop was actually not terribly hard with a good basic website designer. Now, the developer documentation files they include with Office are unreadable... or if they are readable - I've not found a way to do it. I used to be able to double-click the chm file - nice and simple. Now? Who knows.

Finally, a blog is not documentation. It's not even reliable (you'd not believe how many 'not founds' I've run into today). Yes, it's been a long day and I'm tired.
One insanely simple question and I've spent four hours jumping back and forth having to learn a hell of a lot of stuff I had no need to know just to try and answer this question... and I still don't have an answer. Colour me unimpressed.

@Jeff Lewis: Oh.. and what 'HTML' did to my nicely formatted reply kind of underlined a point I missed - a lot of pages I've been to in the new online documentation don't actually format correctly in IE8... the lines run together and overlap.

I have to say the new help is dreadful. It's nowhere near as usable as the old help and I often open up the 2008 help instead. What really matters of course is whether they're going to update the content. Frankly, we use Google instead, as Microsoft never updates their help to mention problems or to add help in areas where people have found it difficult. Because you can't trust (or now use) the official documentation, you end up relying on a load of people out there on the internet often doing things the wrong way, or not realising there is a simple method call to replace their 50 lines of code.

The help system is terrible! As a simple test, I was trying to find help for "std::list" and I got three answers: 1. Compiler error C2751, 2. <allocators> and 3. Best practices in parallel patterns library. Where is the help topic about a list container?

I think the new system is a pile of cow dung..... to put it bluntly. I loved the dynamic index and search capabilities of the 2008 system. It helps to locate something when you don't know the EXACT name of it. At my job, we also have to develop on systems that are disconnected and have NO internet, so how do we get help? We are limited, or have to go back to our desk to search and then back to the lab with what might or might not be the solution we needed. PLEASE bring back the previous interface.
https://channel9.msdn.com/Blogs/kmcgrath/Help-30-New-Help-System-in-Visual-Studio-2010?format=html5
CC-MAIN-2017-04
refinedweb
4,347
73.17
Findings: - Connecting your gaming system to AOL - Shit and quit - AOL InstaKiss - Don't quit five minutes before the miracle happens - Connie the AOL girl - How to quit biting your nails - 61 things to do with an AOL CD - Connecting a Gaming Console to AOL Cable - I Had Already Quit That Job About 20 Days Ago...in my mind, anyway - polluting the AOL namespace - I DON'T quit - accessing the AOL network using unauthorized software - Quox Quietly Quits - AOL (user) - I can quit any time I want - AOL Canada - That's it. I quit team sanity. - Things I've seen done with those Free AOL CDs - /quit - Tales of AOL - quit - I quit this bitch! - AOL Keyword: Slashdot - How I Quit Smoking - Could AOL phone home? - How to quit your web journal - AOL! - I just don't know when to quit. - Quit jerking each other off already and write something - A letter to AOL - Too Legit To Quit - AOL - AOL member - Confessions of an easy quit - Diary of an AOL user - You can't fire me, I quit. - AOL is not the Internet - Looks like I picked the wrong day to quit sniffing glue - AOL chat - AOL V4.0 Cookie - How to quit Not Smoking - AOL Keywords - Quits - AOL is broken - Philip Morris advertising campaign contributed to my decision to quit smoking - AOL Time Warner - AOL PLUS - You can't quit now. It's just getting good. - Leaked AOL memo about hackings and how to handle press - Removing AOL Instant Messenger from your Mac - So you want to quit Everything2 - AOL/Netscape - AOL Instant Message Protocol - Quit India Movement - Fanatical AOL User - The last cigarette before you quit smoking - Preventing AOL Instant Messenger from installing with Netscape - How Lars Ulrich made me quit my job at a movie theater - The end of free AOL hours - How to Quit Smoking: A Practical Guide - Evil AOL Instant Messenger graphical smileys - A Call to Quit - You'd better quit sticking your thumbs in your belt loops like that. You're giving me bad ideas. - AOL Instant Messenger - force quit - Fun with AOL say! 
- No More AOL CDs - I wish I knew how to quit you - Unforeseen consequences of the evil AOL instant messenger graphical smileys - Why I should quit Everything - AOL Delinquent - beside a moon that don't know when to quit
http://everything2.com/title/quit+AOL
From: Joseph Gottman (joegottman_at_[hidden])
Date: 2000-02-03 20:08:34

----- Original Message -----
> I have in mind a small (pretty??) class, which seems to me to be yet
> another candidate for our boost utility library. I've seen it
> reinvented at least 2 times after I'd invented it myself, so I will not be
> much surprised if someone says 'I have this one too'..
> And even if you didn't, but you have ever written code which assigns some
> temporary value to a variable at some place in a block (usually at the
> beginning), and restores the old value at the end of it, I am sure you would
> appreciate this class :). It does exactly that - assigns a new value to
> a variable, storing the old one within itself, and restores the original
> value in its destructor.
> So the class itself:
>
> namespace boost {
>   template<class T>
>   class temp_value
>   {
>   public:
>     typedef T value_type;
>     temp_value( T* object_ptr, const T& new_value )
>       : object_ptr_( object_ptr )
>       , old_value_( *object_ptr ) { *object_ptr_ = new_value; }
>
>     ~temp_value() { *object_ptr_ = old_value_; }
>
>   private:
>     T old_value_;
>     T* object_ptr_;
>   };
> }
>
> (Dave, I know, it must be renamed to 'temporary_value' =), and I agree with
> you, unless we choose a totally different name.)
> And a typical usage (and test) may look like this:
>
> void temp_value_test() {
>   bool flag = true;
>   int count = 15;
>   if ( flag ) {
>     boost::temp_value<bool> tmp( &flag, false );
>     assert( !flag );
>     if ( !flag ) {
>       boost::temp_value<int> tmp( &count, 0 );
>       assert( !count );
>     }
>     assert( count == 15 );
>   }
>   assert( flag );
> }
>
> Any thoughts about it?
>
> -Alexy

Two comments:

1) How about making the object_ptr_ pointer a reference instead? Whenever there is a pointer in a class and it isn't tested for null I get very nervous, and it isn't very clear what should happen if a null pointer is passed to the constructor.

2) A static utility function like make_pair would be very useful.
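Both suggestions are easy to sketch in modern C++. The following is purely illustrative (nothing that was in Boost); the names scoped_value and make_scoped_value are invented, and C++17 is assumed for guaranteed copy elision:

```cpp
#include <cassert>

// Suggestion (1): hold a reference instead of a pointer, so there is
// no null state to worry about.
template <class T>
class scoped_value {
public:
    scoped_value(T& object, const T& new_value)
        : object_(object), old_value_(object) {
        object_ = new_value;
    }
    ~scoped_value() { object_ = old_value_; }

    // Copies would restore twice; forbid them (fine with C++17 elision).
    scoped_value(const scoped_value&) = delete;
    scoped_value& operator=(const scoped_value&) = delete;

private:
    T& object_;
    T old_value_;
};

// Suggestion (2): a make_pair-style helper that deduces T.
template <class T>
scoped_value<T> make_scoped_value(T& object, const T& new_value) {
    return scoped_value<T>(object, new_value);
}
```

Used like the original test: `int count = 15; { auto tmp = make_scoped_value(count, 0); /* count is 0 here */ } /* count is 15 again */`. Since C++17 one could also write `scoped_value tmp(count, 0);` and let class template argument deduction do the work.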
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2000/02/2056.php
Wednesday, August 3, 2011

How to plot the frequency spectrum with scipy

Spectrum analysis is the process of determining the frequency domain representation of a time domain signal, and most commonly employs the Fourier transform. The Discrete Fourier Transform (DFT) is used to determine the frequency content of signals, and the Fast Fourier Transform (FFT) is an efficient method for calculating the DFT. Scipy implements the FFT, and in this post we will see a simple example of spectrum analysis:

from numpy import sin, linspace, pi
from pylab import plot, show, title, xlabel, ylabel, subplot
from scipy import fft, arange

def plotSpectrum(y,Fs):
    """
    Plots a Single-Sided Amplitude Spectrum of y(t)
    """
    n = len(y) # length of the signal
    k = arange(n)
    T = n/Fs
    frq = k/T # two sides frequency range
    frq = frq[range(n/2)] # one side frequency range

    Y = fft(y)/n # fft computing and normalization
    Y = Y[range(n/2)]

    plot(frq,abs(Y),'r') # plotting the spectrum
    xlabel('Freq (Hz)')
    ylabel('|Y(freq)|')

Fs = 150.0  # sampling rate
Ts = 1.0/Fs # sampling interval
t = arange(0,1,Ts) # time vector
ff = 5 # frequency of the signal
y = sin(2*pi*ff*t)

subplot(2,1,1)
plot(t,y)
xlabel('Time')
ylabel('Amplitude')
subplot(2,1,2)
plotSpectrum(y,Fs)
show()

The program shows the following figure: on top we have a plot of the signal, and on the bottom its frequency spectrum.

You might want to add some zero filling to sample the frequency axis more densely. The spectrum looks artificially good now because the peak maximum is exactly on a data point of the frequency axis. Try with ff = 5.5 to see what I mean.

Hi, thanks for your comment. I know what you mean, I chose ff = 5 on purpose.

How should the function be called? Does Fs represent the frequency spectrum? Should Fs be a list? Isn't Fs the thing to be calculated?

Hello Mark, Fs is the number of samples per second, hence it's an integer. For example, if you have an audio signal sampled with 44100 samples per second you have to set Fs = 44100.
The aim of this snippet is to compute the frequency spectrum, not the sampling rate. Usually the sampling rate is known. You can find out more about signal processing in Python on this post:

I'm experimenting to see how fast Python and SciPy can calculate sound. In the sound synthesis post, you output to a wave file of 16 bit signed integers. However, I'm using PyAudio.write to output directly to the Windows audio, and it expects data frames of 2 byte strings in little-endian format. Is there a simple way to convert the data?

I don't know :) Have you tried taking a look at the struct module?

I'll look at the struct module in the next few days. I don't use Python enough to know all the library functions.

OK, I solved the problem using struct.pack:

p = pyaudio.PyAudio()
stream = p.open(format = 8, channels = 1, rate = 44100, output = True)
fmt = struct.Struct('h')
for i in range(len(sound)):
    stream.write(fmt.pack(sound[i]))
stream.close()
p.terminate()

Thanks for the help!

Great job Mark!

You're always welcome.

Absolutely helpful, thx... But does anybody have an idea how to "window" the signal? What I mean is, I have a signal with 4000000 data samples. How can I create a spectrum without so many data points? (I have a knot in my brain, sry^^)... I need some kind of a mean value without changing the information too much :) thx

Hi, you can select a piece of an array in this way:

mypiece = myarray[100:350]

where 100 and 350 are the indexes that extract the elements 100 through 349.

Thanks :) ... by the way, I've taken a look at the pylab.specgram function. It does what I need... not really clean and pretty, because of all the created figures, but anyway. Nevertheless, I am thankful for your help in getting a better understanding. I also read some stuff about "windowing" with a "Hann window". This is a possibility, with overlap-add, to Fourier-transform the signal without getting too much trouble from the window transformation.
At least a rectangle becomes a sinc window through the FFT.

Thank you, I like it! I used it to start my own program. However, since the spectrum is not continuous, I imported stem from pylab and changed

plot(frq,abs(Y),'r') # plotting the spectrum

to

stem(frq,abs(Y),'r') # plotting the spectrum

orcaja

and also changed

t = arange(0,1,Ts) # time vector

to

t = arange(0,1+Ts,Ts)

just to have full cycles.

orcaja

Another thing: the normalization should be twice that value:

Y = 2*fft(y)/n # fft computing and normalization

In this way the spectrum will have an amplitude of 1, the same as the input.

I tried this on my Xubuntu and the bottom graph (the red one) doesn't come up at all, it's just plain white. Not really getting why, as I'm a newbie, you might say, at this high-level math (I mean the FFT). Can someone help? I'm really interested in finding the peak frequency at any (real) time of a sound file.

This may be off the subject. I am trying to make a simple hearing aid. An audiologist maps the decibels at each frequency a person hears at. The decibel threshold for normal is less than 25 decibels.
- The simplest hearing aid would just amplify the sound to the equivalent 25 decibels.
- A slightly more complex system would amplify individual frequencies according to the hearing pattern of the individual.
- Expensive ($3,000+) hearing aids can filter out background noise.
Hearing aids are very easily $2,000 per ear for the low end. Some go down to $350, but it is uncertain what they do. The elderly need hearing aids the most, but they are also the ones who can least afford it. They are rarely covered by insurance. I would like to break the sound into frequencies and amplitudes for the frequencies. The system would then amplify each frequency in a sound sample (10ms?) separately and combine the resulting frequencies, outputting a 10ms processed interval. For speed, this would be run in parallel on a multicore system or on a GPU, using something like PyCUDA.
Ideally, this would be done in real time. However, it could give a delayed result if speed is not possible. Am I on the right track? This seems like a simple problem using the FFT to generate a frequency spectrum. Thanks.

I believe you want to filter your signal and then amplify it. You can do it in real time without using any GPU.

How would you reverse the process? Start with frequency and amplitude information and then display the resulting pattern?

Hi Black, you need the inverse Fourier transform: numpy.fft.ifft

In the line """Y = fft(y)/n # fft computing and normalization""", I don't understand why you divide by n. I would interpret that to mean that the longer the duration of the signal, the smaller the amplitude values in Y. I would think you would want to divide by the max possible value for the amplitude data to normalize it so it falls between 0 and 1.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
     32 ylabel('Amplitude')
     33 subplot(2,1,2)
---> 34 plotSpectrum(y,Fs)
     35 show()

in plotSpectrum(y, Fs)
     11 T = n/Fs
     12 frq = k/T # two sides frequency range
---> 13 frq = frq[range(n/2)] # one side frequency range
     14
     15 Y = fft(y)/n # fft computing and normalization

TypeError: 'float' object cannot be interpreted as an integer

I believe they are looking to slice the frequency spectrum there. The correct way to do this now is:

frq = frq[1:n/2] # one side frequency range (sliced the frq in half)

You will also have this problem with the Y values. It should be

Y = Y[1:n/2]

Please correct me if I am wrong.

Hi, I'd like to get quite the same analysis here, to reproduce the spectrum while playing a wav file. Something like here: I just need to get the 8x8 matrix for each time step (let's say each 1s) and I can manage the LEDs after that. If it doesn't bother you, could you point out some code please? For many days now I couldn't end up with something successful. Thanks

Thanks for the example!
One question though: how can n=len(y) actually be calculated, since y is a function? Also, if I try to print n it says it's undefined. But how is it possible to (finally) use it as the x axis? Thanks in advance! Jan

Hi, y must be an array. If your y is a function you need to sample it. n is the length of this array.

Or just...

# Express FFT in the frequency domain.
def spectrum(signal, Time):
    frq = fftfreq(signal.size, d = Time[1] - Time[0])
    Y = rfft(signal)
    return frq, Y

Why does the fft distribution change with the length of the sound sample in SciPy? The amplitude is off by a factor of 2.

Python: How to plot a period of one square signal, press ENTER, and have the second period show on the same graph? Help please. :)

Hi Ivana, you have to use the interactive mode, have a look here:

Thanks :)

How would this work if we had to feed bulk data in CSV or txt format?

You first need to parse the data. I recommend you look into pandas.
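Since several commenters ran into the range(n/2) TypeError on Python 3, here is a sketch of the same single-sided spectrum computed with numpy's real-FFT helpers, which sidesteps the float-index problem entirely. The function name single_sided_spectrum is ours, not from the post:

```python
import numpy as np

def single_sided_spectrum(y, Fs):
    """Return (frequencies, amplitudes) of the single-sided spectrum of y.

    Equivalent to the post's plotSpectrum, but Python 3 friendly:
    rfft/rfftfreq return only the non-negative frequencies, so no
    integer slicing with n/2 is needed.
    """
    n = len(y)
    frq = np.fft.rfftfreq(n, d=1.0/Fs)   # non-negative frequency bins
    Y = np.abs(np.fft.rfft(y)) / n       # normalize as in the post
    Y[1:-1] *= 2                         # fold in negative freqs (orcaja's factor of 2)
    return frq, Y

# Example: a 5 Hz sine sampled at 150 Hz should peak at the 5 Hz bin.
Fs = 150.0
t = np.arange(0, 1, 1.0/Fs)
frq, Y = single_sided_spectrum(np.sin(2*np.pi*5*t), Fs)
peak = frq[np.argmax(Y)]
```

With the factor-of-2 normalization, the peak amplitude comes out as 1, matching the input sine's amplitude.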
http://glowingpython.blogspot.it/2011/08/how-to-plot-frequency-spectrum-with.html
On Fri, 2012-06-22 at 23:00 -0500, Daniel Santos wrote:
> +static inline long compare_vruntime(u64 *a, u64 *b)
> +{
> +#if __BITS_PER_LONG >= 64
> +	return (long)((s64)*a - (s64)*b);
> +#else
> +/* This is hacky, but is done to reduce instructions -- we wont use this for
> + * rbtree lookups, only inserts, and since our relationship is defined as
> + * non-unique, we only need to return positive if a > b and any other value
> + * means less than.
> + */
> +	return (long)(*a > *b);
> +#endif
> +}

That's wrong. Suppose: a = 10, b = ULLONG_MAX - 10

In that case (s64)(a - b) = 21, however a > b is false.

And yes, vruntime wrap does happen.
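The arithmetic is easy to check in a small user-space sketch, in plain C with uint64_t standing in for the kernel's u64 (the function names here are mine, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Wrap-safe ordering: compute the difference modulo 2^64 and reinterpret
 * it as signed. Two vruntimes then compare correctly even across a wrap,
 * as long as they are less than 2^63 apart. */
static int64_t vruntime_diff(uint64_t a, uint64_t b)
{
    return (int64_t)(a - b);
}

/* The 32-bit shortcut from the patch: wrong once vruntime wraps. */
static int vruntime_gt_naive(uint64_t a, uint64_t b)
{
    return a > b;
}
```

With a = 10 and b = ULLONG_MAX - 10, vruntime_diff(a, b) is 21, so a is "ahead" of b modulo 2^64, while the naive a > b test says false; that is exactly the disagreement pointed out in the reply.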
http://lkml.org/lkml/2012/6/26/198
Hello world,

In the previous post I introduced you to the Python programming language, a high-level language. In this second post I am going to take you through Python functions.

Python Functions

Almost everything we do involves a function somewhere. So, what are functions in Python?

- A function is a block of code with statements that perform a specific task.
- Functions help in breaking down a program when it grows big.
- Functions minimize repetition of code, since they're reusable.

Let us define a function. We use the def keyword to define a function:

# define a function that says hello
def greet():
    print("Hello")

# calling the function
greet()
#prints> Hello

🤯 Awesome!

# structure of a function
def function_name(parameters):
    """docstring"""
    statement(s)

A Python function consists of the following:

- The def keyword, which marks the beginning of a function.
- A function name that identifies the purpose of the function. Function names follow the PEP 8 naming conventions.
- Parameters through which we pass values to our function.
- A colon (:) to mark the end of the function header.
- In the body, start with a """docstring""" to explain what the function does.
- One or more valid Python statements that make up the function body.
- An optional return statement that returns a value from the function.

Python Function Arguments

Functions take values as arguments. There are several ways of defining arguments: default, keyword and arbitrary arguments.

Defining functions with parameters. Parameters are identifiers in the function definition parentheses:

# function parameters
def greet(name, msg):
    print(f"Hello {name}, {msg}")

# call the function with the arguments
greet('Mark', 'Good Evening')
#prints> Hello Mark, Good Evening

Here, the function greet() has two parameters: name and msg. When the function is called, two arguments are passed for these parameters.

Variable Function Arguments

A function can take a variable number of arguments.
Function Default Arguments

def greetings(name, msg="How are you?"):
    print(f"Hello, {name} {msg}")

# positional argument
greetings('Asha')
#> Hello, Asha How are you?

Even if we call the function with just one argument, it completes smoothly without an error, unlike the previous function. When an argument for msg is provided in the function call, it overrides the default argument.

Python Keyword Arguments

When a function is called with values, they get assigned by position. But this can be changed when calling a function: the order can be altered with keywords. The function above could be called like this:

# keyword arguments in order
greetings(name="fast furious", msg="is out!")

# keyword arguments out of order
greetings(msg="is out!", name="Vin")

Well, give it a try: everything works just fine.

Python Arbitrary Arguments

At times you might not know the number of arguments to be passed to a function. In situations like this, Python allows the use of an asterisk (*) to collect the arguments. Example:

def greet(*names):
    for name in names:
        print(f"Hello {name}.")

greet('juma', 'larry', 'lilian')
#> Hello juma.
#> Hello larry.
#> Hello lilian.

Python Recursion

Recursion with Python functions. A procedure that repeats itself is referred to as recursion. A recursive function is a function that calls itself.

def recurse():
    # statements
    recurse()  # recursive call

recurse()

An example:

# finding the factorial of a number
def factor(x):
    """
    This function finds the factorial of a given number.
    The factorial of a number is the product of all integers
    from 1 up to that number.
    """
    if x == 1:
        return 1
    else:
        return x * factor(x - 1)

num = 3
print(f"factorial is: {factor(num)}")

Pros of Recursive Functions
- Makes code look clean and elegant.
- Breaks down complex tasks into smaller sub-tasks.
- Sequencing is easier compared to nested iteration.

Cons
- They are difficult to debug.
- Recursive calls take a lot of memory and time.
- Hard to read or follow.
Anonymous Functions

An anonymous function is a function that has no name. They are also called lambda functions. Defining a lambda function:

# syntax:
lambda arguments: expression

They can have multiple arguments but only one expression. Working example:

# add 2 to a number (avoid calling it `sum`, which would shadow the built-in)
add_two = lambda x: x + 2
print(add_two(2))
#> 2+2 = 4

# multiple args
product = lambda x, y: x * y
print(product(2, 6))
#> 2*6 = 12

Lambda functions are used when a nameless function is required for a short period of time. They are best used with built-in functions such as filter() and map(). Read more

That's it for now. Hope you enjoyed reading this article. Let us learn Python!!

Be cool, keep coding.
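As a quick illustration of that last point (my own example, not from the original post), lambdas pair naturally with filter() and map():

```python
numbers = [1, 2, 3, 4, 5, 6]

# keep only the even numbers
evens = list(filter(lambda x: x % 2 == 0, numbers))

# square every number
squares = list(map(lambda x: x * x, numbers))

print(evens)    # [2, 4, 6]
print(squares)  # [1, 4, 9, 16, 25, 36]
```

For anything longer than one expression, though, a named def function is usually clearer.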
https://practicaldev-herokuapp-com.global.ssl.fastly.net/billyndirangu/python-basics-102-introduction-to-python-functions-5bcg
Holusion provides a free public API on each of its products. It lets you enable and disable the player, add or delete media, and play media. By default, this API is used by the web interface that transfers media, but it can also be used by external applications. Thanks to the API, we can create and use complex content-management workflows. The following functions are available:

Before starting anything, you must be connected to the product. By default, the products are configured to broadcast a Wi-Fi network:

SSID: <product>-<serial number>
Key: holusionadmin

If you have a custom installation, the access method to the product was given to you at purchase. In this document, we will use the default IP 10.0.0.1 as an example to access the product. If the Wi-Fi doesn't work, is too slow, or if the product isn't equipped with it, you can connect it with an Ethernet cable. In the case of the demountable products (focus, iris…), make sure to connect at least one screen to the computer. The administration service doesn't start if there is no display.

The API has a complete interactive guide at this address:. The requests can be tested directly on the product. The routes are grouped in 5 categories:

To show how this interface works, we will first use the routes of the playlist group. Click on the corresponding line to display the possible operations. We will use the first available route, which displays the playlist's elements. The detail of `[GET] /playlist` shows an example answer when you click on **Try it out!**. This request produces a JSON array listing the available media IDs. We can use it in a very simple application. For example, with Python (example for Python v2.x):

import requests
requests.get("http://10.0.0.1/playlist").text

This returns the same text as the example in the guide.

Caution: the requests shown here really do affect the product.
If you try [DELETE] /medias/{name}, the targeted media will be permanently deleted from your product.

To get familiar with the API, we will develop our first Python application. Note: all modern environments provide similar libraries with a similar syntax. We will use the requests library, which gives us access to the API.

#!/usr/bin/env python
import requests

# retrieve the available media
medias = requests.get("http://10.0.0.1/playlist").json()
print(medias[0].get("name"))

The general idea is that, using standard HTTP requests and the answers documented in the guide, we can easily and quickly create a complete interface to interact with the holograms.
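Going a little further, we can turn the /playlist answer into a plain list of names before acting on it. The payload below is only a stand-in shaped like the guide's answer (a JSON array of objects carrying a "name" field, as used by the snippet above); the helper name media_names and the file names are ours:

```python
import json

def media_names(playlist_json):
    """Extract the media names from a /playlist answer."""
    return [media.get("name") for media in playlist_json]

# Stand-in payload shaped like the interactive guide's answer (illustrative only).
sample = json.loads('[{"name": "hologram1.mp4"}, {"name": "demo.mp4"}]')
names = media_names(sample)
print(names)  # ['hologram1.mp4', 'demo.mp4']
```

In a real application, the sample would of course be replaced by `requests.get("http://10.0.0.1/playlist").json()`.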
https://dev.holusion.com/en/toolbox/media-player/index
On Wednesday 30 May 2007 18:12:32 Jean-Paul Calderone wrote: > Something about this approach rubs me the wrong way. Identifiers which are > required to be globally unique sends one down the path of being limited to > only supporting certain page configurations. Maybe if there were some well > defined algorithm for choosing the "nearest" widget with a given > identifier, something could be worked out. I'm not too sure about this > either, though. Is it the resultant uniqueness or the interdependency that worries you? I can see why this would be icky if the unique widget name were somehow hard-coded into the LiveElement class definition (meaning you could only create one such widget) but I don't see how it would be in any way harmful or limiting if the unique widget identifier were specified at widget construction time. Some things really do need to exist exactly once; no-one balks at putting '<div id="navigation">' in a web page (it would be confusing to have two navigation panels). I suppose that if you're trying to write 100% re-usable widgets then having cross-widget dependencies is a bad idea, but that's a problem that affects all the techniques described here (except for Daniel's publish/subscribe approach). It looks like boshing something into the parent namespace in the widget's __init__ method is the simplest approach, but of course it's also the evillest approach as it does indeed limit you to one such widget. Anyway, I'll have a think about the various options; thanks for your input! Ricky
http://twistedmatrix.com/pipermail/twisted-web/2007-May/003387.html
Several members in the SIP committee yesterday were worried that opaque types in duplicated a significant amount of the functionality of value classes, and discussions I had today, notably with @julienrf echoed that sentiment. An earlier iteration of that proposal indeed made opaque types look like value classes (I think it was called “Unboxed value classes”). @jvican, do you still have a link to that document? When I saw the earlier proposal, I was not convinced because the restrictions imposed on these unboxed value classes seemed ad-hoc. By contrast, I rather liked the opaque types proposal, so was on the side of people preferring it. But I have now realized that the restrictions proposed for unboxed value classes are in fact natural consequences if these new unboxed value classes extend not AnyVal but a new, completely parametric Top type. This very much echoes the original post by @S11001001 on The High Cost of AnyVal subclasses. AnyVal Top The idea of a parametric top type was brought up in Dotty issue 2047 - it’s a rather long comment thread; best search for AnyObj to find where the discussion starts. A true top type is useful not just for type class coherence, the topic of #2047, or for unboxed value classes. It’s much more fundamental than that because it gives you theorems for free by ensuring parametricity. AnyObj In the Dotty issue, I proposed to keep Any as the top type of Scala’s type lattice, but strip it of all its methods. Instead there would be a new type AnyObj under Any which would get the ==, !=, equals, toString, hashCode, asInstanceOf and isInstanceOf methods. If we started from scratch that would still be my preferred scheme, but it’s probably too hard to migrate to it. So in what follows I propose to leave Any as it is and introduce a new, fully parametric type Top. 
Any == != equals toString hashCode asInstanceOf isInstanceOf The top of Scala’s type hierarchy would look as follows: Top | Any (= AnyVal) | Object (= AnyRef) There’s no need for AnyVal as a separate type anymore, but we can keep it around as an alias of Any. Classes extending either Any/AnyVal or Top, but not extending Object, are called value classes. The usual value class restrictions apply to them; in particular they must be final. Traits can as before extend Any instead of Object, they are then called universal traits. Traits cannot extend directly the Top type. Any/AnyVal Object The Top type does not have any methods. Because it has no isInstanceOf or == method, it is impossible to pattern match on a scrutinee of Top type. I believe that’s all we need to say about it! It seems if we come back to the earlier proposal of unboxed value classes, all of the restrictions that were imposed are in fact consequences of the way Top is defined. So the unboxed value class proposal now looks very natural. Value classes have to extend either Any or Top; if they extend Top we have a guarantee that they can always be represented as their underlying type, no value-class specific boxing is needed. Coming back to Top. Currently the rule for an unbounded abstract type or type parameter such as [T] is that it expands to [T <: Any]. I would love to change this to [T <: Top], but realize that this would also cause considerable migration headaches. So for the moment I propose to leave this as is, but consider cleaning up this aspect at some later time. [T] [T <: Any] [T <: Top] This is interesting. My main interrogation after a first reading is: what do we do about collections, and in general the ecosystem of libraries? The opaque type alias proposal has one advantage over the Parametric Top: you can immediately use opaque type aliases in existing APIs, in particular collections, without any changes to those. 
With a Parametric Top, unless you change the collections to have their T <: Top, you won’t be able to declare a List[Foo] where Foo is an unboxed value class. But then if you do that, what happens to List.contains? T <: Top List[Foo] Foo List.contains This is a major can of worms. In a clean design I would be all for it, but with that pragmatic mind of mine, I fear that a parametric Top will badly interfere with the ecosystem. Yes, this is a link to the document. It’s heavily influenced by the proposal of value classes – it follows the same structure and it’s less technical than our last version of the proposal. Ah yes, that is a major can of worms! It would go away to a large degree if [T] meant [T <: Top]. So maybe that’s a good reason to do that immediately rather than waiting for it. We do incur other incompatibilities then. I believe a rewrite tool can help fixing these, though. In light of this, we might compromise and allow asInstanceOf as the only (unsafe, and not-recommended) method on Top. We could then expand [T] to [T <: Top] and insert _.asInstanceOf[Any] casts where a value of type T was used as an Any. Inserting the casts could be automated in a rewrite tool. _.asInstanceOf[Any] T I believe we should be able to do that for Dotty. Can we do it for Scala 2? I am not certain. Unfortunately that now puts us in the unenviable position that we can either add something that’s clearly valuable now, or wait for something else that is even cleaner but not compatible with the status quo. And if we go for both, we get unnecessary duplication. There’s one area where opaque type aliases are riskier than the Top type proposal. Consider: val x: Any = Logarithm(10) x match { case x: Double => ... // unsafe conversion! } I.e. we can detect the hidden representation using pattern matching and use it for unsound code. The Top type proposal does not have that problem. 
From the moment you introduce asInstanceOf you could do val x: Top = Logarithm(10) try { val y = x.asInstanceOf[Double] ... } catch { case e: ClassCastException => ... } Though granted, it’s a lot harder to fall in that loophole by accident. But you can still write the unsound code. @Jasper-M Sure. That’s the purpose of asInstanceOf. It’s the universal escape hatch, which comes with no guarantees. I agree this is not great, but this kind of pattern-matching can already defeat attempts at parametric design by allowing matching on post-erasure types: def weird[A](xs: Array[A]): Array[Object] = xs match { case ys: Array[Object] => ys } weird(Array("arrays", "are", "invariant")) // Array[Object] = Array("arrays", "are", "invariant") def weirder[A](a: A): AnyRef = a match { case y: Object => y } val x: Int = 3 weirder(x) // AnyRef = 3 From my point of view, if a caller wants a type that has a representation that can be matched at runtime, they should be using case classes or similar, not opaque types. Is it possible to provide an explicit =:= between the underlying type and value type? =:= =:= will bring the ability of substitute, which is useful when you want to lift a type class. substitute You could have an opaque type and provide pattern matching on it via an extractor that converts it to something pattern-matchable. opaque type Maybe[A] = Option[A] object Maybe { def unapply[A](ma: Maybe[A]): Some[Option[A]] = Some(ma) } // then val ma: Maybe[A] = ??? ma match { case Maybe(Some(a)) => ??? case Maybe(None) => ??? } Of course, it would be nice if extractors statically known to return Some (such as the one above) would preserve exhaustivity checking (see #10502). Some Whether you can soundly substitute is a big sticking point, and it affects all sorts of things other than typeclasses; this was the major problem with “unboxed value classes” that was fixed so nicely by the followup opaque types proposal. 
So returning to unboxed value classes would restore those problems as well, which go beyond merely whether the compiled code happens to box them. You can do this with abstract types, too, in current Scala. It would be nice to be able to ban that, too. That you can optionally supply =:= or <:< instances in the type companion providing unidirectional or bidirectional substitution as part of the interface is one of the great features of opaque types as currently proposed; it’s a sound way to declaratively expose part or all of the equality. (The whole substitute design space is supported, indeed.) So you can choose exactly how much of Logarithm = Double can be seen in the interface, and the language for doing so is simply defining methods. <:< Logarithm = Double I sure wouldn’t mind banning arbitrary type patterns and [ai]sInstanceOf calls, though. [ai]sInstanceOf If parameteric Top is really being proposed instead of opaque types can we see exactly how it can replace? We would do something like: class Logarithm(private val exponent: Double) extends Top { def *(that: Logarithm): Logarithm = new Logarithm(exponent + that.exponent) } object Logarithm { def fromDouble(d: Double): Option[Logarithm] = if (d > 0.0) Some(new Logarithm(math.log(d)) else None } Is this what we are saying? And somehow extending Top means new Logarithm does not allocate? That seems a little weird to me (new not allocating and also maybe it precludes classes with two parameter constructors extending Top, which is also weird if it is a true top type). new Logarithm I am a bit concerned about this because Top seems so ambitious it might never happen, or take very long, while working opaque types is something many in the community would really like to use now. I think if this is used as a reason not to do opaque types it would be very nice to see a clear alternative proposal of how to implement zero-cost opaque types using Top. 
Perhaps for Scala 2, the syntax extends Top can be used to create opaque types, analogous to extends AnyVal, but without yet truly truly introducing the full glory of Top as the true top type. The idea would be to have all the features of the opaque types proposal, but with the same syntax that Dotty would have for defining them. extends Top extends AnyVal There are probably some serious drawbacks with this that I am blind to. +1 on parametric Top (at least for Dotty). If A meant A <: Top, that pattern matching must be forbidden to ensure parametricity. A probably safe and overly conservative rule would forbid matching on scrutinees whose type mention any parametric type variable. Here a type match on Array is safe, but drawing good rules for full Scala in general seems nontrivial. A A <: Top Array @oscar IIUC, the idea here is “value classes that truly never box” — most details are in doc linked earlier, and indeed new does not box: @scottcarey extends Top and opaque types don’t seem to have quite the same semantics… or do they? new EDIT: more importantly, this proposal without a parametric Top would give you classes with questionable semantics — you still have all methods from Any, but they have inappropriate semantics, the ones given by the implementation, something a true parametric Top avoids. Though I suspect the earlier opaque types, and the encoding from @S11001001’s blog post, share this downside. In all these proposals, it seems polymorphism over primitives must still box. How does one implement asInstanceOf on primitives without at least one level of boxing? I guess in all these proposals, if A is a type variable, a is a primitive (wrapped in a “newtype” or not) and a: A , a must have been boxed anyway, but reference types need no extra boxing. (In miniboxing A <: AnyVal is erased to long, but it seems miniboxing is not going to be merged). 
The only difference (maybe) is that if you use some suitable specialization (Scala 2's, miniboxing, Dotty-linker) and have parametric Top, the optimizer has an easier time removing boxing. Casts to AnyRef would still box, and the results of reference equality would be affected, but those results shouldn't be guaranteed under parametric Top (not sure of the current actual guarantees).

Here's a possible way to help migration to the new scheme:
- [T] means [T <: Top]
- Add a deprecated implicit conversion in Predef: @deprecated def topToAny(x: Top): Any = x.asInstanceOf[Any]
- Special-case pattern matching, so a scrutinee bounded by Top is converted to Any, using the same deprecated conversion.
That should cover the majority of problematic cases. There will be a few that remain, but overall it seems doable.

@blaisorblade asInstanceOf is by definition unsafe. It would simply succeed if the underlying type is compatible with the representation type. So, no boxing is needed.

So assuming then that class Foo(x: Double) extends Top does not box, what is the difference between:

implicit class Foo(x: Double) extends Top
implicit class Foo2(x: Double) extends AnyVal

We now have two ways to do method enrichment? That seems like a shame. I guess you should never do that after Top. Is that right? Secondly, I find it ugly that now we would have a class that never allocates. Previously in Scala, a class was something that could (in principle) be an allocation. Even with extends AnyVal in a generic context we get allocation. All class declarations correspond to some kind of JVM class entity. To me, the opaque type proposal is much cleaner here because we already have type, and it never boxes or allocates (as far as I know). With opaque we are just putting additional constraints on a type, but otherwise the usual rules apply. Similarly, we keep the notion that a class in Scala corresponds to something that can be a JVM class.
I should add that while substitute is sufficiently powerful, it's not necessarily convenient. This example from the other thread runs as written with opaque types:

def mdl(m: Map[Double, Logarithm]): Map[Logarithm, Double] = m

Suppose alternatively a =:= is provided and GADT-style =:= is made to work in patmat; then it must be written

// supposing Logarithm.repr: Double =:= Logarithm
def mdl(m: Map[Double, Logarithm]): Map[Logarithm, Double] =
  Logarithm.repr match { case Refl() => m }

Supposing a =:= is provided but GADT-style is not made to work:

def mdl(m: Map[Double, Logarithm]): Map[Logarithm, Double] = {
  type F[A] = Map[Double, A] =:= Map[A, Double]
  Logarithm.repr.substitute[F](implicitly)(m)
}

Scala users might have some difficulty coming up with the correct F for various scenarios.

"If we started from scratch that would still be my preferred scheme, but it's probably too hard to migrate to it. So in what follows I propose to leave Any as it is and introduce a new, fully parametric type Top."

I'm not so sure … I think the complications of adding a new Top type to Scala are at least as great as changing the semantics of Any and adding AnyObj to match Dotty. It'd be interesting to make the change and try it with a few representative projects and see how much breaks. Cheers, Miles

I'm probably not making myself clear — simply stated, the JVM does not allow checkcast on primitives, so they have to be boxed once. Or do I miss something? That boxing is not due to newtypes but is still relevant. In particular, saying that "newtypes never box" is as misleading as "primitives never box". And since really never boxing is the goal, all the newtype proposals are necessary but not sufficient without specialization.
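For concreteness, here is roughly what the Logarithm example looks like on the opaque-types side of this discussion, written in SIP-35-style syntax. Treat this as pseudocode of the proposal as discussed in the thread, not code that compiles in current Scala 2; the Ops name and method set are my own illustrative choices:

```scala
// Sketch of the proposed opaque-types syntax: inside the companion,
// Logarithm = Double is visible, so these are identities at runtime --
// no class, no allocation.
opaque type Logarithm = Double

object Logarithm {
  def fromDouble(d: Double): Option[Logarithm] =
    if (d > 0.0) Some(math.log(d)) else None

  implicit class Ops(private val x: Logarithm) extends AnyVal {
    def *(y: Logarithm): Logarithm = x + y // multiply by adding exponents
    def toDouble: Double = math.exp(x)
  }
}
```

Outside the companion, Logarithm and Double are unrelated types, so none of the Any escape hatches discussed above (type patterns, [ai]sInstanceOf) can observe the representation — the same property the parametric-Top side wants for type variables.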
https://contributors.scala-lang.org/t/pre-sip-parametric-top/1177
CC-MAIN-2017-43
refinedweb
2,662
61.26
01 August 2012 11:08 [Source: ICIS news] (adds segment financials and CEO comment)

LONDON (ICIS)--Arkema swung to a second-quarter net loss of €12m ($15m), from a net profit of €184m in the same period last year, following the divestment of its vinyl business, the French chemicals firm said on Wednesday. The company finalised the divestment of its vinyl products segment on 3 July to Switzerland-based Klesch Group, which led to Arkema reporting a €141m net loss from discontinued operations in the second quarter.

Group share net income from continuing operations in the second quarter of 2012 fell to €129m, from €198m in the same period last year, partly because of tough market conditions. The second-quarter 2012 result also included a €63m tax charge. Sales in the second quarter grew by 15.4% year on year to €1.72bn. Arkema said the increase “included the contribution of the acquisition of specialty resins”. Volumes decreased by 4% compared with the second quarter of 2011, when the level of activity was particularly high, while prices during the period were 3% lower, “reflecting mostly a return to normalised market conditions in acrylic acid and the expected adjustment of HFC-125 prices”.

Earnings before interest, tax, depreciation and amortisation (EBITDA) for the quarter slipped by 4.7% to €306m, because of tough comparables, it added. “Arkema achieved an EBITDA above €300m in 2nd quarter 2012, an increase of more than 20% on 1st quarter [2012]. This excellent performance confirms the quality of the group’s specialties portfolio, and demonstrates its strength in a volatile and uncertain global economic environment,” said Thierry Le Henaff, chairman and CEO of Arkema.

Second-quarter EBITDA in the group’s Industrial Chemicals business was €208m, down by 8.0% year on year, following poor demand for decorative paints in Europe and North America. Sales in the segment rose by 16.4% to €1.14bn, reflecting the contribution of specialty resins.
“Compared to the peak of the 2nd quarter 2011, marked by restocking and exceptional demand…” Arkema’s Performance Products business saw EBITDA rise by 10.1% to €109m, as sales rose to €572m, 13.5% up on the second quarter of 2011. “The price effect was positive [to Performance Products' sales], reflecting the positioning of Technical Polymers in higher added value applications and a favourable product mix in Specialty Chemicals,” the company said.

Arkema confirmed it should achieve an EBITDA close to €1bn in 2012. “Beyond, Arkema aims to achieve €8bn sales and €1,250m EBITDA by 2016,” it said. “In the future Arkema will continue to develop its high added value product lines. I am convinced that the quality of our activities, the spirit of innovation that is driving us, and the partnerships we are developing with our customers are key drivers of our success, now and in the future,” Le Henaff said.
http://www.icis.com/Articles/2012/08/01/9582712/Frances-Arkema-swings-to-Q2-net-loss-on-vinyls-divestment.html
Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 1.8.1, 1.9-beta-1
- Component/s: groovy-jdk
- Labels: None

Description
With regards to a thread in the groovy user list here: a take( n ) method would be a really useful addition to the Groovy codebase. It is assumed it will work similarly to other languages, in that:

def a = [ 1, 2, 3 ]
assert a.take( 0 ) == []
assert a.take( 1 ) == [ 1 ]
assert a.take( 4 ) == [ 1, 2, 3 ]

The method should work for Collection, String and Map, and lazily for Iterator, Reader and InputStream.

Issue Links
- relates to GROOVY-5414 Groovy could benefit from DGM takeWhile and dropWhile methods - Closed

Activity
Had a few minutes to come up with the implementation and tests for take with Arrays, Lists, and Iterators. Requires JavaDoc, and a look over to make sure it's not horribly wrong.

Added drop with tests for List, Array, and Iterator by closely following the take implementation. Also added a bit of Javadoc to both take and drop. Replaced Arrays.copyOf() with ugly System.arraycopy() because copyOf is a new addition in JDK 1.6. Took the liberty of moving the take methods a couple of methods down, because they were sitting between head and its counterpart tail.

I very much advise changing the proposed method names. take and drop are used for lazy sequences as in . Accessing the first or last n entries could be named first(n) and last(n), which would align with the respective JDK methods. This feature needs more discussion in the dev group with regard to consistency of the concept and API.

Is it worth continuing the effort (following the take/drop nomenclature), so that a function rename can occur when a decision is reached on the dev list?
I'll ask this on the dev list as well, as I am not sure anyone will get updates from this comment (no watchers on this issue). Asked a question on the groovy-dev mailing list:

Deleted my patch, and now using Dinko's to progress with adding the other cases and tests.

Third pass (includes patch "second_take..." from Dinko Srkoc). Added Dinko Srkoc as an author. Added take( n ) methods for CharSequence, InputStream and Reader along with initial tests. Added since 1.8.2 to all new methods. Not sure whether the CharBuffer method for extracting from the Reader is overkill... It feels better than reading it a char at a time, but adds a dependency on the JDK having java.nio.* (I believe this should be OK, as it has been there since 1.4). Code review would be really helpful as always (cross-posted from the dev list).

Been thinking about this, and I think I now agree with Dierk. For CharSequence, Array, List and Map, the function names take and drop don't seem right, as the original object is not (and shouldn't be) mutated, so

def a = 'tim'
println a.take( 1 )

will print t and the variable a will still be set to 'tim'. In these situations, I believe the methods should be called first and last. However, for Iterator, Reader and InputStream, where the state of the object is changed by reading elements, I believe that take fits, so

def a = new StringReader('tim')
println a.take(1)

will print t, and the variable a will now be the Reader containing the remaining String 'im'. drop would (in these three cases) exhaust the input, so there would be nothing left in the Iterator, Reader or InputStream if it were interrogated again. So in summary, my suggestion is:
CharSequence, Array, List and Map: first(n) and last(n)
and for:
Iterator, Reader and InputStream: take(n) and drop(n)

New attachment fourth_pass_drop_for_CharSequence.diff. This includes the drop method for CharSequence, and the tests for it. Not sure what the best way to implement drop for Reader or InputStream is...
Agree with Dierk that take and drop should imply lazy. I don't think laziness is particularly useful for the common case of truncating a string, but here's a link to some code for treating Iterators/Iterables as lazy sequences, including take and drop methods: . I'd go with the names Dierk suggests, and save take/drop for something like that.

Removed my patch since Tim's fourth is the base for further progress.

New patch fifth_iteration.diff. Added drop and take for Map. Tried explaining in the javadocs that usage with a non-ordered Map could cause fun. Added some more documentation to some of the methods... I think all that's left is drop for Reader and InputStream.

Added sixth iteration patch with drop implemented for Reader and InputStream, with tests. Other changes:
- fixed bug with negative parameter value in drop for list and array
- replaced Arrays.copyOf with System.arraycopy in take for InputStream, since copyOf was added in JDK 1.6

Do you think Iterator.drop and Iterator.take should return Iterator<T>? If so, do you feel Reader.drop/take and InputStream.drop/take should respect the same contract and return Readers and InputStreams?

PS: Nice catch with the Arrays.copyOf — I completely missed that...

Added sixth_iteration_returning_iterators.diff. Basically the same as Dinko's sixth_iteration.diff, but Iterator.take and Iterator.drop return Iterator<T> rather than List<T>. Unit tests have been updated accordingly.

Yes, I believe Iterator<T> should be returned. As a general rule, Foo.take/drop should return Foo. That would probably go for List.take/drop too, e.g. LinkedList.take/drop should return LinkedList. I think that it would be slightly surprising if Reader/InputStream didn't return Readers and InputStreams. I must say though that I'm slightly uncomfortable with Reader/InputStream since, as Peter Mondlock said(1), they aren't really sequences or collections.
PS: thanks
(1):

Cool, so here is seventh_iteration.diff, which has:
- Iterator.take and Iterator.drop returning Iterator<T>
- List.take and List.drop now calling createSimilarList instead of new ArrayList<T>, so if LinkedList.take is called, the returned List should be a LinkedList

I have absolutely no worries about removing the code for Reader and InputStream, as these seem to be a sticking point for this feature (and one of the causes of the argument over function names).

Removed sixth_iteration.diff. I'm OK with removing Reader and InputStream code too, in the interest of more consistent take/drop.

Added eighth_iteration.diff (and removed seventh_iteration.diff). In this version, I have:
- Removed drop and take for Reader and InputStream
- Added a comment about Hashtable.drop to the Map.drop method javadoc (unpredictable results)
- Changed unit tests to test multiple types of List, Map and CharSequence
- Added a test for symmetry (where the JDK allows it), so it ensures that Iterator.drop returns an Iterator, LinkedList.drop returns a LinkedList, etc.
- Added groovyTestCase blocks to each method in the javadoc. Examples in the docs are worth 1000 words.

Hope it's all ok... I'll post about this update to the dev list, to see if it is more acceptable to the group.

Removed import java.nio.CharBuffer as it isn't needed with the removal of Reader.

Fantastic! Thanks Guillaume! Looks like my @since 1.8.2 was too pessimistic! :-D

Tests failed on Java 5... The reason is that javax.swing.text.Segment did not implement CharSequence in Java 5, but does in Java 6...
Here's a patch script to remove that class from the tests: Index: src/test/groovy/GroovyMethodsTest.groovy =================================================================== --- src/test/groovy/GroovyMethodsTest.groovy (revision 22354) +++ src/test/groovy/GroovyMethodsTest.groovy (working copy) @@ -947,7 +947,6 @@ def data = [ 'groovy', // String "${'groovy'}", // GString java.nio.CharBuffer.wrap( 'groovy' ), - new javax.swing.text.Segment( 'groovy' as char[], 0, 6 ), new StringBuffer( 'groovy' ), new StringBuilder( 'groovy' ) ] data.each { @@ -1015,7 +1014,6 @@ def data = [ 'groovy', // String "${'groovy'}", // GString java.nio.CharBuffer.wrap( 'groovy' ), - new javax.swing.text.Segment( 'groovy' as char[], 0, 6 ), new StringBuffer( 'groovy' ), new StringBuilder( 'groovy' ) ] data.each { @@ -1048,7 +1046,6 @@ // CharSequences (java.lang.String) : new String( 'groovy' ), (java.nio.CharBuffer) : java.nio.CharBuffer.wrap( 'groovy' ), - (javax.swing.text.Segment): new javax.swing.text.Segment( 'groovy' as char[], 0, 6 ), ] data.each { Class clazz, object -> assert clazz.isInstance( object.take( 5 ) ) Sorry about that Thanks from me too, Guillaume! Shouldn't "take" also be documented in Groovy String class? Currently it isn't: Ahhh, because take operates on the class CharSequence (which String implements), the documentation is there instead: There should also be a drop( n )
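For readers who want the eager list semantics being settled above in plain Java terms, here is a hedged sketch mirroring the asserts from the issue description. The class and method names (TakeDropSketch, take, drop) are mine, not from the patches, and Java 8 later shipped the lazy analogues as Stream.limit/skip:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper (not from the patches): the eager take/drop
// contract the JIRA comments converge on for lists -- the original
// list is never mutated, and out-of-range n is clamped.
public class TakeDropSketch {
    public static <T> List<T> take(List<T> list, int n) {
        int end = Math.min(Math.max(n, 0), list.size());
        return new ArrayList<>(list.subList(0, end));
    }

    public static <T> List<T> drop(List<T> list, int n) {
        int start = Math.min(Math.max(n, 0), list.size());
        return new ArrayList<>(list.subList(start, list.size()));
    }

    public static void main(String[] args) {
        List<Integer> a = java.util.Arrays.asList(1, 2, 3);
        // Mirrors the asserts in the issue description:
        System.out.println(take(a, 0)); // []
        System.out.println(take(a, 1)); // [1]
        System.out.println(take(a, 4)); // [1, 2, 3]
        System.out.println(drop(a, 2)); // [3]
    }
}
```

Returning a fresh ArrayList (rather than the subList view) matches the non-mutating, snapshot behaviour discussed for collections, as opposed to the consuming behaviour agreed for iterators.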
https://issues.apache.org/jira/browse/GROOVY-4865?focusedCommentId=269708&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
In this section we will discuss the Java IO FileOutputStream. FileOutputStream is a class of the java.io package which makes it possible to write output stream data to a file. The FileOutputStream writes a stream of raw bytes. OutputStream is the base class of the FileOutputStream class. To write output streams, several constructors of FileOutputStream are provided to the developer, which allow writing streams in various ways. Some are as follows (the standard JDK constructors):

- FileOutputStream(String name): creates an output stream to write to the file with the specified name.
- FileOutputStream(String name, boolean append): as above, but appends to the end of the file if append is true.
- FileOutputStream(File file): creates an output stream to write to the file represented by the specified File object.
- FileOutputStream(FileDescriptor fdObj): creates an output stream to write to the specified file descriptor.

Commonly used methods of the FileOutputStream class are write(int b), write(byte[] b), write(byte[] b, int off, int len) and close().

Example: Here an example is given which demonstrates how to write output stream data to a file using FileOutputStream. For this I have created two text files named "abc.txt" and "xyz.txt". abc.txt is a text file that contains some data which will be read by the program, and xyz.txt is a blank text file into which the text read from abc.txt will be written. Then I created a Java class named WriteFileOutputStreamExample.java where I have used FileInputStream to read the input stream from the file 'abc.txt' and FileOutputStream to write the output stream data into 'xyz.txt'. To write the specified bytes into the xyz.txt file I have used the write(int b) method of the FileOutputStream class. And finally close all the input and output streams, so that the resources associated with them can be released.
Source Code

/* This example demonstrates how to write output stream data
   to a file using FileOutputStream. */
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class WriteFileOutputStreamExample {
    public static void main(String args[]) {
        FileInputStream fis = null;
        FileOutputStream fos = null;
        try {
            fis = new FileInputStream("abc.txt");
            fos = new FileOutputStream("xyz.txt");
            int b;
            // Read the input stream byte by byte and write each byte
            // to the output stream.
            while ((b = fis.read()) != -1) {
                fos.write(b);
            }
            System.out.println("Content of abc.txt has been written to xyz.txt");
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // Close the streams so that the resources associated with
            // them can be released.
            try {
                if (fis != null) fis.close();
                if (fos != null) fos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    } // end main
} // end class

Output: When you execute the above example, a message will be printed on the console and the content of the abc.txt file will be written to the xyz.txt file.

Following steps are required to execute the above example:
1. First compile WriteFileOutputStreamExample.java using the javac command provided in the JDK, as
javac WriteFileOutputStreamExample.java
2. After successful compilation, execute the class by using the java command provided in the JDK, as
java WriteFileOutputStreamExample
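As a complementary sketch (the class and helper names here, StreamCopySketch and copy, are mine and not from the tutorial), the same byte-copy can be written as a reusable buffered helper, which avoids reading one byte at a time. It is demonstrated in memory so the sketch runs even without abc.txt present:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of a reusable buffered stream copy; works for any
// InputStream/OutputStream pair, including FileInputStream and
// FileOutputStream from the example above.
public class StreamCopySketch {
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[4096];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n); // write only the bytes actually read
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // In-memory demonstration; swap in file streams for real files.
        java.io.ByteArrayInputStream in =
                new java.io.ByteArrayInputStream("hello".getBytes());
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        long copied = copy(in, out);
        System.out.println(copied + " bytes: " + out.toString()); // 5 bytes: hello
    }
}
```

Since Java 7, the null checks in the tutorial's finally block can also be replaced with try-with-resources, which closes the streams automatically.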
https://www.roseindia.net/java/example/java/io/bytefileoutputstream.shtml
I have been running this LSTM tutorial on the wikigold.conll NER data set. training_data looks like:

training_data = [
    ("They also have a song called \" wake up \"".split(), ["O", "O", "O", "O", "O", "O", "I-MISC", "I-MISC", "I-MISC", "I-MISC"]),
    ("Major General John C. Scheidt Jr.".split(), ["O", "O", "I-PER", "I-PER", "I-PER"])
]

def predict(indices):
    """Gets a list of indices of training_data, and returns a list of predicted lists of tags"""
    for index in indices:
        inputs = prepare_sequence(training_data[index][0], word_to_ix)
        tag_scores = model(inputs)
        values, target = torch.max(tag_scores, 1)
        yield target

y_pred = list(predict(range(len(training_data))))
y_true = [t for s, t in training_data]

c = 0
s = 0
for i in range(len(training_data)):
    n = len(y_true[i])
    # super ugly and inefficient
    s += (sum(sum(list(y_true[i].view(-1, n) == y_pred[i].view(-1, n).data))))
    c += n
print('Training accuracy: {a}'.format(a=float(s) / c))

I would use numpy in order to not iterate the list in pure Python. The results are the same, but it runs much faster:

import numpy as np

def accuracy_score(y_true, y_pred):
    y_pred = np.concatenate(tuple(y_pred))
    y_true = np.concatenate(tuple([[t for t in y] for y in y_true])).reshape(y_pred.shape)
    return (y_true == y_pred).sum() / float(len(y_true))

And this is how to use it:

# original code:
y_pred = list(predict(range(len(training_data))))
y_true = [t for s, t in training_data]

# numpy accuracy score
print(accuracy_score(y_true, y_pred))
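To see that the vectorised scorer computes the same token-level accuracy as the manual loop, here is a self-contained check with plain Python lists standing in for the tag tensors (no torch needed; flatten_accuracy is my name for this sketch, not part of the answer above):

```python
import numpy as np

def flatten_accuracy(y_true, y_pred):
    """Token-level accuracy over ragged lists of tag sequences.

    Mirrors the numpy accuracy_score idea above: flatten all sentences
    into one vector, then count positions that agree.
    """
    true_flat = np.concatenate([np.asarray(seq) for seq in y_true])
    pred_flat = np.concatenate([np.asarray(seq) for seq in y_pred])
    return (true_flat == pred_flat).sum() / float(len(true_flat))

# Two sentences of different lengths, as in the NER setting;
# one token of the first sentence is mispredicted.
y_true = [[0, 0, 6, 6], [0, 5]]
y_pred = [[0, 0, 6, 1], [0, 5]]
print(flatten_accuracy(y_true, y_pred))  # 5 of 6 tokens correct
```

Note the accuracy is computed per token, not per sentence, so longer sentences weigh more — which matches what the manual c/s loop does.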
https://codedump.io/share/Cw6bUWVgN1vJ/1/accuracy-score-in-pytorch-lstm
Create S3 Bucket

The first step is to create an S3 bucket. In the Amazon S3 Console, click on the Create Bucket button. You will be redirected to this page.

creating an S3 bucket

Now enter a name in the Bucket name field. Then scroll down and you will see the yellow Create bucket button; click on that. It will create a bucket for you and you will see it in the list.

Create a Lambda function

Create a Lambda function in the AWS Lambda Console: click on the Create Function button. You will be redirected to this page.

Creating a lambda function

Enter a function name in the Function name field. If you have already created the lambda function then you can select Use a blueprint. If you haven't, you have to select Author from scratch, then scroll down and click on the Create function button. You will see something like below.

Create a project

Open your terminal and enter the following commands.

$ mkdir lambda-tutorial && cd lambda-tutorial
$ npm init -y
$ npm i aws-sdk
$ touch index.js

These commands create the project, install the aws-sdk package and create the index.js file, which we will need for the upload logic. Now open index.js in your favorite code editor. For example, to open it with VS Code, enter the following command (you can use Atom or Sublime Text for this as well).

$ code ./index.js

Now in this file enter the following code.

const AWS = require("aws-sdk");
const s3 = new AWS.S3({
  accessKeyId: "YOUR_ACCESS_KEY", // replace with your access key
  secretAccessKey: "YOUR_SECRET_KEY", // replace with your secret key
});

exports.handler = async (event, context, callback) => {
  /* HANDLE DATA WHICH IS SENT FROM THE CLIENT APP.
     HERE I JUST ADD STATIC DATA */
  const s3Bucket = "YOUR_BUCKET_NAME"; // replace with your bucket name
  const objectName = "helloworld.json"; // file name which you want to put in the s3 bucket
  const objectData = '{ "message" : "Hello World!"
}'; // file data you want to put
  const objectType = "application/json"; // type of file

  try {
    // setup params for putObject
    const params = {
      Bucket: s3Bucket,
      Key: objectName,
      Body: objectData,
      ContentType: objectType,
    };
    const result = await s3.putObject(params).promise();
    console.log(
      `File uploaded successfully at https://` +
        s3Bucket +
        `.s3.amazonaws.com/` +
        objectName
    );
  } catch (error) {
    console.log("error", error);
  }
};

Deploy the code on the Lambda

Now we are going to deploy our code to the lambda function. We need to convert our project to zip format. After making the zip, go to the AWS Lambda Console and select the function we created in step 2. You will be redirected to this page. Scroll down to the Function code section. Click on the Actions button, select Upload a .zip file and upload the zip file you created earlier. After it is uploaded successfully, you will see the code in the editor. You will also see the Deploy button beside the Actions button. Click on the Test button and create a test event. After creating it, click on the Test button again. Then you will see a log like this. You can also confirm whether the file was added successfully: go to the Amazon S3 Console and select the bucket you have created. In the Objects section, you will see something like the image below. Now we want to call our lambda function from the client app. To do that we have to do the following steps.

Create an API using AWS API gateway

Go to the Amazon API Gateway Console, click on Create API, then select HTTP API; there you will find the Build button, click on that. Now you have to follow 4 steps to create an API.

Step 1. Enter the API name. Also, select Lambda as the integration and add the lambda function we have created.
Step 2. Select a method and add a path for the API.
Step 3. Define a stage for the API. For this tutorial I don't need any stage, so I am keeping the default. If you need one, you can add it.
Step 4. Review whether everything is correct. If not, you can edit it by clicking the Edit button.
After completing those steps properly, click on the Create button. It will create an API for you, and that API uses the lambda function we specified as its back-end. Now you will see something like below. Copy the invoke URL, which we will need in the next step. We have to do one more thing: our client will call from a different domain, so we have to enable CORS. To enable CORS, go to the Amazon API Gateway Console and select the API which you have created. From the left nav, in the Develop section, click on the CORS tab. You will see something like below. Now click on the Configure button and add which origins, headers and methods you want to allow. For this tutorial's purposes I am adding the settings below. You can customize them as per your needs.

Build a Client App

Now we are going to build a client app using React. Let's start.

$ npx create-react-app call-lambda && cd call-lambda

We want to make a request to our API to call the lambda function. For making requests we will use the axios package. To install it, enter the following command.

$ yarn add axios

or,

$ npm i axios

Now open the App.js file and add the following code inside the file.

import React from 'react';
import axios from 'axios';

function App() {
  const api = 'YOUR_URL'; // enter the URL copied from the previous step
  const data = {
    name: 'Jhon Doe',
    age: 40
  }

  const handleClick = () => {
    axios.post(api, data)
      .then(res => console.log(res))
      .catch(err => console.log(err))
  }

  return(
    <div>
      <button onClick={() => handleClick()}>Upload File</button>
    </div>
  )
}

export default App;

Now run the client app.

$ yarn start

And now click on the Upload File button; this will call our lambda function and put the file in our S3 bucket. Congrats! You have successfully completed the process of uploading a JSON file to S3 using AWS Lambda. This was a long journey; I hope your time was not wasted and you have learned something new. I appreciate your effort.
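The heart of the Lambda is just assembling the params object for s3.putObject. Keeping that assembly in a small pure function makes the handler easy to unit-test without touching S3; here is a sketch in plain Node (buildPutParams is my helper name, not part of the AWS SDK):

```javascript
// Hypothetical helper (not part of aws-sdk): build the params object
// for s3.putObject from the pieces used in the Lambda above.
function buildPutParams(bucket, key, data) {
  if (!bucket || !key) {
    throw new Error("bucket and key are required");
  }
  return {
    Bucket: bucket,
    Key: key,
    Body: JSON.stringify(data), // serialize the payload as JSON text
    ContentType: "application/json",
  };
}

const params = buildPutParams("my-bucket", "helloworld.json", {
  message: "Hello World!",
});
console.log(params.Key); // helloworld.json
```

The handler then reduces to `s3.putObject(buildPutParams(...)).promise()`, and the validation error surfaces before any network call is made.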
https://plainenglish.io/blog/a-complete-guide-to-upload-json-file-in-s3-using-aws-lambda
http://www.roseindia.net/discussion/20679-Midpoint-of-two-number.html
CC-MAIN-2015-18
refinedweb
1,148
63.7
Have you encountered a problem where you completely forgot the super account login details of your DNN site? If you do, and you still have access to the FTP of the site, the following simple code trick should help you solve it.

Once you have FTP access, simply look for the default.aspx.cs file. Then look for the following method, or any method related to the Page Load event:

protected override void OnLoad(EventArgs e)

At the very end of that method, just before its closing brace, you can inject this code:

//This will check if the current user is not logged in on the site.
if (!HttpContext.Current.User.Identity.IsAuthenticated)
{
    //What you need to remember is the Portal ID and User ID.
    //By default, if you only have 1 site, the portal id value will be 0 and the User Id for the host is 1.
    //0 represents the Portal ID and 1 represents the Host user id.
    //GetUserById(int portalId, int userId)
    UserInfo objUser = UserController.GetUserById(0, 1);

    //You can leave the portalName and IP as empty strings.
    //UserLogin(int portalId, UserInfo user, string portalName, string ip, bool createPersistentCookie)
    UserController.UserLogin(0, objUser, "", "", true);
    Response.Redirect("/");
}

Note: if UserInfo or UserController is not defined, you can include the following namespace:

using DotNetNuke.Entities.Users;

Once done, simply refresh the DNN site URL and you should be logged in automatically. Once logged in, remove the code you entered, and don't forget to change your password.
https://bytutorial.com/blogs/dnn/forget-your-super-account-password-login-in-dnn
The Traits API - Manage Traits with ResourceProvider¶

This spec proposes a new REST resource, traits, in the placement API to manage the qualitative aspects of ResourceProviders. Using the Traits API, the placement service can manage the characteristics of resource providers as traits, and then help the scheduler make better placement decisions that match the boot requests.

Problem description¶

The ResourceProvider has a collection of Inventory and Allocation objects to manage the quantitative aspects of a boot request: when an instance uses resources from a ResourceProvider, the corresponding resource amounts of its Inventories are subtracted via its Allocations. Besides the quantitative aspects, the ResourceProvider also needs non-consumable qualitative aspects to differentiate providers' characteristics from each other. The classic example is requesting disk from different providers: a user may request 80GB of disk space for an instance (quantitative), but may also expect that the disk be SSD instead of spinning disk (qualitative). Having a way to mark that a storage provider (whether shared storage or a compute node's attached storage) is SSD or spinning is what we are concerned with.

Many traits are defined in a standard way by OpenStack, such as the Intel CPU instruction set extensions. These are reported programmatically, and will be consistent across all OpenStack clouds. However, the deployer may have other custom traits that the placement service needs to support.

Use Cases¶

- An admin user wants to know the valid traits that the cloud can recognize.
- Other OpenStack services want to know whether user-input traits are valid in the cloud.
- Other OpenStack services want a way to indicate the traits of ResourceProviders (for example, Nova wants to indicate which CPU features a compute node provides).
- A cloud provider admin wants a way to indicate the traits of resource providers.
(For example, a cloud provider admin wants to indicate that some storage providers are better-performing than others.)

Proposed change¶

We propose to use a new REST resource trait in the placement API to manage the qualitative information of resource providers. A trait is just a string; it is quite similar to Tags as defined in the Tags API-WG guideline.

There are two kinds of traits: standard traits and custom traits. The standard traits are interoperable across different OpenStack cloud deployments. Their definitions come from the os-traits library. Standard traits are read-only in the placement API, which means that the user can't modify any standard traits through the API. All traits are classified into namespaces, which are also defined by os-traits. The definition of traits in os-traits will be discussed in a separate proposal. All the traits used in the examples below are for demonstration purposes only.

The custom traits are used by admin users to manage non-standard qualitative information of ResourceProviders. An admin user can define custom traits through the placement API. A custom trait must be prefixed with the namespace CUSTOM_, which is defined in os-traits.

Users can only use valid traits in a request. The valid traits include both the standard traits and the custom traits.

The Traits API's usage scenarios are listed below:

Scenario 1: Single Resource Provider¶

In this scenario, Nova creates one ResourceProvider for each compute node. Each compute node then reports its qualitative information, which is tagged with a set of traits. This will be updated regularly, although we don't expect a resource provider's qualitative information to change very often.

Sync os-traits values into Placement¶

The placement API is designed to be the single source of truth for validating which traits are valid in the deployment.
There is no hard dependency chain for upgrading services, but operators have to keep in mind that only the placement API's os-traits version will be the master in the deployment. A new command, placement-manage os-traits sync, will be added. It is used to sync the standard traits from os-traits into the placement DB. The deployer should invoke this command after an os-traits upgrade.

Traits API vs. Aggregate metadata API¶

Previously, a deployer would manage qualitative information through the use of aggregate metadata. They would do this by creating an aggregate for the hosts with a particular trait that they wished to track, and then setting metadata on that aggregate to identify the trait that those hosts have. This practice has limited scalability and is hard to manage. Take for example the situation where there is a variety of trait combinations in a deployment: this requires managing aggregates indirectly instead of managing traits directly, and it creates a very complex mapping between traits and hosts. It is also not a simple task to determine which traits a particular host may have. Finally, this approach only works for compute nodes, not all potential resource providers.

The proposed traits REST API endpoint will replace the use of aggregates to track and manage qualitative information. The traits for a given resource provider will be a flat list, which is straightforward to manage through the API. Once the Traits API is in place, the use of aggregate metadata will be deprecated. Of course, aggregates themselves will remain, as they are used for much more than metadata purposes. The deprecation of aggregate metadata will be discussed in a separate spec.
So ‘Traits’ is the correct term. An alternative idea is adding attributes to the traits. An example would be in creating namespaces: instead of prefixing the trait string with a namespace string, we would add an attribute to trait that denotes its namespace. This would eliminate the need to add the “HW” and “HV” parts of the trait name in the examples above. Another use of attributes is to distinguish between system-generated and custom traits. Yet another potential use is define classes of traits, such as user-queryable. So while this simplifies some things by making these aspects of traits queryable, it means that we have to treat a trait as an object, and not just a simple string. Another alternative to the use of traits is to create a special ResourceClass for each capability that has infinite inventory. In this approach, a request for, say, SSD would “consume” a single SSD, but since the inventory is infinite, it never runs out. This would have the advantage of not having to create any new tables, and would only require small changes to existing classes to make infinite inventory possible. It does suffer from a conceptual disconnect, since we really aren’t consuming anything. It would also make querying for capabilities a bit more roundabout. The more explanation about this idea is at blog Simple Resource Provision. One more alternative which inspired by above idea is about use ResourceProviderTraits instead of ResourceClass. The reason is ResourceClass and Traits are very similar, both of them are string. Actually we just need an indication for the management of quantitative and qualitative. With this way, we can achieve the goal of above alternative idea, and without the infinite inventory. The more explanation about this is at mail-list Use ResourceProviderTraits instead of ResourceClass. Data model impact¶ The new table will be added to API Database. 
For the database schema, the following tables would suffice: CREATE TABLE traits ( id INT NOT NULL, name VARCHAR(255) NOT NULL, PRIMARY KEY (id), UNIQUE INDEX (name) ); CREATE TABLE resource_provider_traits ( resource_provider_id INT NOT NULL trait_id INT NOT NULL, PRIMARY KEY (resource_provider_id, trait_id), ); REST API impact¶ The Traits API is attached to the Placement API endpoint. The Traits API includes two new REST resources: /traits and /resource_providers/{uuid}/traits. /traits: This is used to manage the traits in the cloud, and this is also the only place to query the existing and associated traits in the cloud. It helps the traits be consistent across all the services in the cloud. The traits can be read by all users and can only be modified by admin users. /resource_providers/{uuid}/traits: This is used to query/edit the association between traits and resource_providers. This endpoint can only be used by admin and/or service users. The generic json-schema of Trait object is as below: TRAIT = { "type": "string", 'minLength': 1, 'maxLength': 255, "pattern": "^[A-Z0-9_]+$" } The custom trait must prefixed with CUSTOM_, the json-schema is as below: CUSTOM_TRAIT = { "type": "string", 'minLength': 1, 'maxLength': 255, "pattern": "^CUSTOM_[A-Z0-9_]+$" } The added API endpoints are: GET /traits a list of all existing trait strings GET /traits/{trait} check whether a trait exists in the cloud PUT /traits/{trait} create a new custom trait to placement service DELETE /traits/{trait} remove a custom trait from placement service GET /resource_providers/{rp_uuid}/traits a list of traits associated with a specific resource provider PUT /resource_providers/{rp_uuid}/traits set all the traits for a specific resource provider DELETE /resource_providers/{rp_uuid}/traits remove any existing trait associations for a specific resource provider Details of added endpoints are as follows: GET /traits¶ Return a list of valid trait strings according to parameters specified. 
The body of the response must match the following JSON schema document:

{
    "type": "object",
    "properties": {
        "traits": {
            "type": "array",
            "items": TRAIT
        }
    },
    "required": ["traits"],
    "additionalProperties": False
}

The default action is to query all the standard and custom traits in the placement service:

GET /traits

The response:

200 OK
Content-Type: application/json

{
    "traits": [
        "HW_CPU_X86_3DNOW",
        "HW_CPU_X86_ABM",
        ...
        "CUSTOM_TRAIT_1",
        "CUSTOM_TRAIT_2"
    ]
}

The following 3 sections specify the 3 different parameters of this GET request.

GET /traits?name=starts_with:{prefix}¶

To query the traits whose names begin with a specific prefix, use the starts_with operator with the query parameter name. For example, you can query all the custom traits by filtering with the CUSTOM prefix. Example:

GET /traits?name=starts_with:CUSTOM

The response:

200 OK
Content-Type: application/json

{
    "traits": [
        "CUSTOM_TRAIT_1",
        "CUSTOM_TRAIT_2"
    ]
}

GET /traits?associated={True|False}¶

To query the traits that have been associated with at least one resource provider in the placement service, use the parameter associated to filter them.

GET /traits?name=in:a,b,c¶

Return the traits listed with the in: parameter that exist in this cloud. For example, when an admin user creates a flavor specifying trait strings, Nova can get a list of which of these traits are defined in the deployment using the example below:

GET /traits?name=in:HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_INVALID_FEATURE

Its response:

200 OK
Content-Type: application/json

{
    "traits": [
        "HW_CPU_X86_AVX",
        "HW_CPU_X86_SSE"
    ]
}

Note that HW_CPU_X86_INVALID_FEATURE isn't a valid trait in the cloud, so it won't be included in the response. Nova can thus be aware of invalid traits and provide an informative response to users.

GET /traits/{trait_name}¶

This API checks whether a trait name exists in this cloud. The returned response will be one of the following:

204 No Content if the trait name exists.
404 Not Found if the trait name does not exist.

PUT /traits/{trait_name}¶

This API inserts a single custom trait without having to send the entire trait list:

PUT /traits/CUSTOM_TRAIT_1

Its response includes the new trait's URL in the Location header:

Location: traits/CUSTOM_TRAIT_1

The returned response will be one of the following:

201 Created if the insertion is successful.
204 No Content if the trait already exists.
400 Bad Request if the trait name isn't prefixed with the CUSTOM_ prefix.
409 Conflict if the trait name conflicts with a standard trait name.

DELETE /traits/{trait_name}¶

This API deletes the specified trait. Note that only custom traits can be deleted. The returned response will be one of the following:

204 No Content if the removal is successful.
400 Bad Request if the name to delete is a standard trait.
404 Not Found if no such trait exists.
409 Conflict if the trait to delete has associations with any ResourceProvider.

GET /resource_providers/{uuid}/traits¶

Return the trait list provided by a specific resource provider. The response format is similar to GET /traits, but with resource_provider_generation in the body. Example:

200 OK
Content-Type: application/json

{
    "traits": [
        "HW_CPU_X86_3DNOW",
        "HW_CPU_X86_ABM",
        ...
        "CUSTOM_TRAIT_1",
        "CUSTOM_TRAIT_2"
    ],
    "resource_provider_generation": 3
}

The returned response will be one of the following:

200 OK if the query is successful.
404 Not Found if the resource provider identified by {uuid} is not found.

PUT /resource_providers/{uuid}/traits¶

This API associates traits with the specified resource provider. All the associated traits will be replaced by the traits specified in the request body. Nova-compute will report the compute node traits through this API.

The body of the request must match the following JSON schema document:

{
    "type": "object",
    "properties": {
        "traits": {
            "type": "array",
            "items": CUSTOM_TRAIT
        },
        "resource_provider_generation": {
            "type": "integer"
        }
    },
    "required": ["traits", "resource_provider_generation"],
    "additionalProperties": False
}

Example:

PUT /resource_providers/508f3973-8e1a-4241-afec-ee3e21be0611/traits
Content-Type: application/json

{
    "traits": [
        "CUSTOM_TRAIT_1",
        "CUSTOM_TRAIT_2"
    ],
    "resource_provider_generation": 112
}

A successful response will list the changed traits in the same format as the GET response. The returned response will be one of the following:

200 OK if the update is successful.
400 Bad Request if any of the specified traits are not valid. The valid traits can be queried by GET /traits.
404 Not Found if the resource provider identified by {uuid} is not found.
409 Conflict if the resource_provider_generation doesn't match the server side.

Other end user impact¶

There will be a set of CLI commands for users to query and manage traits:

openstack trait list [--starts-with {prefix}] [--name-in {name1},{name2}]
openstack trait remove $TRAIT
openstack trait add $TRAIT

Other deployer impact¶

Deployers will need to set the traits for resources that aren't managed by OpenStack, such as shared storage pools used by compute node storage, as this will not be done automatically by any OpenStack service. Deployers will need to start using traits instead of aggregate metadata for managing qualitative information, in anticipation of aggregate metadata being deprecated. The os-traits library in the placement service needs to be the latest version in the cloud; otherwise, new traits reported from other OpenStack services won't be recognized by the placement service. So when upgrading the cloud to introduce new traits, the os-traits library in the placement service needs to be upgraded first.
The deployer needs to run the command placement-manage os-traits sync before starting the placement service, or after a new os-traits release, to ensure the new traits are imported into the placement DB.

Implementation¶

Assignee(s)¶

- Primary assignee: Alex Xu <hejie.xu@intel.com>
- Other contributors: Cheng, Yingxin <yingxin.cheng@intel.com> Jin, Yuntong <yuntong.jin@intel.com> Tan, Lin <lin.tan@intel.com> Ed Leafe <ed@leafe.com>

Work Items¶

- Add DB schema for Traits
- Refactor the ResourceClassCache to be utilized by Traits
- Add Traits-related objects
- Implement the API for managing custom traits
- Enable attaching traits to the resource provider object
- Implement the API for setting traits on resource providers
- Add new command placement-manage os-traits sync

Dependencies¶

This proposal also depends on the os-traits library, which it uses to determine which traits are standard and which are not.

Documentation Impact¶

API docs should be added for the Traits API. Administrator docs should be added to explain how to use the Traits API to manage capabilities.

References¶

Maillist discussion:
Tags API-WG guideline:
Simple Resource Provision:
Use ResourceProviderTraits instead of ResourceClass
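As an aside, the trait-name rules in this spec (uppercase letters, digits and underscores only; custom traits additionally prefixed with CUSTOM_) can be checked against the TRAIT and CUSTOM_TRAIT schema patterns with a small Python sketch. This is illustrative only and not part of the spec; the helper names are made up:

```python
import re

# Patterns taken from the TRAIT and CUSTOM_TRAIT JSON schemas in this spec.
TRAIT_PATTERN = re.compile(r"^[A-Z0-9_]+$")
CUSTOM_TRAIT_PATTERN = re.compile(r"^CUSTOM_[A-Z0-9_]+$")
MAX_LENGTH = 255

def is_valid_trait(name):
    """Check a string against the generic TRAIT schema (pattern + length)."""
    return 1 <= len(name) <= MAX_LENGTH and TRAIT_PATTERN.match(name) is not None

def is_valid_custom_trait(name):
    """Check a string against the stricter CUSTOM_TRAIT schema."""
    return is_valid_trait(name) and CUSTOM_TRAIT_PATTERN.match(name) is not None

print(is_valid_trait("HW_CPU_X86_AVX"))         # True
print(is_valid_custom_trait("CUSTOM_TRAIT_1"))  # True
print(is_valid_custom_trait("HW_CPU_X86_AVX"))  # False: missing CUSTOM_ prefix
print(is_valid_trait("hw_cpu_x86_avx"))         # False: lowercase not allowed
```

This mirrors the 400 Bad Request behavior described for PUT /traits/{trait_name}: a name that passes the generic pattern but not the CUSTOM_ prefix check would be rejected as a custom trait.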
https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/resource-provider-traits.html
wmemcpy man page

Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

wmemcpy — copy wide characters in memory

Synopsis
#include <wchar.h>

wchar_t *wmemcpy(wchar_t *restrict ws1, const wchar_t *restrict ws2, size_t n);

Description
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.

The wmemcpy() function shall copy n wide characters from the object pointed to by ws2 to the object pointed to by ws1. If copying takes place between objects that overlap, the behavior is undefined.

Return Value
The wmemcpy() function shall return the value of ws1.

Errors
No errors are defined.

The following sections are informative.

Examples
None.

Application Usage
None.

Rationale
None.

Future Directions
None.

See Also
wmemchr(3p), wmemcmp(3p), wmemmove(3p), wmemset(3p)
https://www.mankier.com/3p/wmemcpy
This is the second post of our series on building a self learning recommendation system using reinforcement learning. This series consists of 7 posts in which we progressively build a self learning recommendation system:

- Recommendation system and reinforcement learning primer
- Introduction to the multi armed bandit problem (this post)
- Self learning recommendation system as a bandit problem

In our previous post we introduced different types of recommendation systems and explored some of the basic elements of reinforcement learning. We found that reinforcement learning evaluates different actions when the agent is in a specific state. The action taken generates a certain reward; in other words, we get feedback on how good the action was based on the reward we got. However, we won't get feedback as to whether the action taken was the best available. This is what contrasts reinforcement learning with supervised learning. In supervised learning the feedback is instructive, and tells you the degree of correctness of an action based on the error. Since reinforcement learning is evaluative, it depends a lot on exploring different actions in different states to find the best one. This tradeoff between exploration and exploitation is the bedrock of reinforcement learning problems like the K armed bandit. Let us dive in.

The Bandit Problem.

In this section we will try to understand the K armed bandit problem setting from the perspective of product recommendation. A recommendation system recommends a set of products to a customer based on the customer's buying patterns, which we call the context. The context of the customer can be the segment the customer belongs to, or the period in which the customer buys: which month, which week of the month, which day of the week, etc. Once recommendations are made to a customer, the customer, based on his or her affinity, can take different types of actions, i.e.
(i) ignore the recommendation, (ii) click on the product and explore further, or (iii) buy the recommended product. The objective of the recommendation system is to recommend those products which are most likely to be accepted by the customer, or in other words to maximize the value from the recommendations.

Based on the recommendation example, let us try to draw parallels to the K armed bandit. The K-armed bandit is a slot machine which has 'K' different arms or levers. Each pull of a lever can have a different outcome. The outcomes can vary from no payoff to winning a jackpot. Your objective is to find the arm which yields the best payoff through repeated selection of the 'K' arms. This is where we can draw parallels between armed bandits and recommendation systems. The products recommended to a customer are like the levers of the bandit. Value is realized from the recommended products based on whether the customer clicks on them or buys them. So the aim of the recommendation system is to identify the products which will generate the best value, i.e. which will very likely be bought or clicked by the customer.

Having set the context of the problem statement, we will understand in depth the dynamics of the K-armed bandit problem and a couple of solutions for solving it. This will lay the necessary foundation for us to apply these ideas when creating our recommendation system.
For the non-stationary problem, identifying the arms which give the best value will be based on observing the rewards generated in the past for each of the arms. This scenario is more aligned with real life cases, where we really do not know what would drive a customer at a certain point in time. However, we might be able to draw a behaviour profile by observing different transactions over time. We will be exploring the non-stationary type of problem in this post.

Exploration v/s exploitation

One major dilemma in problems like the bandit is the choice between exploration and exploitation. Let us explain this with our context. Let us say after a few pulls of the first four levers we found that lever 3 has been consistently giving good rewards. In this scenario, a prudent strategy would be to keep on pulling the 3rd lever, as we are sure that this is the best known lever. This is called exploitation: we are exploiting our knowledge about the lever which gives the best reward. We also call the exploitation of the best known lever the greedy method. However, the question is: will exploitation of our current knowledge guarantee that we get the best value in the long run? The answer is no. This is because so far we have only tried the first 4 levers; we haven't tried levers 5 to 10. What if there was another lever capable of giving a higher reward? How will we identify those unknown high value levers if we keep sticking to our known best lever? This dilemma is called exploitation v/s exploration. Having said that, resorting to always exploring will also not be judicious. It has been found that a mix of exploitation and exploration yields the best value over the long run. Methods which adopt a mix of exploitation and exploration are called ε greedy methods. In such methods we exploit the greedy method most of the time.
However, at some instances, say with a small probability ε, we randomly sample from the other levers too, so that we get a mix of exploitation and exploration. We will explore different ε greedy methods in the subsequent sections.

Simple averaging method

In our discussions so far we have seen that the dynamics of reinforcement learning involve actions taken from different states yielding rewards based on the state-action pair chosen. The ultimate aim is to maximize the rewards in the long run. In order to maximize the overall rewards, it is required to exploit the actions which get you the maximum rewards in the long run. However, to identify the actions with the highest potential we need to estimate the value of each action over time. Let us first explore one of the methods, called the simple averaging method.

Let us denote the value of an action (a) at time t as Qt(a). Using the simple averaging method, Qt(a) can be estimated by summing up all the rewards received for the action (a), divided by the number of times action (a) was selected. This can be represented mathematically as

Qt(a) = ( R1 + R2 + ... + Rn-1 ) / ( n - 1 )

In this equation R1 .. Rn-1 represent the rewards received up to time (t) for action (a).

However, we know that the estimates of value are a moving average, which means that there will be further instances when action (a) is selected and corresponding rewards received. It would be tedious to always sum up all the rewards and then divide by the number of instances. To avoid such tedious steps, the above equation can be rewritten as follows:

Qn+1 = Qn + (1/n) [ Rn - Qn ]

This is a simple update formula, where Qn+1 is the new estimate after the (n+1)th occurrence of action a, Qn is the estimate up to the nth try, and Rn is the reward received for the nth try.
In simple terms this formula can be represented as follows:

New Estimate <----- Old Estimate + Step Size [ Reward - Old Estimate ]

For the simple averaging method, the Step Size is the reciprocal of the number of times that particular action was selected (1/n).

Now that we have seen estimate generation using the simple averaging method, let us look at the complete algorithm:

- Initialize values for the bandit arms from 1 to K. Usually we initialize a value of 0 for all the bandit arms.
- Define matrices to store the value estimates for all the arms (Qt(a)) and initialize them to zero.
- Define matrices to store the tracker for all the arms, i.e. a tracker which stores the number of times each arm was pulled.
- Start an iterative loop and sample a random probability value.
- If the probability value is greater than ε, pick the arm with the largest value. If the probability value is less than ε, randomly pick an arm.
- Get the reward for the selected arm.
- Update the number-tracking matrix with 1 for the arm which was selected.
- Update the Qt(a) matrix for the arm which was picked, using the simple averaging formula.

Let us look at the Python implementation of the simple averaging method next.

Implementation of Simple averaging method for K armed bandit

In this implementation we will experiment with around 2000 different bandits, with each bandit having 10 arms. We will be evaluating these bandits for around 10000 steps. Finally we will average the values across all the bandits for each time step. Let us dive into the implementation.

Let us first import all the required packages for the implementation (lines 1-4):

import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
from numpy.random import normal as GaussianDistribution

We will start off by defining all the parameters of our bandit implementation. We will have 2000 separate bandit experiments, each running for around 10000 steps. As defined earlier, each bandit will have 10 arms.
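Before moving on, the incremental update rule derived above can be checked numerically: updating Q with step size 1/n after each reward must give exactly the same result as recomputing the plain average of all rewards seen so far. A quick illustrative sketch (not part of the original implementation):

```python
rewards = [1.0, 0.0, 2.0, 4.0, 3.0]  # rewards received for one action

# Incremental form: NewEstimate = OldEstimate + (1/n) * (Reward - OldEstimate)
Q = 0.0
for n, R in enumerate(rewards, start=1):
    Q = Q + (1.0 / n) * (R - Q)

# Simple averaging form: sum of rewards divided by the number of pulls
Q_avg = sum(rewards) / len(rewards)

print(Q, Q_avg)  # both are 2.0
```

Both forms agree, which is why the implementation below only needs to keep the running estimate and the pull count for each arm, rather than the full reward history.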
Let us now first define these parameters.

# Define the armed bandit variables
nB = 2000 # Number of bandits
nS = 10000 # Number of steps we will take for each bandit
nA = 10 # Number of arms or actions of the bandit
nT = 2 # Number of solutions we would apply

As we discussed in the previous post, the way we arrive at the most optimal policy is through the rewards an agent receives while interacting with the environment. The policy defines the actions the agent will take; in our case, the action is the arm we are going to pull. The reward which we get from our actions is based on the internal calibration of the armed bandit. The policy we will adopt is a mix of exploitation and exploration. This means that most of the time we will exploit the action which was found to give the best reward, but once in a while we will also do a bit of exploration. The exploration is controlled by a parameter ε.

Next, let us define the containers to store the rewards which we get from each arm, and also to track whether the reward we got was the most optimal reward.

# Defining the rewards container
rewards = np.full((nT, nB, nS), fill_value=0.)
# Defining the optimal selection container
optimal_selections = np.full((nT, nB, nS), fill_value=0.)
print('Rewards tracker shape', rewards.shape)
print('Optimal reward tracker shape', optimal_selections.shape)

We saw earlier that the policy with which we pull each arm is a mixture of exploitation and exploration. The way we exploit is by looking at the average reward obtained from each arm and then selecting the arm which has the maximum reward. For tracking the rewards obtained from each arm, we initialize some values for each arm and then store the rewards we receive after each pull of the arm. To start off, we initialize all these values as zero, as we don't have any information about the arms and their reward possibilities.
# Set the initial values of our actions
action_Mental_model = np.full(nA, fill_value=0.0)
print(action_Mental_model.shape)
action_Mental_model

The rewards generated by each arm of the bandit come from the internal calibration of the bandit. Let us also define how that calibration works. For this case we will assume that the internal calibration follows a non-stationary process. This means that with each pull of the armed bandit, the existing value of each arm is incremented by a small value. The increment to the internal values of the arms is drawn from a Gaussian process with its mean at 0 and a standard deviation of 1. As a start, we will initialize the calibrated values of the bandit to zero.

# Initialize the bandit calibration values
arm_caliberated_value = np.full(nA, fill_value=0.0)
arm_caliberated_value

We also need to track how many times a particular action was selected, so we define a counter to store those values.

# Initialize the count of how many times an action was selected
arm_selected_count = np.full(nA, fill_value=0, dtype="int64")
arm_selected_count

The last of the parameters we will define is the exploration probability value. This value defines how often we explore non-greedy arms to find their potential.

# Define the epsilon (ε) value
epsilon = 0.1

Now we are ready to start our experiments. The first step in the process is to decide whether we want to explore or exploit. To decide this, we randomly sample a value between 0 and 1 and compare it with the exploration probability (ε) value we selected. If the sampled value is less than epsilon, we will explore; otherwise we will exploit. To explore, we randomly choose one of the 10 actions (bandit arms) irrespective of the value we know it has. If the random probability value is greater than the epsilon value, we go into the exploitation zone.
For exploitation, we pick the arm which we know generates the maximum reward.

# First determine whether we need to explore or exploit
probability = np.random.rand()
probability

The value which we got is greater than the epsilon value and therefore we will resort to exploitation. If the value were to be less than 0.1 (the epsilon value ε), we would have explored different arms. Please note that the probability value you get will be different as this is a random generation process. Now, let us define a decision mechanism which gives us the arm which needs to be pulled (our action) based on the probability value.

# Our decision mechanism
if probability >= epsilon:
    my_action = np.argmax(action_Mental_model)
else:
    my_action = np.random.choice(nA)
print('Selected Action',my_action)

In the above section, in line 31 we check whether the probability we generated is greater than the epsilon value. If it is greater, we exploit our knowledge about the value of the arms and select the arm which has so far provided the greatest reward (line 33). If the value is less than the epsilon value, we resort to exploration, wherein we randomly select an arm as shown in line 35. We can see that the action selected is the first action (index 0) as we are still at the initial values. Once we have selected our action (arm), we have to determine whether the arm is the best arm in terms of reward potential in comparison with the other arms of the bandit. To do that, we find the arm of the bandit which provides the greatest reward. We do this by taking the argmax of all the values of the bandit as in line 38.

# Find the most optimal arm of the bandits based on its internal calibration calculations
optimal_calibrated_arm = np.argmax(arm_caliberated_value)
optimal_calibrated_arm

Having found the best arm, it is now time to determine if the value which we as the user have received is equal to the most optimal value of the bandit.
The most optimal value of the bandit is the value corresponding to the best arm. We do that in line 40.

# Find the value corresponding to the most optimal calibrated arm
optimal_calibrated_value = arm_caliberated_value[optimal_calibrated_arm]

Now we check if the maximum value of the bandit is equal to the value the user has received. If both are equal, then the user has made the most optimal pull; otherwise the pull is not optimal, as represented in line 42.

# Check whether the value corresponding to the action selected by the user and the internal optimal action value are the same.
optimal_pull = float(optimal_calibrated_value == arm_caliberated_value[my_action])
optimal_pull

As we are still on the initial values, we know that both values are the same and therefore the pull is optimal, as represented by the boolean value 1.0 for the optimal pull. Now that we have made the most optimal pull, we also need to get rewards commensurate with our action. Let us assume that the rewards are generated from the armed bandit using a Gaussian process centered on the value of the arm which the user has pulled.

# Calculate the reward which is a random distribution centered at the selected action value
reward = GaussianDistribution(loc=arm_caliberated_value[my_action], scale=1, size=1)[0]
reward

1.52

In line 45 we generate rewards using a Gaussian distribution with its mean value as the value of the arm the user has pulled. In this example we get a value of around 1.52, which we will further store as the reward we have received. Please note that since this is a random generation process, the values you get could be different from this value. Next we will keep track of the arms we pulled in the current experiment.

# Update the arm selected count by 1
arm_selected_count[my_action] += 1
arm_selected_count

Since the optimal arm was the first arm, we update the count of the first arm to 1 as shown in the output. Next we are going to update our estimated value of each of the arms we select.
The values we will be updating are a function of the reward we get and the current value the arm already has. So if the current value is Vcur, then the new value to be updated will be Vcur + (1/n) * (r - Vcur), where n is the number of times we have visited that particular arm and r the reward we have got for pulling that arm. To calculate this updated value we first need to find the values Vcur and n. Let us get those values. Vcur would be the estimated value corresponding to the arm we have just pulled.

# Get the current value of our action
Vcur = action_Mental_model[my_action]
Vcur

0.0

n would be the number of times the current arm was pulled.

# Get the count of the number of times the arm was exploited
n = arm_selected_count[my_action]
n

1

Now we will update the new value against the estimates of the arms we are tracking.

# Update the new value for the selected action
action_Mental_model[my_action] = Vcur + (1/n) * (reward - Vcur)
action_Mental_model

As seen from the output, the current value of the arm we pulled is updated in the tracker. With each successive pull of the arm, we will keep updating the reward estimates. After updating the value generated from each pull, the next task is to update the internal calibration of the armed bandit, as we are dealing with a non-stationary value function.

# Increment the calibration value based on a Gaussian distribution
increment = GaussianDistribution(loc=0, scale=0.01, size=nA)
# Update the arm values with the updated value
arm_caliberated_value += increment
# Updated arm value
arm_caliberated_value

As seen from lines 59-64, we first generate a small incremental value from a Gaussian distribution with mean 0 and standard deviation 0.01. We add this value to the current value of the internal calibration of the arm to get the new value. Please note that you will get different values for these processes as this is a random generation of values.
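The update Vcur + (1/n) * (r - Vcur) is just the sample average computed incrementally, so we never have to keep the full reward history per arm. The short check below (illustrative only, with synthetic rewards) confirms the incremental form matches the plain batch average:

```python
import numpy as np

rng = np.random.default_rng(1)
observed_rewards = rng.normal(loc=0.5, scale=1.0, size=1000)

v, n = 0.0, 0
for r in observed_rewards:
    n += 1
    v = v + (1.0 / n) * (r - v)  # incremental form of the running mean

batch_mean = observed_rewards.mean()
# v and batch_mean agree up to floating-point error
```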
These are the set of processes for one iteration of a bandit. We will continue these iterations for 2000 bandits, and for each bandit we will iterate for 10000 steps. In order to run these processes for all the iterations, it is better to represent many of the processes as separate functions and then iterate through them. Let us get going with that task.

Function 1 : Function to select actions

The first of the functions is the one to generate the actions we are going to take.

def Myaction(epsilon,action_Mental_model):
    probability = np.random.rand()
    if probability >= epsilon:
        return np.argmax(action_Mental_model)
    return np.random.choice(nA)

Function 2 : Function to check whether the action is optimal and generate rewards

The next function is to check whether our action is the most optimal one and generate the reward for our action.

def Optimalaction_reward(my_action,arm_caliberated_value):
    # Find the most optimal arm of the bandits based on its internal calibration calculations
    optimal_calibrated_arm = np.argmax(arm_caliberated_value)
    # Then find the value corresponding to the most optimal calibrated arm
    optimal_calibrated_value = arm_caliberated_value[optimal_calibrated_arm]
    # Check whether the value of the test bed corresponding to the action selected by the user and the internal optimal action value of the test bed are the same.
    optimal_pull = float(optimal_calibrated_value == arm_caliberated_value[my_action])
    # Calculate the reward which is a random distribution centred at the selected action value
    reward = GaussianDistribution(loc=arm_caliberated_value[my_action], scale=1, size=1)[0]
    return optimal_pull,reward

Function 3 : Function to update the estimated value of the arms of the bandit

def updateMental_model(my_action, reward,arm_selected_count,action_Mental_model):
    # Update the arm selected count with the latest count
    arm_selected_count[my_action] += 1
    # Find the current value of the arm selected
    Vcur = action_Mental_model[my_action]
    # Find the number of times the arm was pulled
    n = arm_selected_count[my_action]
    # Update the value of the current arm
    action_Mental_model[my_action] = Vcur + (1/n) * (reward - Vcur)
    # Return the arm selected count and our mental model
    return arm_selected_count,action_Mental_model

Function 4 : Function to increment the reward values of the bandits

The last of the functions is the function we use to make the reward generation non-stationary.

def calibrateArm(arm_caliberated_value):
    increment = GaussianDistribution(loc=0, scale=0.01, size=nA)
    arm_caliberated_value += increment
    return arm_caliberated_value

Now that we have defined the functions, we will use them to iterate through the different bandits and the multiple steps for each bandit.

# Initialize the count of how many times an arm was selected
arm_selected_count = np.full(nA, fill_value=0, dtype="int64")
...
arm_selected_count,action_Mental_model = updateMental_model(my_action, reward,arm_selected_count,action_Mental_model)
# store the rewards
rewards[0][nB_i][nS_i] = reward
# Update the optimal step selection counter
optimal_selections[0][nB_i][nS_i] = optimal_pull
# Recalibrate the bandit values
arm_caliberated_value = calibrateArm(arm_caliberated_value)

In line 96, we start the first iterative loop to iterate through each of the set of bandits.
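The outer loop driving these functions is truncated in the excerpt above. A self-contained reconstruction is sketched below, scaled down (20 bandits × 500 steps instead of 2000 × 10000) so it runs quickly; GaussianDistribution is assumed to be numpy's normal sampler, since the post never shows its definition:

```python
import numpy as np

GaussianDistribution = np.random.normal  # assumed alias, not shown in the post
nB, nS, nA = 20, 500, 10                 # scaled down from 2000 / 10000
rewards = np.zeros((2, nB, nS))
optimal_selections = np.zeros((2, nB, nS))

def Myaction(epsilon, action_Mental_model):
    if np.random.rand() >= epsilon:
        return np.argmax(action_Mental_model)
    return np.random.choice(nA)

def Optimalaction_reward(my_action, arm_caliberated_value):
    optimal_arm = np.argmax(arm_caliberated_value)
    optimal_pull = float(arm_caliberated_value[optimal_arm]
                         == arm_caliberated_value[my_action])
    reward = GaussianDistribution(loc=arm_caliberated_value[my_action],
                                  scale=1, size=1)[0]
    return optimal_pull, reward

def updateMental_model(my_action, reward, arm_selected_count, action_Mental_model):
    arm_selected_count[my_action] += 1
    n = arm_selected_count[my_action]
    Vcur = action_Mental_model[my_action]
    action_Mental_model[my_action] = Vcur + (1 / n) * (reward - Vcur)
    return arm_selected_count, action_Mental_model

def calibrateArm(arm_caliberated_value):
    # Non-stationary drift of the true arm values
    return arm_caliberated_value + GaussianDistribution(0, 0.01, nA)

for nB_i in range(nB):
    # Fresh bandit: reset calibration, estimates, counts
    arm_caliberated_value = np.zeros(nA)
    action_Mental_model = np.zeros(nA)
    arm_selected_count = np.full(nA, fill_value=0, dtype="int64")
    epsilon = 0.1
    for nS_i in range(nS):
        my_action = Myaction(epsilon, action_Mental_model)
        optimal_pull, reward = Optimalaction_reward(my_action, arm_caliberated_value)
        arm_selected_count, action_Mental_model = updateMental_model(
            my_action, reward, arm_selected_count, action_Mental_model)
        rewards[0][nB_i][nS_i] = reward
        optimal_selections[0][nB_i][nS_i] = optimal_pull
        arm_caliberated_value = calibrateArm(arm_caliberated_value)
```

The reset of the calibration, estimates and counts at the top of the outer loop is an assumption on my part; it matches the per-bandit initializations the post describes for lines 98-104.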
In lines 98-104, we initialize the value trackers of the bandit and also the rewards we receive from the bandits. Finally we also define the epsilon value. From lines 105-117, we carry out many of the processes we mentioned earlier, like:

- Selecting our action, i.e. the arm we would be pulling (line 107)
- Validating whether our action is optimal or not and getting the rewards for our action (line 109)
- Updating the count of our actions and updating the rewards for the actions (line 111)
- Storing the rewards and optimal action counts (lines 113-115)
- Incrementing the internal value of the bandit (line 117)

Let us now run the processes and capture the values. We then average the rewards which we have got across the number of bandit experiments and visualise the reward trends as the number of steps increases.

# Averaging the rewards for all the bandits along the number of steps taken
avgRewards = np.average(rewards[0], axis=0)
avgRewards.shape
plt.plot(avgRewards, label='Sample weighted average')
plt.legend()
plt.xlabel("Steps")
plt.ylabel("Average reward")
plt.show()

From the plot we can see that the average value of the rewards increases as the number of steps increases. This means that with an increasing number of steps, we move towards optimality, which is reflected in the rewards we get. Let us now look at the estimated values of each arm and also look at how many times each of the arms was pulled.

# Average rewards received by each arm
action_Mental_model

From the average values we can see that the last arm has the highest value of 1.1065. Let us now look at the counts of how often these arms were pulled.

# No of times each arm was pulled
arm_selected_count

From the arm selection counts, we can see that the last arm was pulled the maximum number of times. This indicates that as the number of steps increased, our actions were aligned to the arms which gave the maximum value.
However, even though the average value increased with more steps, does it mean that most of the time our actions were the most optimal ones? Let us now look at how many times we selected the most optimal actions by visualizing the optimal pull counts.

# Plot of the most optimal actions
average_run_optimality = np.average(optimal_selections[0], axis=0)
average_run_optimality.shape
plt.plot(average_run_optimality, label='Simple weighted averaging')
plt.legend()
plt.xlabel("Steps")
plt.ylabel("% Optimal action")
plt.show()

From the above plot we can see that there is an increase in the counts of optimal actions selected in the initial steps, after which the counts of the optimal actions plateau. And finally we can see that the optimal actions were selected only around 40% of the time. This means that even though there is an increasing trend in the reward value with the number of steps, there is still room for more value to be obtained. So if we increase the proportion of the most optimal actions, there would be a commensurate increase in the average value which will be rewarded by the bandits. To achieve that, we might have to tweak the way the rewards are calculated and stored for each arm. One effective way is to use the weighted averaging method.

Weighted Averaging Method

When we were dealing with the simple averaging method, we found that the update formula was as follows:

New Estimate <----- Old Estimate + Step Size [ Reward - Old Estimate ]

In the formula, the Step Size for the simple averaging method is the reciprocal of the number of times that particular action was selected (1/n). In the weighted averaging method we make a small variation in the step size. In this method we use a constant step size called alpha. The new update formula would be as follows:

Qn+1 = Qn + alpha * (reward - Qn)

Usually we take some small value of alpha less than 1, say 0.1 or 0.01, or values similar to that.
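A useful way to see what the constant step size does: expanding the recursion Qn+1 = Qn + alpha * (Rn - Qn) shows that each past reward Ri ends up weighted by alpha * (1 - alpha)^(n-i), so recent rewards count exponentially more than old ones. The snippet below (illustrative, with made-up rewards) verifies this closed form against the incremental update:

```python
import numpy as np

alpha = 0.1
rewards_seen = np.array([1.0, 2.0, 0.5, 3.0, 1.5])

# Incremental constant-step-size update, starting from Q1 = 0
q = 0.0
for r in rewards_seen:
    q = q + alpha * (r - q)

# Closed form: exponentially decaying weights on past rewards
n = len(rewards_seen)
weights = alpha * (1 - alpha) ** (n - 1 - np.arange(n))
q_closed = float(weights @ rewards_seen)  # the (1-alpha)^n * Q1 term vanishes since Q1 = 0
```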
Let us now try the weighted averaging method with a step size of 0.1 and observe what difference this method has on the optimal values of each arm. In the weighted averaging method all the steps are the same as in simple averaging, except for the arm update method, which is a little different. Let us define the new update function.

def updateMental_model_WA(my_action, reward,action_Mental_model):
    alpha=0.1
    qn = action_Mental_model[my_action]
    action_Mental_model[my_action] = qn + alpha * (reward - qn)
    return action_Mental_model

Let us now run the process again with the updated method. Please note that we store the values in the same rewards and optimal_selections matrices. We store the values of the weighted averaging method at index 1.

...
action_Mental_model = updateMental_model_WA(my_action, reward,action_Mental_model)
# store the rewards
rewards[1][nB_i][nS_i] = reward
# Update the optimal step selection counter
optimal_selections[1][nB_i][nS_i] = optimal_pull
# Recalibrate the bandit values
arm_caliberated_value = calibrateArm(arm_caliberated_value)

Let us look at the plots for the weighted averaging method.

average_run_rewards = np.average(rewards[1], axis=0)
average_run_rewards.shape
plt.plot(average_run_rewards, label='weighted average')
plt.legend()
plt.xlabel("Steps")
plt.ylabel("Average reward")
plt.show()

From the plot we can see the average reward increasing with the number of steps. We can also notice that the average values obtained are higher than with the simple averaging method. In the simple averaging method the average value was between 1 and 1.2. However, in the weighted averaging method the average value reaches within the range of 1.2 to 1.4. Let us now see how the optimal pull counts fare.
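The reason the weighted method does better here is that the bandit is non-stationary: a constant step size keeps tracking a drifting arm value, while the sample average is dragged down by stale early rewards. A small illustrative comparison on a single arm whose true value drifts upward (parameters are mine, chosen only for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = 0.1

true_value = 0.0
q_simple, q_weighted, n = 0.0, 0.0, 0
for _ in range(2000):
    true_value += 0.005               # the arm's true value drifts steadily upward
    r = rng.normal(true_value, 1.0)   # noisy reward around the current true value
    n += 1
    q_simple += (1.0 / n) * (r - q_simple)    # sample-average estimate
    q_weighted += alpha * (r - q_weighted)    # constant-step-size estimate

# q_weighted ends up close to the drifted true value;
# q_simple lags far behind, near the average of the whole history
```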
average_run_optimality = np.average(optimal_selections[1], axis=0)
average_run_optimality.shape
plt.plot(average_run_optimality, label='Weighted averaging')
plt.legend()
plt.xlabel("Steps")
plt.ylabel("% Optimal action")
plt.show()

We can observe from the above plot that we take the optimal action almost 80% of the time as the number of steps progresses towards 10000. If you remember, the optimal action percentage was around 40% for the simple averaging method. The plots show that the weighted averaging method performs better than the simple averaging method.

Wrapping up

In this post we have understood two methods of finding optimal values for a K-armed bandit. The solution space is not limited to these two methods and there are many more methods for solving the bandit problem. The list below gives just a few of them:

- Upper Confidence Bound Algorithm (UCB)
- Bayesian UCB Algorithm
- Exponential weighted Algorithm
- Softmax Algorithm

Bandit problems are very useful for many use cases like recommendation engines, website optimization, click-through rates etc. We will see more use cases of bandit algorithms in some future posts.

What next?

Having understood the bandit problem, our next endeavor would be to use the concepts in building a self-learning recommendation system. The next post would be a precursor to that. In the next post we will formulate our problem context and define the processes for building the self-learning recommendation system using a bandit algorithm. This post will be released next week (Jan 17th 2022).

This book can be accessed using the following links

The Data Science Workshop on Amazon

The Data Science Workshop on Packt

Enjoy your learning experience and be empowered !!!!
https://bayesianquest.com/2022/01/10/building-self-learning-recommendation-system-using-reinforcement-learning-ii-the-bandit-problem/
First of all, I apologize if this has been asked before and I couldn't find it. I scoured the depths of the internet looking for some guidance but did not come up successful. In my programming II class we are working on creating a program for the lottery. We have created external classes (did that in the classroom) and have used the objects created in those classes in our driver. For this assignment, we are creating a program which outputs lottery numbers. The program asks the user if they are playing Pick 3, 4, or 5 and then is supposed to output a random number for each "ball". In one of the separate classes we have already imported and instantiated the randomizer. However, when I program the array and for loop, while it generates the proper number of printouts (3 printouts for 3, 4 printouts for 4, and 5 printouts for 5), a new number is not being generated for each loop. Instead, it is holding onto the first value and printing that out for each loop after. I've included my code below. Please help!

import java.util.Scanner;

public class LotteryGame2 {
    public static void main(String[] args){
        //Declare and instantiate objects
        PickGame2 pick = new PickGame2();
        Scanner keyboard = new Scanner(System.in);
        pick.activate();
        //Ask for and obtain user input
        System.out.print("Are you playing Pick 3, Pick 4, or Pick 5?\nEnter number here: ");
        int numberOfGame = keyboard.nextInt();
        PickGame2[] gamenumber = new PickGame2[numberOfGame];
        for (int i=0; i<gamenumber.length; ++i){
            int ball = pick.pullBall();
            System.out.println("Ball " + (i+1) + " is " + ball);
        }//Ending bracket of for loop
        //Close Scanner object
        keyboard.close();
    }//Ending bracket of main method
}//Ending bracket of class LotteryGame2

import java.util.Random;

public class LotteryContainer {
    private int ball;
    private Random randomizer;

    public LotteryContainer(){
        this.ball = 0;
        this.randomizer = new Random();
    }//Ending bracket of constructor

    public void activate(){
        this.ball = this.randomizer.nextInt(9) + 1;
    }//Ending bracket of method activate

    public int getBall(){
        return this.ball;
    }//Ending bracket of method getBall
}//Ending bracket of class LotteryContainer

public class PickGame2 {
    private LotteryContainer machine;

    public PickGame2(){
        this.machine = new LotteryContainer();
    }//Ending bracket of constructor

    public void activate(){
        this.machine.activate();
    }//Ending bracket of method activate

    public int pullBall(){
        return this.machine.getBall();
    }//Ending bracket of pullBall
}//Ending bracket of class PickGame2

If you look at your PickGame2 class, you'll see that it uses a LotteryContainer for picking the balls. And the LotteryContainer has two methods - one that picks a number for the ball (activate) and one that returns the ball number that was picked (getBall). Since you only ever call pullBall in PickGame2, and pullBall only calls getBall from LotteryContainer, it means you never call the activate method which gets you a new ball. Add a call to activate before every call to pullBall, and you'll get a different number printed each time. (Actually, it's possible for a random number generator to generate the same number several times. I don't know what the specification of your homework was, but if it's supposed to simulate a "real" lottery where each ball is different, then just generating random numbers and displaying them is not enough.)
https://codedump.io/share/LE9rPKUKMxPV/1/generating-random-numbers-in-for-loop-from-external-class
NAME
       setgid - set group identity

SYNOPSIS
       #include <sys/types.h>
       #include <unistd.h>

       int setgid(gid_t gid);

DESCRIPTION
       setgid() sets the effective group ID of the calling process. If the
       caller is the superuser, the real GID and saved set-group-ID are
       also set.

ERRORS
       EPERM  The calling process is not privileged (does not have the
              CAP_SETGID capability), and gid does not match the real
              group ID or saved set-group-ID of the calling process.

NOTES
       The original Linux setgid() system call supported only 16-bit group
       IDs. Subsequently, Linux 2.4 added setgid32() supporting 32-bit
       IDs. The glibc setgid() wrapper function transparently deals with
       the variation across kernel versions.

CONFORMING TO
       SVr4, POSIX.1-2001.

SEE ALSO
       getgid(2), setegid(2), setregid(2), capabilities(7), credentials(7)

COLOPHON
       This page is part of release 3.35 of the Linux man-pages project. A
       description of the project, and information about reporting bugs,
       can be found at.
http://manpages.ubuntu.com/manpages/precise/man2/setgid32.2.html
Details

- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 1.5.4, 6.0.0-beta1
- Fix Version/s: 1.5.5, 6.0.0-beta1
- Labels: None

Description

Since servlet 3.0, web applications can be set up completely in code. To support this kind of setup wicket should

- support the manual assignment of a web application instance to WicketFilter
- support setting the runtime configuration type in WebApplication programmatically through a setter instead of reading web.xml

Sample code for demonstrating the use case:

public class AppContextListener implements ServletContextListener {

    private GuiceContext guiceContext;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // ...
    }
}

Activity
Feel free to paste the proper section of the JSR here In short: To properly execute your service initializer it has to be located in a JAR file at location '/META-INF/services', it will not reliably work in a WAR file though some servers support it! To make your life easy you can use spring's org.springframework.web.WebApplicationInitializer (see javadoc for details) done in master
https://issues.apache.org/jira/browse/WICKET-4350?focusedCommentId=13246301&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Apache::HeavyCGI - Framework to run complex CGI tasks on an Apache server

    use Apache::HeavyCGI;

The release of this software was only for evaluation purposes to people who are actively writing code that deals with Web Application Frameworks. This package is probably just another Web Application Framework and may be worth using or may not be worth using. As of this writing (July 1999) it is by no means clear if this software will be developed further in the future. The author has written it over many years and is deploying it in several places.

Update 2006-02-03: Development stalled since 2001 and now discontinued.

There is no official support for this software. If you find it useful or even if you find it useless, please mail the author directly. But please make sure you remember: THE RELEASE IS FOR DEMONSTRATION PURPOSES ONLY.

The Apache::HeavyCGI framework is intended to provide a couple of simple tricks that make it easier to write complex CGI solutions. It has been developed on a site that runs all requests through a single mod_perl handler that in turn uses CGI.pm or Apache::Request as the query interface. So Apache::HeavyCGI is -- as the name implies -- not merely for multi-page CGI scripts (for which there are other solutions), but it is for the integration of many different pages into a single solution. The many different pages can then conveniently share common tasks. The approach taken by Apache::HeavyCGI is a components-driven one with all components being pure perl. So if you're not looking for yet another embedded perl solution, and aren't intimidated by perl, please read on.

If you have had a look at stacked handlers, you might have noticed that the model for stacking handlers often is too primitive.
The model supposes that the final form of a document can be found by running several passes over a single entity, each pass refining the entity, manipulating some headers, maybe even passing some notes to the next handler, and in the most advanced form passing pnotes between handlers. A lot of Web pages may fit into that model, even complex ones, but it doesn't scale well for pages that result out of a structure that's more complicated than adjacent items. The more complexity you add to a page, the more overhead is generated by the model, because for every handler you push onto the stack, the whole document has to be parsed and recomposed again and headers have to be re-examined and possibly changed.

Inheritance provokes namespace conflicts. Besides this, I see little reason why one should favor inheritance over a using relationship. The current implementation of Apache::HeavyCGI is very closely coupled with the Apache class anyway, so we could do inheritance too. No big deal I suppose. The downside of the current way of doing it is that we have to write my $r = $obj->{R}; very often, but that's about it. The upside is that we know which manpage to read for the different methods provided by $obj->{R}, $obj->{CGI}, and $obj itself.

Apache::HeavyCGI takes an approach that is more ambitious for handling complex tasks. The underlying model for the production of a document is that of a puzzle. An HTML (or XML or SGML or whatever) page is regarded as a sequence of static and dynamic parts, each of which has some influence on the final output. Typically, in today's Webpages, the dynamic parts are filled into table cells, i.e. contents between some <TD></TD> tokens. But this is not necessarily so. The static parts in between typically are some HTML markup, but this also isn't forced by the model. The model simply expects a sequence of static and dynamic parts. Static and dynamic parts can appear in random order.
In the extreme case of a picture you would only have one part, either static or dynamic. HeavyCGI could handle this, but I don't see a particular advantage of HeavyCGI over a simple single handler. In addition to the task of generating the contents of the page, there is the other task of producing correct headers. Header composition is an often neglected task in the CGI world. Because pages are generated dynamically, people believe that pages without a Last-Modified header are fine, and that an If-Modified-Since header in the browser's request can go by unnoticed. This laissez-faire principle gets in the way when you try to establish a server that is entirely driven by dynamic components and the number of hits is significant. The three big tasks a CGI script has to master are Headers, Parameters and the Content. In general one can say, content creation SHOULD not start before all parameters are processed. In complex scenarios you MUST expect that the whole layout may depend on one parameter. Additionally we can say that some header related data SHOULD be processed very early because they might result in a shortcut that saves us a lot of processing. Consequently, Apache::HeavyCGI divides the tasks to be done for a request into four phases and distributes the four phases among an arbitrary number of modules. Which modules are participating in the creation of a page is the design decision of the programmer. The perl model that maps (at least IMHO) ideally to this task description is an object oriented approach that identifies a couple of phases by method names and a couple of components by class names. To create an application with Apache::HeavyCGI, the programmer specifies the names of all classes that are involved. All classes are singleton classes, i.e. they have no identity of their own but can be used to do something useful by working on an object that is passed to them. Singletons have an @ISA relation to Class::Singleton which can be found on CPAN. 
As such, the classes can have only a single instance, which can be found by calling the CLASS->instance method. Following the mod_perl convention, we'll call these objects handlers. Every request maps to exactly one Apache::HeavyCGI object. The programmer uses the methods of this object by subclassing.

The HeavyCGI constructor creates objects of the AVHV type (pseudo-hashes). If the inheriting class needs its own constructor, this needs to be an AVHV-compatible constructor. A description of AVHV can be found in fields. *** Note: after 0.0133 this was changed to an ordinary hash. ***

An Apache::HeavyCGI object usually is constructed with the new method, and after that the programmer calls the dispatch method on this object. HeavyCGI will then perform various initializations and then ask all nominated handlers in turn to perform the header method and, in a second round, to perform the parameter method. In most cases, the availability of a method can be determined at compile time of the handler. If this is true, it is possible to create an execution plan at compile time that determines the sequence of calls such that no runtime is lost checking method availability. Such an execution plan can be created with the Apache::HeavyCGI::ExePlan module. All of the called methods will get the HeavyCGI request object passed as the second parameter.

There are no fixed rules as to what has to happen within the header and parameter methods. As a rule of thumb, it is recommended to determine and set the object attributes LAST_MODIFIED and EXPIRES (see below) within the header() method. It is also recommended to inject the Apache::HeavyCGI::IfModified module as the last header handler, so that the application can abort early with a Not Modified header.
I would recommend that in the header phase you do as little parameter processing as possible, except for those parameters that are related to the last modification date of the generated page.

Sometimes you want to stop calling the handlers because you think that processing the request is already done. In that case you can do a

    die Apache::HeavyCGI::Exception->new(HTTP_STATUS => status);

at any point within prepare(), and the specified status will be returned to the Apache handler. This is useful, for example, for the Apache::HeavyCGI::IfModified module, which sends the response headers and then dies with HTTP_STATUS set to Apache::Constants::DONE. Redirectors presumably would set up their headers and set it to Apache::Constants::HTTP_MOVED_TEMPORARILY.

Another use for Perl exceptions is errors: in case of an error within the prepare loop, all you need to do is

    die Apache::HeavyCGI::Exception->new(ERROR => [array_of_error_messages]);

The error is caught at the end of the prepare loop, and the anonymous array that arrives in $@ will then be appended to @{$self->{ERROR}}. You should check for $self->{ERROR} within your layout method to return an appropriate response to the client.

After the header and the parameter phases, the application should have set up the object so that it characterizes the complete application and its status. No changes to the object should happen from now on. In the next phase, Apache::HeavyCGI will ask this object to perform the layout method, whose duty is to generate an Apache::HeavyCGI::Layout (or compatible) object. Please read more about this object in Apache::HeavyCGI::Layout. For our HeavyCGI object it is only relevant that this Layout object can compose itself as a string in the as_string() method. As a layout object is an abstraction of a layout, independent of request-specific contents, it is recommended to cache the most important layouts. This is part of the responsibility of the programmer.
In the next step, HeavyCGI stores a string representation of the current request by calling the as_string() method on the layout object and passing itself to it as the first argument. By passing itself to the Layout object, all the request-specific data get married to the layout-specific data, and we reach the stage where stacked handlers usually start: we arrive at a composed content that is ready for shipping.

The last phase deals with setting up the yet unfinished headers, possibly compressing, recoding and measuring the content, and delivering the request to the browser. The two methods finish() and deliver() are responsible for that phase. The default deliver() method is pretty generic: it calls finish(), then sends the headers, and sends the content only if the request method wasn't a HEAD. It then returns Apache's constant DONE to the caller, so that Apache won't do anything except logging on this request. The method finish() is more apt to be overridden. The default finish() method sets the content type to text/html, compresses the content if the browser understands compressed data and Compress::Zlib is available, and it also sets the headers Vary, Expires, Last-Modified, and Content-Length. You most probably will want to override the finish method.

Summing up

Your::Class (ISA Apache::HeavyCGI) defines: sub handler {...}, optionally sub init {...}, sub layout {...}, and sub finish {...}

Apache::HeavyCGI defines: sub new {...}, sub dispatch {...}, sub prepare {...}, and sub deliver {...}

Handler_1 .. Handler_N (ISA Class::Singleton) define: sub header {...} and sub parameter {...}

The request flow is as follows (** marks the steps that are your duty, ?? marks steps you may override):

- Apache calls Your::Class::handler()
- Your::Class::handler() nominates the handlers, constructs $self, and calls $self->dispatch (**)
- Apache::HeavyCGI::dispatch() calls $self->init (does nothing, ??), $self->prepare (see below), $self->layout (sets up the layout, **), $self->finish (headers and gross content, **), and $self->deliver (delivers, ??)
- Apache::HeavyCGI::prepare() calls HANDLER->instance->header($self) and HANDLER->instance->parameter($self) on all of your nominated handlers (**)

As already mentioned, the HeavyCGI object is a pseudo-hash, i.e. it can be treated like a HASH, but all attributes that are being used must be predeclared at compile time with a use fields clause. The convention regarding attributes is as simple as it can be: uppercase attributes are reserved for the Apache::HeavyCGI class; all other attribute names are at your disposal if you write a subclass. The following attributes are currently defined. (The module author's production environment has a couple more attributes that seem to work well but most probably need more thought to be implemented in a generic way.)

- Set by the can_gzip method. True if the client is able to handle gzipped data.
- Set by the can_png method. True if the client is able to handle PNG.
- Set by the can_utf8 method. True if the client is able to handle UTF-8 encoded data.
- An object that handles GET and POST parameters and offers the methods param() and upload() in a manner compatible with Apache::Request. Needs to be constructed and set by the user, typically in the constructor.
- Optional attribute to denote the charset in which the outgoing data are being encoded. Only used within the finish() method. If it is set, the finish() method will set the content type to text/html with this charset.
- Scalar that contains the content that should be sent to the user uncompressed. During the finish() method the content may become compressed.
- Unused.
- Anonymous array that accumulates error messages. HeavyCGI doesn't handle the error, though. It is left to the user to set up a proper response to the client.
- Object of type Apache::HeavyCGI::ExePlan. It is recommended to compute the object at startup time and always pass the same execution plan into the constructor.
- Optional attribute set by the expires() method. If set, HeavyCGI will send an Expires header. The EXPIRES attribute needs to contain an Apache::HeavyCGI::Date object.
- If there is an EXECUTION_PLAN, this attribute is ignored. Without an EXECUTION_PLAN, it must be an array of package names. HeavyCGI treats the packages as Class::Singleton classes. During the prepare() method HeavyCGI calls HANDLER->instance->header($self) and HANDLER->instance->parameter($self) on all of your nominated handlers.
- Optional attribute set by the last_modified() method. If set, HeavyCGI will send a Last-Modified header with the specified time; otherwise it sends a Last-Modified header with the current time. The attribute needs to contain an Apache::HeavyCGI::Date object.
- The URL of the running request, set by the myurl() method. Contains a URI::URL object.
- The Apache request object for the running request. Needs to be set up in the constructor by the user.
- Unused.
- The URL of the running request's server root, set by the serverroot_url() method. Contains a URI::URL object.
- Unused.
- The time when this request started, set by the time() method. Please note that the time() system call is considerably faster than the method call to Apache::HeavyCGI::time. The advantage of using the TIME attribute is that it is self-consistent (it remains the same during a request).
- Today's date in the format 9999-99-99, set by the today() method, based on the time() method.

Don't expect Apache::HeavyCGI to serve 10 million page impressions a day. The server I have developed it for is a dual-processor 233 MHz machine, and each request is handled by about 30 different handlers: a few trigonometric, database, formatting, and recoding routines. With this overhead each request takes about a tenth of a second, which in many environments will be regarded as slow. On the other hand, the server is well respected for its excellent response times. YMMV.

The fields pragma doesn't mix very well with Apache::StatINC. When working with HeavyCGI you have to restart your server quite often when you change your main class. I believe this could be fixed in fields.pm, but I haven't tried. A workaround is to avoid changing the main class, e.g. by delegating the layout() method to a different class. *** Note: this has no meaning anymore after 0.0133 ***

Andreas Koenig <andreas.koenig@anima.de>. Thanks to Jochen Wiedmann for heavy debates about the code and crucial performance enhancement suggestions. The development of this code was sponsored by.
http://search.cpan.org/dist/Apache-HeavyCGI/lib/Apache/HeavyCGI.pm
Python 3.10.0

Release Date: Oct. 4, 2021

This is the stable release of Python 3.10.0

Python 3.10.0 is the newest major release of the Python programming language, and it contains many new features and optimizations.

Major new features of the 3.10 series, compared to 3.9

bpo-38605: from __future__ import annotations (PEP 563) used to be on this list in previous pre-releases but it has been postponed to Python 3.11 due to some compatibility concerns. You can read the Steering Council communication about it here to learn more.

bpo-44828: A change in the newly released macOS 12 Monterey caused file open and save windows in IDLE and other tkinter applications to be unusable. As of 2021-11-03, the macOS 64-bit universal2 installer file for this release was updated to include a fix in the third-party Tk library for this problem. All other files are unchanged from the original 3.10.0 installer. If you have already installed 3.10.0 from here and encounter this problem on macOS 12 Monterey, download and run the updated installer linked below.

More resources
- Changelog
- Online Documentation
- PEP 619, 3.10 Release Schedule
- Report bugs at.
- Help fund Python and its community.

And now for something completely different

For a Schwarzschild black hole (a black hole with no rotation or electromagnetic charge), given a free-fall particle starting at the event horizon, the maximum proper time it will experience before falling into the singularity (which happens when it falls without angular velocity) is π*M (in natural units), where M is the mass of the black hole. For Sagittarius A* (the black hole at the centre.
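The π*M figure above is stated in natural units (G = c = 1); converting it to seconds means multiplying by G/c³. Here is a small, hedged sketch of that conversion (the roughly 4 million solar-mass figure for Sagittarius A* is an approximate literature value, not something stated in the text above):

```python
import math

# Physical constants in SI units
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def max_infall_proper_time(mass_in_solar_masses: float) -> float:
    """Maximum proper time (in seconds) from event horizon to singularity
    for a Schwarzschild black hole: pi * M in natural units becomes
    pi * G * M / c^3 in SI units."""
    return math.pi * G * mass_in_solar_masses * M_SUN / C**3

# Sagittarius A* is roughly 4.15e6 solar masses (approximate value)
t = max_infall_proper_time(4.15e6)
print(f"{t:.0f} seconds")  # on the order of a minute
```

So even for a supermassive black hole, the remaining proper time past the horizon is startlingly short.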
https://www.python.org/downloads/release/python-3100/
Level of Difficulty: Beginner – Senior.

Many automation solutions make use of the functionality provided by mail services as it serves as an important element that allows for communication between humans and the automation process. There are many benefits provided by using Google Mail (Gmail), one of which is cost – for that reason, this post will focus on providing a step-by-step guide of how to monitor emails coming into your Gmail inbox, with the ability to monitor specific labels. It is also important to note that there are tools and platforms that make it much easier to perform these actions but as developers, we know that life cannot always be “easy”. This post aims at empowering the “not easy” solutions.

What are the steps?

The steps that we will be following are:

- Ensure that your Gmail account is configured correctly
- Import the libraries
- Gather variable values
  - Label
  - Search Criteria
- Define methods
  - Get Body
  - Search
  - Get Emails
  - Authenticate
- Authenticate
- Extract mails
- Extract relevant information from results

Deep Dive

Let’s dive deeper into the steps listed above.

Ensure that your Gmail account is configured correctly

My first few attempts at this left me pulling my hair out with an “Invalid Credentials” error. Upon much Googling and further investigation, I found that it is caused by a Google Account setting. This is quite easily fixable. In order to interact with my account, I had to allow less secure apps (you can access that setting here):

If you are still experiencing problems, here is a more extensive list of troubleshooting tips.

Import the libraries

Now let’s move over to Python and start scripting! First, let’s import the libraries that we’ll need:

import imaplib, email

Gather variable values

In order to access the mails from the Gmail account we will need to know the answers to the following questions:

- Which Google account (or email address) do we want to monitor?
- What is the password to the above account?
- What label do we want to monitor?
- What is the search criteria?

The best way to find out is to ask and luckily we can do that through code:

imap_url = 'imap.gmail.com' # This is static. We don't ask the questions we know the answer to
user = input("Please enter your email address: ")
password = input("Please enter your password: ")
label = input("Please enter the label that you'd like to search: ") # Example: Inbox or Social
search_criteria = input("Please enter the subject search criteria: ")

Define Methods

It becomes easier to break some of the reusable elements up into methods (or functions) so that larger implementations of this solution are equipped to be easily scalable. Stephen Covey teaches us that starting with the end in mind is one of the habits of highly effective people – some might even call it proactive design thinking. The point is that it is good to think ahead when developing a solution. Enough rambling, here are the functions:

# Retrieves email content
def get_body(message):
    if message.is_multipart():
        return get_body(message.get_payload(0))
    else:
        return message.get_payload(None, True)

# Search mailbox (or label) for a key value pair
def search(key, value, con):
    result, data = con.search(None, key, '"{}"'.format(value))
    return data

# Retrieve the list of emails that meet the search criteria
def get_emails(result_bytes):
    messages = []
    # all the email data are pushed inside an array
    for num in result_bytes[0].split():
        typ, data = con.fetch(num, '(RFC822)')
        messages.append(data)
    return messages

# Authenticate
def authenticate(imap_url, user, password, label):
    # SSL connection with Gmail
    con = imaplib.IMAP4_SSL(imap_url)
    # Authenticate the user through login
    con.login(user, password)
    # Search for mails under this label
    con.select(label)
    # Return the connection so the helper functions above can use it
    return con

Authenticate

Before we can extract mails, we first need to call the authenticate method that we had just created and pass through the answers to the questions we asked further up, keeping the returned connection so that the helper functions can use it:

con = authenticate(imap_url, user, password, label)

Extract mails

Next, we need to call the search and get_emails methods to extract the mails:

# Retrieve mails
search_results = search('Subject', search_criteria, con)
messages = get_emails(search_results)

# Uncomment to view the mail results
#print(messages)

Extract relevant information from results

Now, let’s work through the results and extract the subject using string manipulation. Feel free to add a “print(subject)” statement underneath the assignment of “subject” for debugging purposes:

for message in messages[::-1]:
    for content in message:
        if type(content) is tuple:
            # Encoding set as utf-8
            decoded_content = str(content[1], 'utf-8')
            data = str(decoded_content)
            # Extracting the subject from the mail content
            subject = data.split('Subject: ')[1].split('Mime-Version')[0]
            # Handling errors related to unicode decoding
            try:
                indexstart = data.find("ltr")
                data2 = data[indexstart + 5: len(data)]
                indexend = data2.find("</div>")
                # Uncomment to see what the content looks like
                #print(data2[0: indexend])
            except UnicodeEncodeError as e:
                pass

Did this work for you? Feel free to drop a comment below or reach out to me through email, jacqui.jm77@gmail.com. The full Python script is available on Github here.
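As a side note, the email module imported at the start can parse the raw RFC 822 bytes returned by the fetch call and look up headers such as the subject without manual string splitting. Here is a hedged sketch of that alternative (the raw bytes below are a made-up stand-in for real fetched data, not output from an actual mailbox):

```python
import email

# Stand-in for the raw bytes that an IMAP fetch would return for one mail
raw_bytes = (
    b"From: sender@example.com\r\n"
    b"Subject: Cheese delivery update\r\n"
    b"Mime-Version: 1.0\r\n"
    b"\r\n"
    b"The cheese has shipped.\r\n"
)

# Parse into a Message object instead of splitting strings by hand
msg = email.message_from_bytes(raw_bytes)

subject = msg["Subject"]   # header lookup is case-insensitive
body = msg.get_payload()   # body text for a non-multipart mail

print(subject)  # Cheese delivery update
```

This approach is more robust than splitting on 'Subject: ', since it handles header order, folding and case differences for you.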
https://thejpanda.com/2021/01/07/automation-monitoring-gmail-inbox-using-python/
On Thu, Jun 13, 2013 at 6:07 PM, David Daney <ddaney.cavm@gmail.com> wrote:
>
> Suggested fix: Do what we already do in the SMP version of
> on_each_cpu(), and use local_irq_save/local_irq_restore.

I was going to apply this, but started looking a bit more.

Using "flags" as a variable name inside a macro like this is a *really* bad idea. Lookie here:

[torvalds@pixel linux]$ git grep on_each_cpu.*flags
arch/s390/kernel/perf_cpum_cf.c:  on_each_cpu(setup_pmc_cpu, &flags, 1);
arch/s390/kernel/perf_cpum_cf.c:  on_each_cpu(setup_pmc_cpu, &flags, 1);

and ask yourself what happens when the "info" argument expands to "&flags", and it all compiles perfectly fine, but the "&flags" takes the address of the new _inner_ variable called "flags" from the macro expansion. Not the one that the caller actually intends.. Oops. Not a good idea.

So I would suggest trivially renaming "flags" as "__flags" or something, or perhaps even just making it a real function and avoiding the whole namespace issue.

And rather than doing that blindly by editing the patch at after -rc5, I'm just going to ask you to re-send a tested patch. Ok?

Linus
http://www.linux-mips.org/archives/linux-mips/2013-06/msg00209.html
akd alternatives and similar packages

Based on the "Deployment" category. Alternatively, view akd alternatives based on common mentions on social networks and blogs.

- edeliver (9.9) - Deployment for Elixir and Erlang
- Nanobox (9.8) - The ideal platform for developers
- gatling (8.9) - Deployment tool for Phoenix apps
- bootleg (8.7) - Simple deployment and server automation for Elixir.
- elixir-on-docker (7.2) - Quickly get started developing clustered Elixir applications for cloud environments.
- dockerize (4.3) - A small hex package for creating a docker image from an Elixir project.
- exreleasy (2.8) - ⛔️ DEPRECATED A very simple tool for releasing elixir applications
- Minex (1.6) - A deployment helper for Elixir
- Gigalixir - A fully-featured PaaS designed for Elixir. Supports clustering, hot upgrades, and remote console/observer. Free to try without a credit card.

Do you think we are missing an alternative of akd or a related project?

README

Akd

A deployment lifecycle in Akd is divided into various Operations. Operations are grouped into an abstraction called a Hook. A deployment is a pipeline of Hooks which call individual Operations. To install, add akd to your list of dependencies:

def deps do
  [{:akd, "~> 0.2.3"}]
end

*Note that all licence references and agreements mentioned in the akd README section above are relevant to that project's source code only.
https://elixir.libhunt.com/akd-alternatives
[](){}

The mixture of brackets in the preceding line has become one of the most noticeable indications of Modern C++. Yep. Lambda Expressions!

It might sound like I’m trying to create a new blog post about something that everyone knows. Is that true? Do you know all the details of this modern C++ technique? In this article, you’ll learn five advantages of Lambdas. Let’s start.

1. Lambdas Make Code More Readable

The first point might sound quite obvious, but it’s always good to appreciate the fact that since C++11, we can write more compact code. For example, recently, I stumbled upon some cases of C++03/C++0x with bind expressions and predefined helper functors from the Standard Library. Have a look at the code:

#include <algorithm>
#include <functional>
#include <vector>

int main() {
    using std::placeholders::_1;
    const std::vector<int> v { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    const auto val = std::count_if(v.begin(), v.end(),
                        std::bind(std::logical_and<bool>(),
                            std::bind(std::greater<int>(), _1, 2),
                            std::bind(std::less_equal<int>(), _1, 6)));
    return val;
}

Play with the code @Compiler Explorer

Can you immediately tell what the final value of val is? Let’s now rewrite this into a lambda expression:

#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    const auto val = std::count_if(v.begin(), v.end(),
                        [](int v) { return v > 2 && v <= 6; });
    return val;
}

Isn’t that better?

Play with the code @Compiler Explorer

Not only do we have a shorter syntax for the anonymous function object, but we could even drop one include statement (as there’s no need for <functional> any more). In C++03, it was convenient to use predefined helpers to build those callable objects on the fly. They were handy and even allowed you to compose functionalities to get some complex conditions or operations. However, the main issue is the hard-to-learn syntax.
You can of course still use them, even with C++17 or C++20 code (and for places where the use of lambdas is not possible), but I guess that their application for complex scenarios is a bit limited now. In most cases, it’s far easier to use lambdas. I bet you can list a lot of examples from your projects where applying lambda expressions made code much cleaner and easier to read.

Regarding readability, we also have another part: locality.

2. Lambdas Improve Locality of the Code

In C++03, you had to create functions or functors that could be far away from the place where you passed them as callable objects. This is hard to show in simple artificial examples, but imagine a large source file with more than a thousand lines of code. The code organisation might mean that functors are located in one place of the file (for example, at the top), while the use of a functor could be hundreds of lines further down or earlier in the code. If you wanted to see the definition of a functor, you had to navigate to a completely different place in the file. Such jumping might slow your productivity.

We should also add one more topic to the first and the second point. Lambdas improve locality and readability, but there’s also the naming part. Since lambdas are anonymous, there’s no need for you to select a meaningful name for every one of your small functions or functors.

3. Lambdas Allow You to Store State Easily

Let’s have a look at a case where you’d like to modify a default comparison operation for std::sort with an invocation counter.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec { 0, 5, 2, 9, 7, 6, 1, 3, 4, 8 };
    size_t compCounter = 0;
    std::sort(vec.begin(), vec.end(),
              [&compCounter](int a, int b) { ++compCounter; return a < b; });
    std::cout << "number of comparisons: " << compCounter << '\n';
    for (auto& v : vec)
        std::cout << v << ", ";
}

Play with the code @Compiler Explorer

As you can see, we can capture a local variable and then use it across all invocations of the binary comparator. Such behaviour is not possible with regular functions (unless you use globals, of course), but it’s also not straightforward with custom functor types. Lambdas make it very natural and very convenient to use. In the example I captured compCounter by reference. This approach works, but if your lambda runs asynchronously or on different threads, then you need to pay attention to dangling references and synchronisation issues.

4. Lambdas Allow Several Overloads in the Same Place

This is one of the coolest examples, related not just to lambdas but also to several major Modern C++ features (primarily available in C++17). Have a look:

#include <iostream>
#include <string>
#include <variant>

template<class... Ts> struct overload : Ts... { using Ts::operator()...; };
template<class... Ts> overload(Ts...) -> overload<Ts...>;

int main() {
    std::variant<int, float, std::string> intFloatString { "Hello" };
    std::visit(overload {
        [](const int& i) { std::cout << "int: " << i; },
        [](const float& f) { std::cout << "float: " << f; },
        [](const std::string& s) { std::cout << "string: " << s; }
    }, intFloatString);
}

Play with the code @Compiler Explorer

The above example is a handy approach to build a callable object with all possible overloads for the types held in a variant, on the fly.
The overloaded pattern is conceptually equivalent to the following structure:

struct PrintVisitor {
    void operator()(int& i) const { std::cout << "int: " << i; }
    void operator()(float& f) const { std::cout << "float: " << f; }
    void operator()(const std::string& s) const { std::cout << "string: " << s; }
};

You can learn more about this pattern in my separate article; see the reference section.

Additionally, it’s also possible to write a compact generic lambda that works for all types held in the variant. This can support runtime polymorphism based on the std::variant/std::visit approach.

#include <variant>

struct Circle { void Draw() const { } };
struct Square { void Draw() const { } };
struct Triangle { void Draw() const { } };

int main() {
    std::variant<Circle, Square, Triangle> shape;
    shape = Triangle{};
    auto callDraw = [](auto& sh) { sh.Draw(); };
    std::visit(callDraw, shape);
}

Play with the code @Compiler Explorer

This technique is an alternative to runtime polymorphism based on virtual functions. Here we can work with unrelated types; there’s no need for a common base class. See the Reference section for more links about this pattern.

5. Lambdas Get Better with Each Revision of C++!

You might think that lambdas were introduced in C++11 and that’s all, nothing changed. But that’s not true. Here’s the list of major features related to lambdas that we got with recent C++ Standards:

- C++14
  - Generic lambdas - you can pass an auto argument, and then the compiler expands this code into a function template.
  - Capture with initialiser - with this feature you can capture not only existing variables from the outer scope, but also create new state variables for lambdas. This also allowed capturing move-only types.
- C++17
  - constexpr lambdas - in C++17 your lambdas can work in a constexpr context!
  - Capturing this improvements - with C++17 you can capture the *this OBJECT by copy, avoiding dangling when returning the lambda from a member function or storing it.
(Thanks to Peter Sommerlad for improved wording and checking)

- C++20
  - Template lambdas - improvements to generic lambdas which offer more control over the input template argument.
  - Lambdas and concepts - lambdas can also work with constrained auto and Concepts, so they are as flexible as functors and template functions.
  - Lambdas in unevaluated contexts - you can now create a map or a set and use a lambda as a predicate.

Plus some smaller things and fixes.

Summary

With this article, we refreshed some basic ideas and advantages of lambda expressions. We reviewed improved readability, locality, and the ability to hold state throughout all invocations. We even went a bit further and examined the overloaded pattern and listed all the features from recent C++ Standards. I guess we can summarise all the points in a single statement: C++ Lambda Expressions make your code more readable and simple.

- Do you have examples where lambda expressions "shine"?
- Or maybe you still prefer predefined functors and helpers from the Standard Library?
- Do you see other benefits of Lambdas?

Let us know your opinions in the comments.

If You Want to Know More

Last year, in 2019, I published two extensive articles about lambda expressions. They were based on a presentation for our local Cracow C++ User Group: Together, those articles became some of my most popular content, and so far, they have generated over 86 thousand views! Later, I took the content from those articles and created an ebook that you can get on Leanpub. But that's just part of the story. After the launch, I managed to provide several significant updates, new sections, more examples and better descriptions. Right now, the book is massively improved and packed with more than 2X of the original content.
You can get it here: Get C++ Lambda Story @Leanpub

The book is also available in a package with my C++17 book: C++17 in Detail and Lambda Story Bundle

And you can also get it for free if you join my Patreon page: See Extra Benefits for Patrons and Get Lambda Story for Free
https://www.bfilipek.com/2020/05/lambdasadvantages.html
In C programming, all executable code resides within a function. A function is a named block of code that performs a task and then returns control to a caller. Note that other programming languages may distinguish between a "function", "subroutine", "subprogram", "procedure", or "method" -- in C, these are all functions. A function is often executed (called) several times, from several different places, during a single execution of the program. After finishing a subroutine, the program will branch back (return) to the point after the call.

Functions are a powerful programming tool. As a basic example, suppose you are writing code to print out the first 5 squares of numbers, do some intermediate processing, then print the first 5 squares again. We could write it like this:

#include <stdio.h>

int main(void)
{
    int i;
    for(i=1; i <= 5; i++)
    {
        printf("%d ", i*i);
    }
    for(i=1; i <= 5; i++)
    {
        printf("%d ", i*i);
    }
    return 0;
}

We have to write the same loop twice. We may want to somehow put this code in a separate place and simply jump to this code when we want to use it. This would look like:

#include <stdio.h>

void Print_Squares(void)
{
    int i;
    for(i=1; i <= 5; i++)
    {
        printf("%d ", i*i);
    }
}

int main(void)
{
    Print_Squares();
    Print_Squares();
    return 0;
}

This is precisely what functions are for.

More on functions

A function is like a black box. It takes in input, does something with it, then spits out an answer. Note that a function may not take any inputs at all, or it may not return anything at all. In the above example, if we were to make a function of that loop, we may not need any inputs, and we aren't returning anything at all (text output doesn't count - when we speak of returning we mean to say meaningful data that the program can use). We have some terminology to refer to functions:

- A function, call it f, that uses another function g, is said to call g. For example, f calls g to print the squares of ten numbers.
- A function's inputs are known as its arguments.
- A function g that gives some kind of answer back to f is said to return that answer. For example, g returns the sum of its arguments.

Writing functions in C

It's always good to learn by example. Let's write a function that will return the square of a number.

    int square(int x)
    {
        int square_of_x;
        square_of_x = x * x;
        return square_of_x;
    }

To understand how to write a function like this, it may help to look at what this function does as a whole. It takes in an int, x, and squares it, storing it in the variable square_of_x. Now this value is returned.

The first int at the beginning of the function declaration is the type of data that the function returns. In this case when we square an integer we get an integer, and we are returning this integer, and so we write int as the return type.

Next is the name of the function. It is good practice to use meaningful and descriptive names for the functions you write. It may help to name the function after what it is written to do. In this case we name the function "square", because that's what it does - it squares a number.

Next is the function's first and only argument, an int, which will be referred to in the function as x. This is the function's input.

In between the braces is the actual guts of the function. It declares an integer variable called square_of_x that will be used to hold the value of the square of x. Note that the variable square_of_x can only be used within this function, and not outside. We'll learn more about this sort of thing later, and we will see that this property is very useful.

We then assign x multiplied by x, or x squared, to the variable square_of_x, which is what this function is all about. Following this is a return statement. We want to return the value of the square of x, so we must say that this function returns the contents of the variable square_of_x. We close the brace, and we have finished the declaration.
Written in a more concise manner, this code performs exactly the same function as the above:

    int square(int x)
    {
        return x * x;
    }

Note this should look familiar - you have been writing functions already; in fact, main is a function that is always written.

In general

In general, if we want to declare a function, we write

    type name(type1 arg1, type2 arg2, ...)
    {
        /* code */
    }

We've previously said that a function can take no arguments, or can return nothing, or both. What do we write if we want the function to return nothing? We use C's void keyword. void basically means "nothing" - so if we want to write a function that returns nothing, for example, we write

    void sayhello(int number_of_times)
    {
        int i;
        for(i=1; i <= number_of_times; i++)
        {
            printf("Hello!\n");
        }
    }

Notice that there is no return statement in the function above. Since there's none, we write void as the return type. (Actually, one can use the return keyword in a procedure to return to the caller before the end of the procedure, but one cannot return a value as if it were a function.)

What about a function that takes no arguments? If we want to do this, we can write for example

    float calculate_number(void)
    {
        float to_return = 1;
        int i;
        for(i=0; i < 100; i++)
        {
            to_return += 1;
            to_return = 1/to_return;
        }
        return to_return;
    }

Notice this function doesn't take any inputs, but merely returns a number calculated by this function. Naturally, you can combine a void return type and a void argument list to get a valid function, also.

Recursion

Here's a simple function that creates an infinite loop. It prints a line and calls itself, which again prints a line and calls itself again, and this continues until the stack overflows and the program crashes. A function calling itself is called recursion, and normally you will have a conditional that stops the recursion after a small, finite number of steps.

    // don't run this!
    void infinite_recursion()
    {
        printf("Infinite loop!\n");
        infinite_recursion();
    }

A simple check can be done like this. Note that ++depth is used so the increment takes place before the value is passed into the function. Alternatively you can increment on a separate line before the recursive call. If you say print_me(3, 0); the function will print the "Recursion!" line 3 times.

    void print_me(int j, int depth)
    {
        if(depth < j)
        {
            printf("Recursion! depth = %d j = %d\n", depth, j); // j keeps its value
            print_me(j, ++depth);
        }
    }

Recursion is most often used for jobs such as directory tree scans, seeking the end of a linked list, parsing a tree structure in a database and factorising numbers (and finding primes), among other things.

Static functions

If a function is to be called only from within the file in which it is declared, it is appropriate to declare it as a static function. When a function is declared static, the compiler will compile it to an object file in a way that prevents the function from being called from code in other files. Example:

    static int compare( int a, int b )
    {
        return (a+4 < b)? a : b;
    }

Using C functions

We can now write functions, but how do we use them? When we write main, we place the function outside the braces that encompass main. When we want to use that function, say, using our calculate_number function above, we can write something like

    float f;
    f = calculate_number();

If a function takes in arguments, we can write something like

    int square_of_10;
    square_of_10 = square(10);

If a function doesn't return anything, we can just say

    say_hello();

since we don't need a variable to catch its return value.

Functions from the C Standard Library

While the C language doesn't itself contain functions, it is usually linked with the C Standard Library.
To use this library, you need to add an #include directive at the top of the C file, naming one of the standard headers such as <stdio.h>, <stdlib.h>, <string.h> or <math.h>.

Variable-length argument lists

Functions with variable-length argument lists are functions that can take a varying number of arguments. An example in the C standard library is the printf function, which can take any number of arguments depending on how the programmer wants to use it.

C programmers rarely find the need to write new functions with variable-length arguments. If they want to pass a bunch of things to a function, they typically define a structure to hold all those things -- perhaps a linked list, or an array -- and call that function with the data in the arguments. However, you may occasionally find the need to write a new function that supports a variable-length argument list.

To create a function that can accept a variable-length argument list, you must first include the standard library header stdarg.h. Next, declare the function as you would normally. Next, add as the last argument an ellipsis ("..."). This indicates to the compiler that a variable list of arguments is to follow. For example, the following function declaration is for a function that returns the average of a list of numbers:

    float average (int n_args, ...);

Note that because of the way variable-length arguments work, we must somehow, in the arguments, specify the number of elements in the variable-length part of the arguments. In the average function here, it's done through an argument called n_args. In the printf function, it's done with the format codes that you specify in that first string in the arguments you provide.

Now that the function has been declared as using variable-length arguments, we must next write the code that does the actual work in the function.
To access the numbers stored in the variable-length argument list for our average function, we must first declare a variable for the list itself:

    va_list myList;

The va_list type is a type declared in the stdarg.h header that basically allows you to keep track of your list. To start actually using myList, however, we must first assign it a value. After all, simply declaring it by itself wouldn't do anything. To do this, we must call va_start, which is actually a macro defined in stdarg.h. In the arguments to va_start, you must provide the va_list variable you plan on using, as well as the name of the last variable appearing before the ellipsis in your function declaration:

    #include <stdarg.h>

    float average (int n_args, ...)
    {
        va_list myList;
        va_start (myList, n_args);

        va_end (myList);
    }

Now that myList has been prepped for usage, we can finally start accessing the variables stored in it. To do so, use the va_arg macro, which pops off the next argument on the list. In the arguments to va_arg, provide the va_list variable you're using, as well as the primitive data type (e.g. int, char) that the variable you're accessing should be:

    #include <stdarg.h>

    float average (int n_args, ...)
    {
        va_list myList;
        va_start (myList, n_args);

        int myNumber = va_arg (myList, int);

        va_end (myList);
    }

By popping n_args integers off of the variable-length argument list, we can manage to find the average of the numbers:

    #include <stdarg.h>

    float average (int n_args, ...)
    {
        va_list myList;
        va_start (myList, n_args);

        int numbersAdded = 0;
        int sum = 0;

        while (numbersAdded < n_args) {
            int number = va_arg (myList, int); // Get next number from list
            sum += number;
            numbersAdded += 1;
        }
        va_end (myList);

        float avg = (float)(sum) / (float)(numbersAdded); // Find the average
        return avg;
    }

By calling average (2, 10, 20), we get the average of 10 and 20, which is 15.
https://en.m.wikibooks.org/wiki/C_Programming/Procedures_and_functions
my requirement is to remove duplicate rows from a csv file, but the size of the file is 11.3GB. So I benchmarked pandas against a plain Python file generator.

Python file generator:

    def fileTestInPy():
        with open(r'D:\my-file.csv') as fp, open(r'D:\mining.csv', 'w') as mg:
            dups = set()
            for i, line in enumerate(fp):
                if i == 0:
                    continue
                cols = line.split(',')
                if cols[0] in dups:
                    continue
                dups.add(cols[0])
                mg.write(line)
                mg.write('\n')

Pandas:

    import pandas as pd

    df = pd.read_csv(r'D:\my-file.csv', sep=',', iterator=True, chunksize=1024*128)

    def fileInPandas():
        for d in df:
            d_clean = d.drop_duplicates('NPI')
            d_clean.to_csv(r'D:\mining1.csv', mode='a')

Pandas is not a good choice for this task. It reads the entire 11.3G file into memory and does string-to-int conversions on all of the columns. I'm not surprised that your machine bogged down!

The line-by-line version is much leaner. It doesn't do any conversions, doesn't bother looking at unimportant columns and doesn't keep a large dataset in memory. It is the better tool for the job.

    def fileTestInPy():
        with open(r'D:\my-file.csv') as fp, open(r'D:\mining.csv', 'w') as mg:
            dups = set()
            next(fp)  # <-- advance fp so you don't need to check each line,
                      #     or use enumerate
            for line in fp:
                col = line.split(',', 1)[0]  # <-- only split what you need
                if col in dups:
                    continue
                dups.add(col)
                mg.write(line)
                # mg.write('\n')  # <-- line still has its \n, did you
                #                       want another?

Also, if this is Python 3.x and you know your file is ascii or UTF-8, you could open both files in binary mode and save a conversion.
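One caveat with the line-by-line approach: line.split(',') misbehaves if the key column can contain quoted commas. If that matters for your data, the stdlib csv module handles quoting correctly at a modest speed cost. This is a sketch, not part of the original answer - dedupe_csv and key_col are made-up names, and file objects are passed in rather than paths so the logic is easy to test:

```python
import csv

def dedupe_csv(infile, outfile, key_col=0):
    """Stream infile to outfile, keeping only the first row seen for each
    value of the key column. Quoting/escaping is handled by the csv module."""
    seen = set()
    reader = csv.reader(infile)
    writer = csv.writer(outfile)
    writer.writerow(next(reader))  # copy the header row through unchanged
    for row in reader:
        key = row[key_col]
        if key not in seen:
            seen.add(key)
            writer.writerow(row)
```

In practice you would wrap this with open() on both paths; because only the seen-set of keys is held in memory, it streams an arbitrarily large file just like the generator version.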
https://codedump.io/share/F8xHfOnXOax1/1/is-pandas-readcsv-really-slow-compared-to-python-open
01 June 2011 08:01 [Source: ICIS news]

SINGAPORE (ICIS)--Asia's spot monoethylene glycol (MEG) continued to spike, surging by $70-80/tonne (€48-55/tonne) on Wednesday, following shutdowns of Nan Ya Plastics' two plants with a combined 1.18m tonne/year capacity.

Three bonded warehouse cargoes were heard changing hands at $1,240-1,250/tonne CFR (cost and freight) CMP. These represented a sharp increase from Tuesday's close, and an 11% jump over the past three trading sessions, according to ICIS data.

"We shut our No 3 and No 4 MEG plants today, but are still trying our best to negotiate with the government for an earlier restart," said a source at Nan Ya Plastics.

The No 3 unit can produce 360,000 tonnes/year of MEG, while the No 4 unit has an 820,000 tonne/year capacity.
http://www.icis.com/Articles/2011/06/01/9464982/asia-meg-spikes-70-80tonne-on-nan-ya-shutdowns.html
BBC micro:bit Play-Doh Touch Buttons

Introduction

The touch input is a very flexible way of getting input on the micro:bit. There are so many different ways of doing it. The Play-Doh way is pretty simple.

Program

Here you have the basic code to check for touch input on the 3 large pins. Remember, you have to make contact between the pin and GND to trigger the touch event.

    from microbit import *

    while True:
        if pin0.is_touched():
            display.set_pixel(0, 0, 9)
        else:
            display.set_pixel(0, 0, 0)
        if pin1.is_touched():
            display.set_pixel(2, 0, 9)
        else:
            display.set_pixel(2, 0, 0)
        if pin2.is_touched():
            display.set_pixel(4, 0, 9)
        else:
            display.set_pixel(4, 0, 0)
        sleep(10)

Here are the Play-Doh buttons in action. I've used an edge connector and jumper wires to make my connections. You could use alligator clips or 4mm banana cables too.

Pin 2

Pin 1

Pin 0

Challenges

- Clearly, any game you made with plain old buttons needs some nicely designed touch inputs to do the job instead.
- Think about where you place the GND lump and the pin lump. You could place them so that the signal is triggered when another conductive object makes contact with both of them. Some imagination and you can do lots of interesting things with the touch input.
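The three if/else branches in the loop all follow one pattern, so the logic can be factored into a helper. The sketch below is hypothetical - FakePin and pixel_updates are not part of the microbit API; the fake pin simply stands in for pin0/pin1/pin2 so the pin-to-pixel mapping can be checked off-device:

```python
class FakePin:
    """Stand-in for a micro:bit pin object (hypothetical, for testing)."""
    def __init__(self, touched=False):
        self.touched = touched

    def is_touched(self):
        return self.touched


def pixel_updates(pins):
    """Return (column, brightness) pairs for the top display row:
    brightness 9 when the corresponding pin is touched, 0 otherwise."""
    return [(col, 9 if pin.is_touched() else 0)
            for col, pin in zip((0, 2, 4), pins)]
```

On the device, the loop body would then collapse to: for col, level in pixel_updates([pin0, pin1, pin2]): display.set_pixel(col, 0, level).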
http://www.multiwingspan.co.uk/micro.php?page=doh
Oh and sorry, didn't mean to ignore the other question... The main support group is here:

On 3/30/07, Ryan Gahl <[EMAIL PROTECTED]> wrote:
>
> You're really going the long way around to achieve something that's quite
> simple. See my last reply in this thread.
>
> On 3/30/07, Lorderon <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> >
> > The point was to extend the original object with reference to the
> > parent object (the original object as before the extension), and
> > without creating a new class/object inherited from the original one.
> >
> > I solved my problem with this:
> >
> > addPlugin: function(pluginObj) {
> >     var $parent = Object.clone(this);
> >     for (var p in pluginObj) {
> >         if (typeof(pluginObj[p])=='function')
> >             pluginObj[p] = eval(pluginObj[p].toString());
> >     }
> >     Object.extend(this,pluginObj);
> > }
> >
> > Notice the addPlugin method defines the $parent object, then it
> > re-evaluates the methods in the extension object (pluginObj).
> > Wouldn't it be better if you could access the parent via a special
> > object $parent.show() rather than making:
> > this.show.bind(this)();
> >
> > Where can I find the support forum?
> >
> > -thanks, Eli
> >
> > On Mar 30, 4:52 pm, "Ryan Gahl" <[EMAIL PROTECTED]> wrote:
> > > However, you could employ a slightly more traditional approach, which is to
> > > leave the first version of A as the base class, and then subclass it as
> > > needed:
> > >
> > > (Btw, the way you have written A below is as a static object, and as such
> > > you are gaining nothing by using Class.create().
> > > Class.create() points the
> > > constructor for the class at an "initialize" method on the class's
> > > prototype, which of course you never define)
> > >
> > > Try something like this (notice that I define public class level instance
> > > methods within the constructor to ensure they are truly only given to
> > > instances of the class):
> > >
> > > var A = Class.create();
> > > A.prototype = {
> > >     initialize: function() {
> > >         //public instance members
> > >         this.show = function() {
> > >             alert("I am A");
> > >         };
> > >     }
> > > }
> > >
> > > var B = Class.create();
> > > Object.inherit(A, B);
> > > Object.extend(B.prototype, {
> > >     initialize: function() {
> > >         //base class construction
> > >         this.base();
> > >
> > >         //override show
> > >         var oldShow = this.show.bind(this);
> > >         this.show = function() {
> > >             oldShow();
> > >             alert("I am EXTENDED A");
> > >         };
> > >     }
> > > });
> > >
> > > var test = new B();
> > > test.show(); // alerts "I am A" and "I am EXTENDED A"
> > >
> > > This technique comes right from my blog post on the inheritance model.
> > >
> > > Now, if you're looking to tack on (or override) methods on existing
> > > instances... you don't need a special addPlugin() method like what you
> > > started doing. You can just take advantage of plain old javascript...
> > >
> > > (Assume we are back to using the definition of A from your original post)...
> > >
> > > var oldShow = A.show.bind(A);
> > > A.show = function() {
> > >     oldShow();
> > >     alert("I am EXTENDED A");
> > > };
> > >
> > > Notice in my first example (the object oriented approach), and in this
> > > example (the static object version) I use a .bind() call. Doing so ensures
> > > that if A.show() ever tried to access "this", it would still point to the
> > > correct scope.
> > >
> > > For instance, imagine A.show() looked like this:
> > >
> > > function() {
> > >     alert(this.name);
> > > }
> > >
> > > Ok...
> > > so I'll stop now I guess, just realize this list isn't really supposed
> > > to be a support list. :-)
> > >
> > > Hope this has helped though.
>
> --
> Ryan Gahl
> Application Development Consultant
> Athena Group, Inc.
> Inquire: 1-920-955-1457
> Blog:

--
Ryan Gahl
Application Development Consultant
Athena Group, Inc.
Inquire: 1-920-955-1457
https://www.mail-archive.com/prototype-core@googlegroups.com/msg00536.html
Double slit experiment¶

Here we solve a linear wave equation using an explicit timestepping scheme. This example demonstrates the use of an externally generated mesh, pointwise operations on Functions, and a time varying boundary condition. The strong form of the equation we set out to solve is:

To facilitate our choice of time integrator, we make the substitution:

We then form the weak form of the equation for \(p\). Find \(p \in V\) such that:

For a suitable function space V. Note that the absence of spatial derivatives in the equation for \(\phi\) makes the weak form of this equation equivalent to the strong form, so we will solve it pointwise.

In time we use a simple symplectic method in which we offset \(p\) and \(\phi\) by a half timestep.

This time we created the mesh with Gmsh:

    gmsh -2 wave_tank.geo

We can then start our Python script and load this mesh:

    from firedrake import *

    mesh = Mesh("wave_tank.msh")

We choose a degree 1 continuous function space, and set up the function space and functions. Setting the name parameter when constructing Function objects will set the name used in the output file:

    V = FunctionSpace(mesh, 'Lagrange', 1)
    p = Function(V, name="p")
    phi = Function(V, name="phi")

    u = TrialFunction(V)
    v = TestFunction(V)

Output the initial conditions:

    outfile = File("out.pvd")
    outfile.write(phi)

We next establish a boundary condition object. Since we have time-dependent boundary conditions, we first create a Constant to hold the value and use that:

    bcval = Constant(0.0)
    bc = DirichletBC(V, bcval, 1)

Now we set the timestepping variables:

    T = 10.
    dt = 0.001
    t = 0
    step = 0

Finally we set a flag indicating whether we wish to perform mass-lumping in the timestepping scheme:

    lump_mass = True

Now we are ready to start the timestepping loop:

    while t <= T:
        step += 1

Update the boundary condition value for this timestep:

    bcval.assign(sin(2*pi*5*t))

Step forward \(\phi\) by half a timestep.
Since this does not involve a matrix inversion, this is implemented as a pointwise operation:

    phi -= dt / 2 * p

Now step forward \(p\). This is an explicit timestepping scheme which only requires the inversion of a mass matrix. We have two options at this point: we may either lump the mass, which reduces the inversion to a pointwise division:

    if lump_mass:
        p += assemble(dt * inner(nabla_grad(v), nabla_grad(phi))*dx) / assemble(v*dx)

In the mass lumped case, we must now ensure that the resulting solution for \(p\) satisfies the boundary conditions:

    bc.apply(p)

Alternatively, we can invert the mass matrix using a linear solver:

    else:
        solve(u * v * dx == v * p * dx + dt * inner(grad(v), grad(phi)) * dx,
              p, bcs=bc, solver_parameters={'ksp_type': 'cg',
                                            'pc_type': 'sor',
                                            'pc_sor_symmetric': True})

Step forward \(\phi\) by the second half timestep:

    phi -= dt / 2 * p

Advance time and output as appropriate. Note how we pass the current timestep value into the write() method, so that when visualising the results Paraview will use it:

    t += dt
    if step % 10 == 0:
        outfile.write(phi, time=t)

The following animation, produced in Paraview, illustrates the output of this simulation:

A python script version of this demo can be found here. The gmsh input file is here.
https://www.firedrakeproject.org/demos/linear_wave_equation.py.html
[SOLVED] QThread is not working - depecheSoul

Hello. I think I have a problem but I am not sure. I have made a simple example for QThread, and the problem is that the output of the program is not what I have expected. I have expected output like this: 1,1,2,1,2,3,2,3,4,3,4,5,4,... but I get output like this: 1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0

Please, what am I doing wrong? This is my code:

main.cpp
@#include <QtCore/QCoreApplication>
#include "mthread.h"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    mThread thread1;
    thread1.start();

    mThread thread2;
    thread2.start();

    mThread thread3;
    thread3.start();

    return a.exec();
}
@

mThread.h
@#ifndef MTHREAD_H
#define MTHREAD_H

#include <QtCore>

class mThread : public QThread
{
public:
    mThread();
    void run();
};

#endif // MTHREAD_H
@

mThread.cpp
@#include "mthread.h"

mThread::mThread()
{
}

void mThread::run()
{
    for (int f1=0; f1<10; f1++)
        qDebug()<<f1;
}
@

Your code is working perfectly. It is just not doing what you were expecting. In this case, it is time to adjust your expectations. There is no way to predict at what moment one thread yields control to another thread. It may happen at any moment, or not at all. They may simply run truly in parallel on different cores or processors. So, why do you expect a certain order of output? Did you take into account that perhaps your loops are so fast that control is simply not yet yielded by the time your thread finishes?

I would be worried if a thread was yielded so fast and so often as to not be able to run a small loop like this, as context changes are expensive! By modifying your code from main.cpp to this:

@int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    mThread thread1, thread2, thread3;
    thread2.start();
    thread1.start();
    thread3.start();

    return a.exec();
}@

I get the following output: 0 0 0 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7...

Even though stack allocation is amazingly fast, an empty for loop is quite fast too.
By creating the three threads before running them I get more concurrent behavior, which goes to show that the overhead of creating a thread is too much for your scenario to get concurrent execution of your run() method. Creating a thread takes longer than a for loop to count to 10, as simple as that, even considering you use post increment, which is suboptimal.

- depecheSoul
Thank you very much for your answers. I am learning Qt from voidrealms.com videos, and when I watched the videos about QThread, Bryan in his example got output of the program like this: 1,1,2,1,2,3,2,3,4,3,4,5,4,... So when I tried the example I didn't get that random order but 1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9,0, so I thought I did something wrong. Thank you Andre and ddriver for the explanation, and for helping me to learn more about Qt. ;-)

Unless you sync your threads, all you can expect is the unexpected; it is completely normal to get different output every time. The "seemingly" perfect output I got when I tested your code was purely by coincidence, as adding a name variable for every thread reveals the ugly truth behind multithreading.

It is the operating system thread scheduler that manages running threads, and besides setting priority there is not much you can do, unless you sync your threads.
https://forum.qt.io/topic/14336/solved-qthread-is-not-working
Updated by Pratiksha Amit Sharma on February 26, 2014

Learning Java can give you headaches if you are a beginner. Why? Because before starting to learn Java programming, you need to prepare your machine. You need to install everything you need for Java programming, making it suitable for coding in the Java language. But don't you worry... we will arm you with all the tools you need to get started, including this renowned Ultimate Java Tutorial for Beginners. First off, some introductions.

Checklist before you start coding

So, first things first - before writing your first code in Java, you need to install what is called the Java Virtual Machine (JVM), which ships as part of the Java Runtime Environment (JRE). The JRE can be downloaded from this link:

Once that's complete, you will have the JVM installed on your PC. That allows Java programs to run on your machine. To write and test Java code, you need to install Java's software development kit (JDK). The JDK can be downloaded from this link:

Here, you have a bewildering list of options to download from. Look for Java SE (Standard Edition). One of the links on this webpage will enable you to download the JDK and NetBeans. More explanation on NetBeans in a bit. Install the JDK on your machine, and be sure that you are downloading all the software appropriate to your operating system. You should be clear about whether you have a 32-bit or a 64-bit OS.

What's the next step? How do you compile and run a Java program?

Before moving on to the next step, i.e., IDEs, let's talk about some nitty-gritties of how a Java program works. You always start writing code in a text editor (you will find in-built text editors with IDEs: NetBeans, Eclipse, or JCreator), called the source code, which is saved with the file extension .java. The Java compiler (javac) turns the source code into a class file with the extension .class. Once you have the class file, it can be run on the JVM. Now, to make this process hassle free, IDEs come to the rescue.
Now, what's the next step in learning Java? Let's get your hands dirty with this IDE thing. IDEs (Integrated Development Environments) take care of all the creating and compiling jobs for you behind the scenes. They take your code, create the .java file, launch the compiler to arrive at the class file, and let you run your program. Here is a list of some quality IDEs:

- Eclipse - a free, popular program. (Learn Java Programming Using Eclipse)
- NetBeans - another free program; it is open source and available in many languages.
- JCreator - a for-pay program that provides a bit more power than most other IDEs.

As described in this Java course, once you install your IDE, you will need to begin a new file to program in. Exactly how this is done will depend on your particular IDE.

Brief intro to how to start using IDEs for Java coding

1. Eclipse

Download and install Eclipse. Once you initialize Eclipse, it will ask for a workspace. You may use the default one, or may specify the desired path. All the files generated during Java programming will be stored in this workspace. When you have the Eclipse interface window open, go to 'File,' and then click 'New Java Project'. The "Create a Java Project" dialogue box appears as shown below:

Give a name to your project, for example: FirstProject. Click Next, then click Finish. Then, right-click on the Project folder in the upper left, hover over 'New,' then click 'Class.' Name your Class anything you want, such as 'firstproject'. Now look for the box that has Eclipse 'create the main method,' and make sure that this is checked. The New Java Class dialogue box opens up:

The workspace will be created by Eclipse for you to write code in. A snapshot of the Eclipse workspace will look like this:

**Start Learning Java Now With This Training Course From Udemy**

2. NetBeans

Download and install NetBeans. On your first run, the screen will look like this:

To start a new project, click on File, then New Project.
The following dialogue box appears:

Select Java under Categories and Java Application under Projects. Once you click Next, we have the New Java Application dialogue box:

In the project name area, type the name of your project, and in the Create Main Class box name the class with the .Main extension. In the above example, we have a project "FirstProject" with the class "firstproject.Main". Click Finish and NetBeans will go to work and prepare the workspace for us, with an in-built text editor for us to write our code. A screenshot of the NetBeans workspace is provided below:

Walking Before You Run - Hello World!

The 'Hello World!' program is a classic training program often used to help new students learn Java and other languages. You can learn about it in this Java training course. We will use the Eclipse IDE for learning Java programming in this tutorial.

Once you have created your first Class (in the Eclipse IDE), you should be looking at a text-editor type screen with some code already written for you. You will see the words 'public class firstproject' and under that, the words 'public static void' with some words after that. This second group of words is known as the 'main method', and it is what we will be focusing on. The entire structure will look like:

    public class firstproject {

        /**
         * @param args
         */
        public static void main(String[] args) {
            // TODO Auto-generated method stub
        }

    }

You might be thinking, what the heck is going on? Some purple lines and curly brackets {} - what are they and why are they used? Okay, so you have some lines of code already written in the text editor by your IDE. Let's learn more about the lines above.

What are Comments?

When the program runs, the comments are ignored. That means you can write anything in the comment section. Use comments whenever possible, as they are the easiest way to communicate to others what your code does.
Suppose you are working in a team environment and you are distributing your code for further work to some other team member - the comments will help your teammates understand the thinking process that you went through in writing that piece of code. They are especially important when you re-visit your program code for updates or modifications in the future. The comments will tell you your exact logic and code at the time.

Comments are enclosed between '/*' at the start of the comment and '*/' at the end of the comment. For example:

    /* your comment starts here.....
    The logic behind your code....
    Your comment ends ..... */

A special note on single line commenting. You can insert a single line comment using '//'. For example:

    // This is a single line comment

The above multi-line comment can be re-written using the single line commenting style. Example:

    // your comment starts here.....
    // The logic behind your code....
    // your comment ends ...

A brief note about Javadoc comments. A Javadoc comment starts with '/**' (a forward slash followed by two asterisks) and ends with a '*/'. For example:

    /** This is a Javadoc comment */

The skeleton of your program, 'public class firstproject { }', is a code segment (more about classes later). You should have taken note of the '{ }' bracket symbols. The start of the code segment is marked with the left '{' curly bracket and the end of the code segment with the right '}' curly bracket. Anything inside the '{ }' belongs to that code segment. Inside the code segment of the class we have another code segment:

    public static void main(String[] args) {
        // TODO Auto-generated method stub
    }

You will be typing your code in between these curly brackets. Look at the word "main"; it's very important. Whenever a Java program starts, it looks for the method "main()". A method is some piece of code. And "main()" is a special method or code segment with its own '{ }', and it serves as the entry point of the Java program.
You may be curious about the entries before the word "main": public static void. To learn about them fully, you need to go through the course; that is a pretty big topic in itself. In brief, "public" means that the method "main()" can be called from outside the class in which it is defined; "static" means that you don't have to create a new object to call it; "void" states that the method "main()" does not return any value; and the round brackets '()' after main contain the command-line arguments. In a nutshell, you have a class – firstproject – with its main() method.

Running your First Program – "Hello World"

Inside the curly brackets of the main method, insert this tiny piece of code (type it without the surrounding apostrophes):

System.out.println("Hello World!");

This is known as a print statement. Run the program by clicking Run in the Run menu or by using Ctrl + F11. You will see the output of the program in the box at the bottom of the screen. If everything went right, it should say "Hello World!" Congratulations! You've just created your first Java program, and are on your way to making your own applications and web code. Be sure to save your program.

Dissecting your first program

Before jumping ahead, let us discuss some important concepts. You may be wondering about the nitty-gritty of the little and very simple line you just typed: System.out.println("some string"). What is this "System", followed by a period or dot (.), then "out", again followed by a period, and then some method "println()"? Let's dissect this little piece of code.

'System' is a built-in class present in the java.lang package (according to the Javadocs). Since the java.lang package is imported into every Java program by default, you do not need to import it; other packages must be imported explicitly before you use them.
In fact, java.lang is the only package in the Java API which does not require an import declaration. 'out' represents the output stream (i.e. the command window) and is a static data member of the class System. Thus, in System.out, System is the class and out is the static object. '.println("string")' is a method of the out object that takes a text string as an argument and displays it on the standard output, i.e. the monitor screen. Your little program will cause the computer to print to the console whatever words or symbols are between the " " quotation marks, within the parentheses '()'.

A little warning! You need to remember to include the semi-colon at the end – this is the signal that this particular line of code is done.

As always, the basics are the most important, but you don't have to be content with just the basics. Continue to learn Java for more advanced tasks like sorting, searching, and network programming with Advanced Java Programming classes on Udemy. For more comprehensive training, try out some of these tutorials:
- Intro to Java Programming
- Java for Complete Beginners
- Introduction to Java Training Course
- Java Programming Using Eclipse
- Learn Java From Scratch
- Java – Make it Your Cup of Coffee
- Advanced Java Programming
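Putting together everything covered in this tutorial, the finished program looks like this (using the class name from the Eclipse example above):

```java
/* The complete "Hello World!" program described in this tutorial.
   Comments like this one are ignored when the program runs. */
public class firstproject {

    // main() is the entry point: the JVM looks for this method when the program starts
    public static void main(String[] args) {
        // println() prints the text between the quotation marks to the console
        System.out.println("Hello World!");
    }
}
```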
https://blog.udemy.com/learn-java/
DIY 5.2kW Solar Tracker Controlled by Raspberry Pi

Introduction

A solar tracking system can increase the output of a solar farm by up to 40%. All commercial solar tracking systems I have found cost more than 40% of the total cost of a fixed installation; some cost 2 times more than a fixed installation. Hence it is better to buy more panels than to invest in solar tracking, unless you build the solar tracking system yourself.

I set out to design and build a 5.2 kW solar tracking array, consisting of 20 panels, 260 W each. Each panel is about 1 x 1.6 meters, so the construction consists of 2 rows of panels; each row is about 2 meters high and 12 meters long. The electronics I use to control the dual-axis motion needed to be low cost, yet reliable. I decided to use a Raspberry Pi computer to calculate the sun position and to control the motors. The program was developed using Python; it is so easy to learn that anyone can understand and modify the program.

I spent about 6000 Euro/Dollars on solar panels, the 3-phase inverter and cables. The solar tracking system cost about 3000 Euro; a fixed frame would cost about 1000 Euro. I did not count my hours, but it was several days of work to build this. The most expensive parts of the solar tracker were the linear motors: I used 4 motors (120 Euro each) and 4 drivers (50 Euro each).

Step 1: (Optional Step) Build Prototype

The prototype was built from scrap wood. It is actually two frames: one larger frame to track the sun's vertical position, and one smaller frame to follow the horizontal position. The solar panel is mounted on the smaller frame. Galvanized water pipes are used to mount the frames and allow for the rotating movement. An old screwdriver was used as a linear motor; the Raspberry Pi controls a motor driver that can set the speed and direction of the screwdriver. The prototype only tracked one axis.
Materials used for prototype
- Old 12V screwdriver used as motor
- Raspberry Pi computer to calculate sun position and move the frame
- Parallax HB-25 motor driver to drive the motor with GPIO pins
- Limit switch to detect the home position
- 10mm threaded rod and nuts used for linear motion
- 12V DC source
- 12V to 5V converter so the Raspberry Pi can run on 12 V

All plastic parts for the prototype were made in my 3D printer. I have attached the Python code for my prototype to this step in the instructable.

Step 2: Build the Full Scale Solar Tracker

The solar tracker frame was built mainly with impregnated 2" by 4". A stable foundation is very important. Depending on the ground conditions, you might want to dig or make concrete foundations. I used a mix of both since I found bedrock in some locations while digging.

I will continue to write on this instructable when I have a moment to spare. I have the code, the bill of materials, drawings, lots of photos.... Let me know if you have questions. The system has been online since August 2016. The production is displayed here:...

Step 3: Configure the Raspberry Pi

Follow these steps to configure your Raspberry Pi:
- Install Raspbian for Raspberry Pi
- Download and install Bitvise SSH on your PC (to remote control the Pi)
- Set time and date
- Set time zone
- Enable NTP so the time is always correct
- Enable wireless
- Disable IPv6
- Install Python 3.5
- Install Pysolar (calculates the position of the sun based on the date and time)
- Install RPi.GPIO
- Create a program to control the solar tracker, or let me know if you want mine.

I use an apache2 web server and run parts of my application as a Python CGI in order to remote control the application. This way any device with a web browser can be used to control the application. The plastic parts in the picture were made in my 3D printer.

Step 4: Let Me Know If You Need Any Further Information

More pictures and a video will be added soon.
Hello Mats. It is a fine project you've made. Is it safe in stormy weather? Your panels are pretty close to each other; don't they shade each other in the morning? What time of day are they free of shade from each other? Martin (Denmark)

Hi Martin. I don't know how storm proof it is yet. I had some wind during the winter and it seems good. The concrete foundation is 1 meter deep so it should be good. Denmark can be a lot windier so you might need more struts. The second row is about 50 cm higher than the front row of panels; there is a bit of shade in the morning and evening, but not much.

Today the production of this sun tracking solar array was more than 50% higher than my fixed solar array. I have one 5.2 kW fixed solar panel array and this one. There is a bit of shade at sunrise and sundown, but the power produced during the morning and evening is very low. I got 10 kWh from the sun tracking 5.2 kW array today and only 6 kWh from the fixed 5.2 kW array. I built this in August, so I'm not sure how much I will lose from shadow in the summer. It seems like the atmosphere takes most of the sun during the shady hours. I need to see how it behaves in the summer before I can know for sure....

Hi Mats. I'm interested in your project. I'm about to build a trough solar concentrator, and I need a way to very precisely point it at the sun. Some questions for you:
1) You answered one commenter indicating that you've "attached" various code. Do you mean the code would be accessible via the Download button? I'm wondering, since it seems that the download button only works if you have a premium Instructables account. I'd love to see the whole bit.
2) Is the motor always spinning, or does it move periodically then stop?
3) Does the Pi/controller track how many turns the motor makes, or do you determine speed/duration from testing, then use the limit switch to "return to zero", or something else?
Related: Had you also considered using stepper motors for this project?
4) Could you provide pictures of the linkage between the motor(s) and the frame?
5) What make of 3D printer do you have, and do you recommend it?
Much thanks in advance! -Kurt (San Jose, California, US)

Hi. Very interesting with a solar concentrator! My solar tracker will aim the panels in the general direction of the sun; it is not super accurate. An error of a few degrees is hard to notice. I think that your solar concentrator might need more accuracy, so stepper motors will probably be best for you.
1. At the bottom of step 1 there is a .py file (under the photos). Can you check if you see that? Otherwise I need to email it to you. I do not have a premium account myself.
2. The motors start every 10 minutes and move for a certain time.
3. The motor does not track how many turns it makes. I use speed/duration calculations. I did not use stepper motors since that would be much more expensive.
4. I will add some photos of the linkage. I made it myself and used a steel sheet and a threaded rod.
5. I have a Vertex Velleman K8400 and it works great. It cost about 600 dollars and it is a kit that you need to put together yourself.
Kind regards, Mats

Here is a photo of the motors that control one of the rows. There is one linear motor for the side-to-side movement, and one motor for the angle movement.

Love your project. I would like to attempt it. Do you have a link for the Python code for the tracking calcs? Thanks.

Hi. It was a fun project! I used Pysolar for the calculations (Python version 3.5). Install Pysolar by typing: sudo pip3.5 install pysolar

Here is the code.
#for pysolar
from pysolar import solar
import datetime

def GetSunPosition():
    longitude = 12.5350953
    latitude = 59.6365662
    elevation = 55
    when = datetime.datetime.now()
    altitude_deg = solar.get_altitude(latitude, longitude, when)
    sun = solar.get_azimuth(latitude, longitude, when, elevation)
    if abs(sun) >= 180:
        sundirection = abs(sun) - 180  # Works before noon
    else:
        sundirection = abs(sun) + 180  # Works after noon
    print(when)
    print(sun)
    print('angle', altitude_deg)
    print('bearing', sundirection)
    return sundirection

Thank you for taking the time to respond to my question. I am a novice to Python and am taking a class. I don't have Internet available and am going to have to attach a time module to my Pi. Would it be possible to get all your code? I really liked the way you controlled the linear actuators. Thank you again. Bruce

Hi. I have attached a file called "Servo - for instructables.py" to this instructable. That is all the code for my prototype. The real system needs internet; I control the application from a website published on my Raspberry Pi. Let me know if my .py contains all you need. I used to learn Python. It is free, online training.

A problem with the Raspberry Pi is that you need to set the time every time it is started unless you have internet access. Is that what you need your time module for? I let my Raspberry Pi go online to set its clock using NTP.

Again I appreciate you taking the time for an answer. We are moving to the UK to a remote location. No mains electricity or Internet for miles. Can get mobile data, though, but that just gets too complicated to go that route. Hence the need to be off grid with wind and solar. So I will have to use a time module to keep time. My apologies, I did not see the Servo code at first. Thanks again.

That is a great idea. I will buy a time module and upgrade my own system. Which module will you use? I can get the same kind and let you know when I am done with the code. I would like that.
I am glad that I can contribute something. I purchased a ChronoDot Ultra-precise Real Time Clock v2.1 (PRODUCT ID: 255) from the Adafruit website. It was $17.95. Looking forward to seeing your results.

Hi. I have received my RTC now. I will connect it to my Raspberry Pi as soon as I get some spare time. I bought a similar RTC based on the same chip, so the code should work for you as well as soon as I'm done with it.

Hey man, I like your project. It's looking great. I have a question: I'm also looking for the servo code, but I can't seem to find it on your instructable. Can you give me a hint where to look? I have downloaded the PDF, but it's not there. I like the setup with the Raspberry and using Python with Pysolar and I would like to give it a go in the future. Can you help me out? Kind regards, Marcel

I have now attached the Python code for my prototype to this instructable. Let me know if you have any further questions.

Great job. I don't suppose you have the instructions in PDF? Electronics used? Code?

I use a Raspberry Pi, Parallax HB-25 motor drivers, RB-FRA-89 linear motors, and a 12V to 5V/USB converter from an auto parts store (to power the Raspberry Pi with 12V). I don't have the PDF yet. I will add all instructions to this instructable, including the source code.

The project is so expensive! How much does a kWh cost in your country from the energy supply organisation? What is the estimated payback time?

Hi. I live in Sweden where the cost is about 10 Euro cents per kWh. Of the 9000 Euro, 35% is given back to me as a grant from the government, so my actual cost is 6000 Euro. I expect the solar farm to produce about 8000 kWh per year, which will save me about 800 Euro per year. The electricity that I don't use myself is sold, and I get approximately 6 cents per kWh. The payback time is about 10 years with the government grant, 15 years if you build without the grant. The expected lifetime of the solar panels is 30 years (20 years warranty).
This will not make me rich, but it was a fun project, it's good for the environment, and if I get some batteries in the future, I can use electricity even if the power is out.

Nice! Tracking makes solar panels way more efficient.
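As a footnote to the thread above: the speed/duration approach Mats describes (running each motor for a computed time instead of counting turns) can be sketched in a few lines of Python. The actuator speed and linkage numbers below are made-up placeholders for illustration, not values from his build:

```python
# Hypothetical sketch of timing a linear actuator by speed/duration.
# Both constants are assumed example values, not measured ones.
ACTUATOR_SPEED_MM_PER_S = 5.0   # how fast the actuator extends
MM_PER_DEGREE = 2.5             # linkage geometry: travel per degree of frame rotation

def run_duration(current_deg, target_deg):
    """Seconds to run the motor to swing the frame to the target angle."""
    travel_mm = abs(target_deg - current_deg) * MM_PER_DEGREE
    return travel_mm / ACTUATOR_SPEED_MM_PER_S
```

Every 10 minutes the controller would recompute the sun position, call run_duration(), and drive the motor for that many seconds, using the limit switch only to re-home.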
http://www.instructables.com/id/52kW-Solar-Tracker-Controlled-by-Raspberry-Pi/
At 12:59 PM 6/15/2007 -0500, Rick Ratzel wrote: > Yes, I see now that it does get imported, but unfortunately not > by my code. >It's imported as a result of importing pkg_resources, since >enthought.traits is >a namespace package. Does it *need* to be a namespace package? > Grasping at straws, I set enthought.__path__ in enstaller-script.py >immediately before and after pkg_resources is imported, only to get the same >results both times. > > But when I switch to setuptools 0.7 there are no enthought.traits modules >loaded at all (in fact, the only enthought. modules loaded are the enstaller >ones)...this must have been one of the changes you mentioned. Yes, that's exactly the change I'm talking about; in 0.7, namespace packages are always loaded lazily. > Phillip, I really appreciate the time you're taking to look at this. I'm >going to release a version which simply requires setuptools 0.7...unless you >think that's a terrible idea, or you discover that I'm doing something wrong >that I can fix. The only things I can think of would be removing traits as a namespace package, or manually setting its __path__ also. That is, first set enthought.__path__, then enthought.traits.__path__. However, this will only work right if no traits.__init__ module does anything but declare the namespace.
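For illustration only (this is not code from the thread): manually pinning both __path__ attributes in the order suggested, parent first and then child, amounts to something like the following, with made-up module objects and placeholder directories standing in for the real enthought packages:

```python
import sys
import types

# Register the parent namespace package first and pin its search path,
# then do the same for the child package. Paths are placeholders.
enthought = types.ModuleType("enthought")
enthought.__path__ = ["/path/to/eggs/enthought"]
sys.modules["enthought"] = enthought

traits = types.ModuleType("enthought.traits")
traits.__path__ = ["/path/to/eggs/enthought/traits"]
sys.modules["enthought.traits"] = traits
enthought.traits = traits
```

As Phillip notes, this kind of manual fixup only works if the traits __init__ module does nothing but declare the namespace.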
https://mail.python.org/pipermail/distutils-sig/2007-June/007731.html
The goal: give the device the ability to call home, querying for new updates, downloading them and flashing itself to the latest version available.

A deployment script

The first option is to have an automated deployment script: something that grabs the latest stable version of your code and throws one update over the air to each device, providing specific options for each device, like "add DHT22 support". Since I use PlatformIO I can create custom environments for each device, like this:

[env:washer-device]
platform = espressif
framework = arduino
board = esp01_1m
lib_install = 89,64,19
topic = /home/cellar/washer/ip
build_flags = -Wl,-Tesp8266.flash.1m256.ld
build_flags = -D SONOFF -D DEBUG -D ENABLE_POWER -D ENABLE_DHT
upload_speed = 115200
upload_port = "192.168.1.114"
upload_flags = --auth=fibonacci --port 8266

It defines the platform, board, build flags and OTA parameters. But as you can see, you have to declare the IP of the device. That is troublesome. Normally your device will keep its IP for a long time, as long as it stays connected or if your router's leasing time is long enough for the device to claim its previous IP when it reconnects. But what if it changes? And it will, eventually.

You might have noticed a non-standard parameter in the previous environment definition: topic. A simple bash script can use this information to query your local MQTT broker for the latest (the retained) value of this topic and use this value as the IP for over-the-air flashing.
This is the script:

#!/bin/bash

MQTT_HOST=192.168.1.10

function help() {
    echo "Syntax: $0 "
    devices
}

function devices() {
    echo "Defined devices:"
    cat platformio.ini | grep 'device]' | sed 's/\[env:/ - /g' | sed 's/\-device]//g'
}

function valid_ip() {
    local stat=0
    rx='([1-9]?[0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])'
    if [[ $ip =~ ^$rx\.$rx\.$rx\.$rx$ ]]; then
        stat=1
    fi
    return $stat
}

# Check arguments
if [ "$#" -ne 1 ]; then
    help
    exit 1
fi
device=$1-device

# Get IP
topic=`cat platformio.ini | grep $device -A 10 | grep "topic" | cut -d' ' -f3`
if [ "$topic" == "" ]; then
    echo "Unknown device $device or topic not defined"
    devices
    exit 2
fi
ip=`mosquitto_sub -t $topic -h $MQTT_HOST -N -C 1`
if valid_ip $ip; then
    echo "Could not get a valid IP from MQTT broker"
    exit 3
fi

platformio run -vv -e $device --target upload --upload-port $ip

Now you just have to declare the environment for each device in the platformio.ini file and call the script once for each device.

This option is great. If you write your code right you can create lightweight binaries targeted for each device from the same code base. The deploy script must check if the update has been successful by monitoring the update output, but I'd also provide a way to listen to the clients' "hello" messages after reboot. These messages should contain the current versions of the firmware and the file system.

This works fine for 10, maybe 15 devices; that is, for a home or small office. But what if you have to manage some tens or hundreds of devices? What if they are in different networks, buildings, towns, countries?! You will need some kind of unattended pull update process, something Windows has been criticised for for a long time, but hey, we are talking about small, simple, one-task-oriented, unattended devices.

Automatic Over-The-Air Pull Updates

I've been testing this automatic over-the-air update process based on the official ESP8266httpUpdate library.
The library handles the download and flashes the binary, and it supports both firmware and file system binaries. There is even an example that can be used as a starting point. I've added a way to discover available updates and wrapped it up in the same fashion the ArduinoOTA library does, providing a callback method to display debug messages or perform in-the-middle tasks. The library is open source and available at the NoFUSS repository at Bitbucket.

The Protocol

This first revision of the protocol is very simple. The client device does a GET request to a custom URL specifying its DEVICE and firmware VERSION. The response is a JSON object. If there are no updates available it will be empty (that is: '{}').

Cool! So this is the output in the serial console:

Device : TEST
Version: 0.1.0
[NoFUSS] Start
[NoFUSS] Updating
New version: 0.1.1
Firmware: /firmware/test-0.1.1.bin
File System:
[NoFUSS] Firmware Updated
[NoFUSS] Resetting board

ets Jan 8 2013, rst cause:2, boot mode:(3,7)
load 0x4010f000, len 1384, room 16
tail 8
chksum 0x2d
csum 0x2d

Device : TEST
Version: 0.1.1
[NoFUSS] Start
[NoFUSS] Already in the last version
[NoFUSS] End

Installing the server

The server will store a log with information on the devices that queried for updates, the version they are reporting and the answer they received.

Versions

The versions info is stored in the data/versions.json file. This file contains an array of objects with info about version matching and firmware files. Version matching is always "more or equal" for the minimum version number and "less or equal" for the maximum version number. An asterisk (*) means "any". Device matching is "equals". The target key contains info about the version number of the new firmware and paths to the firmware files relative to the public folder. If there is no binary for the "firmware" or "spiffs" keys, just leave it empty.
[
    {
        "origin": {
            "device": "TEST",
            "min": "*",
            "max": "0.1.0"
        },
        "target": {
            "version": "0.1.1",
            "firmware": "/firmware/test-0.1.1.bin",
            "spiffs": ""
        }
    }
]

Using the client

The client library depends on Benoit Blanchon's ArduinoJson library, as shown in the example.

Wrap up

I wanted this library to be easy to use and lightweight so I can still do OTA on 1Mb devices. The wrapper itself adds very little to the binary size, but the JSON support certainly plays against this. I will give plain text output a try. As I said, the code is open source and available at the NoFUSS repository at Bitbucket. The project is currently in beta status: it works, but I'm not 100% confident and I have doubts about the API of the service, so any suggestions will be welcome.

ESP8266 calling home by Tinkerman is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Hey dude, I want to ask you a technical question. Email me [removed]. I just need a little bit of guidance on how to connect either a Bus Pirate or logic analyzer to an interface on a programmable scanner to make a TTL cable that's no longer manufactured. I don't know how to send you pix or get a reply so I'm asking you here. Thanks, love your articles.

Hi, I'll try to help. Contact me by email or twitter @xoseperez

Hi Xose, thanks for the nice post on automatic over the air updates. I found the code at the NoFUSS repository at Bitbucket which you recommended in the post, but I cannot understand the control flow of the program code and the theoretical aspects of the code. Could you help me regarding this? Regards, Vamshi

The solution has two parts: a webserver and a client library for the ESP8266. You can find an implementation of the webserver in the server folder of the repo. It is a PHP app. Instructions on how to install it are in the README file in the root of the repo. This webservice listens to requests and checks against a JSON file if there is a firmware update available.
At the moment you will have to modify that JSON file manually (there is no backoffice). An example of that file is server/data/versions.json. For each entry of that file the "origin" defines the device and version that the requester provides, and the "target" defines the available update files (both firmware and file system image).

From the client point of view you just have to include the library definitions and, in your setup, initialize the values, then call the handle method as often as you want to check if there are available updates.

#include "NoFUSSClient.h"

...

void setup() {
    ...
    NoFUSSClient.setServer("");
    NoFUSSClient.setDevice("MYSENSOR");
    NoFUSSClient.setVersion("1.0.0");
    ...
}

void loop() {
    ...
    (every X minutes) NoFUSSClient.handle();
    ...
}

The "handle" method performs a GET request against the server URL provided by the "setServer" method, passing the device and version provided in the "setDevice" and "setVersion" methods in the setup(). Then it parses the response and downloads and flashes the file system image (if any) and the firmware image (if any) before rebooting the device.

To troubleshoot the installation you should test the server first to see if it's working and the responses are valid. For any given request the server should respond with the "target" content of the first match in the versions.json file. Also test that the paths to the images are valid (they are relative to the URL of the service). If you need more info or detail feel free to contact me by email.

Can you please post your email-id?

You can check my email in the code files, or you can use the contact form.

Pingback: Automatic OTA Updates | esp8266hints
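As an aside, the version-matching rules described in the post ("more or equal" for min, "less or equal" for max, "*" as wildcard, exact device match, first match wins) can be sketched in Python. This is a simplified illustration, not the actual PHP service code, and it assumes plain dotted numeric version strings:

```python
def version_tuple(v):
    """'0.1.0' -> (0, 1, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def matches(entry, device, version):
    origin = entry["origin"]
    if origin["device"] != device:
        return False
    if origin["min"] != "*" and version_tuple(version) < version_tuple(origin["min"]):
        return False
    if origin["max"] != "*" and version_tuple(version) > version_tuple(origin["max"]):
        return False
    return True

def find_update(versions, device, version):
    for entry in versions:
        if matches(entry, device, version):
            return entry["target"]   # would be serialized as the JSON response
    return {}                        # no update available: empty response

# The entry from the versions.json example in the post:
versions = [{
    "origin": {"device": "TEST", "min": "*", "max": "0.1.0"},
    "target": {"version": "0.1.1",
               "firmware": "/firmware/test-0.1.1.bin",
               "spiffs": ""},
}]
```

With this table, a TEST device at 0.1.0 gets the 0.1.1 target back, while a device already at 0.1.1 gets the empty object.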
http://tinkerman.cat/esp8266-calling-home/
05 November 2007 21:13 [Source: ICIS news]

HOUSTON (ICIS news)--Saudi Basic Industries Corporation's (SABIC's) ethylene projects that are scheduled for completion in Saudi Arabia in 2008 are mostly on track, but start-up could be delayed until the second half of the year, chief executive Mohamed al-Mady said on Monday.

"All plans are going forward, but there were delays and increased cost in some of the plants that we have planned," al-Mady said in an interview. "We will see our two largest crackers coming mid-year next year."

He said projects in the kingdom share with others in the region. Four ethylene projects are planned to go on stream in 2008 in the kingdom, including SABIC's ventures in Eastern Petrochemical (Sharq), a 1.3m tonne/year facility, and Yanbu National Petrochemical (Yansab), a 1.3m tonne/year facility.

"Contractors are still stretched out so we are very lucky that the delays are still manageable and the increased costs are still manageable, so most of our plans will materialise in the second half of next year," he said.

SABIC also seeks new benzene production capacity in the kingdom as a strategy to keep pace with rising demand for the product. "We know benzene prices will increase," he said.

Without providing estimated production capacities, al-Mady said that SABIC was in the early stages of figuring out where to source feedstock for opportunities in benzene projects. He suggested that SABIC would likely obtain feedstock from the contemplated expansion of Saudi Aramco's refining capacity in the kingdom or even by taking part in Dow Chemical's and Aramco's Ras Tanura project planned for 2012.

"I think there will be benzene coming from the kingdom in the future," al-Mady said.
http://www.icis.com/Articles/2007/11/05/9076081/sabic-2008-saudi-ethylene-projects-on-track-ceo.html
For a single object I instantiate it OK, but when trying to instantiate multiple objects of it nothing happens. The code:

using UnityEngine;
using System.Collections;

public class setEmptyTrays : MonoBehaviour {
    public GameObject myEmpty;
    public GameObject[] emptyArr = new GameObject[16];

    // spread empty trays on shelf
    void Awake() {
        // remarking the loop:
        for (int i=0; i>16; i++){
            emptyArr[1] = (GameObject)Instantiate(myEmpty, new Vector3(-0.85f, 0.27f, 8f), Quaternion.Euler(-90,0,0)) as GameObject;
        }
    }
}

This works for 1, but when trying to loop to create multiple occurrences it doesn't do anything. I might be missing something?

asked Jan 01 '12 at 01:48 PM Tal770
edited Jan 01 '12 at 01:53 PM aldonaletto

I solved it, so simple: the for (int i=0; i>16; i++){ is wrong, it has to be i<16. Voila, solved. Typical programmer headache.

answered Jan 02 '12 at 09:07 AM

I've done that...

for (int i=0; i>16; i++){
    emptyArr[i] = (GameObject)Instantiate(myEmpty, new Vector3(-0.85f, 0.27f+0.773f*i, 8f), Quaternion.Euler(-90,0,0));
    emptyArr[i].name= "tray27-" + i;
}

and it doesn't work? All elements in the array are set as None and not receiving a name.
I just wanted to show that running it for a single instance works. Here is the actual code I use:

public class setEmptyTrays : MonoBehaviour {
    public GameObject myEmpty;
    public GameObject[] emptyArr = new GameObject[16];
    public float base_y = 0.27f;
    public float base_z = 8.57f;
    public float rowon = 4.0f;
    public float shelfon = 0.7729f;
    public float[] posX = new float[] { -0.85f, 4.2f, 20.3f, 25.34f };

    // spread empty trays on shelf
    void Awake() {
        for (int i=0; i>16; i++){
            int setx = Random.Range(1,4);
            int sety = Random.Range(1,70);
            int setz = Random.Range(1,10);
            emptyArr[i] = (GameObject)Instantiate(myEmpty, new Vector3(posX[setx], base_y+sety*shelfon, base_z*rowon), Quaternion.Euler(-90,0,0)) as GameObject;
            emptyArr[i].name= "tray27-" + i;
        }
    }
}

That doesn't work!

answered Jan 01 '12 at 02:37 PM

You're creating all objects in the same place, and storing all of them in the same variable (emptyArr[1]). You should use i as the emptyArr index, and do something to have different positions for each new object, such as adding i to the x coordinate:

void Awake() {
    // change the index to i and use x+i as a pos offset, for instance:
    for (int i=0; i>16; i++){
        emptyArr[i] = (GameObject)Instantiate(myEmpty, new Vector3(-0.85f+i, 0.27f, 8f), Quaternion.Euler(-90,0,0)) as GameObject;
    }
}

answered Jan 01 '12
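For reference, combining the fix the asker found (the loop condition must be i < 16, not i > 16) with aldonaletto's advice to index with i and offset each position gives a version like the following. This is a sketch in Unity C# and only runs inside the engine:

```csharp
using UnityEngine;

public class SetEmptyTrays : MonoBehaviour {
    public GameObject myEmpty;
    public GameObject[] emptyArr = new GameObject[16];

    void Awake() {
        // With i > 16 the condition is false on the very first check,
        // so the body never runs and nothing is instantiated.
        for (int i = 0; i < 16; i++) {
            emptyArr[i] = (GameObject)Instantiate(
                myEmpty,
                new Vector3(-0.85f, 0.27f + 0.7729f * i, 8f),
                Quaternion.Euler(-90, 0, 0));
            emptyArr[i].name = "tray27-" + i;
        }
    }
}
```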
http://answers.unity3d.com/questions/200761/cannot-create-array-of-gameobjects-instantiate.html?sort=newest
Design blog! Whenever you create a program, you have to do some level of design when you first start. This could be small, as in the case of a throwaway script, or it could be much larger, as in the case of designing an entire library. This post is going to look into the design of Nyx: namely, where the previous attempt, Aurora, failed, and what I've done with Nyx to (in my opinion at least) fix it.

SDL style

Aurora had a major flaw in that it put zero restrictions on the structure of the program that was written. This was actually something I considered a feature of the library at the time: you could write your program however you wanted, and you'd insert the graphical stuff wherever it happened to need to go and be done with it. Unfortunately, in practice, this led to some pretty awkward things to deal with.

A lot of this stemmed from me unintentionally pulling from what I'd used in the past. The main library I used whenever I needed graphics was SDL2. SDL is great, I talked about that in the "libraries" article, but it has a lot of boilerplate code you are effectively required to put in place. For example, here's just opening a window in SDL2 (minus some initialization code):

#include <SDL2/SDL.h>

int main(int, char *[]) {
    /* initialize the window and renderer */

    bool running = true;
    SDL_Event e;
    do {
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
                case SDL_QUIT:
                    running = false;
                    break;
            }
        }

        /* update game state */

        SDL_RenderClear(/* ... */);
        /* draw stuff */
        SDL_RenderPresent(/* ... */);
    } while (running);

    SDL_Quit();
}

SDL is first and foremost a C library, and you can see that in the coding style. Everything is procedural, and it's up to you to call everything in the right order. Some things can be mixed around, but not much.
Now here's that same thing in Aurora:

#include "aurora/aurora.h"

int main(int, char *[]) {
  auto engine = aurora::Engine({"Example", {800, 600}});

  do {
    engine.update();

    /* update game state */

    engine.window->clear();
    /* draw stuff */
    engine.window->flip();
  } while (engine.is_running);
}

Sure, it's shorter, but you can see the parallels between the two snippets of code. Open your window, then start your game loop. Within the game loop, run your events through, clear the window, draw, then flip. Check your exit condition, and either stop or continue. There's nothing wrong with this loop, and in fact basically every game library in existence--no I did not fact check this--likely works in a similar way. Why, though, is it up to the user to code that in? If it's the same every time, you just have boilerplate, and that's not reasonable.

LÖVE

Some libraries don't work this way, and take a more structured approach. LÖVE is a great example of this--being still pretty low level in terms of what the programmer is responsible for, but taking that boilerplate part of the code out of the equation. To make an application in LÖVE, your code will look something like this:

function love.load()
  -- initialization code
end

function love.update()
  -- update game state
end

function love.draw()
  -- draw stuff
end

These are the 3 main callback functions in a LÖVE program, and they allow you to do quite a lot. They are called in a loop similar to this:

love.load()
while game is running:
  love.update()
  love.draw()

The callback way of doing things keeps the focus on the content of the functions, rather than how they're structured in the code. Even better, it doesn't need to be rewritten every time, and the library developers don't need to add in some new thing to update backend structures that the user has to call: they'll just add it to their existing game loop.
This is less cognitive load on the developer, and requires less knowledge of the backend of the library--something that arguably you shouldn't need to know about, especially for simple programs.

Nyx

I drew inspiration from LÖVE heavily for how I implemented the game loop--taking into consideration, of course, that C++ is a very different language with a different way of making programs. Here's that same example as above where a window is opened and simply cleared and flipped until the user closes it:

#include "nyx.hpp"

class Example : public nyx::App {
public:
  Example(nyx::AppCfg const &cfg) : nyx::App(cfg) {}
  ~Example() = default;

  void update(double time, double delta) {
    /* update game state */
  }

  void draw() {
    clear_window();
  }
};

int main(int, char *[]) {
  Example({"Example", {800, 600}}).mainloop();
  return 0;
}

Admittedly, there's a lot going on here, but that's mostly because of C++ boilerplate, not because of the library itself. To create an application in Nyx, the App class is overridden, and at least the void draw() function must be implemented. The void update(double time, double delta) function is optional if you don't need it for some reason, but I showed it here to show that it exists. mainloop does a lot behind the scenes, but effectively boils down to doing event handling and assorted update functions, calling your update function, drawing the screen, then presenting it automatically. You also inherit some functions like clear_window that can be used in your draw loop or elsewhere that are needed for full functionality. This removes the need for the user to have to manually structure their program, as a structure is enforced this way. You may argue that the way I've done this adds in its own boilerplate code, and to some degree I'd agree with you. Overriding a class in C++ isn't that short, and it is somewhat annoying that there's no way to really get around this without doing some GL3W style function callbacks.
After working through a few different ideas, I think this is the best balance between solving the issue I described with manually creating a game loop, and also having it still be relatively convenient to use. In a larger program, the extra code associated with this method is negligible.

Final remarks

Lack of structure plagued Aurora from all sides, but I'm not allowing those same mistakes to happen again with Nyx. I want an easy-to-grok, understandable structure in all programs written with the library--allowing me to add as much extra functionality as I want behind the scenes. In another post I'll go into what those behind-the-scenes things are. Soon I'll make a post about "stickies", which are mostly meant for debugging but have been incredibly valuable for development. Nyx is coming along quite well--primitives are almost implemented--and then I'll be starting on implementing more and more drawing functionality. Stay tuned!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/hunterfehlan/nyx-how-to-make-a-game-loop-44lf
Returns a copy of the vertex positions or assigns a new vertex positions array. The number of vertices in the Mesh is changed by assigning a vertex array with a different number of vertices. Note that if you resize the vertex array then all other vertex attributes (normals, colors, tangents, UVs) are automatically resized too. RecalculateBounds is automatically invoked if no vertices have been assigned to the Mesh when setting the vertices.

using UnityEngine;

public class Example : MonoBehaviour
{
    Mesh mesh;
    Vector3[] vertices;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        vertices = mesh.vertices;
    }

    void Update()
    {
        for (var i = 0; i < vertices.Length; i++)
        {
            vertices[i] += Vector3.up * Time.deltaTime;
        }

        // assign the local vertices array into the vertices array of the Mesh.
        mesh.vertices = vertices;
        mesh.RecalculateBounds();
    }
}
https://docs.unity3d.com/ScriptReference/Mesh-vertices.html
Now that you have read Primer and learned how to write tests using Google Test, it's time to learn some new tricks. This document will show you more assertions as well as how to construct complex failure messages, propagate fatal failures, reuse and speed up your test fixtures, and use various flags with your tests.

More Assertions¶

Explicit Success and Failure¶

SUCCEED(); generates a success. This does NOT make the overall test succeed. A test is considered successful only if none of its assertions fail during its execution.

Note: SUCCEED() is purely documentary and currently doesn't generate any user-visible output. However, we may add SUCCEED() messages to Google Test's output in the future.

FAIL* generates a fatal failure while ADD_FAILURE* generates a nonfatal failure. These are useful when control flow, rather than a Boolean expression, determines the test's success or failure. For example, you might want to write something like:

switch(expression) {
case 1: ... some checks ...
case 2: ... some other checks ...
default: FAIL() << "We shouldn't get here.";
}

Availability: Linux, Windows, Mac.

Exception Assertions¶

These are for verifying that a piece of code throws (or does not throw) an exception of the given type:

ASSERT_THROW(statement, exception_type); EXPECT_THROW(statement, exception_type); -- statement throws an exception of the given type
ASSERT_ANY_THROW(statement); EXPECT_ANY_THROW(statement); -- statement throws an exception of any type
ASSERT_NO_THROW(statement); EXPECT_NO_THROW(statement); -- statement doesn't throw any exception

Examples:

ASSERT_THROW(Foo(5), bar_exception);

EXPECT_NO_THROW({
  int n = 5;
  Bar(&n);
});

Availability: Linux, Windows, Mac; since version 1.1.0.

Predicate Assertions for Better Error Messages¶

Even though Google Test has a rich set of assertions, they can never be complete, as it's impossible (nor a good idea) to anticipate all the scenarios a user might run into. Therefore, sometimes a user has to use EXPECT_TRUE() to check a complex expression, for lack of a better macro. This has the problem of not showing you the values of the parts of the expression, making it hard to understand what went wrong. As a workaround, some users choose to construct the failure message by themselves, streaming it into EXPECT_TRUE().
However, this is awkward especially when the expression has side-effects or is expensive to evaluate. Google Test gives you three different options to solve this problem:

Using an Existing Boolean Function¶

If you already have a function or a functor that returns bool (or a type that can be implicitly converted to bool), you can use it in a predicate assertion to get the function arguments printed for free:

(ASSERT|EXPECT)_PRED1(pred1, val1);
(ASSERT|EXPECT)_PRED2(pred2, val1, val2);
...
(ASSERT|EXPECT)_PRED5(pred5, val1, val2, val3, val4, val5);

In the above, predn is an n-ary predicate function or functor, where val1, val2, ..., and valn are its arguments. The assertion succeeds if the predicate returns true when applied to the given arguments, and fails otherwise. When the assertion fails, it prints the value of each argument. In either case, the arguments are evaluated exactly once.

Here's an example. Given

// Returns true iff m and n have no common divisors except 1.
bool MutuallyPrime(int m, int n) { ... }

const int a = 3;
const int b = 4;
const int c = 10;

the assertion EXPECT_PRED2(MutuallyPrime, a, b); will succeed, while the assertion EXPECT_PRED2(MutuallyPrime, b, c); will fail with the message

MutuallyPrime(b, c) is false, where
b is 4
c is 10

Notes:

- If you see a compiler error "no matching function to call" when using ASSERT_PRED* or EXPECT_PRED*, please see this for how to resolve it.
- Currently we only provide predicate assertions of arity <= 5. If you need a higher-arity assertion, let us know.

Availability: Linux, Windows, Mac

Using a Function That Returns an AssertionResult¶

While EXPECT_PRED*() and friends are handy for a quick job, the syntax is not satisfactory: you have to use different macros for different arities, and it feels more like Lisp than C++. The ::testing::AssertionResult class solves this problem. An AssertionResult object represents the result of an assertion (whether it's a success or a failure, and an associated message).
You can create an AssertionResult using one of these factory functions:

namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}

You can then use the << operator to stream messages to the AssertionResult object. To provide more readable messages in Boolean assertions (e.g. EXPECT_TRUE()), write a predicate function that returns AssertionResult instead of bool. For example, if you define IsEven() as:

::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess();
  else
    return ::testing::AssertionFailure() << n << " is odd";
}

instead of:

bool IsEven(int n) {
  return (n % 2) == 0;
}

the failed assertion EXPECT_TRUE(IsEven(Fib(4))) will print:

Value of: IsEven(Fib(4))
Actual: false (3 is odd)
Expected: true

instead of a more opaque

Value of: IsEven(Fib(4))
Actual: false
Expected: true

If you want informative messages in EXPECT_FALSE and ASSERT_FALSE as well, and are fine with making the predicate slower in the success case, you can supply a success message:

::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess() << n << " is even";
  else
    return ::testing::AssertionFailure() << n << " is odd";
}

Then the statement EXPECT_FALSE(IsEven(Fib(6))) will print

Value of: IsEven(Fib(6))
Actual: true (8 is even)
Expected: false

Availability: Linux, Windows, Mac; since version 1.4.1.
Using a Predicate-Formatter¶

If you find the default message generated by (ASSERT|EXPECT)_PRED* and (ASSERT|EXPECT)_(TRUE|FALSE) unsatisfactory, or some arguments to your predicate do not support streaming to ostream, you can instead use the following predicate-formatter assertions to fully customize how the message is formatted:

(ASSERT|EXPECT)_PRED_FORMAT1(pred_format1, val1);
(ASSERT|EXPECT)_PRED_FORMAT2(pred_format2, val1, val2);
...
(ASSERT|EXPECT)_PRED_FORMAT5(pred_format5, val1, val2, val3, val4, val5);

The difference between this and the previous two groups of macros is that instead of a predicate, (ASSERT|EXPECT)_PRED_FORMAT* take a predicate-formatter (pred_formatn), which is a function or functor with the signature:

::testing::AssertionResult PredicateFormattern(const char* expr1,
                                               const char* expr2,
                                               ...
                                               const char* exprn,
                                               T1 val1,
                                               T2 val2,
                                               ...
                                               Tn valn);

where val1, val2, ..., and valn are the values of the predicate arguments, and expr1, expr2, ..., and exprn are the corresponding expressions as they appear in the source code. The types T1, T2, ..., and Tn can be either value types or reference types. For example, if an argument has type Foo, you can declare it as either Foo or const Foo&, whichever is appropriate. A predicate-formatter returns a ::testing::AssertionResult object to indicate whether the assertion has succeeded or not. The only way to create such an object is to call one of the factory functions ::testing::AssertionSuccess() and ::testing::AssertionFailure().

As an example, let's improve the failure message in the previous example, which uses EXPECT_PRED2():

// Returns the smallest prime common divisor of m and n,
// or 1 when m and n are mutually prime.
int SmallestPrimeCommonDivisor(int m, int n) { ... }

// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n))
    return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure()
      << m_expr << " and " << n_expr << " (" << m << " and " << n
      << ") are not mutually prime, " << "as they have a common divisor "
      << SmallestPrimeCommonDivisor(m, n);
}

With this predicate-formatter, we can use

EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);

to generate the message

b and c (4 and 10) are not mutually prime, as they have a common divisor 2.

As you may have realized, many of the assertions we introduced earlier are special cases of (EXPECT|ASSERT)_PRED_FORMAT*. In fact, most of them are indeed defined using (EXPECT|ASSERT)_PRED_FORMAT*.

Availability: Linux, Windows, Mac.

Floating-Point Comparison¶

Comparing floating-point numbers is tricky. Due to round-off errors, it is very unlikely that two floating-points will match exactly. Therefore, ASSERT_EQ's naive comparison usually doesn't work. And since floating-points can have a wide value range, no single fixed error bound works. It's better to compare by a fixed relative error bound, except for values close to 0 due to the loss of precision there.

In general, for floating-point comparison to make sense, the user needs to carefully choose the error bound. If they don't want or care to, comparing in terms of Units in the Last Place (ULPs) is a good default, and Google Test provides assertions to do this. Full details about ULPs are quite long; if you want to learn more, see this article on float comparison.

Floating-Point Macros¶

ASSERT_FLOAT_EQ(expected, actual); EXPECT_FLOAT_EQ(expected, actual); -- the two float values are almost equal
ASSERT_DOUBLE_EQ(expected, actual); EXPECT_DOUBLE_EQ(expected, actual); -- the two double values are almost equal

By "almost equal", we mean the two values are within 4 ULP's from each other.

The following assertions allow you to choose the acceptable error bound:

ASSERT_NEAR(val1, val2, abs_error); EXPECT_NEAR(val1, val2, abs_error); -- the difference between val1 and val2 doesn't exceed the given absolute error

Availability: Linux, Windows, Mac.

Floating-Point Predicate-Format Functions¶

Some floating-point operations are useful, but not that often used.
In order to avoid an explosion of new macros, we provide them as predicate-format functions that can be used in predicate assertion macros (e.g. EXPECT_PRED_FORMAT2, etc).

EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);

Verifies that val1 is less than, or almost equal to, val2. You can replace EXPECT_PRED_FORMAT2 in the above table with ASSERT_PRED_FORMAT2.

Availability: Linux, Windows, Mac.

Windows HRESULT assertions¶

These assertions test for HRESULT success or failure:

ASSERT_HRESULT_SUCCEEDED(expression); EXPECT_HRESULT_SUCCEEDED(expression); -- expression is a success HRESULT
ASSERT_HRESULT_FAILED(expression); EXPECT_HRESULT_FAILED(expression); -- expression is a failure HRESULT

The generated output contains the human-readable error message associated with the HRESULT code returned by expression. You might use them like this:

CComPtr<IShellDispatch2> shell;
ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
CComVariant empty;
ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));

Availability: Windows.

Type Assertions¶

You can call the function ::testing::StaticAssertTypeEq<T1, T2>(); to assert that types T1 and T2 are the same. The function does nothing if the assertion is satisfied; if the types are different, the call fails to compile. This is mainly useful inside template code.

Caveat: When used inside a member function of a class template or a function template, StaticAssertTypeEq<T1, T2>() is effective only if the function is instantiated. For example, given:

template <typename T> class Foo {
public:
  void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
};

the code

void Test1() { Foo<bool> foo; }

will not generate a compiler error, as Foo<bool>::Bar() is never actually instantiated. Instead, you need:

void Test2() { Foo<bool> foo; foo.Bar(); }

Availability: Linux, Windows, Mac; since version 1.3.0.

Assertion Placement¶

You can use assertions in any C++ function. In particular, it doesn't have to be a method of the test fixture class. The one constraint is that assertions that generate a fatal failure (FAIL* and ASSERT_*) can only be used in void-returning functions. This is a consequence of Google Test not using exceptions. By placing it in a non-void function you'll get a confusing compile error like "error: void value not ignored as it ought to be".
If you need to use assertions in a function that returns non-void, one option is to make the function return the value in an out parameter instead. For example, you can rewrite T2 Foo(T1 x) to void Foo(T1 x, T2* result). You need to make sure that *result contains some sensible value even when the function returns prematurely. As the function now returns void, you can use any assertion inside of it. If changing the function's type is not an option, you should just use assertions that generate non-fatal failures, such as ADD_FAILURE* and EXPECT_*. Note: Constructors and destructors are not considered void-returning functions, according to the C++ language specification, and so you may not use fatal assertions in them. You'll get a compilation error if you try. A simple workaround is to transfer the entire body of the constructor or destructor to a private void-returning method. However, you should be aware that a fatal assertion failure in a constructor does not terminate the current test, as your intuition might suggest; it merely returns from the constructor early, possibly leaving your object in a partially-constructed state. Likewise, a fatal assertion failure in a destructor may leave your object in a partially-destructed state. Use assertions carefully in these situations! Death Tests¶ In many applications, there are assertions that can cause application failure if a condition is not met. These sanity checks, which ensure that the program is in a known good state, are there to fail at the earliest possible time after some program state is corrupted. If the assertion checks the wrong condition, then the program may proceed in an erroneous state, which could lead to memory corruption, security holes, or worse. Hence it is vitally important to test that such assertion statements work as expected. Since these precondition checks cause the processes to die, we call such tests death tests. 
More generally, any test that checks that a program terminates in an expected fashion is also a death test. If you want to test EXPECT_*()/ASSERT_*() failures in your test code, see Catching Failures.

How to Write a Death Test¶

Google Test has the following macros to support death tests:

ASSERT_DEATH(statement, regex); EXPECT_DEATH(statement, regex); -- statement crashes with the given error
ASSERT_DEATH_IF_SUPPORTED(statement, regex); EXPECT_DEATH_IF_SUPPORTED(statement, regex); -- if death tests are supported, verifies that statement crashes with the given error; otherwise verifies nothing
ASSERT_EXIT(statement, predicate, regex); EXPECT_EXIT(statement, predicate, regex); -- statement exits with the given error and its exit code matches predicate

where statement is a statement that is expected to cause the process to die, predicate is a function or function object that evaluates an integer exit status, and regex is a regular expression that the stderr output of statement is expected to match. Note that statement can be any valid statement (including compound statement) and doesn't have to be an expression. As usual, the ASSERT variants abort the current test function, while the EXPECT variants do not.

Note: We use the word "crash" here to mean that the process terminates with a non-zero exit status code. There are two possibilities: either the process has called exit() or _exit() with a non-zero value, or it may be killed by a signal. This means that if statement terminates the process with a 0 exit code, it is not considered a crash by EXPECT_DEATH. Use EXPECT_EXIT instead if this is the case, or if you want to restrict the exit code more precisely.

A predicate here must accept an int and return a bool. The death test succeeds only if the predicate returns true. Google Test defines a few predicates that handle the most common cases:

::testing::ExitedWithCode(exit_code)

This expression is true if the program exited normally with the given exit code.

::testing::KilledBySignal(signal_number) // Not available on Windows.

This expression is true if the program was killed by the given signal.

The *_DEATH macros are convenient wrappers for *_EXIT that use a predicate that verifies the process' exit code is non-zero.

Note that a death test only cares about three things:

- does statement abort or exit the process?
- (in the case of ASSERT_EXIT and EXPECT_EXIT) does the exit status satisfy predicate?
Or (in the case of ASSERT_DEATH and EXPECT_DEATH) is the exit status non-zero? And
- does the stderr output match regex?

In particular, if statement generates an ASSERT_* or EXPECT_* failure, it will not cause the death test to fail, as Google Test assertions don't abort the process.

To write a death test, simply use one of the above macros inside your test function. For example,

TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({ int n = 5; Foo(&n); }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillMyself) {
  EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}

verifies that:

- calling Foo(5) causes the process to die with the given error message,
- calling NormalExit() causes the process to print "Success" to stderr and exit with exit code 0, and
- calling KillMyself() kills the process with signal SIGKILL.

The test function body may contain other assertions and statements as well, if necessary.

Important: We strongly recommend you to follow the convention of naming your test case (not test) *DeathTest when it contains a death test, as demonstrated in the above example. The Death Tests And Threads section below explains why.

If a test fixture class is shared by normal tests and death tests, you can use typedef to introduce an alias for the fixture class and avoid duplicating its code:

class FooTest : public ::testing::Test { ... };

typedef FooTest FooDeathTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}

Availability: Linux, Windows (requires MSVC 8.0 or above), Cygwin, and Mac (the latter three are supported since v1.3.0). (ASSERT|EXPECT)_DEATH_IF_SUPPORTED are new in v1.4.0.

Regular Expression Syntax¶

On POSIX systems (e.g. Linux, Cygwin, and Mac), Google Test uses the POSIX extended regular expression syntax in death tests.
To learn about this syntax, you may want to read this Wikipedia entry. On Windows, Google Test uses its own simple regular expression implementation. It lacks many features you can find in POSIX extended regular expressions. For example, we don't support union ("x|y"), grouping ("(xy)"), brackets ("[xy]"), and repetition count ("x{5,7}"), among others. Below is what we do support (A denotes a literal character, period (.), or a single \\ escape sequence; x and y denote regular expressions.):

To help you determine which capability is available on your system, Google Test defines macro GTEST_USES_POSIX_RE=1 when it uses POSIX extended regular expressions, or GTEST_USES_SIMPLE_RE=1 when it uses the simple version. If you want your death tests to work in both cases, you can either #if on these macros or use the more limited syntax only.

How It Works¶

Under the hood, ASSERT_EXIT() spawns a new process and executes the death test statement in that process. The details of how precisely that happens depend on the platform and the variable ::testing::GTEST_FLAG(death_test_style) (which is initialized from the command-line flag --gtest_death_test_style).

- On POSIX systems, fork() (or clone() on Linux) is used to spawn the child, after which:
  - If the variable's value is "fast", the death test statement is immediately executed.
  - If the variable's value is "threadsafe", the child process re-executes the unit test binary just as it was originally invoked, but with some extra flags to cause just the single death test under consideration to be run.
- On Windows, the child is spawned using the CreateProcess() API, and re-executes the binary to cause just the single death test under consideration to be run - much like the threadsafe mode on POSIX.

Other values for the variable are illegal and will cause the death test to fail. Currently, the flag's default value is "fast". However, we reserve the right to change it in the future.
Therefore, your tests should not depend on this. In either case, the parent process waits for the child process to complete, and checks that

- the child's exit status satisfies the predicate, and
- the child's stderr matches the regular expression.

If the death test statement runs to completion without dying, the child process will nonetheless terminate, and the assertion fails.

Death Tests And Threads¶

The reason for the two death test styles has to do with thread safety. Due to well-known problems with forking in the presence of threads, death tests should be run in a single-threaded context. Sometimes, however, it isn't feasible to arrange that kind of environment. For example, statically-initialized modules may start threads before main is ever reached. Once threads have been created, it may be difficult or impossible to clean them up.

Google Test has three features intended to raise awareness of threading issues.

- A warning is emitted if multiple threads are running when a death test is encountered.
- Test cases with a name ending in "DeathTest" are run before all other tests.
- It uses clone() instead of fork() to spawn the child process on Linux (clone() is not available on Cygwin and Mac), as fork() is more likely to cause the child to hang when the parent process has multiple threads.

It's perfectly fine to create threads inside a death test statement; they are executed in a separate process and cannot affect the parent.

Death Test Styles¶

The "threadsafe" death test style was introduced in order to help mitigate the risks of testing in a possibly multithreaded environment. It trades increased test execution time (potentially dramatically so) for improved thread safety. We suggest using the faster, default "fast" style unless your test has specific problems with it.
You can choose a particular style of death tests by setting the flag programmatically:

::testing::FLAGS_gtest_death_test_style = "threadsafe";

You can do this in main() to set the style for all death tests in the binary, or in individual tests. Recall that flags are saved before running each test and restored afterwards, so you need not do that yourself. For example:

TEST(MyDeathTest, TestOne) {
  ::testing::FLAGS_gtest_death_test_style = "threadsafe";
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  ::testing::FLAGS_gtest_death_test_style = "fast";
  return RUN_ALL_TESTS();
}

Caveats¶

The statement argument of ASSERT_EXIT() can be any valid C++ statement except that it cannot return from the current function. This means statement should not contain return or a macro that might return (e.g. ASSERT_TRUE()). If statement returns before it crashes, Google Test will print an error message, and the test will fail.

Since statement runs in the child process, any in-memory side effect (e.g. modifying a variable, releasing memory, etc) it causes will not be observable in the parent process. In particular, if you release memory in a death test, your program will fail the heap check as the parent process will never see the memory reclaimed. To solve this problem, you can

- try not to free memory in a death test;
- free the memory again in the parent process; or
- do not use the heap checker in your program.

Due to an implementation detail, you cannot place multiple death test assertions on the same line; otherwise, compilation will fail with an unobvious error message.

Despite the improved thread safety afforded by the "threadsafe" style of death test, thread problems such as deadlock are still possible in the presence of handlers registered with pthread_atfork(3).
Using Assertions in Sub-routines¶

Adding Traces to Assertions¶

If a test sub-routine is called from several places, when an assertion inside it fails, it can be hard to tell which invocation of the sub-routine the failure is from. You can alleviate this problem using extra logging or custom failure messages, but that usually clutters up your tests. A better solution is to use the SCOPED_TRACE macro:

SCOPED_TRACE(message);

where message can be anything streamable to std::ostream. This macro will cause the current file name, line number, and the given message to be added in every failure message. The effect will be undone when the control leaves the current lexical scope. For example,

10: void Sub1(int n) {
11:   EXPECT_EQ(1, Bar(n));
12:   EXPECT_EQ(2, Bar(n + 1));
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }

could result in messages like these:

path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
Actual: 2
Trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
Actual: 3

Without the trace, it would've been difficult to know which invocation of Sub1() the two failures come from respectively. (You could add an extra message to each assertion in Sub1() to indicate the value of n, but that's tedious.)

Some tips on using SCOPED_TRACE:

- With a suitable message, it's often enough to use SCOPED_TRACE at the beginning of a sub-routine, instead of at each call site.
- When calling sub-routines inside a loop, make the loop iterator part of the message in SCOPED_TRACE such that you can know which iteration the failure is from.
- Sometimes the line number of the trace point is enough for identifying the particular invocation of a sub-routine. In this case, you don't have to choose a unique message for SCOPED_TRACE. You can simply use "".
- You can use SCOPED_TRACE in an inner scope when there is one in the outer scope. In this case, all active trace points will be included in the failure messages, in reverse order they are encountered.
- The trace dump is clickable in Emacs' compilation buffer - hit return on a line number and you'll be taken to that line in the source file!

Availability: Linux, Windows, Mac.

Propagating Fatal Failures¶

A common pitfall when using ASSERT_* and FAIL* is not understanding that when they fail they only abort the current function, not the entire test. For example, the following test will segfault:

void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);
  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();
  // The intended behavior is for the fatal failure
  // in Subroutine() to abort the entire test.
  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = NULL;
  *p = 3; // Segfault!
}

Since we don't use exceptions, it is technically impossible to implement the intended behavior here. To alleviate this, Google Test provides two solutions. You could use either the (ASSERT|EXPECT)_NO_FATAL_FAILURE assertions or the HasFatalFailure() function. They are described in the following two subsections.

Asserting on Subroutines¶

As shown above, if your test calls a subroutine that has an ASSERT_* failure in it, the test will continue after the subroutine returns. This may not be what you want. Often people want fatal failures to propagate like exceptions. For that Google Test offers the following macros:

ASSERT_NO_FATAL_FAILURE(statement); EXPECT_NO_FATAL_FAILURE(statement); -- statement doesn't generate any new fatal failures in the current thread

Only failures in the thread that executes the assertion are checked to determine the result of this type of assertions. If statement creates new threads, failures in these threads are ignored.

Examples:

ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});

Availability: Linux, Windows, Mac. Assertions from multiple threads are currently not supported.
Checking for Failures in the Current Test¶ HasFatalFailure() in the ::testing::Test class returns true if an assertion in the current test has suffered a fatal failure. This allows functions to catch fatal failures in a sub-routine and return early. class Test { public: ... static bool HasFatalFailure(); }; The typical usage, which basically simulates the behavior of a thrown exception, is: TEST(FooTest, Bar) { Subroutine(); // Aborts if Subroutine() had a fatal failure. if (HasFatalFailure()) return; // The following won't be executed. ... } If HasFatalFailure() is used outside of TEST() , TEST_F() , or a test fixture, you must add the ::testing::Test:: prefix, as in: if (::testing::Test::HasFatalFailure()) return; Similarly, HasNonfatalFailure() returns true if the current test has at least one non-fatal failure, and HasFailure() returns true if the current test has at least one failure of either kind. Availability: Linux, Windows, Mac. HasNonfatalFailure() and HasFailure() are available since version 1.4.0. Logging Additional Information¶ In your test code, you can call RecordProperty("key", value) to log additional information, where value can be either a C string or a 32-bit integer. The last value recorded for a key will be emitted to the XML output if you specify one. For example, the test TEST_F(WidgetUsageTest, MinAndMaxWidgets) { RecordProperty("MaximumWidgets", ComputeMaxUsage()); RecordProperty("MinimumWidgets", ComputeMinUsage()); } will output XML like this: ... <testcase name="MinAndMaxWidgets" status="run" time="6" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" /> ... Note: * RecordProperty() is a static member of the Test class. Therefore it needs to be prefixed with ::testing::Test:: if used outside of the TEST body and the test fixture class. * key must be a valid XML attribute name, and cannot conflict with the ones already used by Google Test ( name, status, time, and classname). Availability: Linux, Windows, Mac. 
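The last-value-wins behavior of RecordProperty() and its mapping onto XML attributes can be sketched as follows. PropertyRecorder and AsXmlAttributes are invented names for illustration; this is not the Google Test implementation.

```cpp
#include <map>
#include <string>

// Illustrative sketch of RecordProperty semantics: the last value recorded
// for a key wins, and properties are emitted as XML attributes.
class PropertyRecorder {
 public:
  void RecordProperty(const std::string& key, const std::string& value) {
    properties_[key] = value;  // a later value overwrites an earlier one
  }
  void RecordProperty(const std::string& key, int value) {
    RecordProperty(key, std::to_string(value));
  }
  // Renders the recorded properties as XML attribute text,
  // e.g. ' MaximumWidgets="12" MinimumWidgets="9"' (keys sorted).
  std::string AsXmlAttributes() const {
    std::string out;
    for (const auto& kv : properties_)
      out += " " + kv.first + "=\"" + kv.second + "\"";
    return out;
  }
 private:
  std::map<std::string, std::string> properties_;
};
```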
Sharing Resources Between Tests in the Same Test Case¶ Google Test creates a new test fixture object for each test in order to make tests independent and easier to debug. However, sometimes tests use resources that are expensive to set up, making the one-copy-per-test model prohibitively expensive. If the tests don't change the resource, there's no harm in them sharing a single resource copy. So, in addition to per-test set-up/tear-down, Google Test also supports per-test-case set-up/tear-down. To use it: - In your test fixture class (say FooTest), define as static some member variables to hold the shared resources. - In the same test fixture class, define a static void SetUpTestCase() function (remember not to spell it as SetupTestCase with a small u!) to set up the shared resources and a static void TearDownTestCase() function to tear them down. That's it! Google Test automatically calls SetUpTestCase() before running the first test in the FooTest test case (i.e. before creating the first FooTest object), and calls TearDownTestCase() after running the last test in it (i.e. after deleting the last FooTest object). In between, the tests can use the shared resources. Remember that the test order is undefined, so your code can't depend on a test preceding or following another. Also, the tests must either not modify the state of any shared resource, or, if they do modify the state, they must restore the state to its original value before passing control to the next test. Here's an example of per-test-case set-up and tear-down: class FooTest : public ::testing::Test { protected: // Per-test-case set-up. // Called before the first test in this test case. // Can be omitted if not needed. static void SetUpTestCase() { shared_resource_ = new ...; } // Per-test-case tear-down. // Called after the last test in this test case. // Can be omitted if not needed. 
static void TearDownTestCase() { delete shared_resource_; shared_resource_ = NULL; } // You can define per-test set-up and tear-down logic as usual. virtual void SetUp() { ... } virtual void TearDown() { ... } // Some expensive resource shared by all tests. static T* shared_resource_; }; T* FooTest::shared_resource_ = NULL; TEST_F(FooTest, Test1) { ... you can refer to shared_resource here ... } TEST_F(FooTest, Test2) { ... you can refer to shared_resource here ... } Availability: Linux, Windows, Mac. Global Set-Up and Tear-Down¶ Just as you can do set-up and tear-down at the test level and the test case level, you can also do it at the test program level. Here's how. First, you subclass the ::testing::Environment class to define a test environment, which knows how to set up and tear down: class Environment { public: virtual ~Environment() {} // Override this to define how to set up the environment. virtual void SetUp() {} // Override this to define how to tear down the environment. virtual void TearDown() {} }; Then, you register an instance of your environment class with Google Test by calling the ::testing::AddGlobalTestEnvironment() function: Environment* AddGlobalTestEnvironment(Environment* env); Now, when RUN_ALL_TESTS() is called, it first calls the SetUp() method of the environment object, then runs the tests if there were no fatal failures, and finally calls TearDown() of the environment object. It's OK to register multiple environment objects. In this case, their SetUp() will be called in the order they are registered, and their TearDown() will be called in the reverse order. Note that Google Test takes ownership of the registered environment objects. Therefore do not delete them by yourself. You should call AddGlobalTestEnvironment() before RUN_ALL_TESTS() is called, probably in main(). If you use gtest_main, you need to call this before main() starts for it to take effect. 
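The registration-order/reverse-order contract just described can be sketched with a small registry in plain C++. EnvironmentRegistry and LoggingEnvironment are invented names for this illustration; Google Test's own registry is internal to the framework.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Same shape as the Environment interface shown above.
class Environment {
 public:
  virtual ~Environment() {}
  virtual void SetUp() {}
  virtual void TearDown() {}
};

// Illustrative registry: SetUp() runs in registration order,
// TearDown() in reverse order, and registered objects are owned.
class EnvironmentRegistry {
 public:
  Environment* Add(Environment* env) {  // takes ownership, like AddGlobalTestEnvironment()
    envs_.emplace_back(env);
    return env;
  }
  void SetUpAll() {
    for (auto& e : envs_) e->SetUp();
  }
  void TearDownAll() {
    for (auto it = envs_.rbegin(); it != envs_.rend(); ++it) (*it)->TearDown();
  }
 private:
  std::vector<std::unique_ptr<Environment>> envs_;
};

// A toy environment that logs its calls, to make the ordering visible.
class LoggingEnvironment : public Environment {
 public:
  LoggingEnvironment(std::string name, std::vector<std::string>* log)
      : name_(std::move(name)), log_(log) {}
  void SetUp() override { log_->push_back(name_ + ":SetUp"); }
  void TearDown() override { log_->push_back(name_ + ":TearDown"); }
 private:
  std::string name_;
  std::vector<std::string>* log_;
};
```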
One way to do this is to define a global variable like this: ::testing::Environment* const foo_env = ::testing::AddGlobalTestEnvironment(new FooEnvironment); However, we strongly recommend that you write your own main() and call AddGlobalTestEnvironment() there, as relying on initialization of global variables makes the code harder to read and may cause problems when you register multiple environments from different translation units and the environments have dependencies among them (remember that the compiler doesn't guarantee the order in which global variables from different translation units are initialized). Availability: Linux, Windows, Mac. Value Parameterized Tests¶ Value-parameterized tests allow you to test your code with different parameters without writing multiple copies of the same test. Suppose you write a test for your code and then realize that your code is affected by the presence of a Boolean command line flag. TEST(MyCodeTest, TestFoo) { // Code to test foo(). } Usually people factor their test code into a function with a Boolean parameter in such situations. The function sets the flag, then executes the testing code. void TestFooHelper(bool flag_value) { flag = flag_value; // Code to test foo(). } TEST(MyCodeTest, TestFoo) { TestFooHelper(false); TestFooHelper(true); } But this setup has serious drawbacks. First, when a test assertion fails in your tests, it becomes unclear what value of the parameter caused it to fail. You can stream a clarifying message into your EXPECT/ASSERT statements, but you'll have to do it for all of them. Second, you have to add one such helper function per test. What if you have ten tests? Twenty? A hundred? Value-parameterized tests will let you write your test only once and then easily instantiate and run it with an arbitrary number of parameter values. Here are some other situations when value-parameterized tests come in handy: - You want to test different implementations of an OO interface. 
- You want to test your code over various inputs (a.k.a. data-driven testing). This feature is easy to abuse, so please exercise your good sense when doing it! How to Write Value-Parameterized Tests¶ To write value-parameterized tests, first you should define a fixture class. It must be derived from ::testing::TestWithParam<T>, where T is the type of your parameter values. TestWithParam<T> is itself derived from ::testing::Test. T can be any copyable type. If it's a raw pointer, you are responsible for managing the lifespan of the pointed values. class FooTest : public ::testing::TestWithParam<const char*> { // You can implement all the usual fixture class members here. // To access the test parameter, call GetParam() from class // TestWithParam<T>. }; Then, use the TEST_P macro to define as many test patterns using this fixture as you want. The _P suffix is for "parameterized" or "pattern", whichever you prefer to think. TEST_P(FooTest, DoesBlah) { // Inside a test, access the test parameter with the GetParam() method // of the TestWithParam<T> class: EXPECT_TRUE(foo.Blah(GetParam())); ... } TEST_P(FooTest, HasBlahBlah) { ... } Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test case with any set of parameters you want. Google Test defines a number of functions for generating test parameters. They return what we call (surprise!) parameter generators. Here is a summary of them, which are all in the testing namespace: Range(begin, end [, step]) yields the values {begin, begin+step, begin+2*step, ...} (end is not included and step defaults to 1); Values(v1, v2, ..., vN) yields the listed values; ValuesIn(container) and ValuesIn(begin, end) yield values from a C-style array, an STL-style container, or an iterator range; Bool() yields {false, true}; and Combine(g1, g2, ..., gN) yields the Cartesian product of the values produced by the given generators. For more details, see the comments at the definitions of these functions in the source code. The following statement will instantiate tests from the FooTest test case each with parameter values "meeny", "miny", and "moe". INSTANTIATE_TEST_CASE_P(InstantiationName, FooTest, ::testing::Values("meeny", "miny", "moe")); To distinguish different instances of the pattern (yes, you can instantiate it more than once), the first argument to INSTANTIATE_TEST_CASE_P is a prefix that will be added to the actual test case name. 
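The Prefix/Fixture.Test/Index naming scheme that INSTANTIATE_TEST_CASE_P applies can be sketched as a small helper. InstantiatedNames is an invented function for illustration, not part of the Google Test API.

```cpp
#include <string>
#include <vector>

// Builds the full test names an instantiation would produce:
// "Prefix/Fixture.Test/0", "Prefix/Fixture.Test/1", ...
std::vector<std::string> InstantiatedNames(const std::string& prefix,
                                           const std::string& fixture,
                                           const std::string& test,
                                           std::size_t param_count) {
  std::vector<std::string> names;
  for (std::size_t i = 0; i < param_count; ++i) {
    names.push_back(prefix + "/" + fixture + "." + test + "/" +
                    std::to_string(i));
  }
  return names;
}
```

These are exactly the names you can later pass to the filter flag to run a single instantiation.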
Remember to pick unique prefixes for different instantiations. The tests from the instantiation above will have these names: InstantiationName/FooTest.DoesBlah/0 for "meeny" InstantiationName/FooTest.DoesBlah/1 for "miny" InstantiationName/FooTest.DoesBlah/2 for "moe" InstantiationName/FooTest.HasBlahBlah/0 for "meeny" InstantiationName/FooTest.HasBlahBlah/1 for "miny" InstantiationName/FooTest.HasBlahBlah/2 for "moe" You can use these names in --gtest_filter. This statement will instantiate all tests from FooTest again, each with parameter values "cat" and "dog": const char* pets[] = {"cat", "dog"}; INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest, ::testing::ValuesIn(pets)); The tests from the instantiation above will have these names: AnotherInstantiationName/FooTest.DoesBlah/0 for "cat" AnotherInstantiationName/FooTest.DoesBlah/1 for "dog" AnotherInstantiationName/FooTest.HasBlahBlah/0 for "cat" AnotherInstantiationName/FooTest.HasBlahBlah/1 for "dog" Please note that INSTANTIATE_TEST_CASE_P will instantiate all tests in the given test case, whether their definitions come before or after the INSTANTIATE_TEST_CASE_P statement. You can see these files for more examples. Availability: Linux, Windows (requires MSVC 8.0 or above), Mac; since version 1.2.0. Creating Value-Parameterized Abstract Tests¶ In the above, we define and instantiate FooTest in the same source file. Sometimes you may want to define value-parameterized tests in a library and let other people instantiate them later. This pattern is known as abstract tests. As an example of its application, when you are designing an interface you can write a standard suite of abstract tests (perhaps using a factory function as the test parameter) that all implementations of the interface are expected to pass. When someone implements the interface, he can instantiate your suite to get all the interface-conformance tests for free. 
To define abstract tests, you should organize your code like this: - Put the definition of the parameterized test fixture class (e.g. FooTest) in a header file, say foo_param_test.h. Think of this as declaring your abstract tests. - Put the TEST_P definitions in foo_param_test.cc, which includes foo_param_test.h. Think of this as implementing your abstract tests. Once they are defined, you can instantiate them by including foo_param_test.h, invoking INSTANTIATE_TEST_CASE_P(), and linking with foo_param_test.cc. You can instantiate the same abstract test case multiple times, possibly in different source files. Typed Tests¶ Suppose you have multiple implementations of the same interface and want to make sure that all of them satisfy some common requirements. Or, you may have defined several types that are supposed to conform to the same "concept" and you want to verify it. In both cases, you want the same test logic repeated for different types. While you can write one TEST or TEST_F for each type you want to test (and you may even factor the test logic into a function template that you invoke from the TEST), it's tedious and doesn't scale: if you want m tests over n types, you'll end up writing m*n TESTs. Typed tests allow you to repeat the same test logic over a list of types. You only need to write the test logic once, although you must know the type list when writing typed tests. Here's how you do it: First, define a fixture class template. It should be parameterized by a type. Remember to derive it from ::testing::Test: template <typename T> class FooTest : public ::testing::Test { public: ... typedef std::list<T> List; static T shared_; T value_; }; Next, associate a list of types with the test case, which will be repeated for each type in the list: typedef ::testing::Types<char, int, unsigned int> MyTypes; TYPED_TEST_CASE(FooTest, MyTypes); The typedef is necessary for the TYPED_TEST_CASE macro to parse correctly. 
Otherwise the compiler will think that each comma in the type list introduces a new macro argument. Then, use TYPED_TEST() instead of TEST_F() to define a typed test for this test case. You can repeat this as many times as you want: TYPED_TEST(FooTest, DoesBlah) { // Inside a test, refer to the special name TypeParam to get the type // parameter. Since we are inside a derived class template, C++ requires // us to visit the members of FooTest via 'this'. TypeParam n = this->value_; // To visit static members of the fixture, add the 'TestFixture::' // prefix. n += TestFixture::shared_; // To refer to typedefs in the fixture, add the 'typename TestFixture::' // prefix. The 'typename' is required to satisfy the compiler. typename TestFixture::List values; values.push_back(n); ... } TYPED_TEST(FooTest, HasPropertyA) { ... } You can see samples/sample6_unittest.cc for a complete example. Availability: Linux, Windows (requires MSVC 8.0 or above), Mac; since version 1.1.0. Type-Parameterized Tests¶ Type-parameterized tests are like typed tests, except that they don't require you to know the list of types ahead of time. Instead, you can define the test logic first and instantiate it with different type lists later. You can even instantiate it more than once in the same program. If you are designing an interface or concept, you can define a suite of type-parameterized tests to verify properties that any valid implementation of the interface/concept should have. Then, the author of each implementation can just instantiate the test suite with his type to verify that it conforms to the requirements, without having to write similar tests repeatedly. Here's an example: First, define a fixture class template, as we did with typed tests: template <typename T> class FooTest : public ::testing::Test { ... }; Next, declare that you will define a type-parameterized test case: TYPED_TEST_CASE_P(FooTest); The _P suffix is for "parameterized" or "pattern", whichever you prefer to think. 
Then, use TYPED_TEST_P() to define a type-parameterized test. You can repeat this as many times as you want: TYPED_TEST_P(FooTest, DoesBlah) { // Inside a test, refer to TypeParam to get the type parameter. TypeParam n = 0; ... } TYPED_TEST_P(FooTest, HasPropertyA) { ... } Now the tricky part: you need to register all test patterns using the REGISTER_TYPED_TEST_CASE_P macro before you can instantiate them. The first argument of the macro is the test case name; the rest are the names of the tests in this test case: REGISTER_TYPED_TEST_CASE_P(FooTest, DoesBlah, HasPropertyA); Finally, you are free to instantiate the pattern with the types you want. If you put the above code in a header file, you can #include it in multiple C++ source files and instantiate it multiple times. typedef ::testing::Types<char, int, unsigned int> MyTypes; INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes); To distinguish different instances of the pattern, the first argument to the INSTANTIATE_TYPED_TEST_CASE_P macro is a prefix that will be added to the actual test case name. Remember to pick unique prefixes for different instances. In the special case where the type list contains only one type, you can write that type directly without ::testing::Types<...>, like this: INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int); You can see samples/sample6_unittest.cc for a complete example. Availability: Linux, Windows (requires MSVC 8.0 or above), Mac; since version 1.1.0. Testing Private Code¶ If you change your software's internal implementation, your tests should not break as long as the change is not observable by users. Therefore, per the black-box testing principle, most of the time you should test your code through its public interfaces. If you still find yourself needing to test internal implementation code, consider if there's a better design that wouldn't require you to do so. If you absolutely have to test non-public interface code though, you can. 
There are two cases to consider: - Static functions (not the same as static member functions!) or functions in unnamed namespaces, and - Private or protected class members Static Functions¶ Both static functions and definitions/declarations in an unnamed namespace are only visible within the same translation unit. To test them, you can #include the entire .cc file being tested in your *_test.cc file. (#including .cc files is not a good way to reuse code - you should not do this in production code!) However, a better approach is to move the private code into the foo::internal namespace, where foo is the namespace your project normally uses, and put the private declarations in a *-internal.h file. Your production .cc files and your tests are allowed to include this internal header, but your clients are not. This way, you can fully test your internal implementation without leaking it to your clients. Private Class Members¶ Private class members are only accessible from within the class or by friends. To access a class' private members, you can declare your test fixture as a friend to the class and define accessors in your fixture. Tests using the fixture can then access the private members of your production class via the accessors in the fixture. Note that even though your fixture is a friend to your production class, your tests are not automatically friends to it, as they are technically defined in sub-classes of the fixture. Another way to test private members is to refactor them into an implementation class, which is then declared in a *-internal.h file. Your clients aren't allowed to include this header but your tests can. This is called the Pimpl (Private Implementation) idiom. Or, you can declare an individual test as a friend of your class by adding this line in the class body: FRIEND_TEST(TestCaseName, TestName); For example, // foo.h #include <gtest/gtest_prod.h> // Defines FRIEND_TEST. class Foo { ... 
private: FRIEND_TEST(FooTest, BarReturnsZeroOnNull); int Bar(void* x); }; // foo_test.cc ... TEST(FooTest, BarReturnsZeroOnNull) { Foo foo; EXPECT_EQ(0, foo.Bar(NULL)); // Uses Foo's private member Bar(). } Pay special attention when your class is defined in a namespace, as you should define your test fixtures and tests in the same namespace if you want them to be friends of your class. For example, if the code to be tested looks like: namespace my_namespace { class Foo { friend class FooTest; FRIEND_TEST(FooTest, Bar); FRIEND_TEST(FooTest, Baz); ... definition of the class Foo ... }; } // namespace my_namespace Your test code should be something like: namespace my_namespace { class FooTest : public ::testing::Test { protected: ... }; TEST_F(FooTest, Bar) { ... } TEST_F(FooTest, Baz) { ... } } // namespace my_namespace Catching Failures¶ If you are building a testing utility on top of Google Test, you'll want to test your utility. What framework would you use to test it? Google Test, of course. The challenge is to verify that your testing utility reports failures correctly. In frameworks that report a failure by throwing an exception, you could catch the exception and assert on it. But Google Test doesn't use exceptions, so how do we test that a piece of code generates an expected failure? <gtest/gtest-spi.h> contains some constructs to do this. After #including this header, you can use EXPECT_FATAL_FAILURE(statement, substring); to assert that statement generates a fatal (e.g. ASSERT_*) failure whose message contains the given substring, or use EXPECT_NONFATAL_FAILURE(statement, substring); if you are expecting a non-fatal (e.g. EXPECT_*) failure. For technical reasons, there are some caveats: - You cannot stream a failure message to either macro. - statement in EXPECT_FATAL_FAILURE() cannot reference local non-static variables or non-static members of this object. - statement in EXPECT_FATAL_FAILURE() cannot return a value. Note: Google Test is designed with threads in mind. 
Once the synchronization primitives in <gtest/internal/gtest-port.h> have been implemented, Google Test will become thread-safe, meaning that you can then use assertions in multiple threads concurrently. Before that, however, Google Test only supports single-threaded usage. Once thread-safe, EXPECT_FATAL_FAILURE() and EXPECT_NONFATAL_FAILURE() will capture failures in the current thread only. If statement creates new threads, failures in these threads will be ignored. If you want to capture failures from all threads instead, you should use the following macros: EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring); and EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring); Getting the Current Test's Name¶ Sometimes a function may need to know the name of the currently running test. For example, you may be using the SetUp() method of your test fixture to set the golden file name based on which test is running. The ::testing::TestInfo class has this information: namespace testing { class TestInfo { public: // Returns the test case name and the test name, respectively. // // Do NOT delete or free the return value - it's managed by the // TestInfo class. const char* test_case_name() const; const char* name() const; }; } // namespace testing To obtain a TestInfo object for the currently running test, call current_test_info() on the UnitTest singleton object: // Gets information about the currently running test. // Do NOT delete the returned object - it's managed by the UnitTest class. const ::testing::TestInfo* const test_info = ::testing::UnitTest::GetInstance()->current_test_info(); printf("We are in test %s of test case %s.\n", test_info->name(), test_info->test_case_name()); current_test_info() returns a null pointer if no test is running. In particular, you cannot find the test case name in SetUpTestCase(), TearDownTestCase() (where you know the test case name implicitly), or functions called from them. Availability: Linux, Windows, Mac. 
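The null-when-no-test-is-running contract of current_test_info() can be sketched with a tiny singleton in plain C++. This is a mock-up for illustration; the TestInfo fields and SetCurrent setter here are invented, and in real Google Test the current test is tracked internally by the framework.

```cpp
#include <string>

// Illustrative stand-in for ::testing::TestInfo.
struct TestInfo {
  std::string test_case_name;
  std::string name;
};

// Illustrative singleton mirroring the current_test_info() contract:
// a pointer to the running test's info, or null when no test is running.
class UnitTest {
 public:
  static UnitTest* GetInstance() {
    static UnitTest instance;
    return &instance;
  }
  const TestInfo* current_test_info() const { return current_; }
  // In the sketch the "framework" sets this before/after each test.
  void SetCurrent(const TestInfo* info) { current_ = info; }

 private:
  const TestInfo* current_ = nullptr;
};
```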
Extending Google Test by Handling Test Events¶ Google Test provides an event listener API to let you receive notifications about the progress of a test program and test failures. The events you can listen to include the start and end of the test program, a test case, or a test method, among others. You may use this API to augment or replace the standard console output, replace the XML output, or provide a completely different form of output, such as a GUI or a database. You can also use test events as checkpoints to implement a resource leak checker, for example. Availability: Linux, Windows, Mac; since v1.4.0. Defining Event Listeners¶ To define an event listener, you subclass either testing::TestEventListener or testing::EmptyTestEventListener. The former is an (abstract) interface, where each pure virtual method can be overridden to handle a test event (for example, when a test starts, the OnTestStart() method will be called). The latter provides an empty implementation of all methods in the interface, such that a subclass only needs to override the methods it cares about. When an event is fired, its context is passed to the handler function as an argument. The following argument types are used: * UnitTest reflects the state of the entire test program, * TestCase has information about a test case, which can contain one or more tests, * TestInfo contains the state of a test, and * TestPartResult represents the result of a test assertion. An event handler function can examine the argument it receives to find out interesting information about the event and the test program's state. Here's an example: class MinimalistPrinter : public ::testing::EmptyTestEventListener { // Called before a test starts. virtual void OnTestStart(const ::testing::TestInfo& test_info) { printf("*** Test %s.%s starting.\n", test_info.test_case_name(), test_info.name()); } // Called after a failed assertion or a SUCCESS(). 
virtual void OnTestPartResult( const ::testing::TestPartResult& test_part_result) { printf("%s in %s:%d\n%s\n", test_part_result.failed() ? "*** Failure" : "Success", test_part_result.file_name(), test_part_result.line_number(), test_part_result.summary()); } // Called after a test ends. virtual void OnTestEnd(const ::testing::TestInfo& test_info) { printf("*** Test %s.%s ending.\n", test_info.test_case_name(), test_info.name()); } }; Using Event Listeners¶ To use the event listener you have defined, add an instance of it to the Google Test event listener list (represented by class TestEventListeners - note the "s" at the end of the name) in your main() function, before calling RUN_ALL_TESTS(): int main(int argc, char** argv) { ::testing::InitGoogleTest(&argc, argv); // Gets hold of the event listener list. ::testing::TestEventListeners& listeners = ::testing::UnitTest::GetInstance()->listeners(); // Adds a listener to the end. Google Test takes the ownership. listeners.Append(new MinimalistPrinter); return RUN_ALL_TESTS(); } There's only one problem: the default test result printer is still in effect, so its output will mingle with the output from your minimalist printer. To suppress the default printer, just release it from the event listener list and delete it. You can do so by adding one line: ... delete listeners.Release(listeners.default_result_printer()); listeners.Append(new MinimalistPrinter); return RUN_ALL_TESTS(); Now, sit back and enjoy a completely different output from your tests. For more details, you can read this sample. You may append more than one listener to the list. When an On*Start() or OnTestPartResult() event is fired, the listeners will receive it in the order they appear in the list (since new listeners are added to the end of the list, the default text printer and the default XML generator will receive the event first). An On*End() event will be received by the listeners in the reverse order. 
This allows output by listeners added later to be framed by output from listeners added earlier. Generating Failures in Listeners¶ You may use failure-raising macros (EXPECT_*(), ASSERT_*(), FAIL(), etc.) when processing an event. There are some restrictions: - You cannot generate any failure in OnTestPartResult() (otherwise it will cause OnTestPartResult() to be called recursively). - A listener that handles OnTestPartResult() is not allowed to generate any failure. When you add listeners to the listener list, you should put listeners that handle OnTestPartResult() before listeners that can generate failures. This ensures that failures generated by the latter are attributed to the right test by the former. We have a sample of a failure-raising listener here. Running Test Programs: Advanced Options¶ Google Test test programs are ordinary executables. Once built, you can run them directly and affect their behavior via the following environment variables and/or command line flags. For the flags to work, your programs must call ::testing::InitGoogleTest() before calling RUN_ALL_TESTS(). To see a list of supported flags and their usage, please run your test program with the --help flag. You can also use -h, -?, or /? for short. This feature was added in version 1.3.0. If an option is specified both by an environment variable and by a flag, the latter takes precedence. Most of the options can also be set/read in code: to access the value of command line flag --gtest_foo, write ::testing::GTEST_FLAG(foo). A common pattern is to set the value of a flag before calling ::testing::InitGoogleTest() to change the default value of the flag: int main(int argc, char** argv) { // Disables elapsed time by default. ::testing::GTEST_FLAG(print_time) = false; // This allows the user to override the flag on the command line. ::testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } Selecting Tests¶ This section shows various options for choosing which tests to run. 
Listing Test Names¶ Sometimes it is necessary to list the available tests in a program before running them so that a filter may be applied if needed. Including the flag --gtest_list_tests overrides all other flags and lists tests in the following format: TestCase1. TestName1 TestName2 TestCase2. TestName None of the tests listed are actually run if the flag is provided. There is no corresponding environment variable for this flag. Availability: Linux, Windows, Mac. Running a Subset of the Tests¶ By default, a Google Test program runs all tests the user has defined. Sometimes, you want to run only a subset of the tests (e.g. for debugging or quickly verifying a change). If you set the GTEST_FILTER environment variable or the --gtest_filter flag to a filter string, Google Test will only run the tests whose full names (in the form of TestCaseName.TestName) match the filter. The format of a filter is a ':'-separated list of wildcard patterns (called the positive patterns) optionally followed by a '-' and another ':'-separated pattern list (called the negative patterns). A test matches the filter if and only if it matches any of the positive patterns but does not match any of the negative patterns. A pattern may contain '*' (matches any string) or '?' (matches any single character). For convenience, the filter '*-NegativePatterns' can also be written as '-NegativePatterns'. For example: ./foo_test Has no flag, and thus runs all its tests. ./foo_test --gtest_filter=* Also runs everything, due to the single match-everything * value. ./foo_test --gtest_filter=FooTest.* Runs everything in test case FooTest. ./foo_test --gtest_filter=*Null*:*Constructor* Runs any test whose full name contains either "Null" or "Constructor". ./foo_test --gtest_filter=-*DeathTest.* Runs all non-death tests. ./foo_test --gtest_filter=FooTest.*-FooTest.Bar Runs everything in test case FooTest except FooTest.Bar. Availability: Linux, Windows, Mac. Temporarily Disabling Tests¶ If you have a broken test that you cannot fix right away, you can add the DISABLED_ prefix to its name. This will exclude it from execution. This is better than commenting out the code or using #if 0, as disabled tests are still compiled (and thus won't rot). 
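The filter grammar described in "Running a Subset of the Tests" - positive patterns, an optional '-' followed by negative patterns, with '*' and '?' wildcards - can be sketched in plain C++. FilterMatches and its helpers are invented names for this illustration, not Google Test's internal matcher.

```cpp
#include <string>
#include <vector>

// Splits a ':'-separated pattern list into individual patterns.
static std::vector<std::string> SplitPatterns(const std::string& list) {
  std::vector<std::string> out;
  std::string::size_type start = 0;
  for (;;) {
    std::string::size_type colon = list.find(':', start);
    if (colon == std::string::npos) {
      out.push_back(list.substr(start));
      return out;
    }
    out.push_back(list.substr(start, colon - start));
    start = colon + 1;
  }
}

// Wildcard match: '*' matches any string, '?' matches any single character.
static bool PatternMatches(const std::string& p, std::string::size_type pi,
                           const std::string& s, std::string::size_type si) {
  if (pi == p.size()) return si == s.size();
  if (p[pi] == '*') {
    for (std::string::size_type k = si; k <= s.size(); ++k)
      if (PatternMatches(p, pi + 1, s, k)) return true;
    return false;
  }
  if (si == s.size()) return false;
  if (p[pi] != '?' && p[pi] != s[si]) return false;
  return PatternMatches(p, pi + 1, s, si + 1);
}

static bool MatchesAny(const std::string& list, const std::string& name) {
  for (const std::string& pat : SplitPatterns(list))
    if (PatternMatches(pat, 0, name, 0)) return true;
  return false;
}

// A full name matches "Pos1:Pos2-Neg1:Neg2" iff it matches some positive
// pattern and no negative pattern. An empty positive part means "*",
// which gives the '-NegativePatterns' shorthand.
bool FilterMatches(const std::string& filter, const std::string& name) {
  std::string::size_type dash = filter.find('-');
  std::string pos = dash == std::string::npos ? filter : filter.substr(0, dash);
  std::string neg = dash == std::string::npos ? "" : filter.substr(dash + 1);
  if (pos.empty()) pos = "*";
  return MatchesAny(pos, name) && (neg.empty() || !MatchesAny(neg, name));
}
```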
If you need to disable all tests in a test case, you can either add DISABLED_ to the front of the name of each test, or alternatively add it to the front of the test case name. For example, the following tests won't be run by Google Test, even though they will still be compiled: // Tests that Foo does Abc. TEST(FooTest, DISABLED_DoesAbc) { ... } class DISABLED_BarTest : public ::testing::Test { ... }; // Tests that Bar does Xyz. TEST_F(DISABLED_BarTest, DoesXyz) { ... } Note: This feature should only be used for temporary pain-relief. You still have to fix the disabled tests at a later date. As a reminder, Google Test will print a banner warning you if a test program contains any disabled tests. Tip: You can easily count the number of disabled tests you have using grep. This number can be used as a metric for improving your test quality. Availability: Linux, Windows, Mac. Temporarily Enabling Disabled Tests¶ To include disabled tests in test execution, just invoke the test program with the --gtest_also_run_disabled_tests flag or set the GTEST_ALSO_RUN_DISABLED_TESTS environment variable to a value other than 0. You can combine this with the --gtest_filter flag to further select which disabled tests to run. Availability: Linux, Windows, Mac; since version 1.3.0. Repeating the Tests¶ Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it will fail only 1% of the time, making it rather hard to reproduce the bug under a debugger. This can be a major source of frustration. The --gtest_repeat flag allows you to repeat all (or selected) test methods in a program many times. Hopefully, a flaky test will eventually fail and give you a chance to debug. Here's how to use it: foo_test --gtest_repeat=1000 repeats foo_test 1000 times without stopping at failures; foo_test --gtest_repeat=-1 repeats forever (a negative count means repeating forever); foo_test --gtest_repeat=1000 --gtest_break_on_failure repeats 1000 times, stopping at the first failure, which is especially useful when running under a debugger; and foo_test --gtest_repeat=1000 --gtest_filter=FooBar repeats only the tests whose name matches the filter. If your test program contains global set-up/tear-down code registered using AddGlobalTestEnvironment(), it will be repeated in each iteration as well, as the flakiness may be in it. You can also specify the repeat count by setting the GTEST_REPEAT environment variable. 
Availability: Linux, Windows, Mac.

Shuffling the Tests¶

You can specify the --gtest_shuffle flag (or set the GTEST_SHUFFLE environment variable to 1) to run the tests in a program in a random order. This helps to reveal bad dependencies between tests.

By default, Google Test uses a random seed calculated from the current time. Therefore you'll get a different order every time. The console output includes the random seed value, such that you can reproduce an order-related test failure later. To specify the random seed explicitly, use the --gtest_random_seed=SEED flag (or set the GTEST_RANDOM_SEED environment variable), where SEED is an integer between 0 and 99999. The seed value 0 is special: it tells Google Test to do the default behavior of calculating the seed from the current time.

If you combine this with --gtest_repeat=N, Google Test will pick a different random seed and re-shuffle the tests in each iteration.

Availability: Linux, Windows, Mac; since v1.4.0.

Controlling Test Output¶

This section teaches how to tweak the way test results are reported.

Colored Terminal Output¶

Google Test can use colors in its terminal output to make it easier to spot the separation between tests, and whether tests passed. You can set the GTEST_COLOR environment variable or set the --gtest_color command line flag to yes, no, or auto (the default) to enable colors, disable colors, or let Google Test decide. When the value is auto, Google Test will use colors if and only if the output goes to a terminal and (on non-Windows platforms) the TERM environment variable is set to xterm or xterm-color.

Availability: Linux, Windows, Mac.

Suppressing the Elapsed Time¶

By default, Google Test prints the time it takes to run each test. To suppress that, run the test program with the --gtest_print_time=0 command line flag. Setting the GTEST_PRINT_TIME environment variable to 0 has the same effect.

Availability: Linux, Windows, Mac.
(In Google Test 1.3.0 and lower, the default behavior is that the elapsed time is not printed.)

Generating an XML Report¶

Google Test can emit a detailed XML report to a file in addition to its normal textual output. The report contains the duration of each test, and thus can help you identify slow tests.

To generate the XML report, set the GTEST_OUTPUT environment variable or the --gtest_output flag to the string "xml:path_to_output_file", which will create the file at the given location. You can also just use the string "xml", in which case the output can be found in the test_detail.xml file in the current directory. If you specify a directory (for example, "xml:output/directory/" on Linux or "xml:output\directory\" on Windows), Google Test will create the XML file in that directory, named after the test executable (e.g. foo_test.xml for test program foo_test or foo_test.exe). If the file already exists (perhaps left over from a previous run), Google Test will pick a different name (e.g. foo_test_1.xml) to avoid overwriting it.

The report uses the format described here. It is based on the junitreport Ant task and can be parsed by popular continuous build systems like Hudson. Since that format was originally intended for Java, a little interpretation is required to make it apply to Google Test tests, as shown here:

<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>

- The root <testsuites> element corresponds to the entire test program.
- <testsuite> elements correspond to Google Test test cases.
- <testcase> elements correspond to Google Test test functions.

For instance, the following program

TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ...
}

could generate this report:

<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="35" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1">
    <testcase name="Addition" status="run" time="7" classname="">
      <failure message="Value of: add(1, 1) Actual: 3 Expected: 2" type=""/>
      <failure message="Value of: add(1, -1) Actual: 1 Expected: 0" type=""/>
    </testcase>
    <testcase name="Subtraction" status="run" time="5" classname="">
    </testcase>
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="5">
    <testcase name="NonContradiction" status="run" time="5" classname="">
    </testcase>
  </testsuite>
</testsuites>

Things to note:

- The tests attribute of a <testsuites> or <testsuite> element tells how many test functions the Google Test program or test case contains, while the failures attribute tells how many of them failed.
- The time attribute expresses the duration of the test, test case, or entire test program in milliseconds.
- Each <failure> element corresponds to a single failed Google Test assertion.
- Some JUnit concepts don't apply to Google Test, yet we have to conform to the DTD. Therefore you'll see some dummy elements and attributes in the report. You can safely ignore these parts.

Availability: Linux, Windows, Mac.

Controlling How Failures Are Reported¶

Turning Assertion Failures into Break-Points¶

When running test programs under a debugger, it's very convenient if the debugger can catch an assertion failure and automatically drop into interactive mode. Google Test's break-on-failure mode supports this behavior. To enable it, set the GTEST_BREAK_ON_FAILURE environment variable to a value other than 0. Alternatively, you can use the --gtest_break_on_failure command line flag.

Availability: Linux, Windows, Mac.

Suppressing Pop-ups Caused by Exceptions¶

On Windows, Google Test may be used with exceptions enabled.
Even when exceptions are disabled, an application can still throw structured exceptions (SEHs). If a test throws an exception, by default Google Test doesn't try to catch it. Instead, you'll see a pop-up dialog, at which point you can attach the process to a debugger and easily find out what went wrong. However, if you don't want to see the pop-ups (for example, if you run the tests in a batch job), set the GTEST_CATCH_EXCEPTIONS environment variable to a non-0 value, or use the --gtest_catch_exceptions flag. Google Test then catches all test-thrown exceptions and logs them as failures.

Availability: Windows. GTEST_CATCH_EXCEPTIONS and --gtest_catch_exceptions have no effect on Google Test's behavior on Linux or Mac, even if exceptions are enabled. It is possible to add support for catching exceptions on these platforms, but it is not implemented yet.

Letting Another Testing Framework Drive¶

If you work on a project that has already been using another testing framework and is not ready to completely switch to Google Test yet, you can get much of Google Test's benefit by using its assertions in your existing tests. Just change your main() function to look like:

#include <gtest/gtest.h>

int main(int argc, char** argv) {
  ::testing::GTEST_FLAG(throw_on_failure) = true;
  // Important: Google Test must be initialized.
  ::testing::InitGoogleTest(&argc, argv);
  ... whatever your existing testing framework requires ...
}

With that, you can use Google Test assertions in addition to the native assertions your testing framework provides, for example:

void TestFooDoesBar() {
  Foo foo;
  EXPECT_LE(foo.Bar(1), 100);    // A Google Test assertion.
  CPPUNIT_ASSERT(foo.IsEmpty()); // A native assertion.
}

If a Google Test assertion fails, it will print an error message and throw an exception, which will be treated as a failure by your host testing framework.
If you compile your code with exceptions disabled, a failed Google Test assertion will instead exit your program with a non-zero code, which will also signal a test failure to your test runner.

If you don't write ::testing::GTEST_FLAG(throw_on_failure) = true; in your main(), you can alternatively enable this feature by specifying the --gtest_throw_on_failure flag on the command-line or setting the GTEST_THROW_ON_FAILURE environment variable to a non-zero value.

Availability: Linux, Windows, Mac; since v1.3.0.

Distributing Test Functions to Multiple Machines¶

If you have more than one machine you can use to run a test program, you might want to run the test functions in parallel and get the result faster. We call this technique sharding, where each machine is called a shard.

Google Test is compatible with test sharding. To take advantage of this feature, your test runner (not part of Google Test) needs to do the following:

- Allocate a number of machines (shards) to run the tests.
- On each shard, set the GTEST_TOTAL_SHARDS environment variable to the total number of shards. It must be the same for all shards.
- On each shard, set the GTEST_SHARD_INDEX environment variable to the index of the shard. Different shards must be assigned different indices, which must be in the range [0, GTEST_TOTAL_SHARDS - 1].
- Run the same test program on all shards. When Google Test sees the above two environment variables, it will select a subset of the test functions to run. Across all shards, each test function in the program will be run exactly once.
- Wait for all shards to finish, then collect and report the results.

Your project may have tests that were written without Google Test and thus don't understand this protocol. In order for your test runner to figure out which test supports sharding, it can set the environment variable GTEST_SHARD_STATUS_FILE to a non-existent file path.
If a test program supports sharding, it will create this file to acknowledge the fact (the actual contents of the file are not important at this time, although we may stick some useful information in it in the future); otherwise it will not create it.

Here's an example to make it clear. Suppose you have a test program foo_test that contains the following 5 test functions:

TEST(A, V)
TEST(A, W)
TEST(B, X)
TEST(B, Y)
TEST(B, Z)

Suppose you have 3 machines. You could set GTEST_TOTAL_SHARDS to 3 on all machines, and set GTEST_SHARD_INDEX to 0, 1, and 2 on the machines respectively. Then you would run the same foo_test on each machine.

Google Test reserves the right to change how the work is distributed across the shards, but here's one possible scenario:

- Machine #0 runs A.V and B.X.
- Machine #1 runs A.W and B.Y.
- Machine #2 runs B.Z.

Availability: Linux, Windows, Mac; since version 1.3.0.

Fusing Google Test Source Files¶

Google Test's implementation consists of ~30 files (excluding its own tests). Sometimes you may want them to be packaged up in two files (a .h and a .cc) instead, such that you can easily copy them to a new machine and start hacking there. For this we provide an experimental Python script fuse_gtest_files.py in the scripts/ directory (since release 1.3.0). Assuming you have Python 2.4 or above installed on your machine, just go to that directory and run

python fuse_gtest_files.py OUTPUT_DIR

and you should see an OUTPUT_DIR directory being created with files gtest/gtest.h and gtest/gtest-all.cc in it. These files contain everything you need to use Google Test. Just copy them to anywhere you want and you are ready to write tests. You can use the scripts/test/Makefile file as an example on how to compile your tests against them.

Where to Go from Here¶

Congratulations! You've now learned more advanced Google Test tools and are ready to tackle more complex testing tasks. If you want to dive even deeper, you can read the FAQ.
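The sharding protocol above lends itself to a small simulation. The Python sketch below (all names are mine, not part of Google Test) uses a simple round-robin split; since Google Test reserves the right to change its real distribution scheme, only the invariant matters: across all shards, each test runs exactly once.

```python
import os

def select_shard_tests(tests, total_shards, shard_index):
    """Pick this shard's share of the tests (round-robin by index).

    This illustrates the protocol's invariant (each test on exactly one
    shard), not Google Test's actual distribution algorithm."""
    return [t for i, t in enumerate(tests) if i % total_shards == shard_index]

def shard_from_env(tests, environ=os.environ):
    """Read the two variables the protocol defines, defaulting to a
    single unsharded run when they are absent."""
    total = int(environ.get("GTEST_TOTAL_SHARDS", "1"))
    index = int(environ.get("GTEST_SHARD_INDEX", "0"))
    return select_shard_tests(tests, total, index)
```

With the five tests from the example and 3 shards, collecting the output of all three shard indices yields every test exactly once, which is the property a test runner can rely on.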
Drupal Headless Architecture with JS Framework [Live Demo]

March 22, 2018

In a rush? Skip to tutorial steps or GitHub repo & live demo.

A few years back, I couldn't fathom having any real interest in old-school CMSs like Drupal or WordPress. And I wasn't alone: Security exploits! Bloated! Expensive! PHP nightmare! All echoes from developers' past experiences with these platforms.

Yet, the times they are a-changing, as Bob Dylan sang. Not so long ago, we experimented with the WordPress REST API, which allows a decoupling of WP's backend and frontend. Conclusion? Developers can finally focus on using these CMSs solely for what they are good at: content management & administrative processes. Other (much better) services can handle the rest!

Today I'm going to try the same thing, this time going headless with Drupal. I'll use Drupal as a backend for a small e-commerce app powered by the React-like framework Inferno.js. My steps:

- Setting up Drupal
- Enabling Drupal headless mode
- Creating views for our products
- Consuming data in Inferno.js
- Generating the cart with Snipcart

First, a bit of context.

Looking for alternative tools to build your headless stack? Visit our developer's guide to headless e-commerce!

The age of headless Drupal

Drupal is one of the biggest free open-source CMSs in the world. Just like WordPress, it's mainly known for its monolithic CMS capabilities. This is slowly shifting, though. Since the release of Drupal 8 back in 2015, the REST API has been at the CMS's core. This feature opened up a whole new world of possibilities for Drupal developers. Why? Because for the first time, it made it easy to take a decoupled approach to site building.

To better understand this step's importance, here are some problems Drupal users had with its traditional architecture, problems a headless Drupal solves:

→ There is no such thing as a "smooth" upgrade from Drupal 6 to 7 to 8.
Each time a new major version is released, adopting that new version requires a time-consuming rebuild. This pain is relieved with Drupal used only as the backend.

→ In tightly-coupled Drupal development, members of the frontend team need to become Drupal developers in order to properly style a website. With a decoupled approach, frontend developers can use whatever framework they know or need.

→ With Drupal, customizing a site is... tedious. Even if you know the CMS by heart (see point above). With a Drupal headless architecture, the CMS no longer holds you back. All the flexibility of frontend frameworks is in your hands.

→ In the traditional approach, your content can become trapped inside Drupal's sprawling web of database tables. When decoupling, all your content becomes available & portable through the API.

For a real-life example of headless Drupal in action, visit this case study with an Angular frontend.

The Manifesto

Core Drupal users even wrote this manifesto, envisioning the headless, powerful future of Drupal:

We want Drupal to be the preferred backend content management system for designers and frontend developers. We believe that Drupal's main strengths lie in the power and flexibility of its backend; its primary value to users is its ability to architect and display complex content models. We believe that client-side frontend frameworks are the future of the web. It is critically important for Drupal to be services oriented first, not HTML oriented first, or risk becoming irrelevant.

We're big proponents of a similar JAMstack approach here at Snipcart, so I'm happy to try this one!

A word on Inferno.js

Any frontend framework can be strapped to our Drupal backend. In my case, I've wanted to try for a while the fast, React-like library that is Inferno. Why Inferno? Well, I'd already had fun with React in our post about the WP REST API, and this gave me a chance to try something new.
Some features that drew me towards it are:

- React compatibility: it's as near as you'll get to a smaller version of React.
- Real fast: supposed to be one of the fastest frontend frameworks.
- Dynamic rendering on both client and server.

It's also community-driven and looks great overall, so let's jump right into it! To prevent you from burning everything down with your flaming hot Inferno app, we'll craft a small fire protection gear shop.

Drupal as headless CMS tutorial: e-commerce app

Pre-requisites:

Disclaimer: This tutorial uses nodes to store our products. As I painfully learned, anything in Drupal can be done in a thousand different ways. The approach I took is, I think, the easiest one for a beginner. But it doesn't leverage modules from the Drupal community. If you were to use integrations later on, such as shipping providers, you couldn't do it with this setup without writing more code. For a more enterprise integration, check out this existing Snipcart module. It uses Drupal Commerce entities, which will support any of its integrations.

1. Creating a new content type in Drupal

First thing on the menu: create a new content type to declare a product's specific attributes. Hop in your Drupal dashboard, go to the Structure panel, and hit Content types. There you will be able to add a new content type by clicking on the Add content type button. Here's how I defined mine:

Now that we've added the necessary fields, we need to alias their names so Snipcart's JSON crawler can do the mapping properly. Under the Format section, click on the Settings link. There, you can declare an alias for any of your fields. It's this new alias that will be served in the responses instead of the field_{fieldname} format. Put it this way:

With this done, you can already create products. Click on the Content section, then Add content. Here's how it should look:

2. Enabling Drupal headless mode

At the moment, our products can only be shown in traditional Drupal views.
We want to consume these products without a view, only the product data. To do so, we'll use the native RESTful Web Services module. Go to the Extend tab of the admin panel and scroll all the way down. Check RESTful Web Services and Serialization, and hit the Install button.

Now let's add json to our supported formats. To do so, hit the /admin/config/development/configuration/single/import route of your dashboard. For the Configuration type, choose REST resource configuration and paste the following config:

langcode: en
status: true
dependencies:
  module:
    - basic_auth
    - hal
    - node
id: entity.node
plugin_id: 'entity:node'
granularity: resource
configuration:
  methods:
    - GET
    - POST
    - PATCH
    - DELETE
  formats:
    - json
  authentication:
    - basic_auth

Hit Import and confirm. Note that this is the default configuration, except for the added json format. We can already consume data via the HTTP API at this point. I strongly recommend testing it via an HTTP client such as Postman. Knowing everything works A1 can save you lots of time at this point. To test it, simply send a GET request to domain/node/1?_format=json. Don't forget that you need to have defined products for this to work.

Serving all your products

Right now we can only consume specific nodes; there's no easy way for us to fetch all products. To do so, let's create a new view. Hit the Structure panel, then click on the Views section followed by the Add view button. Define the configuration as follows:

Once you have saved the route, you can test it again with a GET at the corresponding route. If everything is set correctly, you should see all your defined products. You'll also see that there's a lot more than only the product info: there's also all of Drupal's metadata about the entities, which we don't want to send to our frontend. To select what you want to serve, click on the Show attribute of the view and select Fields. Now click the Add button to the right of the Fields section.
You'll be able to select only what you want to be public. Here's how it looks now:

3. Enabling CORS requests

At the moment, we can consume data but only over an HTTP client; CORS (Cross-Origin Resource Sharing) requests would be blocked as they're not enabled by default. There are a couple of ways to fix this, but the easiest is to jump right into the config files and override them using an IDE. Fire up your editor in your Drupal folder and open /core/core.services.yml. Override the cors.config section as follows:

cors.config:
  enabled: true
  allowedHeaders: ['x-csrf-token', 'authorization', 'content-type', 'accept', 'origin', 'x-requested-with', 'access-control-allow-origin', 'x-allowed-header', '*']
  allowedMethods: ['GET']
  allowedOrigins: ['*']
  exposedHeaders: true
  maxAge: false
  supportsCredentials: true

If you ran tests before getting here, I recommend clearing Drupal's cache to reload the config files. Hit the Configuration panel and then the Performance button to do so. You'll see a Clear all caches button: hit it.

4. Consuming data in Inferno.js

We're now ready to fetch and show data in our store! I recommend a quick read of the Inferno guidelines if you haven't played with similar frameworks before. For the demo, I decided to use their basic scaffolding to create something fast. I used the following commands:

npx create-inferno-app my-app
cd my-app
npm start

Open up your favorite editor in the project's folder and access the App.js file. The first thing to do is override the constructor to give a default value to our products array:

constructor() {
  super();
  this.state = {
    products: []
  };
}

Now let's define a componentDidMount function. This is a hook into Inferno's component lifecycle. It allows us to execute code once the component is mounted. That's where you want to run any async calls, such as fetching products.
Here's how:

componentDidMount() {
  fetch('http://{your_drupal_server}/products?_format=json')
    .then(x => x.json())
    .then(x => this.setState({ products: x }));
}

Once the component is mounted, it'll fetch the products from your Drupal instance and put the result in the products variable. We now only need to iterate over the products array in order to render them. Do so in the render function as such:

render() {
  let products = this.state.products;
  return (
    <div className="App">
      <header className="App-header">
        <h1>Welcome to our Inferno powered store</h1>
      </header>
      <div class="products">
        { products.map(x => (
          <div class="product-details">
            <img src={ x.image } />
            <div class="product-description">
              <p class="title">{ x.name }</p>
              <p>{ x.description }</p>
              <button class="snipcart-add-item"
                data-item-name={ x.name }
                data-item-id={ x.id }
                data-item-image={ x.image }
                data-item-description={ x.description }
                data-item-url=""
                data-item-price={ x.price }>
                Buy it for { x.price } $
              </button>
            </div>
          </div>)) }
      </div>
    </div>
  );
}

5. Setting up Snipcart

Quick note: Snipcart products are defined directly in the HTML, using simple product attribute markup. The last thing needed before actually testing the cart is Snipcart's required scripts. For this purpose, let's put these directly in the index.html file.

And there you have it! You can run npm run start in your project's folder and you'll be able to add products to the cart! I've also added a little bit of custom CSS in main.css to make my demo a little stylish. You could make yours look and feel the way you want!

Live demo & GitHub Repo

Closing thoughts

I have to be honest with you guys: this one gave me more trouble than I thought it would. Like I said earlier, Drupal offers thousands of different ways to do the same thing. It's probably better to carefully read the documentation first than to start coding blindly. The Drupal ecosystem is so huge that feeling natural while developing takes time.
Now that the wanderings are done, I would say following the post should take about an hour max with an up & running Drupal instance. On the bright side, once I found the proper way to go, it was actually quite easy to do and well documented. The community is big; you'll likely find a solution to any issue since someone has probably had it before.

For our demo, we could have developed the product entity to support more fields than only the required ones. Also, as said earlier, using a more enterprise module to leverage Drupal Commerce integrations could have been nice to play with. Maybe for another time!

Now I'd like to hear from you. Have you worked with Drupal as a headless CMS? WordPress maybe? How do you think they compare with out-of-the-box headless CMSs? Let's start a discussion in the comment section below! If you've enjoyed this post, please take a second to share it on Twitter.
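One last illustration, not from the original post: because the products view serves plain JSON, any client can consume it, not just the Inferno app. Here is a minimal Python sketch of parsing the view's output; the "name" and "price" keys are the field aliases assumed from step 1, so adjust them to match your own content type.

```python
import json

def summarize_products(raw_json):
    """Extract the fields a storefront needs from the /products view's
    JSON output.  The key names are the aliases set up in Drupal;
    change them if yours differ."""
    return [(p["name"], p["price"]) for p in json.loads(raw_json)]

# A hypothetical response body, shaped like the aliased view output.
sample = '[{"name": "Fire blanket", "price": "25", "description": "Keeps you safe"}]'
```

Feeding `sample` to `summarize_products` yields a list of (name, price) pairs, which is all a price-comparison script or a static-site generator would need from the endpoint.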
The discussion here might help with some ideas:

@JonB Not sure he wants to prohibit a second launch. As I understood (perhaps erroneously), he wants to restart his script from scratch, but the previous view is still there.

Yeah, this seems to work somehow, though I'm not quite sure why it works. Anyway, adding your code, I run my program by shortcut, leave it without closing it, and once again I click the shortcut; now the previous view is closed. But this behavior is not stable: sometimes the view is still running behind the new view. I'm trying to find what makes this difference in behavior. So far, I've added some print statements as follows:

class my_thread(threading.Thread):
    global main_view
    def __init__(self):
        self.timestamp = time.time()
        threading.Thread.__init__(self)
    def run(self):
        while True:
            time.sleep(1)
            print(self.timestamp, console.is_in_background())
            if console.is_in_background():
                print('closing view')
                main_view.close()
                break

Seems like my_thread closes the view only when Pythonista itself is re-opened. For example, when I run this program in Pythonista (not from the shortcut), close the view, and then run the program again, the output in the console shows like this:

1618751761.064212 False
1618751765.282424 False
1618751761.064212 False
1618751765.282424 False
1618751761.064212 False
1618751765.282424 False
;
;

(meaning two threads are still running) and when I go back to the home screen and reopen Pythonista, the output shows like this:

1618751761.064212 True
closing view
1618751765.282424 True
closing view

I'm not sure, but it may make sense. What do you think? Still, I have some mysterious behavior, so I'll keep working on it. Thank you for your help. I didn't think about using "console.is_in_background()".

@satsuki.kojima do you see a difference if you have or not a tab with your script open?

@satsuki.kojima my solution is correct, I think, if you leave Pythonista by opening another app, like you asked.
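For readers outside Pythonista: the pattern in this thread, a daemon thread that polls a condition and tears down a view when it flips, can be sketched generically. The class and parameter names below are mine, and the Pythonista-specific calls (console.is_in_background, main_view.close) are replaced by injected callables so the sketch runs anywhere.

```python
import threading
import time

class BackgroundWatcher(threading.Thread):
    """Generic version of the thread above: poll should_close() and call
    close_view() once it returns True.  should_close stands in for
    console.is_in_background, close_view for main_view.close."""

    def __init__(self, should_close, close_view, poll_interval=1.0):
        super().__init__(daemon=True)  # daemon: don't block interpreter exit
        self.should_close = should_close
        self.close_view = close_view
        self.poll_interval = poll_interval

    def run(self):
        while True:
            time.sleep(self.poll_interval)
            if self.should_close():
                self.close_view()
                break  # one-shot: stop polling after closing
```

Injecting the two callables also makes the behavior testable without a UI, which is exactly the kind of instrumentation the print statements in the thread were approximating.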
Created 24 March 2008

Requirements

Prerequisite knowledge

Basic familiarity with accepted quality assurance testing procedures is assumed. This includes being aware of Performance 101: you should not load a server beyond its capabilities. If you do, the results you get will be invalid and should be thrown away.

User level: Intermediate

Additional requirements

LiveCycle ES

Load test tools: you can use load test tools such as

- Borland SilkPerformer
- HP Mercury LoadRunner
- IBM Rational Performance Tester
- RadView WebLOAD

Note: This article discusses performance testing Adobe LiveCycle ES applications with IBM WebSphere Application Server. For information on older (version 7.2 and earlier) Adobe LiveCycle applications, refer to the article "Performance testing Adobe LiveCycle applications with IBM WebSphere Application Server and Microsoft Windows Server 2003".
However, the benefits they provide are truly remarkable when you consider the dire consequences of not testing an application for performance before enterprise deployment. This document provides examples based on SilkPerformer 2006 R2, which is Adobe's corporate standard. However, what this document covers can be easily applicable to other tools. This article is for developers and testers who are responsible for determining the performance of enterprise applications that use Adobe LiveCycle ES software. It is also suitable for system analysts, architects, and IT personnel who are trying to size hardware for application deployment. Although the details I provide in this article are applicable to different operating systems and performance testing tools, the screen shots I use cover the following: - IBM WebSphere 6.1 - Microsoft Windows Server 2003 Enterprise Edition SP2 - AIX 5.3 - Segue SilkPerformer 2006 R2 - Microsoft Office Excel 2007 How load test tools work Load test tools capture and play back protocol-level traffic between clients and servers. Therefore, they are generally immune from the widget/object recognition problems that typically plague function test tools. Test design Properly designed tests yield the maximum amount of usable information for the fewest number of tests. Performance tests typically run for a few hours. However, performance data collected from short-term tests tends to be highly variable and therefore less reliable. You can determine if your performance data is reliable by dividing the standard deviation by the mean and expressing the result as a percentage. Higher values are bad. Adobe's best practice is to conduct performance tests for at least one hour. Performance testing vocabulary The following is a list of terms that people in performance testing usually use: - Active users: The subset of total users who will be using the application at any given time. 
- Concurrent users: A subset of active users who are contacting the servers for services at any given time. This represents the number of users who have clicked a button and are currently waiting on a response from the server. It is a very small subset of the active user population—about 5% in many cases. - Virtual users: Users simulated by a load test tool. The behavior of real users can be simulated with thinktimes (defined below). - Peak hours: Hours during a typical workday that the application sees maximum usage. - Peak load: The transaction load that the application experiences during peak hours in the busiest period of the year. For someone testing an IRS application, peak load would tend to occur around April 15. - Peak concurrent users: Concurrent users hitting the servers during peak load. - Typical transaction: The single transaction that is most frequently executed during peak hours in the busiest period of the year. This transaction can be used to represent the overall usage of the application under test. - Elapsed time: The amount of time a user waits for service, usually expressed in seconds. It is also called the "response time." - Throughput: The rate at which typical transactions can be executed, usually expressed as transactions per hour. - Thinktime: The amount of time a virtual user is programmed to wait to simulate the time a real user would spend reading and filling out forms. This is usually programmed to vary randomly between three and 10 seconds. Load tests run without thinktime tend to produce unrealistic results. - PDF form complexity index: An index that represents the impact of processing a form using Adobe LiveCycle ES software. Do not use the number of pages in a form as an indicator of the load the server will experience rendering it. A better predictor is the number of interactive objects on the form, like text fields, radio buttons, drop-down list boxes, and so on. 
Recording scripts

Although many vendors claim that it is feasible to reuse function test scripts for load testing, Adobe's best practice is to avoid it. Most load test tools come with macro recorders that generate scripts based on user interaction with the client application. You can customize and run these scripts. We recommend that, in order to minimize the complexity of load testing scripts, you code test harnesses, which can then be driven by simple load testing scripts.

If you decide to implement test harnesses using servlets, having the following exception handling code in the servlets will make debugging easier (the catch block goes inside the servlet's request-handling method):

import java.io.PrintWriter;

HttpServletResponse resp;

catch (Exception e) {
    PrintWriter out = resp.getWriter();
    out.print("<h2>Test Harness</h2> <p>An Exception occurred. Details below:</p>");
    out.print("<font color=red>");
    e.printStackTrace(out);
    out.print("</font>");
}

Most load test tools log what the clients see during the test. The output of the previous code will appear in the client browser and get saved in the logs. This will let you get the debug stack trace of the error without digging through the server logs.
The following is a simple SilkPerformer Benchmark Description Language (BDL) script that requests LiveCycle Forms ES to render and reader-enable a PDF form to the client via a custom-written test harness:

//----------------------------------------------------------------------
// Recorded 05/02/2005 by SilkPerformer Recorder v7.0.0.2364
//----------------------------------------------------------------------
benchmark SilkPerformerRecorder

use "WebAPI.bdh"

dcluser
  user
    VUser
  transactions
    TInit : begin;
    TMain : 1;

var

dclrand
  fRandomThinkTime : RndUniF(2.0..4.0);

dcltrans
  transaction TInit
  begin
    WebModifyHttpHeader("Accept-Language", "en-us");
  end TInit;

  transaction TMain
  var
    strURL : string init "";
  begin
    WebPageUrl(strURL, "LiveCycle Test Harness"); // Load Options
    WebPageLink("collateral", "Choose Test Collateral"); // Link 3

    // -----------------------------
    // Choose Render & Reader-Enable
    // -----------------------------
    Thinktime(fRandomThinkTime);
    Print("Requesting interactive PDFForm...", OPT_DISPLAY_ALL, TEXT_BLUE);
    WebVerifyData("%PDF-1.6");
    WebVerifyData("%%EOF");
    MeasureStart("Render");
    WebPageSubmit("Submit", SUBMIT001, "srvltGetForm"); // Form 1
    MeasureStop("Render");
    Print("Form received.", OPT_DISPLAY_ALL, TEXT_BLUE);
  end TMain;

dclform
  SUBMIT001:
    "outputFormat" := "PDFForm",  // changed
    "formNames"    := "data.xdp", // added
    "dataFiles"    := "data.xml"; // added

The key things to take away from this code are the verification steps (the WebVerifyData calls) and the custom timers (the MeasureStart/MeasureStop calls). Every PDF document starts with a version header such as %PDF-1.6 and ends with a tag that says %%EOF. These tags can verify that the client application received the entire PDF document during the load test. If the last part of the PDF document did not make it through, the verification will fail and the load test tool will flag an error. This information is crucial. Load test scripts should always contain verification steps. Custom timers are very important because they let you put timers around key calls to the server.
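If you are writing your own harness client rather than relying on the tool's WebVerifyData, the same completeness check is easy in plain Java. This is an illustrative sketch only; the class and method names are mine, not part of any LiveCycle or SilkPerformer API:

```java
// Sketch of a client-side completeness check for a rendered PDF:
// a PDF begins with a "%PDF-" version header and ends with "%%EOF".
public class PdfCheck {
    static boolean looksLikeCompletePdf(byte[] body) {
        if (body == null || body.length < 8) return false;
        String head = new String(body, 0, 8);
        int tailLen = Math.min(32, body.length);
        String tail = new String(body, body.length - tailLen, tailLen);
        return head.startsWith("%PDF-") && tail.contains("%%EOF");
    }

    public static void main(String[] args) {
        byte[] complete  = "%PDF-1.6\n...content...\n%%EOF\n".getBytes();
        byte[] truncated = "%PDF-1.6\n...content...".getBytes();
        System.out.println(looksLikeCompletePdf(complete));  // true
        System.out.println(looksLikeCompletePdf(truncated)); // false
    }
}
```

A truncated response fails the tail check, which is exactly the condition the BDL verification steps are designed to flag.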
In this case, the most important call is the call to the servlet srvltGetForm, which is wrapped in a custom timer called "Render".

Test harness for API calls

Many LiveCycle use cases involve calls to LiveCycle services in a single request-response paradigm. The srvltRenderPDFForm.java test harness servlet included in the sample zip file at the top of this article calls Adobe LiveCycle Forms ES to render a PDF interactive form from an XDP form template after merging XML data with it.

Test harness for short-lived orchestrations

A process "orchestration" is an automated workflow process designed using LiveCycle Workbench that invokes multiple LiveCycle services in sequence. Short-lived process orchestrations are synchronous, which means that code invoking the orchestration will block until the orchestration finishes executing. You can use the srvltStartOrchestration.java test harness servlet included in the sample zip file to invoke an orchestration. To use this servlet unchanged, your process orchestration would need to have a single output variable of datatype 'document' named "outdoc".

Long-lived orchestrations

Long-lived orchestrations involve user tasks. These are more difficult to test for performance because every user task has to be scripted separately. If your long-lived orchestration can be redesigned as a short-lived orchestration for test purposes, doing so is definitely recommended.

Collecting performance data

Most operating systems provide a large number of performance counters that you can use to determine how well an application is performing under test.

Windows Task Manager

Windows Task Manager on the servers can provide a lot of information and insight into the performance characteristics of the application during a load test. To record data like this, you can use tools like Windows Performance Monitor. As you can see in Figure 1, the JVM in which WebSphere application server is running currently consumes about 366 MB of memory and is running 82 threads.
Figure 2 shows that three process instances of the Adobe module XMLForm.exe are currently running. If the PoolMax property of the Adobe LiveCycle ES XMLForm module is set to 0 (unlimited pool size), this Task Manager number will indicate the number of concurrent requests that are coming in for the given load.

Tivoli Performance Viewer

To get inside the JVM, you would need to use the free Tivoli Performance Viewer that ships with the WebSphere application server Administration Console. You will need to install the Adobe SVG Viewer 3; the interface is shown in Figure 3. In WebSphere 5.1, this was a separately installed application.

Windows Performance Monitor

Most load test tools have performance monitoring modules that let you collect performance counter data values published by Windows Performance Monitor. Try to collect data points every 5 or 10 seconds. At the very least, track the following performance counters for every test (explanations courtesy of Microsoft):

- Memory – Available MBytes: This is the amount of physical memory, in megabytes, immediately available for allocation to a process or for system use.
- Memory – Committed Bytes: This is the amount of committed virtual memory, in bytes. Committed memory is the physical memory which has space reserved on the disk paging file(s).
- Network Interface – NIC card instance – Bytes Total/sec: This is the rate at which bytes are sent and received over each network adapter, including framing characters.
- Network Interface – NIC card instance – Packets/sec: This is the rate at which packets are sent and received on the network interface.
- Paging File – page file instance – % Usage: This is the percentage of the page file instance in use.
- Physical Disk – disk – % Disk Time: This is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests.
- Physical Disk – disk – Disk Bytes/sec: This is the rate at which bytes are transferred to or from the disk during write or read operations.
- Processor – CPU instance – % Processor Time: This is the percentage of elapsed time that the processor spends executing a non-idle thread. It is calculated by measuring the time the idle thread is active and subtracting that value from 100%.

Also track the following counters for the java.exe process, which represents the J2EE application server instance:

- Process – java – Handle Count: This is the total number of handles currently open by this process. This number is equal to the sum of the handles currently open by each thread in this process.
- Process – java – Private Bytes: This is the current size, in bytes, of memory that this process has allocated but cannot be shared with other processes.
- Process – java – Thread Count: This is the number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.
- Process – java – Virtual Bytes: This is the current size, in bytes, of the virtual address space that the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite, and using too much of it can limit the process's ability to load libraries.
- Process – java – Working Set: This is the current size, in bytes, of RAM used by the process.

rstat

For UNIX operating systems such as AIX, rstat can be used to collect performance data. By default, the rstat daemon is not configured to start automatically on most systems.
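Outside of a load test tool, one way to capture essentially this counter set is Windows' built-in typeperf utility, fed a counter file like the sketch below. Treat it as a template: the java process instance name and exact counter paths can vary by Windows version and locale.

```
\Memory\Available MBytes
\Memory\Committed Bytes
\Network Interface(*)\Bytes Total/sec
\Network Interface(*)\Packets/sec
\Paging File(*)\% Usage
\PhysicalDisk(*)\% Disk Time
\PhysicalDisk(*)\Disk Bytes/sec
\Processor(_Total)\% Processor Time
\Process(java)\Handle Count
\Process(java)\Private Bytes
\Process(java)\Thread Count
\Process(java)\Virtual Bytes
\Process(java)\Working Set
```

Saved as counters.txt, this can be sampled every 10 seconds and written to a CSV file with: typeperf -cf counters.txt -si 10 -o perf.csv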
To configure this, as root:

1. Edit /etc/inetd.conf and uncomment or add an entry for rstatd; for example:
   rstatd sunrpc_udp udp wait root /usr/sbin/rpc.rstatd rstatd 100001 1-3
2. Edit /etc/services and uncomment or add an entry for rstatd; for example:
   rstatd 100001/udp
3. Refresh services: refresh -s inetd
4. Start rstatd: /usr/sbin/rpc.rstatd

When enabled, metrics such as those shown in Figure 4 can be collected.

WebSphere Performance Servlet

WebSphere comes prepackaged with an installable application called Performance Servlet, typically found at \WebSphere\AppServer\installableApps\PerfServletApp.ear. This servlet lets a performance monitoring tool monitor the Java Virtual Machine of the system under test. AIX rstat, Windows Performance Monitor, and Task Manager cannot report on details within the JVM like the number of sessions, JDBC pool size, and so on. The WebSphere Performance Servlet is an application that IBM packages with WebSphere. It uses the WebSphere Performance Monitoring Infrastructure (PMI) framework to return performance statistics as an XML file to the caller. The caller can be any application that can parse this XML and make sense of it. Segue's SilkPerformer Performance Explorer Server Monitor works this way. Newer versions of SilkPerformer provide an additional option for WebSphere 6.0 and 6.1 by way of the JMX MBean Server, but this requires WebSphere libraries to be installed on the load controller, which many people prefer to avoid. After installing the servlet, make sure that all servers in the cluster are restarted. Once the Performance Servlet application starts successfully, regenerate the HTTP plug-in and redeploy it to the web servers.
Test to make sure that the servlet works by pointing your browser to a URL like this:<deploymentmanagerboxname>&port=<port>&connector=<connector>&username=<username>&password=<password>

For example, this would work if there is a web server front end to the LiveCycle cluster: or this, if directly connecting to one of the JVM instances:

In SilkPerformer Performance Monitor, choose Add Data Source > Select from Predefined Data Sources > Application Server > IBM WebSphere Application Server > IBM WebSphere 5.0 (PerfServlet). Enter the data as shown in Figure 5. AP-PS6 is the web server in this case.

Be aware that the more nodes there are in the cluster, the more data there will be in the XML returned by the Performance Servlet. SilkPerformer sometimes displays the error message shown in Figure 6. Ignore the error message and keep trying until it works. When it does, you will see a window similar to the one in Figure 7.

Remember that every request to the Performance Servlet uses up server resources. The very act of observing what is going on affects what you are observing. Note: People refer to this phenomenon as the Heisenberg uncertainty principle after a principle formulated by the German physicist Werner Heisenberg (1901–1976) in a 1927 paper in the field of quantum mechanics. It is a good idea to restrict the frequency with which data is collected using this servlet so that you can minimize the impact on the servers.

Testing

Before testing, make sure that the system clocks of all your servers and the test tool controller are synchronized. The synchronization will let you correlate error logs across multiple servers. Before each test, reboot all the servers so that each test starts off with the same baseline. In addition, delete all logs before the start of each test so that entries from previous tests do not cause confusion later.
You can categorize tests by their goals, and design tests to determine the maximum possible throughput.

Throughput

By running a series of relatively short step tests, you can chart a profile of your application that will tell you the following things:

- Highest possible transactional throughput while keeping elapsed times within acceptable limits
- Number of servers needed to satisfy throughput requirements

The chart in Figure 8 is the result of seven one-hour step tests with 2, 4, 6, 8, 10, 12, and 14 virtual users on a single-node WebSphere cluster. It shows that the system saturates at a transaction level of about 889 transactions per hour with a mean elapsed time of 12.9 seconds. If you add more load to the system, it processes more transactions but the elapsed time starts rising.

System sizing

If your required hourly throughput is 1,600 transactions per hour, the chart tells you that you need at least an additional node in your cluster.

Longevity

The only way to determine the long-term behavior of an application in production use is to run it for a long time under typical load. This method is just about the only way to find memory leaks and other deployment issues that typically make the IT person hate the application. Although a test should ideally last for one week, people typically have only 48 hours for the test during the weekend. If this is your case, run the test under peak load rather than typical load. By its very nature, peak load occurs only during peak periods that are usually of short duration, so testing longevity at peak load is not completely realistic.

Debugging

Before each test, stop all application servers and delete existing logs. After finishing the test, check all logs, including web server logs and application server logs. You can use additional tools such as JVM profilers to further debug problems. However, we strongly recommend that you do not run performance tests on a JVM while it is being profiled.
Popular JVM profilers include Borland Optimizeit and Quest JProbe.

Availability calculations

Availability is an index of the stability and reliability of an application. Availability is expressed as a percentage:

Availability = MTTF / (MTTF + MTTR)

where MTTF is mean time to failure and MTTR is mean time to recover. Failure typically means that the application stops responding and has to be restarted. Recovery typically means an application restart or a server reboot. Therefore, MTTF is the amount of time an application remains available to users, usually expressed in minutes. MTTR is the amount of time required to make the application available to users after it becomes unavailable, also expressed in minutes. Simplistically, you can determine MTTF by running the application under typical load for weeks or until it fails. If the application runs for four weeks and the recovery time is only three minutes, you already have "four 9" availability:

4 weeks uptime (MTTF) = 4 weeks × 7 days × 24 hours × 60 minutes = 40,320 minutes

where MTTR = 3 minutes. This means:

Availability = [40,320 / (40,320 + 3)] × 100 = 99.9925%

Note: Best practice calls for this type of test to be repeated at least three times, preferably five times.

Performance considerations

Semantics and academic discussions aside, you get the best performance out of your application by running it on the best-performing hardware. That includes gigabit Ethernet network backbones, high-rotational-speed disk arrays in high-performance RAID configurations, high-clock-speed multicore CPUs with a high-clock-speed front-side bus, and faster RAM. Make sure that you exclude antivirus software from scanning high-I/O folders on the servers. High-I/O folders include those containing WebSphere, IIS, and IBM HTTP server logs. In addition, minimize logging by setting the logging threshold to ERROR. Redirect application server logs to a separate physical disk. Avoid writing temporary files to disk.
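The arithmetic is simple enough to script. A quick plain-Java sketch (nothing LiveCycle-specific) that reproduces the four-week example:

```java
import java.util.Locale;

// Availability = MTTF / (MTTF + MTTR), expressed as a percentage.
public class Availability {
    static double availabilityPercent(double mttfMinutes, double mttrMinutes) {
        return mttfMinutes / (mttfMinutes + mttrMinutes) * 100.0;
    }

    public static void main(String[] args) {
        double mttf = 4 * 7 * 24 * 60;  // 4 weeks of uptime = 40,320 minutes
        double mttr = 3;                // 3 minutes to restart/recover
        System.out.printf(Locale.US, "%.4f%%%n", availabilityPercent(mttf, mttr));
    }
}
```

Printed to four decimal places this rounds to 99.9926%, which agrees with the 99.9925% figure above up to the last digit's rounding.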
If your application has to do temporary file I/O, consider setting up a RAM disk in memory.

Where to go from here

Designing valid tests and conducting performance testing on hardware that reflects production hardware will help you avoid nasty surprises when the application is deployed to production. Please refer to the following resources to learn more about this topic:

- LiveCycle Developer Center: Performance topic page
- LiveCycle Product Blog
- Java Performance Community (java.net)
- Java Performance Tuning site (Jack Shirazi)
- Barcia, Roland, Bill Hines, Tom Alcott, and Keys Botzum. IBM WebSphere Deployment and Advanced Configuration. IBM Press/Prentice Hall, 2004.
- Fortier, Paul, and Howard Michel. Computer Systems Performance Evaluation and Prediction. Digital Press/Elsevier Science, 2003.
- Neat, Adam G. Maximizing Performance and Scalability with IBM WebSphere. Apress, 2004.
- Shirazi, Jack. Java Performance Tuning, 2nd Edition. O'Reilly & Associates, 2003.
mpi4py – High-Performance Distributed Python

Python, I think it's safe to say, is the most prevalent language for machine learning and deep learning and has become a popular language for technical computation, as well. Part of the reason for its increasing popularity is the availability of common high-performance computing (HPC) tools and libraries. One of these libraries is mpi4py. If you are familiar with the prevalent naming schemes in Python, you can likely figure out that this is a Message Passing Interface (MPI) library for Python.

MPI is the key library and protocol for writing parallel applications that expand beyond a single system. At first, it was primarily a tool for HPC in the early 1990s, with version 1.0 released in June 1994. In the early days of MPI, it was used almost exclusively with C/C++ and Fortran. Over time, versions of MPI or MPI bindings were released for other languages, such as Java, C#, Matlab, OCaml, R, and, of course, Python. These bindings were created because MPI became the standard for exchanging data between processes on shared or distributed architectures in the HPC world.

With the rise of machine learning, as well as Python in general, developing standard computational tools for Python seemed logical. MPI for Python (mpi4py) was developed for Python on top of the C++ bindings in the MPI-2 standard. The 1.0 release was on March 20, 2008, and the current release as of this writing is 3.0.3 (July 27, 2020).

Mpi4py is designed to be fairly Pythonic, so if you know Python and understand the basic concepts of MPI, then mpi4py shouldn't be a problem. Python data structures can be used to create more complex data structures. Because it's an object-oriented language, you can create classes and objects with these data structures. Libraries such as NumPy can create new data types (e.g., N-dimensional arrays).
To pass these complex data structures and objects between MPI ranks, Python serializes the data structures with the pickle standard library module, supporting both ASCII and binary formats. The marshal module also can be used to serialize built-in Python objects into a binary format that is specific to Python. Mpi4py takes advantage of Python features to maximize the performance of serializing and de-serializing objects as part of data transmission. These efforts have been taken to the point that the mpi4py documentation claims it "… enables the implementation of many algorithms involving multidimensional numeric arrays (e.g., image processing, fast Fourier transforms, finite difference schemes on structured Cartesian grids) directly in Python, with negligible overhead, and almost as fast as compiled Fortran, C, or C++ code."

Simple mpi4py Examples

For the example programs, I used the mpi4py install for Anaconda and built it with MPICH2. Running mpi4py Python codes follows the MPICH2 standards. For example, a command might be:

$ mpirun -np 4 -f ./hosts python3 ./script.py

where script.py is the Python code that uses mpi4py. The first example is just a simple "hello world" (Listing 1) from Jörg Bornschein's GitHub page. The MPI.COMM_WORLD variable (line 3) is how the world communicator is defined in mpi4py to access a global "communicator," which is a default group of all processes (i.e., collective function). It is created when the mpi4py module is imported. The import also causes the MPI.Init() and MPI.Init_thread() functions to be called.

Listing 1: Hello World

from mpi4py import MPI

comm = MPI.COMM_WORLD

print("Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size))

comm.Barrier()   # wait for everybody to synchronize _here_

Contrast this with C or Fortran, where MPI_Init() is explicitly called and the world communicator, MPI_COMM_WORLD, is defined for you. Note the difference between MPI.COMM_WORLD for mpi4py and MPI_COMM_WORLD for C and Fortran.
Line 3 in the sample code sets the MPI.COMM_WORLD communicator to the variable comm. The code then uses the rank of the specific process, comm.rank, and the total number of processes, comm.size. The rank of a specific process is very similar to C, MPI_Comm_rank, and Fortran, MPI_COMM_RANK. Although not exactly the same, if you understand the basics of MPI, it won't be difficult to use mpi4py. The code ends by calling the Barrier() function. Although not strictly necessary, it's a nice way of ending the code to make sure all of the processes reach that point. To run the code, I used:

$ mpirun -np 4 -f ./hosts python3 ./hello-world.py

The output is shown in Listing 2.

Listing 2: Hello World Output

Hello! I'm rank 0 from 4 running in total...
Hello! I'm rank 1 from 4 running in total...
Hello! I'm rank 2 from 4 running in total...
Hello! I'm rank 3 from 4 running in total...

A second example is just a simple broadcast from Ahmed Azridev's GitHub page (Listing 3). (Note: I cosmetically modified the code to appeal to my sensibilities and to update it to Python 3.) The code first defines a dictionary, data, but only for the rank 0 process. All other processes define data with the keyword None.

Listing 3: Broadcast Example

from numpy import array
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = { 'key1' : [10, 10.1, 10+11j],
             'key2' : ('mpi4py', 'python'),
             'key3' : array([1, 2, 3]) }
else:
    data = None
# end if

data = comm.bcast(data, root=0)

if rank == 0:
    print("bcast finished")
# end if

print("data on rank %d is: " % comm.rank, data)

The data is broadcast with the bcast function, which is part of communicator comm, which happens to be MPI.COMM_WORLD. In Fortran and C, the function is usually named MPI_Bcast or MPI_BCAST. For the specific function, the content of variable data is broadcast to all ranks, from the rank 0 process (root=0). Again, all of this is very similar to C and Fortran, but not exactly the same.
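As noted earlier, generic Python objects such as the dictionary in Listing 3 are serialized with pickle before they go over the wire. Conceptually, the lowercase communicator methods (send, recv, bcast) do something like the following standard-library sketch; this illustrates the idea only and is not mpi4py's actual internals:

```python
import pickle

# The kind of generic Python object broadcast in Listing 3
# (the NumPy array is left out so this runs with the standard library alone).
data = {'key1': [10, 10.1, 10 + 11j], 'key2': ('mpi4py', 'python')}

# Sender side: serialize the object to a byte string...
blob = pickle.dumps(data)

# ...the bytes are what MPI actually transmits...

# Receiver side: reconstruct an equal object from the bytes.
received = pickle.loads(blob)
assert received == data
print(type(blob).__name__)  # bytes
```

This serialize/transmit/deserialize round trip is why mpi4py can move dictionaries, tuples, and other arbitrary Python objects between ranks without any user-visible packing code.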
Listing 4 shows the output from the code.

Listing 4: Broadcast Output

bcast finished
data on rank 0 is: {'key1': [10, 10.1, (10+11j)], 'key2': ('mpi4py', 'python'), 'key3': array([1, 2, 3])}
data on rank 1 is: {'key1': [10, 10.1, (10+11j)], 'key2': ('mpi4py', 'python'), 'key3': array([1, 2, 3])}
data on rank 2 is: {'key1': [10, 10.1, (10+11j)], 'key2': ('mpi4py', 'python'), 'key3': array([1, 2, 3])}
data on rank 3 is: {'key1': [10, 10.1, (10+11j)], 'key2': ('mpi4py', 'python'), 'key3': array([1, 2, 3])}

Notice that the data is defined on the rank 0 process, but each rank printed the same data. Perhaps more importantly, the data is a dictionary, not something that is defined by default in Fortran or C.

The third example is point-to-point code (Listing 5). Again, I modified the code to fit my sensibilities and to port to Python 3. I also changed the pprint() functions to print(). The code is pretty simple. The rank 0 process creates some data in the NumPy data array and sends the first part of that array to the rank 1 process and the second part of the array to the rank 2 process. To make even more sure the data goes to the correct process, it uses tags for the send() and recv() functions (i.e., tag=13 and tag=14) to distinguish the destinations (dest) and make sure the sender and receiver processes are matched correctly. It's not the same as MPI_Send and MPI_Recv, but it's fairly easy to figure out if you know some MPI. The output when running the code on four processes is shown in Listing 6. Notice that the rank 3 process didn't contribute or do anything.
Listing 5: Point-to-Point

import numpy
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if (rank == 0):
    data = numpy.arange(10)
    comm.send(data[:5], dest=1, tag=13)
    comm.send(data[5:], dest=2, tag=14)
    print("Rank %d data is: " % rank, data)
elif (rank == 1):
    data = comm.recv(source=0, tag=13)
    print("Rank %d Message Received, data is: " % rank, data)
elif (rank == 2):
    data = comm.recv(source=0, tag=14)
    print("Rank %d Message Received, data is: " % rank, data)
# end if

Listing 6: Point-to-Point Output

Rank 0 data is: [0 1 2 3 4 5 6 7 8 9]
Rank 1 Message Received, data is: [0 1 2 3 4]
Rank 2 Message Received, data is: [5 6 7 8 9]

More Complex Examples

From these three simple examples, you've learned that mpi4py works much like MPI code written in Fortran or C, with a Python twist, and that mpi4py can work with Python data types by serializing and de-serializing them. The next step is to try some more complicated examples.

The first example (Listing 7) is simple parallel trapezoid integration code derived from a presentation by Pawel Pomorski. The code is fairly simple. If you remember how to do numerical integration by the trapezoidal approach, this code should make sense.

Listing 7: Parallel Trapezoid Method

from mpi4py import MPI
import math

def f(x):
    return x*x
# end def

def trapezoidal(a, b, n, h):
    s = 0.0
    s += h * f(a)
    for i in range(1, n):
        s += 2.0 * h * f(a + i*h)
    # end for
    s += h * f(b)
    return (s/2.)
# end def

# Main section
comm = MPI.COMM_WORLD
my_rank = comm.Get_rank()
p = comm.Get_size()

if (my_rank == 0):
    print("Number of ranks = ", p)
# end if
print('my_rank = ', my_rank)

# Integral parameters
a = 0.0        # Left endpoint
b = 2.0        # Right endpoint
n = 1024       # Number of trapezoids
dest = 0       # Destination for messages (rank 0 process)
total = -1.0   # Initialize to negative number

h = (b-a)/n          # h = Trapezoid Base Length - the same for all processes
local_n = int(n/p)   # Number of trapezoids for each process

# Length of each process' interval of integration = local_n*h
local_a = a + my_rank*local_n*h   # local a (specific to each process)
local_b = local_a + local_n*h     # local b (specific to each process)

# Each rank performs its own integration
integral = trapezoidal(local_a, local_b, local_n, h)

# Add up the integrals calculated by each process
if (my_rank == 0):
    total = integral
    for source in range(1, p):
        integral = comm.recv(source=source)
        print("PE ", my_rank, "<-", source, ",", integral, "\n")
        total = total + integral
    # end for
else:
    print("PE ", my_rank, "->", dest, ",", integral, "\n")
    comm.send(integral, dest=0)
# end if

# Print the result
if (my_rank == 0):
    print("**With n=", n, ", trapezoids,")
    print("** Final integral from", a, "to", b, "=", total, "\n")
# end if

MPI.Finalize()

To begin, the code breaks the integration interval between the end points a and b into n trapezoids. The trapezoids are then divided evenly across the p processes, and each process performs its own integration over its piece of the interval. When finished, each process sends its result to the rank 0 process, which adds the contribution of each process to get the final answer. The command to run the code is straightforward:

$ mpirun -n 4 -f ./hosts python3 trap-mpi4py.py

Listing 8 shows a sample run that integrates x^2 over the interval from 0.0 to 2.0. The output contains information about what each process is doing, showing when a process sends data to the rank 0 process (e.g., PE 1 -> 0).
The rank 0 process shows which process sent the data (e.g., PE 0 <- 1).

Listing 8: Integration Example

Number of ranks =  4
my_rank =  0
PE  0 <- 1 , 0.29166698455810547
PE  0 <- 2 , 0.7916669845581055
PE  0 <- 3 , 1.5416669845581055
**With n= 1024 , trapezoids,
** Final integral from 0.0 to 2.0 = 2.666667938232422
my_rank =  1
PE  1 -> 0 , 0.29166698455810547
my_rank =  3
PE  3 -> 0 , 1.5416669845581055
my_rank =  2
PE  2 -> 0 , 0.7916669845581055

The next example is a simple matrix-matrix multiply that breaks the matrices into a grid of blocks. It then handles communication between the blocks in a classic east, west, north, south layout. I won't put the code here because it is a little long. I only changed the pprint() function calls to print() to get it to work. The program uses four processes, and the output is just the timings of the code run (Listing 9).

Listing 9: Matrix-Matrix Product Output

Creating a 2 x 2 processor grid...
==============================================================================
Computed (serial) 3000 x 3000 x 3000 in 1.68 seconds
 ... expecting parallel computation to take 3.36 seconds
Computed (parallel) 6000 x 6000 x 6000 in 3.69 seconds

The final example I tried, just for fun, is a parallel implementation of Conway's Game of Life that I got from Kevin Moore's GitHub site. Again, I won't show the lengthy code here. Because it's old Python 2.x style, I had to fix the print functions. If you've never played Game of Life, give it a try. There is really nothing to do: just watch the screen. The grid is initialized randomly. Because the output is animated, I can't show anything here, but give it a try and be sure to take a look at the code.

Summary

Mpi4py is a very powerful Python tool that uses wrappers around standard MPI implementations such as MPICH2 or OpenMPI. If you have used MPI before, either in Fortran or C, then using it in Python is not difficult.
It holds true to Python's object-oriented nature but still uses the familiar MPI terminology. In this article, I showed a few simple programs, from Hello World, to broadcast, to point-to-point examples, which all illustrate that coding MPI is not difficult in Python. The final few "real" examples were more than just instructional: The simple trapezoidal integration example shows how easy it is to parallelize code; the matrix-matrix multiplication example breaks matrices into a grid of blocks (sometimes called tiles) and is a bit more complicated, because you need to know how to do parallel matrix multiplication; a parallel version of Conway's Game of Life is a diverting visual example, with a bit more difficult code. You probably need to know something about this algorithm to better understand the parallel code, but it's still fun to run, and you can always change the initialization from random to something more deliberate.

The Author

Jeff Layton has been in the HPC business for almost 25 years (starting when he was four years old). He can be found lounging around at a nearby Frys enjoying the coffee and waiting for sales.
Never mind. I solved my issue. All I had to do was make frame a static public object.

My issue is that in my code, I create a new JFrame object called frame. In another method I want to close that frame. Any suggestions? // Non-relevant code private static class oddbutton...

all i needed to was restart the IDE. Thanks

I've written code to draw a rectangle. I'm pretty sure i did everything right. This is my code:

package hmm;
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

I'm trying to display an oval but the oval doesn't show. This is my code:

package guitut;
import javax.swing.*;
import java.awt.*;
public class GUItut {

what are the errors? forgive me for being a noob at java programming

When i try running my program i get this error:

java.lang.NoSuchMethodError: main
Exception in thread "main"
Java Result: 1

This is my code:

package button_game;
import javax.swing.*;
...

im very sorry

Im sorry again. I messed up and posted "while (randomInt != randomInt)" it should be "while (number != randomInt)"

"!=" means does not equal and that's why your program stops. a variable can never...

Ok i see. I'm sorry i forgot to add a bracket so the while loop should look like this

while (randomInt != randomInt) {
    if (number < randomInt) {
        c.print("The number you have guessed is too...

ur welcome : )

Try using a while loop instead of a for loop and change the position of your loop. Also in your if statements have the user enter input. It would look something like this

//your code
int...

Im not sure exactly what you meant by starting a new thread. Do you mean start it inside:

private static class A1 implements ActionListener{
    public void actionPerformed(ActionEvent e){
...

sorry this is the error:

method main in class button_game.Button_Game cannot be applied to given types;
required: java.lang.String[]
found: no arguments
reason: actual and formal argument...
I am trying to start a new thread to call main() but i can't understand the error. package button_game; import javax.swing.*; import java.awt.*; import java.awt.event.*; public class... never mind they un-align when i post the code By whole thing, I mean that I want the whole program to run again and again as long as you keep clicking Button A1 and it to exit when you click Button B1. Here is my revised code: I think i got... I want to loop the whole program so that if you click on button A, the whole thing restarts and the buttons shift positions. If you click button B, the program exits. Alright i think this is what... please someone respond! im trying to create a game where there are two buttons and you have to click the one with the even number to continue or else you lose. I need help on how to make it loop. I am a noob at java...
http://www.javaprogrammingforums.com/search.php?s=3bd1edb0d414090e652349c57c31db0f&searchid=203291
This document aims to detail, at a high level, everything that happens between machine power on and a component (v1 or v2) running on the system.

Outline:

Kernel

The process for loading the Fuchsia kernel (zircon) onto the system varies by platform. At a high level the kernel is stored in the ZBI, which holds everything needed to bootstrap Fuchsia. Once the kernel (zircon) is running on the system, its main objective is to start userspace, where processes can be run. Since zircon is a microkernel, it doesn't have to do a whole lot in this stage (especially compared to Linux). The executable for the first user process is baked into the kernel, which the kernel copies into a new process and starts. This program is called userboot.

Initial processes

Userboot is carefully constructed to be easy for the kernel to start, because otherwise the kernel would have to implement a lot of process bootstrap functionality (like a library loader service) that would never be used after the first process has been started. Userboot's job is really straightforward: to find and start the next process. The kernel gives userboot a handle to the ZBI, inside of which is the bootfs image. Userboot reads through the ZBI to find the bootfs image, decompresses it if necessary, and copies it to a fresh VMO. The bootfs image contains a read-only filesystem, which userboot then accesses to find an executable and its libraries. With these it starts the next process, which is bootsvc. Userboot may exit at this point, unless the userboot.shutdown option was given on the kernel command line.

Bootsvc, the next process, is dynamically linked by userboot. This makes it a better home than userboot for complex logic, as it can use libraries. Because of this, bootsvc runs various FIDL services for its children, the most notable of which is bootfs, a FIDL-based filesystem backed by the bootfs image that userboot decompressed.

Aside from hosting various services and the bootfs filesystem, bootsvc's main job is to start the next process, which is component manager. Just like bootsvc, component manager is stored in bootfs, which is still the only filesystem available at this point. Note that all of bootsvc's responsibilities are currently being moved to component manager, and it will eventually be deleted from the system. After this happens, userboot will launch component manager directly instead of bootsvc.

Both bootsvc and component manager mark their processes as critical, which means that if something goes wrong with either and it crashes, the job that they are in is killed. Both run in the root job, which has the special property that if it is killed, the kernel force restarts the system.

![A diagram showing that userboot comes first, then bootsvc, then component manager](/fuchsia-src/concepts/booting/images/userboot-bootsvc-cm.png)

Component manager

Component manager is the program that drives the v2 component framework. This framework controls how and when programs are run and which capabilities these programs can access from other programs. A program run by this framework is referred to as a component.

The components that component manager runs are organized into a tree. There is a root component, and it has two children named bootstrap and core. Bootstrap's children are the parts of the system needed to get the system functional enough to run more complex software like appmgr. The root, bootstrap, and core components are non-executable components, which means that they have no program running on the system that corresponds to them. They exist solely for organizational purposes.
![A diagram showing that fshost and driver manager are children of the bootstrap component, appmgr is a child of the core component, and core and bootstrap are children of the root component](/fuchsia-src/concepts/booting/images/v2-topology.png)

Initial v2 components

Background

There are two important components under bootstrap, fshost and driver manager. These two components work together to bring up a functional enough system for appmgr, which then starts up all the user-facing software.

driver manager

Driver manager is the component responsible for finding hardware, running drivers to service the hardware, and exposing a handle for devfs to Fuchsia. Drivers are run by driver hosts, which are child processes that driver manager starts. Each driver is a dynamic library stored in either bootfs or a package, and when a driver is to be run, it is dynamically linked into a driver host and then executed.

The drivers stored in packages aren't available when driver manager starts, as those are stored on disk and drivers must be running before block devices for filesystems can appear. Driver manager starts a thread that waits on a synchronous open to the /system-delayed handle, and once this open call succeeds it loads the drivers in the system package.

fshost

Fshost is a v2 component responsible for finding block devices, starting filesystem processes to service these block devices, and providing handles for these filesystems to the rest of Fuchsia. To accomplish this, fshost attempts to access the /dev handle in its namespace. This capability is provided by driver manager.

As fshost finds block devices, it reads headers from each device to detect the filesystem type. It will initially find the fvm block, which contains partitions for other block devices. Fshost will use devfs to cause driver manager to run the fvm driver for this block device, which causes other block devices to appear for fshost to inspect. It does a similar thing when it discovers a zxcrypt partition, as the disk will need to be decrypted to be usable. Once fvm and zxcrypt are loaded, fshost will find the appropriate block devices and start the minfs and blobfs filesystems, which are needed for a fully functioning system.

Currently fshost runs a memfs for its outgoing directory, and mounts handles into this memfs as filesystems come online. This means that attempting to access an fshost-provided directory too early will result in components seeing an empty directory. The requests are not pipelined in such a way that they are ignored until the given filesystem is available. To work around this, fshost provides two directories, /pkgfs-delayed and /system-delayed, which do ignore requests until they are able to properly service them. The /pkgfs-delayed handle is provided to component manager, which uses it to load components that are stored in packages.

appmgr

Appmgr runs the v1 component framework, which coexists with the v2 component framework. Appmgr is stored in a package, unlike fshost and driver manager which are stored in bootfs, so component manager uses the /pkgfs-delayed handle from fshost to load appmgr. Capabilities from the v2 framework can be forwarded to the sys realm in appmgr, and services managed by sysmgr can be exposed to the v2 framework. By this mechanism, the two frameworks can access capabilities from each other and cooperate to run the system.

Drivers, filesystems, and v1 components come online

Component manager generally starts components lazily on-demand, in response to something accessing a capability provided by the component. Components may also be marked as "eager", which causes the component to start at the same point its parent starts. In order to start the system running, appmgr is marked as an eager component.
Since appmgr is stored in a package, this causes component manager to attempt to load appmgr, and thus access the /pkgfs-delayed handle from fshost, causing fshost to be started. Once running, fshost attempts to access the /dev handle from driver manager, which causes driver manager to start. Together they bring up drivers and filesystems, eventually culminating in pkgfs running. At this point fshost starts responding to requests on the /pkgfs-delayed handle, and component manager finishes loading appmgr and starts it.

![A sequence diagram showing that appmgr loading begins due to it being an eager component, fshost starting due to the /pkgfs-delayed handle, driver manager starting due to the /dev handle, block devices appearing, filesystems appearing, and then appmgr successfully starting.](/fuchsia-src/concepts/booting/images/boot-sequence-diagram.png)

Initial v1 components

When appmgr is started, it creates a top-level realm called the "app" realm. Into this realm it launches the first v1 component, sysmgr. Sysmgr's job is to manage the "sys" realm, which is created under the "app" realm. The sys realm holds a large number of FIDL services, the exact set of which is determined by sysmgr configuration files. Components running in the sys realm are allowed to connect to these sysmgr-managed services.

Service connections for the sys realm are handled by sysmgr, which will lazily start components as services they provide are needed. There is also a set of components that sysmgr will start eagerly, each of which may or may not also provide FIDL services for the sys realm.

![A diagram showing the app realm holding the sysmgr component and the sys realm, and the sys realm holding other components.](/fuchsia-src/concepts/booting/images/appmgr-realm-layout.png)

The rest of the v1 components

With the initial set of v1 components launched, they will cause other components to be launched through accessing FIDL services and by directly launching them with services provided by appmgr. It is at this point that the remaining set of components on the system can be run.
https://fuchsia.dev/fuchsia-src/concepts/booting/everything_between_power_on_and_your_component
Itertools Recipe: Power Set

power_set()

This one's one of my favorite recipes. The power set is basically a set that exhaustively says "what would this group look like if this element were/weren't in the group"… but for every possible combination. The resulting set winds up containing 2^n elements, as each element can "either be in the set, or not".

As you might imagine, this could lead to a ton of messy, complicated flow control. Or a slick one-liner using itertools:

from itertools import chain, combinations

def power_set(iterable):
    "power_set([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

Demo

letters = ['a', 'b', 'c', 'd']

for group in power_set(letters):
    print(group, end=' ')

() ('a',) ('b',) ('c',) ('d',) ('a', 'b') ('a', 'c') ('a', 'd') ('b', 'c') ('b', 'd') ('c', 'd') ('a', 'b', 'c') ('a', 'b', 'd') ('a', 'c', 'd') ('b', 'c', 'd') ('a', 'b', 'c', 'd')

Why this works

It helps to read this one inside-out, I think.

The Generator Comprehension

First, notice that the generator comprehension is executing over ... for r in range(len(s) + 1). This step is feeding the r that's used in combinations(). The second argument means "return all possible combinations of r elements from the original set s". So we obviously want to start with 0 (no elements are in the set), then move on to what the set looks like when only one element is in it, then all combinations of 2, etc. Thus, the last set generated should have r equal to len(s), where every element is returned as in the set. However, because the range operator stops at n-1 (e.g. range(4) prints 0 1 2 3), we add 1 to the length of our list s so that the two are the same.

Chaining

Running the function by itself yields a chain object:

power_set(letters)
<itertools.chain at 0x277f6030be0>

This is because we're still lazily yielding the values generated by the function. chain.from_iterable() is going to assemble one big iterable from arbitrarily-many little ones.
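The batching-then-flattening can be checked directly. In this quick sketch (the names s, batches, and flat are mine, not part of the recipe), each value of r contributes one batch of combinations, and chain.from_iterable flattens those batches into the full 2^n-element power set:

```python
from itertools import chain, combinations

s = ['a', 'b', 'c']

# One batch of combinations per subset size r = 0 .. len(s)
batches = [list(combinations(s, r)) for r in range(len(s) + 1)]
print(batches[0])   # [()]
print(batches[1])   # [('a',), ('b',), ('c',)]
print(batches[-1])  # [('a', 'b', 'c')]

# chain.from_iterable flattens the batches into the full power set
flat = list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))
print(len(flat))    # 8, i.e. 2 ** len(s)
```
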
Can't spitball an immediate use for this just yet, but this basically means that we can define the power set once, and pass it around, iterating to get the next combinations as we need them.

ourset = power_set(letters)

for i in range(4):
    print(next(ourset), end=' ')

() ('a',) ('b',) ('c',)

for i in range(4):
    print(next(ourset), end=' ')

('d',) ('a', 'b') ('a', 'c') ('a', 'd')

for i in range(4):
    print(next(ourset), end=' ')

('b', 'c') ('b', 'd') ('c', 'd') ('a', 'b', 'c')

for i in range(4):
    print(next(ourset), end=' ')

('a', 'b', 'd') ('a', 'c', 'd') ('b', 'c', 'd') ('a', 'b', 'c', 'd')
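One concrete use of that laziness (my own example, not part of the original recipe) is searching for the first subset that satisfies a condition. Because subsets are generated in order of increasing size, the first match is also a smallest one, and nothing past the match is ever produced:

```python
from itertools import chain, combinations

def power_set(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Find the first subset of prices summing to exactly 10.
# power_set is lazy, so iteration stops as soon as a match is found,
# and smaller subsets are always tried before larger ones.
prices = [2, 3, 5, 8]
match = next(group for group in power_set(prices) if sum(group) == 10)
print(match)  # (2, 8)
```
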
https://napsterinblue.github.io/notes/python/internals/itertools_powerset/
Good morning

I am trying to run the USB CDC CDC example on a K20 eval kit. Since no SDK is provided for MCUXpresso for this board, I try to build the project by hand, copying one from a different eval board. I have a hard fault when the function USB_DeviceApplicationInit() runs and SYSMPU_Enable(SYSMPU, 0); is called. I tried to delete the call to SYSMPU_Enable, but then the device malfunctions. Any suggestion on what happens?

Thank You
Pietro

Hi Alexis

Thank you for the suggestion. These bundle examples are not set up for MCUXpresso; it does not look as simple as just exporting them. If possible, could you at least comment on the observation regarding the MK20D10_features.h file? This file reports

#define FSL_FEATURE_SOC_SYSMPU_COUNT (1)

but it seems this processor does not have an MPU, since when I try to execute SYSMPU_Enable( .... ) there is a hard fault. If you confirm, I will at least feel comfortable.

Thank You
Pietro

Hello Pietro,

In SYSMPU_Enable, the CESR register is modified. This register doesn't exist in the 72 MHz part, so you can't use this function. The SDK available for MCUXpresso is for the 100 MHz part, so it is not compatible with the 72 MHz one. In this case, you will need to use the code bundle I mentioned or, if you can, change to the 100 MHz version.

Best Regards,
Alexis Andalon

Please, it is a simple request...
Pietro

Hello pietrodicastri,

For this specific MCU, as you mention, there isn't an SDK, but there is a bundle of examples available; you can find them in the following link. There is an example using the USB as a CDC device.

Best Regards,
Alexis Andalon

Hi

I still need some help. The features file I include is MK20D10_features.h. Is it the correct one for the TWR-K20D72M board? In this file I read

#define FSL_FEATURE_SOC_SYSMPU_COUNT (1)

Is it correct? In the function USB_DeviceApplicationInit( void ), is the conditional call to SYSMPU_Enable( .... ) as here:

#if (defined(FSL_FEATURE_SOC_SYSMPU_COUNT) && (FSL_FEATURE_SOC_SYSMPU_COUNT > 0U))
    SYSMPU_Enable(SYSMPU, 0);
#endif /* FSL_FEATURE_SOC_SYSMPU_COUNT */

When SYSMPU_Enable( ... ) is called, the hard fault is triggered. I am beginning to think FSL_FEATURE_SOC_SYSMPU_COUNT should be 0 for this processor. Anyway, I commented out the SYSMPU_Enable( ... ) call. The composite CDC CDC device gives an error with Windows 10: from the Device Manager I get "Device Descriptor Request Failed". I tried to install the driver with the one in the SDK, but it says the best driver is already installed.

Please, some help
Pietro

Hi

If someone is willing to help me, I can drop the project here. I am working on the TWR-K20D72M board.

Thank You
Pietro
https://community.nxp.com/t5/Kinetis-Software-Development-Kit/SYSMPU-Enable/m-p/1064008
Not if you use the DataGrid in Silverlight. Where is that in WPF? Or the WatermarkTextBox? Yes, only if Silverlight is a TRUE subset of .NET. I understand that SL 2.0 is beta and a beta can have things that are not prime time. But when SL 2.0 goes live, are we going to have controls in WPF that are not there but are in SL 2.0?

Not if you use e.Handled to stop propagation of a routed event…

Where are the controls at? Or is it just a movie player and gradient builder?

>> Not if you use the DataGrid in Silverlight. Where
>> is that in WPF? Or the WatermarkTextBox?

Good call… we are missing some controls in WPF right now… we are working on addressing that… look for more information on that soon.

>> Where are the controls at?

The current set of controls are in the Silverlight 2 SDK… more information here:

I noticed that Scott mentioned in his "fix up the xmlns" post that the namespace difference might be cleared up. If that is done, any hope for creating assemblies that can compile as both Silverlight and full .NET assemblies from one codebase? I'm currently tricking VS.NET into letting me share classes between assemblies (follow the link above for the how), but that method would be seriously tedious for sharing Controls between Silverlight and WPF.

This is so far from reality it's just not funny. Maybe in beta 2 this will be possible, but there are so many fundamental things lacking in SL B1 that people are just used to in WPF that it isn't. Even down to the way things are defined in XAML: where's {x:Type}? How do we have a generic.xaml for a control library that compiles for both platforms? You simply cannot.

Infragistics is one of the major ISVs developing controls for Microsoft platforms. I favorably reviewed their NetAdvantage controls for ASP.NET for InfoWorld in 2006. They have also released controls for Windows Forms and Windows Presentation Foundation.

This is actually not as convenient as it sounds. Suppose I have a project with 250 C# files in it, which compiles fine in Silverlight. Now I want to compile it in WPF: I have to open the project file to change all the references and recompile. If this sounds tedious, it gets worse when maintaining this project: adding a new file, removing a file, etc. Keeping the two project files in sync is a pain in the neck. Any easier ways?
https://blogs.msdn.microsoft.com/brada/2008/03/11/single-source-code-base-for-silverlight-and-wpf-solutions/
vember, 2015Valuation and Common Sense. 5th edition. Table of contents Chapter123456789101112131415161718192021222324252627282930313233343536373839404142434445 Downloadable at: Pablo FernandezIESE Business School, University of Navarra I would like to dedicate this book to my wife Lucia and my parents for their on-going encouragement,invaluable advice and for their constant example of virtues: hope, fortitude, good sense I am very grateful to mychildren Isabel, Pablo, Paula, Juan, Lucia, Javier and Antonio for being, in addition to many other things, a sourceof joy and common sense.The book explains the nuances of different valuation methods and provides the reader with the tools foranalyzing and valuing any business, no matter how complex. The book uses 253 figures, 444 tables, and morethan 170 examples to help the reader absorb these concepts.This book contains materials of the MBA and executive courses that I teach in IESE Business School. It alsoincludes some material presented in courses and congresses in Spain, US, Austria, Mexico, Argentina, Peru,Colombia, UK, Italy, France and Germany. The chapters have been modified many times as a consequence ofthe suggestions of my students since 1988, my work in class, and my work as a consultant specialized invaluation and acquisitions. I want to thank all my students their comments on previous manuscripts and theirquestions. The book also has results of the research conducted in the International Center for Financial Researchat IESE.This book would never have been possible without the excellent work done by a group of students andresearch assistants, namely Jos Ramn Contreras, Teresa Modroo, Gabriel Rabasa, Laura Reinoso, Jose MCarabias, Vicente Bermejo, Javier del Campo, Luis Corres, Pablo Linares, Isabel Fernandez-Acin and AlbertoOrtiz. 
It has been 25 years since we began and their contribution has been essential.Chapters of the book have been revised by such IESE Finance Professors as Jos Manuel Campa, JavierEstrada and M Jess Grandes, who have provided their own enhancements.I want to thank my dissertation committee at Harvard University, Carliss Baldwin, Timothy Luehrman, AndreuMas-Colell and Scott Mason for improving my dissertation as well as my future work habits. Special thanks go toRichard Caves, chairman of the Ph.D. in Business Economics, for his time and guidance. Some other teachersand friends have also contributed to this work. Discussions with Franco Modigliani, John Cox and Frank Fabozzi(from M.I.T.), and Juan Antonio Palacios were important for developing ideas which have found a significant placein this book.I would like to express my deepest gratitude to Rafael Termes, Juanjo Toribio, Natalia Centenera, Jos MCorominas and Amparo Vasallo, CIF Presidents and CEOs respectively, for their on-going support and guidancethroughout. The support provided by CIFs own sponsoring companies is also greatly appreciated.Lastly, I want to thank Vicente Font (Professor of Marketing at IESE) and don Jos Mara Pujol (Doctor andPriest) for being wonderful teachers of common sense. ContentsCh1 Company valuation methods1. Value and price. What purpose does a valuation serve?2. Balance sheet-based methods (shareholders equity). 2.1. Book value. 2.2. Adjusted book value. 2.3. Liquidationvalue. 2.4. Substantial value. 2.5. Book value and market value3. Income statement-based methods. 3.1. Value of earnings. PER. 3.2. Value of the dividends. 3.3. Sales multiples.3.4. Other multiples. 3.5. Multiples used to value Internet companies4.5.2. Deciding the appropriate cash flow for discounting and the companys economic balance sheet5.2.1. The free cash flow. 5.2.2. The equity cash flow. 5.2.3. Capital cash flow5.3. Free cash flow. 5.4. Unlevered value plus value of the tax shield. 5.5. 
Discounting the equity cash flow5.6. Discounting the capital cash flow. 5.7. Basic stages in the performance of a valuation by cash flow discounting6. Which is the best method to use? 7. The company as the sum of the values of different divisions. Break-up value8. Valuation methods used depending on the nature of the company9. Key factors affecting value: growth, margin, risk and interest rates10. Speculative bubbles on the stock market. 11. Most common errors in valuations When is profit after tax a cash flow? 6. When is the accounting cash flow a cash flow? 7. Equity cash flow anddividends. 8. Recurrent cash flows. 9. Attention to the accounting and the managing of net income Ch4 Discounted cash flow valuation methods: perpetuities, constant growth and general case1. Introduction. 2. Company valuation formulae. Perpetuities. 2.1. Calculating the companys value from the equitycash flow (ECF). 2.2. Calculating the companys value from the free cash flows (FCF). 2.3. Calculating thecompanys value from the capital cash flows (CCF). 2.4. Adjusted present value (APV). 2.5. Use of the CAPMand expression of the levered beta3. VTS in perpetuities. Tax risk in perpetuities. 4. Examples of companies without growth5. Formulae for when the debts book value (N) is not the same as its market value (D). (r Kd)6. Formula for adjusted present value taking into account the cost of leverage. 6.1. Impact of using the simplifiedformulae for the levered beta. 6.2. The simplified formulae as a leverage-induced reduction of the FCF. 6.3 Thesimplified formulae as a leverage-induced increase in the business risk (Ku). 6.4. The simplified formulae as aprobability of bankruptcy. 6.5. Impact of the simplified formulae on the required return to equity7. Valuing companies using discounted cash flow. Constant growth. 8. Company valuation formulae. Constant growth8.1 Relationships obtained from the formulae. 8.2. Formulae when the debts book value (N) is not equal to itsmarket value (D). 8.3. 
Impact of the use of the simplified formulae9. Examples of companies with constant growth. 10. Tax risk and VTS with constant growth11. Valuation of companies by discounted cash flow. General case. 12. Company valuation formulae. General case.13. Relationships obtained from the formulae. General case. 14. An example of company valuation15. Valuation formulae when the debts book value (N) and its market value (D) are not equal16. Impact on the valuation when D N, without cost of leverage17. Impact on the valuation when D N, with cost of leverage, in a real-life case.Appendix 1. Main valuation formulae. Appendix 2. A formula for the required return to debt Method 10. Using the risk-free-adjusted equity cash flows discounted at the risk-free rate2. An example. Valuation of the company Toro Inc. 3. ConclusionAppendix 1. A brief overview of the most significant papers on the discounted cash flow valuation. Appendix 2Valuation equations according to the main theories. Market value of the debt = Nominal value. Appendix 3. Valuationequations when the debts market value (D) is not equal to its nominal or book value (N). Appendix 4. Dictionary. Ch7 Three Residual Income Valuation Methods and Discounted Cash Flow Valuation1. Economic profit (EP) and MVA (market value added = Equity market value Equity book value)2. EVA (economic value added) and MVA (market value added)3. CVA (cash value added) and MVA (market value added)4. First valuation. Investment without value creation. 5. Usefulness of EVA, EP and CVA6. Second valuation. Investment with value creation. 7. ConclusionsAppendix 1. The EP (economic profit) discounted at the rate Ke is the MVA. Ap 2. Obtainment of the formulas forEVA and MVA from the FCF and WACC. Ap 3. The CVA (cash value added) discounted at the WACC is the MVA.Ap 4. Adjustments suggested by Stern Stewart & Co. for calculating the EVA. Ap 5. Dictionary Ch9 Valuing Companies by Cash Flow Discounting: Fundamental relationships and unnecessarycomplications1. 
Valuation of Government bonds2. Extension of the valuation of Government bonds to the valuation of companies2.1 Valuation of the Debt. 2.2 Valuation of the shares3. Example. 4. 1st complication: the beta () and the market risk premium.5. 2nd complication: the free cash flow and the WACC. 6. 3rd complication: the capital cash flow and the WACCBT7. 4th complication: the present value of the tax savings due to interest payments8. Fifth complication: the unlevered company, Ku and Vu. 9. Sixth complication: different theories about the VTS10. Several relationships between the unlevered beta (U) and the levered beta (L)11. More relationships between the unlevered beta and the levered beta12. Mixing accounting data with the valuation: the Economic Profit13. Another mix of accounting data with the valuation: the EVA (economic value added)14. To maintain that the levered beta may be calculated with a regression of historical data15. To maintain that the market has a MRP and that it is possible to estimate it16. Some errors due to using unnecessary complicationsExhibit 1. Concepts and main equations. Exhibit 2. Main results of the example. Exhibit 3. Some articles aboutthe Value of Tax Shields (VTS). Comments from readers. Ch11 Optimal Capital Structure: Problems with the Harvard and Damodaran Approaches1. Optimal structure according to a Harvard Business School technical note2. Critical analysis of the Harvard Business School technical note. 2.1. Present value of the cash flows generated bythe company and required return to assets. 2.2. Leverage costs. 2.3. Incremental cost of debt. 2.4. Required returnto incremental equity cash flow. 2.5. Difference between Ke and Kd. 2.6. Price per share for different debt levels.2.7. Adding the possibility of bankruptcy to the model. 2.8. Ke and Kd if there are no leverage costs. 2.9. Ke and Kdwith leverage costs. 2.10. Influence of growth on the optimal structure3. 
Boeings optimal capital structure according to DamodaranTable of contents, glossary - 4 4. Capital structure of 12 companies: Coca Cola, Pepsico, IBM, Microsoft, Google, GE, McDonalds, Intel, Walt Disney,Chevron, Johnson & Johnson and Wal-Mart. Ch14 Market Risk Premium used in 82 countries in 2012: a survey with 7,192 answers1. Market Risk Premium (MRP) used in 2012 in 82 countries2. Differences among professors, analysts and managers of companies. 3. Differences among respondents4. References used to justify the MRP figure. 5. Comparison with previous surveys6. MRP or EP (Equity Premium): 4 different concepts. 7. ConclusionExhibit 1. Mail sent on May and June 2012. Exhibit 2. 9 comments of respondents that did not provide the MRPused in 2012. Exhibit 3. 12 comments of respondents that did provide the MRP used in 2012. Appendix 1. Graphs withaggregate data of the countries (each point represents a country). Appendix 2. Differences between professors,analysts and managers Ch20 A solution to Valuation of the Shares after an Expropriation: The Case of ElectraBul1. Preliminary Analysis of the VERAVAL Valuation1.1. Comparison of the Valuation with ElectraBul Dividends. 1.2. Comparison with the Profits Expected by theVERAVAL Valuation Itself. 1.3. Comparison with Multiples from other Companies in Similar Industries2. Valuation Using ALL Data Contained in the VERAVAL Valuation2.1. Main Data of the VERAVAL Valuation. 2.2. Valuation Using the Adjusted Present Value (APV) Method.2.3. Valuation Using the WACC. 2.4. Valuation Given that the Projections were made in 2010 constant dollars.2.5. Comparison of the Valuation with Similar Company Multiples3. Valuation Using ALL Data Contained in the VERAVAL Valuation Except the Country Risk Premium4. VERAVAL Valuation Misconceptions1. The Present Value of the Cash Flows is miscalculated. 2. Ke is miscalculated. 3. The WACC is miscalculated.4. Does not take into account that Forecasts are expressed in 2010 Constant Dollars. 5. 
The Country Risk Used ishigh. 6. The Book Value of the Shares is miscalculated. 7. The Beta of Shares is miscalculated5. Conclusion Ch21 Valuation of an expropriated company: The case of YPF and Repsol in Argentina1. Short history of Repsol in YPF. 2. The months before the expropriation. 3. Precedent transactions of YPF shares4. Analyst reports about YPF. 5. Vaca Muerta: a huge oil and gas shale. 6. YPF's bylaws valuation methodology7. Cash Flows of Repsol due to its investment in YPFExhibits. 1. The biggest companies in Argentina. 2. Argentina: some indicators. 3. Some reactions to theexpropriation. 4. Balance Sheets and P&Ls of YPF. 1999-2011.. 5. Additional information about YPF. 6. 85analyst reports on YPF in the period April 2011- April 2012 that included target price. 7. Expectations on YPF ofthe analysts. 8. About Repsol. 9. Vaca Muerta: unconventional resources 3. Errors in the calculation of the residual value. 3. A. Inconsistent cash flow used. 3. B. The D/E ratio used to calculatethe WACC is different than the one resulting from the valuation. 3. C. Using ad hoc formulas that have noeconomic meaning. 3. D. Arithmetic averages instead of geometric averages. 3. E. Using the wrong formula. 3. F.Assume that a perpetuity starts a year before it really starts4. Inconsistencies and conceptual errors. 4.A. About the free cash flow and the equity cash flow. 4.B. Errors whenusing multiples. 4.C. Time inconsistencies. 4.D. Other conceptual errors5. Errors when interpreting the valuation. Confusing Value with Price. Asserting that a valuation is a scientific fact, notan opinion. Considering that the goodwill includes the brand value and the intellectual capital6. Organizational errors. 6. A. Valuation without any check of the forecasts provided by the client. 6. B. Commissioninga valuation from an investment bank and not having any involvement in it. 6. C. Involving only the financedepartment in valuing a target company. 6. D. 
Assigning the valuation of a company to an auditor.
Appendix 1. List of errors. Appendix 2. A valuation with multiple errors of an ad hoc method.

Ch34 EVA and Cash value added do NOT measure shareholder value creation
1. Accounting-based measures cannot measure value creation. 2. EVA does not measure the shareholder value creation by American companies. 3. The CVA does not measure the shareholder value creation of the world's 100 most profitable companies. 4. Usefulness of EVA, EP and CVA. 4.1. The EVA, the EP and the CVA can be used to value companies. 4.2. EVA, EP and CVA as management performance indicators. 5. Consequences of the use of EVA, EP or CVA for executive remuneration. 6. Measures proposed for measuring shareholder return. 7. What is shareholder value creation? 8. An anecdote about the EVA.
Exhibit 1. Correlation of increase of MVA with EVA and with the increase of EVA, and market value (MV) in 1997.

Ch39 Value of tax shields (VTS): 3 theories with some common sense
0. Definition of VTS. 1. General expression of the value of tax shields. 2. Valuation of a firm whose debt policy is determined by a book-value ratio. 3. Valuation of firms under alternative financing strategies. 4. Required return to equity and WACC. 5. A numerical example. 6. The correlation between the tax shields and the free cash flow. 7. Conclusions. References. Appendix 1. VTS equations according to the main theories.

Ch41 Discount Rate (Risk-Free Rate and Market Risk Premium) used for 41 countries in 2015: a survey
1. Market Risk Premium (MRP), Risk Free Rate (RF) and Km [RF + MRP] used in 2015 in 41 countries. 2. Changes from 2013 to 2015. 3. RF used in 2013 and 2015 for US, Europe and UK vs. yield of the 10-year Government bonds. 4. Previous surveys. 5. Expected and Required Equity Premium: different concepts. 6. Conclusion. Exhibit 1. Mail sent in March 2015. Exhibit 2. Some comments and webs recommended by respondents.
Ch42 Huge dispersion of the RF and MRP used by analysts in USA and Europe in 2015
1. RF and MRP used in 156 valuation reports. 2. Evolution of the 10-year Government bonds yield for the six countries. 3. Degrees of freedom of different analysts. 4. MRP in 2015 according to Damodaran. 5. MRP and RF. Where do they come from? 6. Two common errors about RF and MRP. 7. Expected, Required and Historical MRP: different concepts. 8. Conclusion. Exhibit 1. RF and MRP used in each of the 156 valuation reports. Exhibit 2. Details of some valuation reports. Exhibit 3. MRP in 2015 according to Damodaran.

Ch43 Meaning of the P&L and of the Balance Sheets: Madera Inc
1. Income Statements and Balance Sheets of Madera Inc. 2. Meaning of the figures on the Income Statement. 3. Meaning of the figures on the Balance Sheet. 4. Analogy with the annual accounts of a family. 5. Evolution of the "shareholders' equity" account.
Exhibit 1. Synonyms of some P&L and Balance Sheet items. Exhibit 2. English-Spanish accounting dictionary. Exhibit 3. Balance Sheet and P&L of Walt Disney Co (2007-2014).

Ch44 Net Income, cash flows, reduced balance sheet and WCR (Working Capital Requirements)
1. Financial statements of Madera Inc. 2. Accounting cash flow, equity cash flow, debt cash flow, free cash flow and capital cash flow. 3. Transformation of accounting into collections and payments. 4. Analysis of the collection from clients, payments to suppliers and Inventory. 4.1. Collection period. 4.2. Payment period. 4.3. Days of Inventory. 4.4. Gross Margin of the company. 4.5. Linking payments, collections, inventory and gross margin. 5. The reduced balance. WCR (Working Capital Requirements). Exhibit 1. Reduced balance for 12 US companies. Exhibit 2. Synonyms and confusion of terms. Exhibit 3. Balance sheet and P&L of Coca Cola, Pepsico, IBM, Microsoft, Google, and GE.

Glossary

Accounting cash flow. Net Income plus depreciation.
Adjusted Book Value. Difference between market value of assets and market value of liabilities.
Also called Net Substantial Value or Adjusted Net Worth.
Adjusted present value (APV). The APV formula indicates that the firm value (E + D) is equal to the value of the equity of the unlevered company (Vu) plus the value of the tax shield due to interest payments.
Arbitrage pricing theory (APT). An asset pricing theory that describes the relationship between expected returns on securities, given that there are no opportunities to create wealth through risk-free arbitrage investments.
Arbitrage. The purchase and sale of equivalent assets in order to gain a risk-free profit if there is a difference in their prices.
Arbitration. Alternative to suing in court to settle disputes between brokers and their clients and between brokerage firms.
Benchmark. Objective measure used to compare a firm or a portfolio performance.
Beta. A measure of a security's market-related risk, or the systematic risk of a security.
Binomial option pricing model. A model used for pricing options that assumes that in each period the underlying security can take only one of two possible values.
Black-Scholes formula. An equation to value European call and put options that uses the stock price, the exercise price, the risk-free interest rate, the time to maturity, and the volatility of the stock return. Named for its developers, Fischer Black and Myron Scholes.
Book value (BV). The value of an asset according to a firm's balance sheet.
Break-up Value. Valuation of a company as the sum of its different business units.
Call Option. Contract that gives its holder (the buyer) the right (not the obligation) to buy an asset, at a specified price, at any time before a certain date (American option) or only on that date (European option).
Capital Asset Pricing Model (CAPM). Equilibrium theory that relates the expected return and the beta of the assets. It is based on the mean-variance theory of portfolio selection.
Capital Cash Flow (CCF). Sum of the debt cash flow plus the equity cash flow.
Capital Market line. In the capital asset pricing model, the line that relates expected standard deviation and expected return of any asset.
Capital structure. Mix of different securities issued by a firm.
Capitalization. Equity Market Value.
Cash budget. Forecast of sources and uses of cash.
Cash dividend. Cash distribution to the shareholders of a company.
Cash Earnings (CE). Net income before depreciation and amortization. Also called Accounting Cash Flow and Cash Flow generated by operations.
Cash Flow Return on Investment (CFROI). The internal rate of return on the investment adjusted for inflation.
Cash Value Added (CVA). NOPAT plus amortization less economic depreciation less the cost of capital employed.
Collection period. The ratio of accounts receivable to daily sales.
Company's value (VL). Market value of equity plus market value of debt.
Constant growth model. A form of the dividend discount model that assumes that dividends will grow at a constant rate.
Consumer Price Index. Measures the price of a fixed basket of goods bought by a representative consumer.
Convertible debentures. Bonds that are exchangeable for a number of other securities, usually common shares.
Correlation Coefficient. The covariance of two random variables divided by the product of the standard deviations. It is a measure of the degree to which two variables tend to move together.
Cost of capital. The rate used to discount cash flows in computing its net present value. Sometimes it refers to the WACC and other times to the required return to equity (Ke).
Cost of Leverage. The cost due to high debt levels. It includes the greater likelihood of bankruptcy or voluntary reorganization, difficulty in getting additional funds to access growth opportunities, information problems, and reputation.
Covariance. It is a measure of the degree to which two asset returns tend to move together.
Credit Rating. Appraisal of the credit risk of debt issued by firms and Governments. The ratings are done by private agencies such as Moody's and Standard and Poor's.
Credit Risk. The risk that the counterpart to a contract will default.
Cumulative preferred stock. Stock that takes priority over common stock in regard to dividend payments. Dividends may not be paid on the common stock until all past dividends on the preferred stock have been paid.
Current asset. Asset that will normally be turned into cash within a year.
Current liability. Liability that will normally be repaid within a year.
Debt Cash Flow (CFd). Sum of the interest to be paid on the debt plus principal repayments.
Debt's Market Value (D). Debt Cash Flow discounted at the required rate of return to debt (may be different than the Debt's book value).
Debt's book value (N). Debt value according to the balance sheet.
Default risk. The possibility that the interest or the principal of a debt issue will not be paid.
Default Spread. Difference between the interest rate on a corporate bond and the interest on a Treasury bond of the same maturity.
Depreciation (Book). Reduction in the book value of fixed assets such as plant and equipment. It is the portion of an investment that can be deducted from taxable income.
Depreciation (Economic). ED (economic depreciation) is the annuity that, when capitalized at the cost of capital (WACC), the assets' value will accrue at the end of their service life.
Derivative. Financial instrument with payoffs that are defined in terms of the prices of other assets.
Discounted dividend model (DDM). Any formula to value the equity of a firm by computing the present value of all expected future dividends.
Discounted value of the tax shields (DVTS). Value of the tax shields due to interest payments.
Dispersion. Broad variation of numbers.
Diversifiable risk. The part of a security's risk that can be eliminated by combining it with other risky assets.
Diversification principle. The theory that by diversifying across risky assets investors can sometimes achieve a reduction in their overall risk exposure with no reduction in their expected return.
Dividend payout ratio (p). Percentage of net income paid out as dividends.
Dividend yield. Annual dividend divided by the share price.
Duration. A measure of the sensitivity of the value of an asset to changes in the interest rates.
Earnings Per Share (EPS). Net Income divided by the total number of shares.
Economic Balance Sheet. Balance sheet that has working capital requirements on the asset side.
Economic Profit (EP). Profit after tax (net income) less the equity's book value multiplied by the required return to equity.
Economic Value Added (EVA). NOPAT less the firm's book value multiplied by the average cost of capital (WACC) and other adjustments implemented by the consulting firm Stern Stewart.
Efficient portfolio. Portfolio that offers the highest expected rate of return at a specified level of risk. The risk may be measured as beta or volatility.
Enterprise value (EV). Market value of debt plus equity.
Equity Book Value (Ebv). Value of the shareholders' equity stated in the balance sheet (capital and reserves). Also called Net Worth.
Equity Cash Flow (ECF). The cash flow remaining available in the company after covering fixed asset investments and working capital requirements and after paying the financial charges and repaying the corresponding part of the debt's principal (in the event that there exists debt).
Equity Market Value (E). Value of all of the company's shares. That is, each share's price multiplied by the number of shares. Also called Capitalization.
Equity value generation over time. Present value of the expected cash flows until a given year.
Exercise price. Amount that must be paid for the underlying asset in an option contract. Also called strike price.
Fixed-income security. A security such as a bond that pays a specified cash flow over a specific period.
Franchise Factor (FF). Measures what we could call the growth's "quality", understanding this to be the return above the cost of the capital employed.
Free Cash Flow (FCF). The operating cash flow, that is, the cash flow generated by operations, without taking into account borrowing (financial debt), after tax. It is the equity cash flow if the firm had no debt.
Goodwill. Value that a company has above its book value or above the adjusted book value.
Gross domestic product (GDP). Market value of the goods and services produced by labor and property in one country, including the income of foreign corporations and foreign residents working in the country, but excluding the income of national residents and corporations abroad.
Growth (g). Percentage growth of dividends or profit after tax.
Growth Value. The present value of the growth opportunities.
Homogenous expectations. Situation (or assumption) in which all investors have the same expectations about the returns, volatilities and covariances of all securities.
IBEX 35. Spanish stock exchange index.
Interest Factor. The PER the company would have if it did not grow and had no risk. It is - approximately - the PER of a long-term Treasury bond.
Internal rate of return (IRR). Discount rate at which an investment has zero net present value.
Leverage ratio. Ratio of debt to debt plus equity.
Leveraged buyout (LBO). Acquisition in which a large part of the purchase price is financed with debt.
Levered beta (bL). Beta of the equity when the company has debt.
Levered Free Cash Flow (LFCF). Equity cash flow.
Liquidation Value. Company's value if it is liquidated, that is, its assets are sold and its debts are paid off.
Market portfolio. The portfolio that replicates the whole market. Each security is held in proportion to its market value.
Market risk (systematic risk). Risk that cannot be diversified away.
Market Value Added (MVA). The difference between the market value of the firm's equity and the equity's book value.
Market Value of Debt (D). Market Value of the Debt.
Market-to-book ratio (E/Ebv). It is calculated by dividing the equity market value by the equity book value.
Net Operating Profit After Tax (NOPAT). Profit after tax of the unlevered firm.
Non-systematic risk. Risk that can be eliminated by diversification. Also called unique risk or diversifiable risk.
Par value. The face value of the bond.
Pay in Kind (PIK). Financial instruments that pay interest or dividends using new financial instruments of the same type, instead of paying in cash.
Payout ratio (p). Dividend as a proportion of earnings per share.
Perpetuity. A stream of cash flows that lasts forever.
Put Option. Contract that gives its holder the right to sell an asset, at a predetermined price, at any time before a certain date (American option) or only on that date (European option).
Real prices. Prices corrected for inflation.
Recurrent Cash Flows. Cash Flows related only to the businesses in which the company was already present at the beginning of the year.
Relative PER. The company's PER divided by the country's PER or the industry's PER.
Required Return to Assets (Ku). Required return to equity in the unlevered company.
Required Return to Equity (Ke). The return that shareholders expect to obtain in order to feel sufficiently remunerated for the risk (also called Cost of Equity).
Residual income. After-tax profit less the opportunity cost of capital employed by the business (see also Economic Value Added and Economic Profit).
Residual value. Value of the company in the last year forecasted.
Retained earnings. Earnings not paid out as dividends.
Return on assets (ROA). Accounting ratio: NOPAT divided by total assets. Also called ROI, ROCE, ROC and RONA. ROA = ROI = ROCE = ROC = RONA.
Return on Capital (ROC). See Return on assets.
Return on Capital Employed (ROCE). See Return on assets.
Return on equity (ROE). Accounting ratio: PAT divided by equity book value.
Return on investment (ROI). See Return on assets.
Reverse valuation. Consists of calculating the hypotheses that are necessary to attain the share's price in order to then assess these hypotheses.
Risk Free Rate (RF). Rate of return for risk-free investments (Treasury bonds). The interest rate that can be earned with certainty.
Risk premium. An expected return in excess of that on risk-free securities. The premium provides compensation for the risk of an investment.
Security market line. Graphical representation of the expected return-beta relationship of the CAPM.
Share buybacks. Corporation's purchase of its own outstanding stock.
Share repurchase. A method of cash distribution by a corporation to its shareholders in which the corporation buys shares of its stock in the stock market.
Share's beta. It measures the systematic or market risk of a share. It indicates the sensitivity of the return on a share to market movements.
Shareholder Return. The shareholder value added in one year divided by the equity market value at the beginning of the year.
Shareholder Value Added. The difference between the wealth held by the shareholders at the end of a given year and the wealth they held the previous year.
Shareholder Value Creation. Excess return over the required return to equity multiplied by the capitalization at the beginning of the period. A company creates value for the shareholders when the shareholder return exceeds the required return to equity.
Shareholder Value Destroyer. A company in which the required return to equity exceeds the shareholders' return.
Specific risk. Unique risk.
Stock dividend. Dividend in the form of stock rather than cash.
Stock split. Issue by a corporation of a given number of shares in exchange for the current number of shares held by stockholders.
A reverse split decreases the number of shares outstanding.
Substantial Value. Amount of investment that must be made to form a company having identical conditions as those of the company being valued.
Systematic risk. Risk factors common to the whole economy and that cannot be eliminated by diversification.
Tax Shield. The lower tax paid by the company as a consequence of the interest paid on the debt in each period.
Treasury bill. Short-term, highly liquid government securities issued at a discount from the face value and returning the face amount at maturity.
Treasury bond or note. Debt obligations of the federal government that make semiannual coupon payments and are issued at or near par value.
Treasury stock. Common stock that has been repurchased by the company and held in the company's treasury.

Notation
Symbols (definitions not recovered from the original two-column table): APV, BV, CAPM, CCF, CE, CF, CFd, CFROI, CPI, CVA, D, DCF, Dep, Div, DPS, DVTS, E, EBIT, EBITDA, EBT, Ebv, ECF, ED, EG, EMU, EP, EPS, EV, EVA, FAD, FCF, FF, g, G, GL, GNP, GOV, Gu, I, IBEX 35, Inp, IRR, Kd, Ke, KTL, KTU, Ku, LFCF, MVA, N, NFA, NI, NOPAT, NPV, NSP, p, PAT, PBT, PER, PM, PV, r, RF, Vu, WACC, WACCBT, WACCbv, WCR.
RM. Market return.
ROA. Return on Assets. It is calculated by dividing the NOPAT by the equity and debt (at book value). Also called ROI, ROCE, ROC and RONA. ROA = ROI = ROCE = ROC = RONA.
ROC. Return on Capital.
ROCE. Return on Capital Employed.
ROE. Return on Equity. It is calculated by dividing the net income by the shares' book value.
ROGI. Return on Gross Investment.
ROI. Return on Investment.
RONA. Return on Net Assets.
S. Sales.
S&P500. Standard and Poor's 500 Index.
Share's beta. Debt's beta. Levered Beta. Unlevered Beta or beta of the assets.
T. Tax rate.
TBR. Total Business Return.
TSR. Total Shareholder Return.
UEC. Union of European Accounting Experts.
VL. Value of the levered company.

Pablo Fernandez, IESE Business School, University of Navarra. Valuation and Common Sense, 5th edition. Table of contents. November 2015.

21. I want to congratulate you for making such a big contribution to the valuation area.
The book is very detailed; I think everyone can learn a lot from it. All finance professors and analysts should know about this book.
22. Have you included in your valuation methods the notion of Corporate Social Responsibility?
23. You have been providing valuable knowledge to many of us in the developing nations, like Malaysia. I truly appreciate your generosity to share.
24. I love the title of the second chapter. I once attended a gathering of people in Chicago who were discussing energy hedging. One of the participants commented that every well-run publicly traded company had four sets of books (in the US): one for the tax authorities, one for the SEC, one for the accountants and one for the board and management.
25. Much more can be said about intellectual capital.
26. Chapter 9 looks to be the best summary exposition available.
27. I can see that you have expended a tremendous amount of effort in presenting a very informative treatise on valuation theory and methodology that will serve as an outstanding instructional tool and reference source.
28. My past experiences with your work have all been exceptional. You have introduced a mathematical and logical rigor that has been sorely lacking for some time. Thanks for your efforts in this endeavor.
29. Valuing Internet companies: I like it. I just ran a private auction process to sell 80% of the shares of a profitable, growing privately held Internet retailer, and am close to the topic. The best offers came in at approximately 6 times trailing 12 months adjusted EBITDA, with many offers in the 4-5x trailing EBITDA range. One positive factor you should mention is that Internet retail benefits from expected growth, in that every year a few more percent of people feel comfortable purchasing on the Internet, overcoming their fear of putting credit card details on the net. This gives the whole Internet retail category a natural 3-5% growth per year, just by more people joining the universe of Internet retail customers.
30.
Excellent, impressive way to get your book out at zero cost to the reader. Your organization is the best that I have seen. I looked at a number of chapter synopses and found them very good also.
31. I can't believe it to have free access to such a monument of financial literature!
32. I like the idea of your 4 definitions of ERP.
33. For more than 50 years I have been involved with business finance, including two different periods as CFO of large publicly traded companies in the US. I have read a couple of the chapters you created and find them both enlightening and very useful, unlike all too many finance texts. I particularly like some of your chapter titles and the brevity and focus of your work.
34. I enjoy reading the in-depth valuation topics. Is it possible to obtain a bound volume of the book with, perhaps, your autograph inside? I would, obviously, be honored to pay for it and it would be a fine addition to my library, especially since you are cited by some others as the "Damodaran of España."
35. I have spent years trying to reconcile two aspects of value investing. As value investors, we like to think of ourselves as very long-term investors, taking the view that we buy shares as if we were buying the whole company. Yet, the value discipline (and indeed even Ben Graham) demands that we sell once certain valuation levels are reached, which brings us closer to traders. After 44 years in the business, though I have learned to live with the contradiction, it still bothers my sense of harmony.
36. Thank you once again for this very welcome contribution to the field.
37. Thank you for writing such good chapters. They have been really helpful in understanding some of the concepts of valuation.
38. You have a very clear writing style which explains complicated subjects really well. I look forward to learning more about your thoughts on business valuation.
39.
I came across one of your articles from your book and was so impressed by the content that I started mining for other articles written by you. Your writings are so concise and easy to comprehend that I feel like it's akin to talking to you in person.
40. I would like to congratulate you for your work and hope we would learn more from you and other scholars like you.
> > I create a global value called Component1 (it can be anything but a > table) > > e.g. Component1 = function() end > > > > Then I try to require 'Tests'. I end up with > > > > compat-5.1.lua:122: name conflict for module `Tests.Component1' > > Can you list the exactly contents of your Feature1.lua file? I'm not sure > I'm fully understanding your problem, but it may be just a question of > line reordering. See below. The entire contents of the Feature1.lua is module "Tests.Component1" The problem I'm experiencing is that it is possible to either prevent a library from loading or loading it into unexpected tables by declaring global variables. See below. > > > I have not tested this with work6. Is this the intended behaviour? This > > implies that one can have a namespace A.B, but not have the B table an > element of A. > > B can be a "global". > > That's the part that lost me. Since you have "Component1" both as a Lua > file (part of the Test package) and a directory (where Feature1.lua is > located) I'm not sure what module hierachy you want. > > Do you want to have both the A.B.C namespace and a global (declared in > A/B.lua) called C? If that is the case then you can try defining C before > the definition of namespace A.B.C, as in (not tested) > > ./A/B.lua > ----------- > C = function() end -- this is C, global > > module "A.B.C" -- from this line forward every "global" ends in the A.B.C > namespace > > function D() -- this is A.B.C.D > C() -- calls the global defined before the module definition > end > ---------- > > Would that work for you? Maybe I'm completely off mark here... > This would work, but I do not have control over C = function()... . C is not a function that the package needs. It is just a "global" function that the user created before requiring the package. The presence of the function causes the name conflict. 
I'll try to explain my setup: I want tables (containing namespaces) as follows:

Tests
Tests.Component1 -- Contains all the tests for Component1

These namespaces are created by 3 files:

./Tests.lua - Loads all the components
./Tests/Component1.lua - Loads all the features of component 1
./Tests/Component1/Feature1.lua - Loads the tests for Feature1

The location of the files is necessitated by the conversion of package name to file name in the require statements. The contents of the Tests.lua and Component1.lua files are just require statements to load the sub packages, thus:

./Tests.lua contains
    module "Tests"
    require 'Tests.Component1'

./Tests/Component1.lua contains
    require 'Tests.Component1.Feature1'

./Tests/Component1/Feature1.lua must make its contents available in the Component1 namespace, thus:
    module "Tests.Component1"
    ... some more code to implement tests for feature 1.

When there exists a global variable with the name "Component1", the loading of the tests will fail if the variable does not reference a table. I don't know beforehand which global variables will exist. I can probably restructure this to make it work, but that will not make the issue go away.

To summarise: it is possible to either prevent a library from loading, or to load it into unexpected tables, by declaring global variables with this implementation of compat. Is this the intended behaviour?

Jarno
I tried using bubble-sort and strcmp but to no avail. I had a lot of trouble implementing it. Here's my struct:

struct tapelist /* Global structure definition */
{
  int tape_number;      /* variable that holds the tape's number in the inventory list */
  char artist_name[25]; /* array that holds the name of the artist */
  char album_name[25];  /* array that holds the name of the album */
  int year_realese;     /* variable that holds the year when a specific album was released */
  char record_label[25]; /* array that holds the name of the record company which released a specific album */
  int num_tracks;       /* variable that holds the number of tracks that are found on a specific album */
  char tape_brand[15];  /* array that holds the brand of the tape */
  char tape_type[11];   /* array that holds the type of tape that a specific album was recorded on */
  char tape_status[10]; /* array that holds what status the current tape is enjoying */
} tape[10]; /* declares 10 occurrences of this structure */

And here's my miserable excuse for a sorting function:

/******************************* Sort ***********************************/
void sort(void)
{ /* Start sort */
  tapelist ara[MAX];
  fillArray(ara);
  printf("Here are the unsorted numbers:\n");
  printArray(ara); /* prints the unsorted array */
  sortArray(ara);  /* Sorts the array */
  printf("Here are the sorted numbers:\n");
  printArray(ara); /* prints the newly sorted array */
  return 0;
}

void fillArray(int ara[MAX])
{ /* puts random numbers in the array */
  int ctr;
  int temp;
  temp = 0; /* initialize variable */
  for (ctr=0; ctr<MAX; ctr++)
  { temp++; ara[ctr] = tape[temp].artist_name); } /* forces numbers to range between 0 and 99 */
  return;
}

void sortArray(int ara[MAX])
{ /* sorts the array */
  int temp;       /* temporary variable to swap with */
  int ctr1, ctr2; /* need two loop counters to swap pairs of numbers */
  for (ctr1=0; ctr1<(MAX-1); ctr1++)
  {
    for (ctr2=(ctr1+1); ctr2<MAX; ctr2++) /* test pairs */
    { if (strcmp
ara[ctr1], ara[ctr2]) >=0 )
      {
        temp = ara[ctr1]; /* pair is not in order */
        ara[ctr1] = ara[ctr2];
        ara[ctr2] = temp; /* "float" the lowest to the highest */
      }

if (strcmp (ara[ctr1], ara[ctr2]) >= 0)
    /* do something */

The fact that you're comparing strings at all tells me that you probably want to do something other than what you asked about. But as I said, I'm answering what you asked, not what is implied. Now, as far as sorting the array in a struct goes, let me make up a new struct:

#include <stdio.h>
#include <string.h>

/* This could be local to main(), but for this example I'll leave it here. */
struct s_foo {
    int data [20];
    char bar [15];
} foo;

void sort_array (int *array, int size);
void dump_array (int *array, int size);

int main (void)
{
    int i;

    for (i = 0; i < 20; i++)     /* make up some values */
        foo.data[i] = 30 - i;
    strcpy (foo.bar, "string");

    /* We want to sort the data array in the foo struct. We'll ignore bar. */
    dump_array (foo.data, 20);
    sort_array (foo.data, 20);
    dump_array (foo.data, 20);
    return 0;
}

void sort_array (int *array, int size)
{
    /* simple exchange sort; this body was lost in the archived post
       and is reconstructed here */
    int i, j, temp;

    for (i = 0; i < size - 1; i++)
        for (j = i + 1; j < size; j++)
            if (array[i] > array[j]) {
                temp = array[i];
                array[i] = array[j];
                array[j] = temp;
            }
}

void dump_array (int *array, int size)
{
    int i;

    for (i = 0; i < size; i++)
        printf ("%d ", array[i]);
    putchar ('\n');
}

Sure. I'll put the general suggestions at the top, for those of you who don't want to wade through the appended code. In order to program well, you have to master a set of related skills and habits. Stuff like

- Start out with a small piece of code. Make sure it's correct. Then make a small addition to it. Make sure it's correct. Make another small addition. Etc. You might recognize this as "stepwise refinement", or some other buzzword. This sort of thing is absolutely crucial to making reasonable progress, especially with an unfamiliar language, and especially for new programmers.
- At each point in the code, know what all variables are, and what valid operations can be performed on them. You can't use a variable if you don't know what it holds, or if you don't know what operations are valid.
- This is really a corollary to the above rule: minimize global variables and keep procedures small and simple.
- Never worry about efficiency unless you have to.
Particularly at the beginning level, you should write code to be understandable. If you can't understand your own code, how can you be sure it does what you want?

:struct tapelist /* Global structure definition */
: /* contents deleted */
:} tape[10]; /* declares 10 occurrences of this structure */
:/******************************* Sort ***********************************/

You should always start out with a list of prototypes for all functions.

:void sort(void)
:{ /* Start sort */
:  tapelist ara[MAX];

What is the definition of "tapelist"? You gave the definition of "struct tapelist" above, but it's not the same thing (unless you explicitly define it to be the same with "typedef").

[deletia]

:  return 0;
   ^^^^^^^^
void functions can't return values.

:} /* End sort */
:/****************************** fillarray ******************************/
:
:void fillArray(int ara[MAX])
:{
[more deletia]
:  for (ctr=0; ctr<MAX; ctr++)
:  { temp++; ara[ctr] = tape[temp].artist_name); } /* forces
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This assignment makes no sense. "ara[ctr]" is an integer variable. "tape[temp].artist_name" is an array of characters -- the assignment is simply not compatible.

:  numbers to range between 0 and 99 */
:
:  return;
:}
:
[still more deletia]

:void sortArray(int ara[MAX])
:{
:    { if (strcmp ara[ctr1], ara[ctr2]) >=0 ) {
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Of course, this is not syntactically legal. Beyond that, you seem to be trying to treat ara[ctr1] and ara[ctr2] as if they were strings, when they are declared as integers.

-Dave
http://computer-programming-forum.com/47-c-language/316be0eef88b9c90.htm
Hi all,

On Wednesday 22 September 2004 20:09, Christian Perrier wrote:
> Quoting Alastair McKinstry (mckinstry@computer.org):
>> Incidentally, I've a new bug in iso-codes asking
>> for the common name of F.Y.R. Macedonia to be changed to
>> "Macedonia". I think I'll leave that to post-sarge ...
>
> This is a consequence of a discussion I had with the Macedonian group
> who could possibly work on Macedonian translations.
> It looks like there's a quite strong feeling among Macedonian people
> (at least those who care about Free Software)
> against what we currently have in iso-codes.
> I wrote them that this issue first needs to be raised before we (the
> Debian community) can decide what is most appropriate.
> Leaving this post-sarge may hurt the MK people.
> So, if you have the opportunity of modifying iso-codes,
> that would be fine.
> This would avoid some "hey, *they* released sarge with the offensive name"
> reactions...

I doubt it would prevent valiant we-have-been-offended-by-Debian reactions if 'Former Yugoslav Republic of Macedonia' were changed to 'Macedonia'. There is a Greek province that is called Macedonia, and has been called that for a long time. So there is a conflict of interest here, because two regions can not have the same name in contexts where either might be meant, and Debian is an international project, so this does apply to Debian too. Debian can not call them both the same, just like it can not have two packages with the same name. The government of Greece, and presumably also the people of the Greek province Macedonia, strongly object to the former Yugoslav republic Macedonia being called Macedonia. The Greek government seems to do this from the point of view that it might obscure the perception of naturalness of the right of the government to decide what happens in Greek Macedonia.
I think the people of Greek Macedonia have a different objection: they were always called Macedonia, and now another region is trying to take their name away from them, which is offensive. It is possible that the people of their northerly neighbour state also used to call their region Macedonia. Their republic has only come into existence recently, and this may only have been possible by their perception of being part of a people that are 'Macedonians', who therefore obviously live in 'Macedonia'. The reality of the matter is that while formerly these two Macedonias existed in two separate namespaces, the Republic Macedonia has recently entered the western namespace. Still, who would want to be called "Former" something; that gives the impression that what you are now is not important. I think Debian has only 2 choices here:

1) use the names 'Republic Macedonia' and 'Macedonia'.
2) use the names 'Republic Macedonia' and 'Greek Macedonia'.

The details of how exactly these names should be can be bickered over bitterly yet, but we do need to make the distinction sooner or later, and so we had better do it right now. The reason i have used the names 'Republic Macedonia' and 'Greek Macedonia' to illustrate the choices above is that they are both names to be proud of. A republic is a state whose leaders are not chosen because they were born in one specific family, so the potential leaders have to compete, which raises the quality of the leadership. (I live in a monarchy, and i would trade it in for a republic any day). A republic is also an independent country, that can decide for itself what laws it wants and how it wants to deal with other countries. This is something that Greek Macedonia can not claim. 'Greek' is also a very positive qualification: in one sense it refers to the history and tradition of ancient 'Ellas, where democracy and logic were invented; in another it refers to modern Greece, of which i know but little; i happened to see mr.
Simitis on Greek television once, and i think Greeks can be proud to have ministers like that. So i think i have chosen names that are acceptable in themselves. In this my opinion, Republic Macedonia can not be called 'Macedonia' in Debian. This is sufficient to resolve the name conflict, though not sufficient to completely avoid ambiguity. I expect Debian to be able to use any names chosen correctly, so it is not strictly necessary for Debian to go further than this. International politics's politicians seem to have agreed to not use 'Macedonia' for Republic Macedonia, but imo Debian should make the best choice regardless. The Greek government may be unwilling to accept any other name than Macedonia as a reference to Greek Macedonia, but as 'Macedonia' does in fact have two possible interpretations, no matter what choices are made, i expect in international contexts Greek Macedonia will be referred to as Greek Macedonia, though not in front of Greek officials, as this seems to be important to them and this courtesy does not cost anything. I think, but it is only my half-informed opinion, that there is a real chance that the Greek government would take actions against Debian if Debian did not call Greek Macedonia 'Macedonia'; for a government that forced katharevousa this seems but a little thing. Afaik Debian does not have an entry for that in any language- or country-chooser (yet), so we don't seem to have a problem here. Furthermore, i think that the Greek nation and culture (and soccer team :-) are an asset to this world, and i would not want to unnecessarily antagonize the Greeks. In the long run i expect that, Republic Macedonia being a country and Greek Macedonia being a province, the name 'Macedonia' will be interpreted as referring to Republic Macedonia. If the Macedonian translators want to speed up this expected development, what better can they do than translate Debian into their language?
Trying to achieve primacy artificially seems to me an unfruitful undertaking that would not be good for Debian, so i would like (if i were asked) to advise: "Make Love, not war". Important for Debian is that _all_ its developers, including Republic-Macedonians as well as Greek-Macedonians, can be part of our undivided FreeSoftware Community if they so desire. Christian, Alastair, i'm not CC'ing you because i assume you read -boot. I am CC'ing -project because this topic needs to be there if anywhere. Follow-ups to -project only please. I have also CC'd debian-l10n-hellas, not only because both sides should be heard, but also because i am hoping that everyone that works toward Debian's goals can work together in harmony, resolving problems among themselves, better than the politicians have been able to. I can only hope that Greek and RepublicMacedonian developers read this on -project, as i didn't find a per-country list of developers on Debian's website. Christian, could you please forward this to the group of Macedonian translators that is in need of this discussion? I wish that this controversy will not prevent the people on either side of the border from living in peace with their (more or less distant) neighbours.

greetings,
Siward

----------------------------------
"Do your own thing !"
-- Jimi Hendrix, not getting caught in the student riots du jour.
https://lists.debian.org/debian-boot/2004/09/msg01324.html
Device and Network Interfaces - user SMP command interface

#include <sys/scsi/impl/usmp.h>

ioctl(int fildes, int request, struct usmp_cmd *cmd);

The smp driver supports this ioctl(2), which provides a generic user-level interface for sending SMP commands to SMP target devices. SMP target devices are generally SAS switches or expanders. Each usmp call directs the smp(7D) driver to express a specific SMP function, and includes the data transfer to and from the designated SMP target device.

The usmp_cmd structure is defined in <sys/scsi/impl/usmp.h> and includes the following:

caddr_t usmp_req;      /* address of smp request frame */
caddr_t usmp_rsp;      /* address of smp response frame */
size_t  usmp_reqsize;  /* byte size of smp request frame */
size_t  usmp_rspsize;  /* byte size of smp response frame */
int     usmp_timeout;  /* command timeout */

The fields of the usmp_cmd structure have the following descriptions:

usmp_req
    The address of the buffer containing the smp request frame. The data format should conform to the definition in the Serial Attached SCSI protocol.

usmp_rsp
    The address of the buffer used to hold the smp response frame.

usmp_reqsize
    The size in bytes of the smp request frame buffer.

usmp_rspsize
    The size in bytes of the smp response frame buffer. The size of the buffer should not be less than eight bytes. If the buffer size is less than eight bytes the smp(7D) driver immediately returns EINVAL. If the buffer size is less than that specified for the specific SMP function in the Serial Attached SCSI protocol definition, the response data might be truncated.

usmp_timeout
    The time in seconds to allow for completion of the command. If it is not set in user-level, the default value is 60.

The common headers of smp request and response frames are found in two structures: usmp_req and usmp_rsp, both of which are defined in <sys/scsi/impl/smp_frames.h>.
The structures include the following fields:

struct usmp_req {
    uint8_t smpo_frametype;    /* SMP frame type, should be 0x40 */
    uint8_t smpo_function;     /* SMP function being requested */
    uint8_t smpo_reserved;     /* reserved byte */
    uint8_t smpo_reqsize;      /* number of dwords that follow */
    uint8_t smpo_msgframe[1];  /* request bytes based on SMP function
                                  plus 4-byte CRC code */
};

struct usmp_rsp {
    uint8_t smpi_frametype;    /* SMP frame type, should be 0x41 */
    uint8_t smpi_function;     /* SMP function being requested */
    uint8_t smpi_result;       /* SMP function result */
    uint8_t smpi_rspsize;      /* number of dwords that follow */
    uint8_t smpi_msgframe[1];  /* response bytes based on SMP function */
};

The ioctl supported by the SMP target driver through the usmp interface is:

USMPCMD
    The argument is a pointer to a usmp_cmd structure.

The following errors can occur:

- One or more of the usmp_cmd, usmp_req or usmp_rsp structures point to an invalid address.
- A parameter has an incorrect, or unsupported value.
- An error occurred during the execution of the command.
- The device has gone away.
- No memory available.
- The response buffer is shorter than required, and the data is truncated.
- A process without the PRIV_SYS_DEVICES privilege tried to execute the USMPCMD ioctl.
- Command timeout.

See attributes(5) for a description of the following attributes:

ioctl(2), attributes(5), smp(7D), mpt(7D)

ANSI Small Computer System Interface – 4 (SCSI-4)

usmp commands are designed for topology control, device accessibility, and SAS expander and switch configuration. Usage of usmp is restricted to processes running with the PRIV_SYS_DEVICES privilege, regardless of the file permissions on the device node.

User-level applications are not required to fill in the four bytes of SAS CRC code in the SMP request frame. The smp(7D) driver manages this for usmp if the SAS HBA does not.
https://docs.oracle.com/cd/E26502_01/html/E29044/usmp-7i.html
Hashable data types and why do we need those

Last Thursday we did a short (2 hours long) video code review with veky. We hadn't done a new one for a while, so we had some issues with the sound. That will be fixed for the next video and hopefully we will start doing these code reviews more regularly. Veky starts with the review of his mission - Completely Empty. During it he explains a lot of things about hashes in Python, so I've decided to leave some notes in a separate blogpost.

First things first. What is a hash? A hash is like a global index for objects in Python, or an integer interpretation of any Python object.

>>> hash(40)
<<< 40
>>> hash('cio')
<<< -4632795850239615199
>>> hash((1,2,3))
<<< 2528502973977326415

Well, not any object - only immutable objects can have a hash (an exception will be shown at the end of the post).

Don't worry if you don't know what the difference between mutable and immutable objects is, or why mutable objects can't have hashes. I'll explain it later.

>>> hash([1,2])
TypeError: unhashable type: 'list'
>>> hash({'a': 1})
TypeError: unhashable type: 'dict'

Why do we need those indexes (hashes) in Python, especially with such restrictions? Python uses hashes in sets and in dict keys in order to quickly find values there and keep values unique. That's why a list or a dict can't be an element of a set or a key of a dict (an exception will be shown at the end of the post).

>>> {[1,2]}
TypeError: unhashable type: 'list'
>>> {{}: 1}
TypeError: unhashable type: 'dict'

This is why values inside a set can't be mutable (an exception will be shown at the end of the post). Mutable means capable of change or of being changed.
Here is a classical example that illustrates that the list is a mutable type:

>>> a = [1,2]
>>> b = a
>>> b.append(3)
>>> a
<<< [1, 2, 3]

Which is impossible for a tuple

>>> a = (1,2)
>>> a.append(3)
AttributeError: 'tuple' object has no attribute 'append'

... because tuple is an immutable type, even if we try to trick Python

>>> hash(a)
<<< 3713081631934410656
>>> a = (1,2,3, [])
>>> hash(a)
TypeError: unhashable type: 'list'
>>> {a}
TypeError: unhashable type: 'list'

But Python still allows you to check if two lists are equal, right?

>>> [1,2,3] == [1,2,3]
<<< True
>>> [1,2,3] == [1,2,2]
<<< False

So why doesn't Python simply check whether two objects are equal or not before adding a new one into a set, without caring about the object's mutability? First of all, it is not as fast as using hashes, and the second reason is that a mutable object can be changed after being added into a set. Let's imagine we could add a list into a set:

>>> a = [1,2]
>>> b = [1]
>>> wow = {a, b}
>>> len(wow)
<<< 2
>>> b.append(2)
>>> a == b
<<< True
>>> len(wow)
???

If a and b are equal, then what should be the length of the set wow? If 2, then how is it possible that the set contains non-unique elements? If 1, then this is unexpected behaviour. That's why Python will raise an exception on line 3: wow = {a, b}

I hope we've given you a very simple explanation of hashable objects and why we need them.

PS (StefanPochmann): Python is a very flexible language and you can define the method __hash__ for an object so it becomes hashable. In such a way we can get an answer to our previous question of what the length of wow will be:

>>> class List(list):
...     def __hash__(self):
...         return hash(tuple(self))
>>> a = List([1,2])
>>> b = List([1])
>>> wow = {a, b}
>>> b.append(2)
>>> a == b
<<< True
>>> len(wow)
<<< 2
>>> wow
<<< {[1, 2], [1, 2]}
>>> len({a, b})
<<< 1
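Two more details worth knowing, as my own addition to the notes above: objects that compare equal must also hash equal (Python guarantees this across its numeric types), and frozenset is the built-in immutable, hashable counterpart of set:

```python
# Equal objects must have equal hashes -- Python guarantees this
# across its numeric types:
assert hash(1) == hash(1.0) == hash(True)

# frozenset is the immutable, hashable counterpart of set,
# so it can be a dict key or a set element:
groups = {frozenset({1, 2}): "pair"}
assert groups[frozenset({2, 1})] == "pair"

# A plain set, being mutable, cannot:
try:
    {set()}
except TypeError as e:
    print(e)  # unhashable type: 'set'
```

So when you need a set-like dict key or a set of sets, reach for frozenset instead of defining a custom __hash__.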
https://py.checkio.org/blog/hashable-data-types-and-why-do-we-need-those/
I want to get images from a video and store them in '.jpeg' or '.png' format. Please help me with how to do this with OpenCV. My code is:

import cv2
vidcap = cv2.VideoCapture('video1.mp4')
success, image = vidcap.read()
count = 0
print "I am in success"
while success:
    success, image = vidcap.read()
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    if cv2.waitKey(10) == 27:  # exit if Escape is hit
        break
    count += 1

Here I am trying to get the images from the video frame by frame and save them as frame1, frame2, frame3, ...

Best answer

This is what I use to read in a video and save off the frames:

import cv2
import os

def video_to_frames(video, path_output_dir):
    # extract frames from a video and save to directory as 'x.png' where
    # x is the frame index
    vidcap = cv2.VideoCapture(video)
    count = 0
    while vidcap.isOpened():
        success, image = vidcap.read()
        if success:
            cv2.imwrite(os.path.join(path_output_dir, '%d.png') % count, image)
            count += 1
        else:
            break
    cv2.destroyAllWindows()
    vidcap.release()

video_to_frames('../somepath/myvid.mp4', '../somepath/out')
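One practical detail worth adding (my own note, not from the original answer): names like frame2.jpg and frame10.jpg do not sort in frame order lexicographically, which matters if another tool later reads the frames back in directory order. A small hypothetical helper that zero-pads the index:

```python
def frame_filename(index, total, prefix="frame", ext="jpg"):
    """Build a zero-padded frame filename so files sort in frame order."""
    width = len(str(max(total - 1, 1)))  # digits needed for the last index
    return "%s%0*d.%s" % (prefix, width, index, ext)

# e.g. with 300 frames: frame000.jpg ... frame299.jpg
print(frame_filename(7, 300))  # frame007.jpg
```

When the total frame count is known up front (cv2.CAP_PROP_FRAME_COUNT can often provide it), this keeps the output directory sanely ordered.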
https://pythonquestion.com/post/how-to-get-image-from-video-using-opencv-python/
Why do we keep instance variables private? We don't want other classes to depend on them. Moreover, it gives us the flexibility to change a variable's type or implementation on a whim or an impulse. Why, then, do programmers automatically add or override getters and setters on their objects, exposing their private variables as if they were public?

Accessor methods

Accessors (also known as getters and setters) are methods that let you read and write the value of an instance variable of an object.

public class AccessorExample {
    private String attribute;

    public String getAttribute() {
        return attribute;
    }

    public void setAttribute(String attribute) {
        this.attribute = attribute;
    }
}

Why Accessors?

There are actually many good reasons to consider using accessors rather than directly exposing the fields of a class.

Getters and setters make an API more stable. For instance, consider a public field in a class which is accessed by other classes. Now, later on, you want to add extra logic while getting and setting the variable. This will impact the existing clients that use the API. So any change to this public field will require a change to each class that refers to it. On the contrary, with accessor methods, one can easily add logic like caching some data or lazily initializing it later. Moreover, one can fire a property-changed event if the new value is different from the previous value. All this is seamless to the class that gets the value through the accessor method.

Should I have Accessor Methods for all my fields?

Fields can be declared public for a package-private or private nested class. Exposing fields in these classes produces less visual clutter compared to the accessor-method approach, both in the class definition and in the client code that uses it. If a class is package-private or is a private nested class, there is nothing inherently wrong with exposing its data fields—assuming they do an adequate job of describing the abstraction provided by the class.
Such code is restricted to the package where the class is declared, while the client code is tied to the class's internal representation. We can change it without modifying any code outside that package. Moreover, in the case of a private nested class, the scope of the change is further restricted to the enclosing class. Another example of a design that uses public fields is JavaSpace entry objects. Ken Arnold described the process they went through to decide to make those fields public instead of private with get and set methods here.

Private fields + Public accessors == encapsulation

Consider the example below:

public class A {
    public int a;
}

Usually, this is considered bad coding practice as it violates encapsulation. The alternate approach is:

public class A {
    private int a;

    public void setA(int a) {
        this.a = a;
    }

    public int getA() {
        return this.a;
    }
}

It is argued that this encapsulates the attribute. Now is this really encapsulation? The fact is, getters/setters have nothing to do with encapsulation. Here the data isn't any more hidden or encapsulated than it was in a public field. Other objects still have intimate knowledge of the internals of the class. Changes made to the class might ripple out and force changes in dependent classes. Getters and setters used this way generally break encapsulation. A truly well-encapsulated class has no setters and preferably no getters either. Rather than asking a class for some data and then computing something with it, the class should be responsible for computing something with its data and then returning the result.

Consider the example below:

public class Screens {
    private Map screens = new HashMap();

    public Map getScreens() {
        return screens;
    }

    public void setScreens(Map screens) {
        this.screens = screens;
    }
    // remaining code here
}

If we need to get a particular screen, we write code like below:
Screen s = (Screen) screens.get(screenId);

There are things worth noticing here. The client needs to get an Object from the Map and cast it to the right type. Worse, any client of the Map has the power to clear it, which is usually not what we want. An alternative implementation of the same logic is:

public class Screens {
    private Map screens = new HashMap();

    public Screen getById(String id) {
        return (Screen) screens.get(id);
    }
    // remaining code here
}

Here the Map instance and the interface at the boundary (Map) are hidden.

Getters and Setters are highly Overused

Creating private fields and then using the IDE to automatically generate getters and setters for all these fields is almost as bad as using public fields. One reason for the overuse is that in an IDE it's now just a matter of a few clicks to create these accessors. The completely meaningless getter/setter code is at times longer than the real logic in a class, and you will read these functions many times even if you don't want to. All fields should be kept private, with setters only where they make sense — leaving them out makes the object immutable. Adding an unnecessary getter reveals internal structure, which is an opportunity for increased coupling. To avoid this, every time before adding an accessor we should analyse whether we can encapsulate the behaviour instead.

Let's take another example:

public class Money {
    private double amount;

    public double getAmount() {
        return amount;
    }

    public void setAmount(double amount) {
        this.amount = amount;
    }
}

// client
Money pocketMoney = new Money();
pocketMoney.setAmount(15d);
double amount = pocketMoney.getAmount(); // we know it's a double
pocketMoney.setAmount(amount + 10d);

With the above design, if we later decide that double is not the right type to use and we should use BigDecimal instead, the existing clients of this class break.
Let's restructure the above example:

public class Money {
    private BigDecimal amount;

    public Money(String amount) {
        this.amount = new BigDecimal(amount);
    }

    public void add(Money toAdd) {
        amount = amount.add(toAdd.amount);
    }
}

// client
Money balance1 = new Money("10.0");
Money balance2 = new Money("6.0");
balance1.add(balance2);

Now instead of asking for a value, the class has the responsibility to increase its own value. With this approach, a change to another datatype in the future requires no change in the client code. Here not only is the data encapsulated, but also how it is stored, and even the fact that it exists at all.

Conclusions

Using accessors to restrict direct access to field variables is preferred over the use of public fields; however, making getters and setters for each and every field is overkill. It also depends on the situation, though — sometimes you just want a dumb data object. Accessors should be added to a field only where they're really required. A class should expose larger behaviour which happens to use its state, rather than act as a repository of state to be manipulated by other classes.
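The "expose behaviour, not state" idea carries over to other languages; here is a rough Python analogue of the restructured Money class (my own sketch, not from the article):

```python
from decimal import Decimal

class Money:
    """Holds its amount privately and exposes behaviour, not raw state."""

    def __init__(self, amount):
        self._amount = Decimal(amount)  # internal representation, free to change

    def add(self, to_add):
        # the object updates its own value instead of handing it out
        self._amount += to_add._amount

    def __eq__(self, other):
        return self._amount == other._amount

# client code never touches the internal representation
balance = Money("10.0")
balance.add(Money("6.0"))
assert balance == Money("16.0")
```

Because clients only call add and ==, the internal type (Decimal here) could be swapped without breaking them — the same point the article makes about moving from double to BigDecimal.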
https://dzone.com/articles/getter-setter-use-or-not-use
Write a file named bmr.py with a function named st_jeor which, given (mass, height, age, sex), returns the Mifflin-St Jeor estimate of the basal metabolic rate — an estimate of the Calories consumed to keep the body alive. Assume mass is given in kilograms, height in centimeters, age in years, and sex is either "male" or "female".

In the formula (from Wikipedia)

    BMR = 10.0 m + 6.25 h − 5.0 a + s

s is +5 if sex is "male" and −161 if it is "female"; m, h, and a represent mass, height, and age respectively.

When you run bmr.py, nothing should happen. It defines a function; it does not run it. If in another file (which you do not submit) you write the following:

import bmr
print(bmr.st_jeor(74.7, 162.9, 19, 'female'))

you should get the following output:

1509.125

Consider adding the other formulas listed in Wikipedia: harris_benedict, revised_harris_benedict, and katch_mcardle (katch_mcardle will require a body fat percentage estimate). How much do they differ?

The unit multipliers in the equation all cancel out; you can ignore them. Have you tried other cases besides (74.7, 162.9, 19, 'female')? To know you got them right, try changing one argument at a time and see if it creates the expected size change in the output; for example, adding 10 kg should add 100 Calories.
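For reference, the formula translates almost directly into Python; this is one possible shape for the function (try writing your own before reading it):

```python
def st_jeor(mass, height, age, sex):
    """Mifflin-St Jeor BMR: 10*mass + 6.25*height - 5*age + s,
    where s is +5 for "male" and -161 for "female"."""
    s = 5 if sex == 'male' else -161
    return 10.0 * mass + 6.25 * height - 5.0 * age + s

print(st_jeor(74.7, 162.9, 19, 'female'))  # 1509.125
```

Note how the +5/-161 choice is the only place sex enters the formula, which is why adding 10 kg changes the result by exactly 100 for either value of sex.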
http://cs1110.cs.virginia.edu/w03-bmr.html
I wanted to add multiple mouse inputs to my applications. I've written several multi-player games which use Nintendo's Wiimote, and have found that it is easiest to develop the apps using mice first, then add Wiimote support later. Microsoft just released a new MultiPoint SDK, but it doesn't support C++, and I wanted to add this support to some older apps I wrote in VC++ 6.0. So maybe this will help someone else. There are several apps I've released on my blog (nwpodcast.blogspot.com) that use or will use this with mice or Wiimotes, and I've always enjoyed CodeProject, so I thought I'd contribute.

There are several options out there of drivers people have written - a Google search for "MouseParty", "raw_mouse", and "ManyMouse" will help you learn more about what is available. I chose to use Ryan Gordon's "ManyMouse" code. You can find it here:. It provides the raw button, mouse-wheel, and movement data to use. You must use XP or greater to have the OS actually see multiple mouse devices, though I haven't actually confirmed this.

For my MFC-based dialog project, I simply created a new ManyMouse object, and then monitored and maintained the individual cursors for each mouse manually. I tried to document the demo code because that is the best way to see how it works.

First, we init the ManyMouse object like this:

int m_mice = ManyMouse_Init();

The next thing is to override the mouse-related messages from Windows - like WM_LBUTTONDOWN and WM_MOUSEMOVE.
In these functions we just detect which mouse moved, and adjust the tracking cursor data for each (the WM_LBUTTONDOWN and WM_MOUSEMOVE handlers):

void CManyMouseDlgDlg::OnLButtonDown(UINT nFlags, CPoint point)
{
    CString junk;
    if(m_2cursors){ //mask out 2nd mouse movement from primary mouse to avoid cursor movement
        ManyMouseEvent event;
        while (ManyMouse_PollEvent(&event))
        {
            if (event.type == MANYMOUSE_EVENT_BUTTON){
                if(event.device == 1){ //primary mouse device
                    m_m0 = TRUE;
                    m_hit = m_cursor1point;
                    Invalidate(FALSE); //must do this to paint the message...
                    return;
                }
                if(event.device == 0){ //2nd mouse device
                    m_m1 = TRUE;
                    m_hit = m_cursor2point;
                    Invalidate(FALSE); //must do this to paint the message...
                    return; //don't pass on the lbutton event if mouse1 did it...
                }
            }
        } //end while
    }
}

void CManyMouseDlgDlg::OnMouseMove(UINT nFlags, CPoint point)
{
    if(m_2cursors){ //mask out 2nd mouse movement from primary mouse to avoid cursor movement
        ManyMouseEvent event;
        while (ManyMouse_PollEvent(&event))
        {
            if (event.type == MANYMOUSE_EVENT_RELMOTION)
            {
                if(event.device == 1){ //main mouse moved, don't do anything!
                    if(event.item == 0) { //item = 0 means movement in X axis
                        m_cursor1point.x = m_cursor1point.x + event.value;
                    } else { //item = 1 means movement in the Y axis
                        m_cursor1point.y = m_cursor1point.y + event.value;
                    }
                    m_cursor1moved = TRUE;
                }
                if(event.device == 0){
                    if(event.item == 0) { //item = 0 means movement in X axis
                        m_cursor2point.x = m_cursor2point.x + event.value;
                    } else { //item = 1 means movement in the Y axis
                        m_cursor2point.y = m_cursor2point.y + event.value;
                    }
                    m_cursor2moved = TRUE;
                } //endif device == 0
            } //endif type == relmotion
        } //end while
        Invalidate(FALSE); //must do this to paint the message...
    } else { //endif 2cursors
        CDialog::OnMouseMove(nFlags, point); //act normal for mouse0
    }
}

Note that I keep it pretty simple and exposed - it can be cleaned up quite a bit, but this is to explain it, right?
The other thing to note is that I invalidate the window to force it to redraw, which allows me to manually draw in the multiple cursors or indicate that we had a button press:

if(m_m0){
    junk.Format("Button 0 FIRED!");
    m_m0 = FALSE;
    dc.TextOut(m_hit.x, m_hit.y, junk);
}
if(m_2cursors){ //draw the 2nd cursor
    if(m_cursor2moved){ //redraw the 2nd cursor
        dc.MoveTo(((int)m_cursor2point.x-20), (int)m_cursor2point.y);
        dc.LineTo(((int)m_cursor2point.x+20), (int)m_cursor2point.y); //horizontal hair
        dc.MoveTo((int)m_cursor2point.x, (int)m_cursor2point.y-20);
        dc.LineTo((int)m_cursor2point.x, (int)m_cursor2point.y+20); //vertical hair
        m_cursor2moved = FALSE;
    }
}

That is basically it. Yeah, it is crude, and yeah, managed C++, C#, or the SDK are better ways to go, but I wanted to figure out how to make it work.

A couple of things to look out for. First, the VC++ 6.0 compiler has bugs, and they show up in the form of link and build errors. "fatal error LNK2005" shows up, so you have to tell the compiler to use the "Force file output" option in the Customization feature for the Project settings. The next joy of the 6.0 compiler is a debug bug causing a "fatal error LNK1103" error. This only happens for the debug build, not for the release build, so to overcome it, the MS recommendation is 'don't do a debug build'. Finally, after you have added Ryan Gordon's ManyMouse .c and .h files to your MFC project, you have to tell the compiler not to use precompiled headers for each of those files. You should be able to build and test the application now. Plug in a second USB mouse and have fun testing.
http://www.codeproject.com/Articles/34795/Support-Multiple-Mouse-Inputs-in-a-Dialog-Box