text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
BondWorks
The past week was a terrible example in the emotional swings a city, a country, and most of the world could go through in very short time span. The G-8 Chieftains meeting was rudely overshadowed by the terrorist bombing of the London transit system. Just one day after the IOC announced that London has won the right to host the 2012 Summer Olympics, the city woke up to a deplorable act by a group of nut-bars that claimed to have Al-Qaeda connections and agenda. After a 2 hour rush to safe havens such as US Treasury securities, the market decided that if that is the best the terrorist could do, it is just not good enough to lose any sleep over or sell any securities. The stock market bounced back with vengeance and by Friday's close stocks worldwide were well above pre-bombing levels. One market expert astutely observed that by now a terror premium has been built into the market. The trailing p/e of the NASDAQ composite dipped briefly below a panic stricken 44 on Thursday morning, only to bounce back to a more normal (including a hefty dose of terror premium of) 45.35. Treasuries spiked on the terror news, but ended the week under water again, and even the weaker than expected employment data could not provide enough support to keep them in the green column. Meanwhile the Fed is expected to continue raising rates.
NOTEWORTHY: The economic calendar was overshadowed by the events described above this past week. The employment data was below consensus even with the positive revisions to the previous months' numbers. The workweek measure was disappointing, while hourly earnings increase was subdued. Most of the rest of the indicators last week were positive. Consumer and manufacturing surveys topped expectations again, ISM Services was rock solid bouncing back above 60 after a decent bounce in the Manufacturing ISM Survey the week previous. The monthly employment figures in Canada were positive. While Weekly Jobless Claims have been moving sideways, the Challenger Grey Layoff Survey has shown a significant increase in corporate layoff announcement. The increase in this metric does not bode well for the employment picture ahead. Next week is going to be busy again, with Trade Data, Retail Sales, and inflation data highlighting the schedule.
INFLUENCES: Fixed income portfolio managers are becoming less bearish. (RT survey rose to another multi-month high reading of 46% bulls a week ago. This metric is now into neutral territory from a contrarian perspective.) The 'smart money' commercials are long 93k contracts (a sizeable decrease from last week's 192k). This number is becoming slightly positive again for bonds. Seasonals are neutral and choppy heading into July. Bonds spiked up on Thursday and continued the recent pattern of Friday sell-offs. On the technical front, bonds still have a positive bias, but the market seems to be taking 3 steps forward and 2 steps back.
RATES: US Long Bond futures closed at 116-27, down almost a dollar this week, while the yield on the US 10-year note increased 5 basis points to 4.10%. The market seems to be settling into a trading range around the 4% level on the US 10 year note. The Canada - US 10 year spread was steady at -20 basis points. We are officially neutral on this spread at this point. The belly of the Canadian curve outperformed the wings by another basis point last week and held the break through the 40 bps level. Selling Canada 3.25% 12/2006 and Canada 5.75% 6/2033 to buy Canada 5.25% 6/2012 was at a pick-up of 38 basis points. Assuming an unchanged curve, considering a 3-month time horizon, the total return (including roll-down) for the Canada bond maturing in 2013 is the best value on the curve. The inflection point on the Canadian yield curve is moving out. During the past 6 months the best value maturity date has alternated between the 2011 and 2012 issues, now this point is shifting further out to the 2013 area. Bond market participants, not only in the Canadian government bond market but also in provincial and corporate issues, are advised to shift the focus of their investments accordingly. In the long end, the Canada 8% bonds maturing on June 1, 2023 continue to be cheap on a relative basis.
CORPORATES: Corporate bond spreads moved in slightly last week. Long TransCanada Pipeline bonds were 2 basis points tighter at 121, while long Ontario bonds were in .5 to 46.0. A starter short in TRAPs was recommended at 102. As a new recommendation we advised to sell 10 year Canadian Bank sub-debt at a spread of 58 bps over the 10 year Canada bond. This spread closed at 57 basis points last week.
BOTTOM LINE: Neutral continues to be the operative word on bonds. An overweight position in the belly of the curve is still recommended for Canadian accounts. The inflection point on the Canadian yield curve is shifting from the 2011-2012 and to the 2013 maturity area. Short exposure for the corporate sector is advised. We recommended an increase in short corporate exposure this week.
TweetTweet
|
http://www.safehaven.com/article/3434/bondworks
|
CC-MAIN-2016-44
|
refinedweb
| 876
| 62.98
|
Reverseme Windows Keygen
June 22, 2010 Leave a comment
This one was challenging for me, and took me several hours, but was fun. I got caught up on certain parts that may not have been too difficult, but, yeah…
You can download the executable here Ice9.zip.
The first thing I noticed is probably the ‘trick’ which was simply a call to isdebuggerpresent. I modified the assembly immediately after from JNE to JE so that it only runs if a debugger is present, allowing me to attach my debugger.
00401071 74 0A JE SHORT Ice9.0040107D
This took a lot of trial and error. My strategy was to replicate the logic. Once I got to the point ‘ecx at 0040119c’ I was home free.
#include <iostream> #include <string> using namespace std; void main (int argc, char *argv[]) { if ( argc != 2) { cout<<"Bad usage, enter a name > 4 letters"<<endl; return; } string name = argv[1]; string ostring = name; int i; //first reverse the string for (i=0; i<name.length(); i++) { name[i] = ostring [name.length()-i-1]; } if (name.length() < 4) { cout << "name must be more than 4 letters chief"<<endl; return; } int v1 = 0; int cum = 0; for (i=1; i<name.length(); i++) { v1 = name[i]; if (name[i] <= 90) { if (v1 >= 65) v1 += 44; } cum += v1; } //ecx at 0040119C cum = 9 * (12345 * (cum + 666) - 23); char chr_403119 [122]; unsigned int v; i=0; //no bounds checking do { v = cum; cum /= 0xA; chr_403119[i++] = v % 10 + 48; } while (v / 10); chr_403119[i] = '\0'; printf ("%s", chr_403119); string serial = ""; //reverse the string for (; i >= 0; --i) { serial += chr_403119[i]; } cout<<serial<<endl; //append all chars except the 'first' three to the end for (i=3; i< ostring.length(); i++) { serial += ostring[i]; } cout<<serial<<endl; }
My plan on this one, since it was interesting enough and because it’s relatively easy to break at the final value, is to break this a completely different way. I’d like to write a python debugging script that bypasses the isdebuggerpresent and just grabs the final value in the compare at 004011FF. This should be relatively straightforward, and hopefully a good ‘hello, world’ to the world of python debugging. Stay tuned.
|
https://webstersprodigy.net/tag/crackmes/
|
CC-MAIN-2019-43
|
refinedweb
| 369
| 68.3
|
It would seem to be less obvious to find since it goes against the expectations of current developers and lead to more follow-up efforts. On 04.08.2010, at 14:30, Stefan Monnier <address@hidden> wrote: >> The CL manual documents mapc accepting multiple sequences, but I always >> get the built-in version. I was told about cl-mapc which appears not to >> be documented in the manual. Actually, based on mapcar/mapcar* I >> expected to find a function mapc* instead. > >> So what I'd like is to change the documentation in (cl) Mapping over >> Sequences from mapc to mapc* and rename cl-mapc to mapc*. > > Actually, to keep the namespace cleaner, I'd rather move mapcar* to > cl-mapcar. > > > Stefan
|
http://lists.gnu.org/archive/html/bug-gnu-emacs/2010-08/msg00166.html
|
CC-MAIN-2014-52
|
refinedweb
| 121
| 61.46
|
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
I'm using a form with form validator.
When validation error occurs on a page I'd like to log this event along
with ip-address from which a request was made. The problem with this is
that validator knows nothing about request and form has no error info
from validator. My current code is this:
def writePageContent(self):
formProcessed, data = self.processForm()
if data == 'invalid':
forms = self.formDefinitions()
form = forms['Login']
fv = form._formValidators[0]
message = fv.errorMessage # set by validator
addr = self.request().remoteAddress()
log.warn("Login from %s failed: %s", addr, message)
Is there any better way to achieve this?
Best way, imho, would be to make self.processForm() method return error
info from validator.
--
Regards, max.
|
http://sourceforge.net/p/webware/mailman/webware-discuss/thread/20030210075557.GA30902@malva.ua/
|
CC-MAIN-2015-18
|
refinedweb
| 140
| 60.92
|
How to enumerate controls for Dropbox
By
VeeDub, in AutoIt GUI Help and Support
Recommended Posts
Recently Browsing 0 members
No registered users viewing this page.
- By rudi
Hello,
the following script is running quite nicely for a friend of mine doing a year of work and travel to have a backup at home for all the pics and movies taken with her mobile phone,
The facts:
At home: a VMWARE virtual machine with dropbox installed for her DB account
On-the-Road: One mobile phone, Camera Uploads are activated for her dropbox account, one Laptop.
The idea is to have a copy of *ALL* pictures and movies taken with the mobile phone *OUTSIDE* the dropbox folder, so that the DB Max Size is never exceeded.
The script is running at home and doing this:
Copy all content from the "Camera Upload Folder" within the dropbox folder to some folder *OUTSIDE* the DB folder Move all content from the "Camera Upload Folder" do some other folder *INSIDE* the DB folder to indicate, that the backup copies at home were done successfully
Dropbox is also installed on the Laptop she has with her. So on the Laptop she checks from time to time the destination folder inside the dropbox folder and moves the pics / movies on the laptop to some other folder outside the dropbox as well. By that final step the images are moved out of the dropbox on the mobile phone as well, so that there is always space left to sync more pics / movies.
As moving pics / movies out of the dropbox folder on the laptop doesn't touch the copies in the mobile phone's "Gallery", she has all the pics / movies at all three locations:
Mobile Phone Laptop VM at home
The folder names propably are different for non-German localized Windows and Dropbox, just modify them to meet your localization.
DBox-Sync.au3
- By timmy2
I have the impression that the traditional method for processing responses to a GUI is to assign variables to each GUICreateButton (or Pic) and then use Switch/Case/Endswitch to detect when any Control is clicked. In the tutorials I've seen about Koda it appears to use this method, too.
While 1 Global $nMsg = GUIGetMsg() Switch $nMsg Case $GUI_EVENT_CLOSE Exit Case $Pic2 Call("verizon") Case $Pic3 Call("skype") Case $PicExit Exit EndSwitch WEnd But in a few examples I've seen a script use a different method. The script always includes the following option near the top:
Opt("GUIOnEventMode", 1) ...and then, later, after creating each Control for the GUI, there's the function, GUICtrlSetOnEvent. In these cases a very simple While/WEnd loop is used to wait for the user to respond.
I happened to employ this second method in a recent script where I used "canned" controls (checkbox and buttons). Later in the same script I used the GuiCreatePic and Switch/Case/EndSwitch method because my GUI was all custom images. (I'm not sure if that's necessary, but it's what I've deduced.) The second GUI failed to respond to any mouse clicks, but eventually I figured out the cause was the GUIOnEvenMode being enabled at the top of the script.
This is when I realized I don't understand the reasoning behind choosing between these two methods. And I'm having no luck phrasing an appropriate search criterion. Is there an overview somewhere that explains the two methods and -- most importantly -- describes when each is appropriate?
- By guinness
#include <Array.au3> #include <ButtonConstants.au3> #include <GUIConstantsEx.au3> ; Proof of concept for using the control id $BUTTON_ROWS_COLUMNS = 8 Local Enum $eCTRL_HWND, $eCTRL_VALUE, $eCTRL_MAX Local $aMsg[1][$eCTRL_MAX], _ $iButtonHeight = $iHeight / $BUTTON_ROWS_COLUMNS, _ $iButtonWidth = $iWidth / $BUTTON_ROWS_COLUMNS, _ $iControlID = 0 For $i = 0 To $BUTTON_ROWS_COLUMNS - 1 For $j = 0 To $BUTTON_ROWS_COLUMNS - 1 $iControlID = GUICtrlCreateButton($i & ',' & $j, $i * $iButtonWidth, $j * $iButtonHeight, $iButtonWidth, $iButtonHeight, $BS_CENTER) ; Increase the size of the array if the control id is greater than or equal to the total size of the array. If $iControlID >= UBound($aMsg) Then ReDim $aMsg[Ceiling($iControlID * 1.3)][$eCTRL_MAX] EndIf ; Add to the array. $aMsg[$iControlID][$eCTRL_HWND] = GUICtrlGetHandle($iControlID) $aMsg[$iControlID][$eCTRL_VALUE] = 'Sample string for the control id: ' & $iControlID Next Next ; Clear empty items after the last created control id. ReDim $aMsg[$iControlID + 1][$eCTRL_MAX] ; Display the array created. _ArrayDisplay($aMsg) Local $iMsg = 0 While 1 $iMsg = GUIGetMsg() Switch $iMsg Case $GUI_EVENT_CLOSE ExitLoop Case $aMsg[$eCTRL_HWND][$eCTRL_HWND] To UBound($aMsg) ; If $iMsg is greater than 0 and between the 0th index of $aMsg and the last item then display in the console. If $iMsg > 0 Then ConsoleWrite('Control Hwnd: ' & $aMsg[$iMsg][$eCTRL_HWND] & ', ' & $aMsg[$iMsg][$eCTRL_VALUE] & @CRLF) EndIf EndSwitch WEnd GUIDelete($hGUI) EndFunc ;==>Example
- By!
-
|
https://www.autoitscript.com/forum/topic/201902-how-to-enumerate-controls-for-dropbox/page/2/?tab=comments
|
CC-MAIN-2020-40
|
refinedweb
| 786
| 55.07
|
Help:Files
File on Wikipedia means a data file for an image, a video clip, or an audio clip, including document length clips; or a midi file (a small, computer-instructions file). A page for the file will contains a comprehensive description.
Search for files, or upload your own file. (See Uploading files below.) A search lists every file page containing all the search terms found on the file page. From the search box enter File:descriptive terms. For example, include the terms image, video, or midi in the query. Then, discovering the page name you can help:edit the wikitext of any page and insert that media. This is an easy way to significantly improve articles. (See Using files below.) For example, the page title "File:CI 2011 swim 04 jeh.theora.ogv" will appear in the search results for File: swim video.
There are three semantic differences from the normal wikilink syntax when working with a file page:
- [[File:pagename]] will transclude the file, inserting the image, video or audio into the rendered page in a file link; however for MIDI files, it works as usual and a link to the file page will be inserted. A file link is a transclusion from the File namespace, complete with transclusion parameters.
- [[:File:pagename]], with the initial colon, will link the image, video or audio file page;
- [[Media:pagename]] will render a link which can activate the image or audio or video of a data file directly, on its own page (separate from the rendered page or the file page).
For backward compatibility with older pages the alias Image: (now deprecated) is still available instead of File: in wikilinks or in the search box, but "image" will now refer to more types of data files than just images.
Uploading files
File:Illustrating Wikipedia brochure.pdf The first step in using an image or other media file is to choose an upload server. Some files must use Wikipedia's upload server. Many files can use the Wikimedia Commons upload server which homes files at Wikimedia Commons. (Commons does not allow fair use. If the image is non-free then you may need to upload it to Wikipedia.) All files uploaded are mirrored between Wikipedia and Wikimedia Commons, and searchable from either one. (See Special:Filelist.)
The preferred formats
- for images: SVG, PNG, JPEG, and XCF. The GIF and TIFF formats are recognized, and other image formats may be too.
- for audio: MIDI, and also Ogg with FLAC, Speex, or Vorbis codecs.
- for video: WebM, and also Ogg with the Theora video codec.
You may have to rename your file for Wikipedia: see Naming files below. Also, please bear in mind that the Exif format of many digital cameras, smartphones, and scanners may embed personal metadata, and that if your media files are handled by unknown persons, "steno programs" can imbed hidden information in them.
High resolution images and animated .gif files may pose a problem for performance, but see the problem description in terms of bandwidth and reader's computing power at Consideration of image download size. For photographs in JPEG format, upload the best quality and highest resolution version available; these will be automatically scaled down to low-resolution thumbnails when needed.
Once the file is uploaded, please verify its file page image quality and description, considering how it key words help tag it for proper indexing in a search result. If a file of the same name exists on both Wikipedia and Commons, the Wikipedia file will be displayed.
Copyrighted materials cannot be uploaded to either location; see Wikipedia:Image use policy. Files subject to any restrictions whatsoever, even "for use on Wikipedia only", may not be free enough. In case an image is non-free content, use low-resolution, low-bandwidth files.
Using files
- For all the details on the fields of a file link, see Wikipedia:Extended image syntax.
Search for and find one of many existing image files, or upload your own file. Knowing the file's page name you can then edit your page and refer to that file to insert it into your wikitext. You will wikilink the page name, which will in turn include its file (of that name) in the page you edit. Take for example File:Wikipedesketch1.png. Use the following all on one line (with no line breaks). Then the results will be as shown in the image to the right:
[[File:Wikipedesketch1.png|thumb|alt=A cartoon centipede ... detailed description.|The Wikipede edits ''[[Myriapoda]]''.]]
The above link contains "fields":
- the page name, "File:Wikipedesketch1.png"
- "thumb", short for thumbnail and referring here to the reader's default size for images (See Help:User preferences to specify your own thumbnail sizes.)
- the alt text, such as might read "A cartoon centipede with seven hands reads a book, lifts another, types on a laptop, and holds a bottle". Alt text is intended for visually impaired readers or those with browsers or computers that do not display images. It should describe the gist of the picture's appearance in detail
- the caption, as "The Wikipede edits Myriapoda." The caption is intended for viewers of the image and explains the meaning while using terms that refer directly to items as they appear in the image.
Text and captions need have little text in common. A reader of the article can click on the thumbnail, or on the small double-rectangle icon
below it, to go to the corresponding file page.
By.
For examples of all these techniques, see Picture tutorial.
Naming files
File names should be clear and descriptive, without being excessively long. While the image name doesn't matter much to the reader (they can reach the description page by simply clicking on the image), it matters for editors. It is helpful to other contributors and for maintenance of the encyclopedia if images have descriptive or at least readable file names. For example, File:Skyline Frankfurt am Main.jpg is more manageable than File:14004096 200703230833355477800.jpg.
To avoid accidental overwriting of images or other media, generic filenames should not be used when uploading. For example, a picture of an album cover should not be given the name File:Cover.jpg. Sooner or later someone else will try to do the same thing, and that could overwrite the old image. Then the new image will appear wherever the old one was seen before—an album article would then show the wrong album cover.
Renaming files
Renaming a file page is different than renaming other kinds of pages. The page name of a file page is renamed by a file mover. A file mover is a user granted special rights. Unless you have been granted file mover rights, you must make a request to rename the page.
The request to rename a page is made by adding the following template to the wikitext file page, anywhere on the page:
{{Rename media|new filename|reason for name change}}
This will add the file page to Category:Wikipedia files requiring renaming, where a file mover will notice it.
The most common and accepted reasons a file mover will change a name are:
- Uploader request
- Changing from a meaningless to a descriptive title
- Changing from a misleading name to an accurate name
- Correcting important errors denoting, for example the spelling of a proper noun, or a false historical date
- Harmonizing file names with a set of related names
- Disambiguating files with very similar names
- Remove pejorative, offensive or crude language
The bolded words are description enough for the reason for a name change.
See also
- Wikipedia:Creation and usage of media files
- Help:Viewing media
- Wikipedia:File names
- Wikipedia:Images - an overview
- Wikipedia:File namespace noticeboard
- mw:Help:Images - on MediaWiki.org
|
https://infogalactic.com/info/Help:Files
|
CC-MAIN-2017-39
|
refinedweb
| 1,296
| 62.17
|
Technical Articles
Importing @sap/cds common.cds to your CAP project using the CDS Graphical Modeler
SAP CDS ships a common.cds file that includes various aspects that can be used in your CDS model. In this blog post, we’ll demonstrate how to import the @sap/cds common.cds into your CDS model and how to use the aspects to your CDS entities in your CAP project.
When you open your CDS file using the CDS Graphical Modeler, you will see below screen:
Now you can click “+” button and select “Import common.cds” menu item:
Then select a few aspects in the dialog:
Click “Select” button to close the dialog. When you see the CDS file, you will be able to find the import statement for CDS common:
namespace my.bookshop; using { cuid , managed , temporal } from '@sap/cds/common'; entity Books { key ID : Integer; title : String; stock : Integer; }
Now create an “Authors” entity using the CDS Graphical Modeler:
Select the Authors entity and click “Include Aspect” toolbar item:
and you will be able to see the include aspect dialog:
Select a few of them in the dialog and close the dialog:
Then you will see the inherited properties from the aspects from the selected aspects in CDS common for Authors entity:
As a summary, we demonstrate how to import @sap/cds common.cds into the CDS model and include the selected aspects to the CDS entity in the CDS Graphical Modeler, and the CDS Graphical Modeler provides an easy way of including those CAP built-in aspects and inherit the properties for CDS entities.
|
https://blogs.sap.com/2021/05/06/importing-cds-common-aspects-to-your-cds-model/
|
CC-MAIN-2022-27
|
refinedweb
| 265
| 54.36
|
def home_page(request): return render(request, 'home.html', {'form': ItemForm()})
This blog post is a first rough draft of a planned appendix to my book. It follows on from Chapter 9, which is all about forms and validation. You can take a look at it here
If you want to check out the code to have a play with the examples, you’ll find them on GitHub under the chapter_09 branch and the appendix_II branch
As you’ll see, the content starts out sounding a lot like a "proper" chapter for a book, and turns into more of a blog post and request for comments. Please do let me know what you think!
There’s been some interesting discussion with minds much greater than my own, such as those of Messrs Russell Keith-MaGee and Trey Hunner, which, for one reason or another, has taken place as line comments on github. Do check it out:
My basic conclusion for how to test CBGVs is now: make sure you have lots of short, single-assertion tests for your views, and it will be easy to adjust to using class-based views from function-based ones, and vice-versa. Cf the re-cap at the end of chapter 11: (scroll right to the end)
And the updated version of this post / appendix:
This appendix follows on from Chapter 9, in which we implemented Django forms for validation, and refactored our views. By the end of that chapter, our views were still using functions.
The new shiny in the Django world, however, is class-based views. In this chapter, we’ll refactor our application to use them instead of view functions. More specifically, we’ll have a go at using class-based generic views.
It’s worth making a distinction at this point, between class-based views and class-based generic views. Class-based views are just another way of defining view functions. They make few assumptions about what your views will do, and they offer one major benefit over view functions, which is that they can be subclassed. This comes, arguably, at the expense of being less readable than traditional function-based views. The main use case for plain class-based views is when you have several views that re-use the same logic. We want to obey the DRY principle. With function-based views, you would use helper functions or decorators. The theory is that using a class structure may give you a more elegant solution.
Class-based generic views we’ll soon see, the devil is in the detail.
I should say at this point that I’ve not used either kind of class-based views much. I can definitely see the sense in them, and there are potentially many use cases in Django apps where CBGVs would fit in perfectly. However, as soon as your use case is slightly outside the basics — as soon as you have more than one model you want to use, for example, I’ve found that using class-based views becomes much more complicated, and you end up with code that’s harder to read than a classic view function.
Still, because we’re forced to use a lot of the customisation options for class-based views, implementing them in this case can teach us a lot about how they work, and how we can unit tests them.
My hope is that the same unit tests we use for function-based views should work just as well for class-based views. Let’s see how we get on.
Our home page just displays a form on a template:
def home_page(request): return render(request, 'home.html', {'form': ItemForm()})
Looking through
the options, Django has a generic view called
FormView — let’s see how that
goes:
from django.views.generic import FormView [...] class HomePageView(FormView): template_name = 'home.html' form_class = ItemForm
We tell it what template we want to use, and which form. Then, we
just need to update urls.py, replacing the line that used to say
lists.views.home_page:
url(r'^$', HomePageView.as_view(), name='home'),
And the tests all check out! That was easy..
$ python3 manage.py test lists Creating test database for alias 'default'... ...................... --------------------------------------------------------------------- Ran 22 tests in 0.134s OK Destroying test database for alias 'default'... $ python3 manage.py test functional_tests Creating test database for alias 'default'... .... --------------------------------------------------------------------- Ran 4 tests in 15.160s OK Destroying test database for alias 'default'...
So far so good. We’ve replaced a 1-line view function with a 2-line class, but it’s still very readable. This would be a good time for a commit…
Next we have a crack at the view we use to create a brand new list, currently
the
new_list function. Looking through the possible CBGVs, we
probably want a
CreateView, and we know we’re using the
ItemForm class,
so let’s see how we get on with them, and whether the tests will help us:
class NewListView(CreateView): form_class = ItemForm def new_list(request): form = ItemForm(data=request.POST) if form.is_valid(): list = List.objects.create() Item.objects.create(text=request.POST['text'], list=list) return redirect(list) else: return render(request, 'home.html', {"form": form})
I’m going to leave the old view function in views.py, so that we can copy code across from it. We can delete it once everything is working. It’s harmless as soon as we switch over the URL mappings, this time in:
url(r'^new$', NewListView.as_view(), name='new_list'),
Now running the tests gives 3 errors:
$ python3 manage.py test lists Creating test database for alias 'default'... ...................EEE ====================================================================== ERROR: test_redirects_after_POST (lists.tests.test_views.NewListTest) --------------------------------------------------------------------- Traceback (most recent call last): File "/home/harry/Dropbox/book/source/appendix_II/superlists/lists/tests/test_views.py", line 33, in test_redirects_after_POST data={'text': 'A new list item'} [...] File "/usr/local/lib/python3.3/dist-packages/django/forms/models.py", line 370, in save fail_message, commit, construct=False) File "/usr/local/lib/python3.3/dist-packages/django/forms/models.py", line 87, in save_instance instance.save() File "/home/harry/Dropbox/book/source/appendix_II/superlists/lists/models.py", line 26, in save self.full_clean() File "/usr/local/lib/python3.3/dist-packages/django/db/models/base.py", line 926, in full_clean raise ValidationError(errors) django.core.exceptions.ValidationError: {'list': ['This field cannot be null.']} ====================================================================== ERROR: test_saving_a_POST_request (lists.tests.test_views.NewListTest) --------------------------------------------------------------------- [...] django.core.exceptions.ValidationError: {'list': ['This field cannot be null.']} ====================================================================== ERROR: test_validation_errors_sent_back_to_home_page_template (lists.tests.test_views.NewListTest) --------------------------------------------------------------------- [...] django.template.base.TemplateDoesNotExist: No template names provided --------------------------------------------------------------------- Ran 22 tests in 0.114s FAILED (errors=3) Destroying test database for alias 'default'...
TODO: talk through decoding traceback.
Let’s start with the third — maybe we can just add the template?
class NewListView(CreateView): form_class = ItemForm template_name = 'home.html'
That gets us down to just two failures. They’re both to do with dealing
with valid POST requests. CBGVs that deal with forms want you to put
any custom code for valid forms in a method called
form_valid. We can
just copy across some of the code from the old view function:
class NewListView(CreateView): template_name = 'home.html' form_class = ItemForm def form_valid(self, form): list = List.objects.create() Item.objects.create(text=form.cleaned_data['text'], list=list) return redirect(list)
That gets us a pass!
$ python3 manage.py test lists Ran 22 tests in 0.117s OK $ python3 manage.py test functional_tests Ran 4 tests in 15.157s OK
And we can even save two lines (DRY) by taking advantage of the real point of CBVs: inheritance!
class NewListView(CreateView, HomePageView): def form_valid(self, form): list = List.objects.create() Item.objects.create(text=form.cleaned_data['text'], list=list) return redirect('/lists/%d/' % (list.id,))
And all the tests still pass.
How does it compare to the old version? I’d say that’s not bad. We save some boilerplate code, and the view is still fairly legible. So far, I’d say we’ve got one point for CBGVs, and one draw.
This took me several attempts. And I have to say that, although the tests
told me when I got it right, they didn’t really help me to figure out the
steps to get there… Mostly it was just trial and error, hacking about
in functions like
get_context_data,
get_form_kwargs and so on.
One thing I did do which improved my codebase was to add a new unit test:
class ListViewTest(TestCase): [...] def test_list_view_displays_form_for_existing_lists(self): correct_list = List.objects.create() response = self.client.get('/lists/%d/' % (correct_list.id,)) self.assertIsInstance(response.context['form'], ExistingListItemForm)
It’s another good example of the "each test should test one thing" heuristic: that check on the form class could very easily have been tacked onto the end of a different test, but having it separate means I’m immediately told exactly what’s wrong, rather than potentially having the error masked by an earlier failure.
TODO: consider moving this test into ch. 9?
Anyway, after much hacking and swearing, this is the solution I eventually got to work:
class ViewAndAddToList(CreateView, SingleObjectMixin): template_name = 'list.html' model = List form_class = ExistingListItemForm def get_form(self, form_class): self.object = self.get_object() if self.request.method == 'POST': data={ 'text': self.request.POST['text'], 'list': self.object.id } else: data = None return form_class(data=data)
I also had to add a
get_absolute_url on the
Item class:
(I did try to use
get_form_kwargs instead of
get_form, but it didn’t want
to work for me. Perhaps some CBGV expert out there has a neater solution??)
class Item(models.Model): [...] def get_absolute_url(self): return self.list.get_absolute_url()
Let’s see the old version for comparison?
def view_list(request, list_id): list = List.objects.get(id=list_id) if request.method == 'POST': form = ExistingListItemForm(data={ 'text': request.POST['text'], 'list': list.id }) if form.is_valid(): form.save() return redirect(list) else: form = ExistingListItemForm() return render(request, 'list.html', {'list': list, "form": form})
Not a great improvement. Same number of lines of code, 15. If anything, the function version is better because it has one more line of whitespace. And it’s definitely more readable.
As I was working through this, I felt like my "unit" tests were sometimes a little too high-level. They told me whether I was getting things right or wrong, but they didn’t offer many clues on exactly how to fix things.
I occasionally wondered whether there might be some mileage in a test that was closer to the implementation — something like this:
def test_as_cbv(self): our_list = List.objects.create() view = ViewAndAddToList() view.kwargs = dict(pk=our_list.id) self.assertEqual(view.get_object(), our_list)
But the problem is that it requires a lot of knowledge of the internals of Django CBVs to be able to do the right test setup for these kinds of tests. And you still end up getting very confused by the complex inheritance hierarchy.
I’d be interested to hear how other people out there are testing their CBVs?.
|
http://www.obeythetestinggoat.com/testing-django-class-based-generic-views.html
|
CC-MAIN-2021-17
|
refinedweb
| 1,818
| 67.25
|
Printing variable name
How would I return a variable name in a function. E.g. If I have the function:
def mul(a,b): return a*b a = mul(1,2); a b = mul(1,3); b c = mul(1,4); c
This would return:
2 3 4
I would like it to return:
a = 2 b = 3 c = 4
How would I do this?
One answer could be:
(Sorry, but this is strictly speaking a valid answer.)
If something else is needed, something "more general", and this is certainly the case, than please describe this generality. Note also that the name of the variables "live" only in the "namespace of the code", it is not a good idea to let them live also "outside", as output... For testing purposes one may try something like...
(using this old fashioned string formatter, that may become soon obsolete, but it is the most simple one...)
(So what is the reason for such prints? Three prints as above can be understood also in the
form...)
|
https://ask.sagemath.org/question/40862/printing-variable-name/?sort=oldest
|
CC-MAIN-2019-43
|
refinedweb
| 172
| 78.59
|
Just want the component? Find it at GitHub
Possibly the most frequently re-implemented code across any React component is that used to pass properties through to child components. This stems from the fact that you generally need some sort of input to make the component useful, while you don’t want these component-specific properties polluting the
props on your children.
This would be all well and good if it wasn’t for the fact it is so easy to avoid re-implementing this over and over again! I mean, you’re already defining the properties you consume in
propTypes – why repeat yourself?
In fact, by passing your React classes through a Higher-Order Component, you can easily add a method which returns all props except the ones specified in
propTypes – making writing components that-much-easier (and consistent). I’ll show you how to do it in a moment, but first lets have a look at:
The Old Way Of Doing Things
Say we wanted to write a
NavBar component for our app, which is possible to display in two flavours:
'dark' and
'light'. To do this, we’ll accept a
theme property and apply different styles to the underlying
<div> by assigning a theme-specific
className.
To make our NavBar useful in a variety of contexts, we want to make sure that properties such as
style and
onClick are passed into the generated
<div>. But given that we don’t want our
theme property to appear in the DOM, we can’t just write
<div {...this.props} /> and pass through everything.
How do we accomplish this? Well, the immediately obvious way of doing this is to go through and manually transfer each of the props we want to pass through, rendering something along the lines of this:
const classes = `NavBar NavBar-${this.props.theme} ${this.props.className}`; return <div style={this.props.style} onClick={this.props.onClick} className={classes}>{this.props.children}</div>;
But this breaks down when we want to use the component in ways we didn’t forsee – like if we wanted to add an
onKeyDown handler, for example.
Our component really shouldn’t have to worry about how it’s consumer wants to use the underlying
<div>, and as such, it makes sense to pass through all of the received
props except those we specifically want to use. To do this, we might follow React’s transferring props documentation and try using ES7’s experimental object rest properties feature:
const { this.props.theme, className, children, ...other } = this.props; const classes = `NavBar NavBar-${theme} ${className}`; return <div {...other} className={classes}>{children}</div>;
This isn’t a bad way of doing things – but it could be done better. That’s where
propTypes comes in.
Don’t Repeat Yourself
The
propTypes class property of your React components is how you tell React about the various properties you expect to receive. For our
NavBar class, it might look something like this:
NavBar.propTypes = { theme: React.PropTypes.oneOf(['dark', 'light']), className: React.PropTypes.string, };
During development, React uses this to alert you when your components aren’t behaving as expected. But that doesn’t prevent us from using it in other ways!
In particular, we can use it to get a list of properties we don’t want to pass through to our child components:
const omit = Object.keys(NavBar.propTypes)
And then using the except package on npm, we can easily extract these keys from
this.props:
const other = except(this.props, omit)
Putting this together, we could write our
NavBar component’s
render function like so:
const classes = `NavBar NavBar-${this.props.theme} ${this.props.className}`; const other = except(this.props, Object.keys(NavBar.propTypes)) return <div {...other} className={classes}>{this.props.children}</div>;
Great, we’re not repeating ourself anymore! However, we still have the small problem of the snippet being our most complicated and unwieldy one so far.
Once upon a time, we may have tried to mitigate this by generating our
other object from a method in a React mixin – but with ES6 classes, we can go one better.
Higher Order Components
While ES6 classes may look special on the surface, under the hood they’re just sugar for vanilla functions with a bunch of prototype methods. And just like vanilla functions, we can pass them around as arguments and define new properties on their prototype after the fact.
This allows us to modify or wrap our ES6 classes programatically. People call the functions that do this “Higher-Order Components”.
Let’s built a higher order component which adds a
passthrough method to the prototype of whatever function we pass in. Actually, maybe let’s get you to write it for practice. It’ll look something like this:
function addPassthroughMethod(component) { // Add your `passthrough` method to component here }
Once you’ve had a shot, check our answer against mine by touching or hovering your mouse over the box below:
import except from 'except' export default function addPassthroughMethod(component) { // TODO: define this as a getter instead of as a fn component.prototype.other = function passthrough() { const omit = Object.keys(component.propTypes || {})).concat('children') return except(this.props, omit) } }
There are some properties, like
children, which you never want to automatically pass through – you can add these in your
passthroughfunction so you don’t need to add them to propTypes every time.
Great! Now you can add a
passthrough method to any React component just by running
addPassthroughMethod on it. Using the new method is as simple as ensuring your
propTypes are up to date, and then passing the result of
passthrough() into one of the components in your
render function:
class MyComponent extends React.Component { render() { return <div {...this.passthrough()}>{this.props.children}</div> } } MyComponent.propTypes = { ... } addPassthroughMethod(MyComponent)
It couldn’t get any simpler. Or could it?
Improving readability with ES7 decorators and class properties
The great thing about Higher-Order Components is they can be used as ES7 class decorators! Combined with ES7’s class properties proposal, you can accomplish the whole thing in an pleasingly simple manner:
@addPassthroughMethod class Paper extends React.Component { static propTypes = { ... } render() { return <div {...this.passthrough()}>{this.props.children}</div> } }
See the ES7 decorators and class properties proposals for more details.
Some people may argue against using Decorators/Higher-Order Components to modify the passed in component, and suggest that it would make be more elegant to extend the passed-in class with a
rendermethod which passes through the
passthroughprops to the existing
rendermethod as parameters.
While this may look more “functional”, the reality either way is that the decorated class needs to know that it will be decorated. Given that it is easy to compose multiple decorators which modify the prototype, I’d say this is the more pragmatic option.
Theres an NPM module for that
Now you know how to write your own passthrough decorator, and they say knowing is half the battle! But is there any point finishing the battle off when you can just
npm install something which does all this (and more)?
npm install react-passthrough
Just like the above example, react-passthrough adds a
passthrough() method to your React components. However, unlike our
addPassthroughMethod function above, react-passthrough let’s you specify which properties you’d like to always
omit (defaulting to
['children']), as well as which properties you’d always like to
force inclusion of. Here is an example of usage:
import passthrough from 'react-passthrough' @passthrough({force: ['disabled', 'tabindex'], omit: ['children', 'form']}) class Control extends React.Component { render() { return <div {...this.passthrough()}>{this.props.children}</div> } }
react-passthrough was extracted from my Memamug (my open-source React app) – and it isn’t alone! If you’ve found this useful, you’ll may also find some utility from my other components and articles. Sign up for my mailing list to learn about them! In return for your e-mail, you’ll also immediately receive 3 bonus print-optimised PDF cheatsheets – on React, ES6 and JavaScript promises.
Great post. Thank you. Best explanation and usage of es7 decorators I have ever seen. You really nailed it down with simple to complex approach.
Thank you again.
Now that PropTypes checkers will be stripped out of React in production () and people are even very keen on removing PropTypes declarations too. (), is this still safe to use in production?
It appears to me that the mainstream’s opinion is that PropTypes is just for development. For that reason, in long term, it should be replaced with something like typescript or flow. What are your view of PropTypes as a runtime contract?
|
http://jamesknelson.com/building-a-property-passthrough-higher-order-component-for-react/
|
CC-MAIN-2017-34
|
refinedweb
| 1,435
| 54.42
|
TTY_IOCTLSection: Linux Programmer's Manual (4)
Updated: 2002-12-29
Index Return to Main Contents
NAMEtty ioctl - ioctls for terminals and serial lines
SYNOPSIS
int ioctl(int fd, int cmd, ...);
DESCRIPTIONThe ioctl() call for terminals and serial ports accepts many possible command arguments. Most require a third argument, of varying type, here called argp or arg.
Use of ioctl makes for non-portableThe termios structure of a tty can be locked.
The lock is itself a termios structure, with non-zero root can do this.
Get and Set Window SizeWindow sizes are kept in the kernel, but not used by the kernel (except in the case of virtual consoles, where the kernel will update the window size when the size of the virtual console changes, e.g. by loading a new font).
--zero, nobody knows what will happen.
(SVr4, UnixWare, Solaris, Linux treat tcsendbreak(fd,arg) with non-zero arg like tcdrain(fd). SunOS treats arg as a multiplier, and sends a stream of bits arg times as long as done for zero arg. DG/UX and AIX treat arg (when non-zero) as a timeinterval pointing at /dev/console or /dev/tty0.
Controlling tty
- TIOCSCTTY int arg
- Make the given tty the controlling tty of the current process.
The current process must be a sessionvoid
- If the given tty was the controlling tty of the current process, give up this controlling tty. If the process was session leader, then send SIGHUP and SIGCONT to the foreground process group and all processes in the current session lose their controlling-zero, and clear it otherwise.
If the CLOCAL flag for a line is off, the hardware carrier detect (DCD) signal is significant, and an open(2) of the corresponding ttyFor the TIOCLINUX ioctl, see console_ioctl(4).
Kernel debugging
#include <linux/tty.h>
- TIOCTTYGSTRUCT struct tty_struct *argp
- Get the tty_struct corresponding to
fd.
RETURN VALUEThe ioctl() system call returns 0 on success. On error it returns -1 and sets errno appropriately.
ERRORS
- ENOIOCTLCMD
- Unknown command.
- EINVAL
- Invalid command parameter.
- EPERM
- Insufficient permission.
- ENOTTY
- Inappropriate fd.
EXAMPLECheck); }
SEE ALSOioctl(2), termios(3), console_ioctl(4), pty(7)
Index
- NAME
-
- SYNOPSIS
-
- DESCRIPTION
- Get and Set Terminal Attributes
-
- Locking the termios structure
-
- Get and Set Window Size
-
- Sending a Break
-
- Software flow control
-
- Buffer count and flushing
-
- Faking input
-
- Redirecting console output
-
- Controlling tty
-
- Process group and session ID
-
- Exclusive mode
-
- Line discipline
-
- Pseudo-tty ioctls
-
- Modem control
-
- Marking a line as local
-
- Linux specific
-
- Kernel debugging
-
- RETURN VALUE
-
- ERRORS
-
- EXAMPLE
-
- SEE ALSO
-
|
http://www.thelinuxblog.com/linux-man-pages/4/tty_ioctl
|
CC-MAIN-2014-15
|
refinedweb
| 417
| 54.93
|
A service for placing prioritised packages with expiry times on a queue and having a consumer notified of the packages
Project Description
A service for placing prioritised packages with expiry times on a queue and having a consumer notified of the packages
How it works
This service monitors a Redis sorted set and calls a consumer function when a new package arrives or the current highest priority package expires. The consumer function can be a regular Python function or an asyncio coroutine.
How to install
pip install kamikaze
The consumer function
The consumer function is the function that is called when a new message comes to the top of the queue. The function should be of the format:
def consumer_function(package, *args): """ Does stuff with packages and optional args passed from the command line """
Long running consumer functions
If the consumer function is long running then it should yield control of the loop when possible. Otherwise the kamikaze service will be slow to react to changes in the queue.
Fast running consumer function
If the consumer function is fast then there will be no need to yield control to the main loop until it is complete.
Running the service
Start the service by running the following:
kamikaze service <consumer-function-path> --consumer-function-args
The consumer function should be the full path to the python coroutine. It must be in your $PYTHONPATH.
Give the --help flag for a full list of options.
Tools
Pushing a Package
Use the push command to add a package to the queue:
kamikaze push <payload> <ttl> <priority>
Removing a Package
Use the remove command to remove a package from the queue:
kamikaze remove <payload>
List Packages on Queue
Use the list command to list all packages on the queue:
kamikaze list
Running the examples
Yielding example
An example of a yielding function can be run like so:
kamikaze service example_consumer.consumer.yielding_consumer_func
Blocking example
An example of a blocking function can be run like so:
kamikaze service example_consumer.consumer.blocking_consumer_func
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/kamikaze/
|
CC-MAIN-2018-13
|
refinedweb
| 356
| 52.94
|
ContentsWhat is log4net? Advantages of log4net.To do logging using log4net.dll in CF 2.0 we need to do a couple of things. Use a new Device Application project from Smart device project in Visual Studio 2008. Convert Log4net.dll into a .Net Compact Framework 2.0 Class Library (*.dll).Add the new log4netCF.dll to the project using "Add reference". Now you will see the output in the simulator. Figure 1 Select Smart Device Project Figure 2 Select Device Application Figure 3 Device Application Main Form Figure 4 Errors using log4Net Figure 5 Change Project Properties Figure 6 Change Project Properties Figure 7 Add Log4netCF.dll Figure 8 Config.xml File Figure 9 Change Properties of Config.xml File Figure 10 Add 3 Buttons to Main Screen Figure 11 Create LoggerFigure 12 Create Log Button Click Event Code Figure 13 Open Log Click Event Code Figure 14 Clear Button Click Event Code Figure 15 Opt 1 Main Screen Figure 16 Opt 2 Click On Create Log Button Figure 18 Opt 4 Click Open Log File to read It Figure 17 Opt Log.txt is Created Now What is log4net?Log4net is an open source library that allows .NET applications to log output to a variety of sources (for example, the console, SMTP or files). Log4net is a port of the popular log4J library used in Java. The full details of log4net can be found at
The advantages of log4net are:1. Works with .NET 1.0 & 1.1: The much improved logging of EntLib 2.0 and above is only available if your application is running on .NET 2.0 or greater. log4net however works on all versions of .NET.2. Simpler install: When using the Enterprise Library there are some services you really should install. This is as simple as running a bat file included with EntLib but it does complicate your deployment process.3..4. Appender Buffering: Buffering support with some appenders lets log4net queue up log entries and write them in a single batch. If you are writing entries to the database then buffering is a good way to improve performance.5. More platforms: The Enterprise Library does not support the .NET Compact Framework whereas log4net does.To do logging using log4net.dll in CF 2.0 we need to do a couple of things; they are:
Using the project
Now to use the new Device Application project from the Smart device project in Visual Studio 2008.To use new Device application project,go to the "File" menu in Visual Studio 2008 then select "New" => "Project..." then select "Smart Device Project".
Figure 1 Select Smart Device ProjectPress "Ok" and now select "Device application Project".
Figure 2 Select Device ApplicationPress "Ok" and you will see the screen like this:
Figure 3 Device Application Main FormConversion
Now to convert the Log4net.dll into the .Net Compact Framework 2.0 Class Library (DLL file).If you use the log4net.dll then it will give you an error like this after writing the code for the logging data from the application.
Figure 4 Errors using log4NetWhy do we need to covert?The problem with log4net and its Compact Framework support is that is mostly not maintained over the time. Or Simply We are making the .dll file that can compatible with Compact Framework 2.0.So we need to convert the log4net.dll to a CF 2.0 Class Library (DLL file) and then you can use that in you project. So here is the procedure:
Figure 5 Change Project PropertiesSet the Assembly name and the default namespace = "log4net", then:
Figure 6 Change Project PropertiesNow right-click the log4netCF project and select Build. It should build without any error and you have a working Compact Framework 2.0 log4net assembly.Add the DLL to the project
Add the new log4netCF.dll to the project using "Add reference".We need to provide a reference of the newly created log4netCF.dll to our project.
Figure 7 Add Log4netCF.dllAdd the new Config.xml file to your project or you can create a file with the name config.xml and add it to that file to the source project as a existing item.The Config.xml file contains a LogFileAppender and a DebugAppender and they are both set to log all levels.
Figure 8 Config.xml FileChange the Properties of config.xml file "Copy to output directory =Copy Always".
Figure 9 Change Properties of Config.xml FileI have used 3 buttons and 1 TextBox with multiline enabled on the Main Form; see the following image.Create log Button will create the log file and if it is exists then it will append the log to the file. Open Log File Button will read the log file and display on the screen.Clear button will clear the screen.
Figure 10 Add 3 Buttons to Main ScreenCreate the logger; see the following image:
Figure 11 Create LoggerOn the click event of the Create Log Button, write the code as in the following:
Figure 12 Create Log Button Click Event CodeOn the click of the Open Log button:
Figure 13 Open Log Click Event CodeOn click of the Clear Button:
Figure 14 Clear Button Click Event CodeRun the project and select the deploy option.
It will deploy your project to the Simulator.Now you will see the output in the Simulator.
Conclusion: We can use log4net.dll in the Compact Framework 2.0 and its easy and stable. Thanks for reading.
Web Application Projects and Web Site Projects in Visual Studio
Set Multiple Startup Projects in Visual Studio
|
http://www.c-sharpcorner.com/UploadFile/cb88b2/logging-using-log4net-in-compact-framework-2-0-in-visual-stu/
|
CC-MAIN-2013-48
|
refinedweb
| 933
| 76.93
|
unless I've messed something, this is the usual python behavior for
loading modules. The problem is that your Root is not in your sys
path. You could add it to your path, it in your index.psp you could do
something like
<%
import sys
if [ull path to root] not in sys.path:
sys.path.append([full path to root])
%>
That should add root to your sys path then
<%@ page imports="FyreSite:FyreSite"%>
should work fine.
mind you the code is untested but ti should work
Jose
> -------- Original Message --------
> Subject: [Webware-discuss] Subdirectories
> From: "Lethalman" <lethalman@...>
> Date: Sun, October 31, 2004 6:00 am
> To: Webware-discuss@...
>
> (first sorry for my poor English)
> Webware is the best python framework for me, great job!
>
> This is my problem.
> I'm trying to connect some psp/servlets from a subdirectory to the root
> directory, but i can't import some modules.
> For example:
> Root/
> - FyreSite.py
> - index.psp
> - database/
> --- Config.fs
> --- DB.fs
> - admin/
> --- Admin_Panel.py
> --- index.psp
>
> Now, when i try to get admin/index.psp i can't import the class FyreSite
> from FyreSite.py and extend the PSP page for FyreSite and Admin_Panel
> I'm trying to find any solution to solve this stupid problem...
> This is the header of admin/index.psp:
> <%@ page imports="sys,os"%>
> <%
> self.dir = os.path.split(os.path.split(self.serverSidePath())[0])[0]
> sys.path.insert(0, self.dir)
> %>
> <%@ page imports="FyreSite:FyreSite"%>
> <%@ page extends="FyreSite,Admin_Panel"%>
> ...
>
> But it can't find the module named FyreSite... so i tried:
> ...
> sys.path.insert(0, self.dir)
> from FyreSite import FyreSite
> %>
> <%@ page extends="FyreSite,Admin_Panel"%>
>
> And it doesn't work... really i would like to extend Admin_Panel to
> FyreSite:
> class Admin_Panel(FyreSite)
>
> then extend admin/index.psp to Admin_Panel:
> <%@ page extends="Admin_Panel"%>
>
> But i can't extend Admin_Panel to FyreSite first because i haven't
> self.serverSidePath()
>
> Is there an alternative against copying FyreSite.py to each subdirectory
> or set a prefix Admin_ to each file then bring them to the Root
> directory and remove the subdirectories?
>
> Please help me...
>
> --
> Fyrebird Hosting Provider - Technical Department
>
>
>
> -------------------------------------------------------
> This SF.Net email is sponsored by:
> Sybase ASE Linux Express Edition - download now for FREE
> LinuxWorld Reader's Choice Award Winner for best database on Linux.
>
> _______________________________________________
> Webware-discuss mailing list
> Webware-discuss@...
>
|
http://sourceforge.net/p/webware/mailman/webware-discuss/thread/20041031185414.23040.qmail@webmail03.mesa1.secureserver.net/
|
CC-MAIN-2015-32
|
refinedweb
| 385
| 52.26
|
Bugtraq
mailing list archives
| There seemed to be no patch for Linux kernel to remove execute permission
| from the stack (to prevent most buffer overflow exploits), so I decided to
| make one, I include it at the end of this message. I heard some rumours that
| GCC assumes stack frame to be executable when dealing with nested functions,
| but I couldn't reproduce that. I'm running this patched kernel for a day now,
| and everything (well, except for the exploits) seems to work fine. However,
| some programs may depend on the stack being executable... I'd like to hear
| any reports of this.
Hopefully Linus will NOT install this patch without providing a configure
option to enable/disable it. It is not a rumor but a fact that GCC depends on
the stack being executable in order to support passing the address of nested
functions via trampolines. I really would prefer not to have a new quanity of
tests fail.
Alternatively, you could try to convince RMS and Kenner that trampolines are a
bad idea (I've been trying for 8 years), or root out all of the hidden
assumptions in the compiler that trampolnes are on the stack (been there, tried
it, gave up).
Here is a test case for trampolines:
#include <stdio.h>
int
g (int a, int b, int (*gi) (int, int))
{
printf ("Inside g, a = %d, b = %d, gi = 0x%.8lx\n", a, b, (long)gi);
fflush (stdout);
if ((*gi) (a, b))
return a;
else
return b;
}
void
f (void)
{
int i, j;
int f2 (int a, int b)
{
printf ("Inside f2, a = %d, b = %d\n", a, b);
fflush (stdout);
return a > b;
}
int f3 (int a, int b)
{
printf ("Inside f3, i = %d, j = %d\n", i, j);
fflush (stdout);
return i > j;
}
if (g (1, 2, f2) != 2) {
printf ("Trampoline call returned the wrong value\n");
fflush (stdout);
abort ();
}
i = 4;
j = 3;
if (g (5, 6, f3) != 5) {
printf ("Trampoline call returned the wrong value\n");
fflush (stdout);
abort ();
}
}
int
main (void)
{
printf ("Before trampoline call\n");
fflush (stdout);
f ();
printf ("Trampoline call succeeded\n");
fflush (stdout);
return 0;
}
By Date
By Thread
|
http://seclists.org/bugtraq/1997/Apr/32
|
CC-MAIN-2014-42
|
refinedweb
| 362
| 63.22
|
Matthew Schmidt replied on Fri, 2008/02/01 - 7:51am
Daniele Gariboldi replied on Fri, 2008/02/01 - 9:39am
I use seam (+JSF 1.2 + facelets + richfaces) + hibernate + tomcat + spring.
I started with jsf 1.1 and had to add a lot of 3rd party libs to solve common problems with JSF and web development.
Spring was a must from the start, and now it's well integrated in seam.
Seam let me consolidate and use fewer libs to manage for example onload page actions, or problems with redirects.
I think if you develop with JSF seam is a must, but think twice before using JSF: it has a deep learning curve and it's not flexible. Often you have to change your decisions and web design decision because of JSF and its components.
I would say seam is good despite JSF.
Rick Hightower replied on Fri, 2008/02/01 - 12:55pm
in response to:
Daniele Gariboldi
I work with people who think that. I also work with folks who think JSF is quite natural. We do fairly complex, feature rich webapplicatoins and its hard to imagine doing these apps without JSF.
I teach JSF classes, Spring MVC classes and in the past taught Struts classes (I also write courseware which is painful to do). I find that the students pick up JSF the fastest. JSF works best when you are building an application with rich features. If your webapplication is more like a website and less like an application than you are better off using something else. Plust JSF forces some behavior on your apps, if you are not happy with that behavior or can't abide by it, then JSF is not a good fit (When that happens, I use Spring MVC). However, I think there is a vast market for JSF based applications.
I recently wrote a series highlighting JSF development and how easy it is: JSF Tutorial Part 1, JSF Tutorial Part 2.
Carlos Sanchez replied on Fri, 2008/02/01 - 1:27pm
Seam is too JBoss centric. Spring is more "open", at least for now, we'll see. That's an important point to have into account: community, users, support, ...
Rick Hightower replied on Fri, 2008/02/01 - 1:31pm
in response to:
Carlos Sanchez
I agree they get that rep, whether its deserved or not. What can they do to change it? Also, Seam relies heavily on EJB3 or at least it did at first, which I think hurts it. I think if they advertised their Spring support more and added some Guice support, things would be better.
Cay Horstmann replied on Fri, 2008/02/01 - 1:50pm
I find it interesting that you say that people pick up JSF the fastest. Maybe it isn't as complex as its detractors make it out to be :-)
I find that my students pick up JPA very quickly as well, so Seam sounds like a winner for gluing the two together.
Seam doesn't need Hibernate persistence, but it is a bit of an effort to strip out the unnecessary pieces when you deploy on an app server other than JBoss. I think it would be good if the Seam packaging was more vendor-neutral.
Rick Hightower replied on Fri, 2008/02/01 - 2:01pm
in response to:
rouletteroulette rouletteroulette
David,
Good to hear from you. It has been a while since we been in the trenches together. Hopefully soon....
Very good point. We took a look at Seam in March 2007 and came up with a similar conclusion for our project. It was nice, but integrating yet another framework into what we were already doing seemed like asking for trouble. Plus the Spring support seemed nascent (at the time).
RestFaces and Apache Orchestra seem like good choices as well.... We should talk more often. I did not know you guys did that... I will have to pick your brain.
I have spoke to some folks that do really like working with Seam. Someone just integrated Seam and Crank (out of the blue).
Jim Hazen replied on Fri, 2008/02/01 - 2:04pm
* Are you using Seam now, if so what do you think?
Have used in the past. Not yet in production.
* How is the Seam learning curve?
Pretty shallow actually. Since it's geared towards smoothing the edges and easing JSF/EJB/JPA development you end up learning the framework quickly by solving one previous headache at a time. A J2EE Seam development cookbook would be a great resource.
* What is your experience working with Seam?
Been playing with it since early betas. Evolves at an amazing pace and yet continues to refine the total picture of J2EE web development. The Netbeans of frameworks. High quality, focused, integrated.
* What features of Seam can you not live without?
Annotations, contextual variables. Contextual variables = information discovered by a bean that can be scoped and injected into other beans and JSF EL.
* How good is the Seam generation tools?
Can't comment on the IDE tools however using seam-gen to get started on a new project without having to worry about dependencies is pure gold.
* Is Seam the best way to write a JPA, JSF based application?
Absolutely.
* How would you position Seam and Spring? competitors or complementary? overlap or augmentation?
Complementary for now. Because of the Spring integration both frameworks can coexist. However I'd anticipate that in the future Seam will prove to have the best tools for my jobs and my use of Spring beans will dissapear over time.
* If you are using Seam and Spring, and you had to live without one of them, which one would it be?
Spring
* Do you use Seam with or without EJB3?
Both. EJB provides many very useful middleware services in a nice standard package. With EJB 3.1 dropping the business interface requirement I don't see why anyone wouldn't want to use EJB with Seam. Unless they're using Groovy with Seam.
* Do you use Seam with or without Spring?
Both
* Where do you think Spring and Seam overlap and when they do overlap which one is better?
Dependency injection = Seam wins. Utility features = Seam has far supperior JSF support. As well as good EJB and JPA support. Is the Seam EJB/JPA support better than Spring? Yes. Both frameworks can inject an EJB or an EntityManager but in my book Seam consistently provides utility for working with EJB and JPA. Seam was first with great EJB/JSF integration (improved EL, improved components, data binders, converters, validators) not just variable resolver integration. It was also first with great JPA integration. EntityHome and Query frameworks are excellent value adds over prior XXXTemplate functionality. Seam also provides a zero config OSIV solution, something that many Spring users unfortunately still struggle with. I'm sure the same features can and perhaps have already been implemented in current versions of Spring. However Seam has gone out of its way to ensure it excels at supporting JSF+business services+data in a clean and integrated way. If I need a tool for a JSF/JPA problem I'm going to reach for Seam first. If I later find that Spring does a better job, I'll use the Seam/Spring bridge and be happy use the Spring bean.
* Do you think Spring JSF support is weak or strong?
Weak. When put next to JBoss' continued involvement in JSF and the JSF ecosystem (components, tools, frameworks, standards bodies) Spring doesn't compare. Sure, if someone were so inclined they could use the Spring framework to privide many of the same services that JBoss employees have already written for Seam. If someone were so inclined...weak.
Dan Allen replied on Fri, 2008/02/01 - 2:19pm
Seam is great software that is user focused. We get caught up in this word "framework" and sort of forget the real reason we are writing the software, to serve the needs of our users or our client's users. That sounds like a pretty lofty goal, so let me quickly get to the substance.
While entertaining for us developers, users really don't want to spend their days paginating through endless result sets. What they want is consolidation. They want advanced, real-time searches, reports in the form of PDF or Excel, charts, emails, file uploads, dynamic graphics, page flow wizards, workspaces, etc. Basically, they want all that stuff that is really hard to develop, or at least harder than dropping the database into a CRUD generation tool. With Seam, you get both the CRUD generation tool and then all of that extra stuff too.
The real question is, how difficult is it to accomplish these tasks in Seam and is it easier than using an alternative. I am not going to lie to you and say that you don't have to think. Even if you do focus your mind on the task, some people are going to be better than others at picking it up. My feeling, though, is that Seam requires you to type the least and get the most bang for your buck. It accomplishes this using annotations, XHTML-based templates, and JSF components. I have gone on too long without examples, so let me dish a couple out. (next post)
Rick Hightower replied on Fri, 2008/02/01 - 2:26pm
in response to:
Jim Hazen
Great to hear from you Jim. It is good to hear from someone who has used Seam. Thanks for the insight and opinions. Thanks for the detailed comment.
Jim Hazen replied on Fri, 2008/02/01 - 2:37pm
in response to:
Dan Allen
Dan Allen replied on Fri, 2008/02/01 - 2:38pm
in response to:
Carlos Sanchez
.
Dan Allen replied on Fri, 2008/02/01 - 7:07pm
You can add a pie chart to your page as follows (numbers are made up):
<p:piechart
<p:data
<p:data
<p:data
<p:data
</p:piechart>
The result is a nice JFreeChart piechart with four equal parts. Let's say you want to upload a file:
<s:fileUpload
The action handler (a Seam component perhaps) will have two properties populated when the form is submitted, a byte[] property named logo with the image data and a String property named logoContentType with the image content type sent by the browser. Now let's say you want to create a PDF:
<p:document
<p:font<p:paragraph>Framework Market Share Report</p:paragraph></p>
<p:piechart
<ui:repeat
<p:data
</ui:repeat>
</p:piechart>
<ui:repeat
<p:font<p:paragraph>#{_framework.name}</p:paragraph></p:font>
<p:image
<p:font<p:paragraph>#{_framework.summary}</p:paragraph></p:font>
</ui:repeat>
</p:document>
The PDF is rendered and pushed to the browser when the URL of this template is requested (perhaps /frameworkReport.seam). How about a component that handles this logic.
@Name("frameworkAction")
public class FrameworkAction {
@In private EntityManager entityManager;
@DataModel List<Framework> frameworks;
@Factory("frameworks")
public void loadFrameworks() {
frameworks = (List<Framework>) entityManger.createQuery("select f from Framework f").getResultList();
}
}
Granted, I went a little overboard on the component definition, but I did just made it up off the top of my head. The point is, Seam just gets you right to the features and you are having fun and getting the requirements done at the same time. To see more great examples of Seam and to learn all of the intimate details of components, context variables, conversations, page flows, business processes, JavaScript remoting, security, extended persistence contexts, and more check out my book Seam in Action. I have worked *very* hard on giving you all of the critical information that you not only need to use Seam, but to develop web applications with Java in general.
--
Dan Allen
Software Consultant / Author of Seam in Action / Committer on the JBoss Seam project
Dan Allen replied on Fri, 2008/02/01 - 2:54pm
You might be asking yourself, what about Spring? Seam and Spring are both competitors and complements. What Seam does for Spring is bring it state. That means extended persistence contexts, conversations, page flows, etc. You can use Spring to do what Spring does best and leave Seam in charge of maintaining state for the UI. In that regard, Seam and JSF have a similar relationship. Seam does not paint the UI, at least not a majority of it. That is left up to the extremely rich set of JSF components. My personal favorite is RichFaces because it looks nice and has just about all the components that I need on a daily basis. It also has the nice benefit of Ajax4jsf, which I discuss in my third IBM developerWorks article.
I apologize for this post being long and not well organized, but my point here has been to give you some substance, rather than another "you should use Seam" cheer that is mostly shallow. There are some rough spots in Seam, JSF, and just about any other framework we use. But with Seam, those rough spots are far outweighed by its ability to get you doing the advanced parts of your application very early on. You no longer have to dread those wild and crazy requirements that come from the user. If there is one thing to take away from this post, that would be it.
--
Dan Allen
Software consultant / Author of Seam in Action / Committer on JBoss Seam project
Rick Hightower replied on Fri, 2008/02/01 - 2:56pm
in response to:
Dan Allen
.
[/quote]
Bogus or not, that is a common perception so how do they overcome it. BTW I have a lot of respect for Carlos Sanchez... he is one of the smartest guys I've worked with and a real OS visionary so you may not agree with him, but you should at least hear him out.
Dan Allen replied on Fri, 2008/02/01 - 3:02pm
in response to:
Rick Hightower
I never made a claim that Carlos is stupid. I am very familiar with his work and I know he is extremely smart. What I was saying was not make stupid comments.
People think Seam is JBoss focused because people keep saying it is, not because it is. Perhaps it is my personal mission, but I hope to oust this myth. There have been a couple of members of the Seam project who have skipped sleep many at night to try to get Seam working on all application servers, a lot of that time spent because the application server has bugs, not because of bugs in Seam. There is a very strong effort and while I don't want to offend Carlos, he offended the work done by very dedicated folks on the Seam project. So if you have something to say, *back it up*.
--
Dan Allen
Software Consultant / Author of Seam in Action / Committer on the JBoss Seam project
Andrew Barton replied on Fri, 2008/02/01 - 3:28pm
in response to:
rouletteroulette rouletteroulette
Dan Allen replied on Fri, 2008/02/01 - 4:01pm
in response to:
Andrew Barton
Seam definitely does not require the use of EJB3. In fact, when I use Seam personally, I never use the EJB3 piece. I just use regular-old JavaBeans and annotate them with @Name and @Scope(ScopeType.CONVERSATION) to get a stateful component. I am also a bit confused as to why there is this notion that Seam is any larger than Orchestra. The core of Seam is about a 800K JAR file which bootstraps a JSF PhaseListener. That's pretty much all there is to it. You can then add additional features a la carte. What's better is that instead of just having a conversation scope, you can have Seam manage the persistence context so that it is extended over the lifetime of the conversation without you even having to think about it. Recently, JBoss Seam added Maven 2 support so you can start using Seam in your project in a handful of steps (). Keep watching for more information in this area.
Remember, Seam is not necessarily an all or nothing choice. You can use it for its strengths and still get the great features of other frameworks such as Spring, GWT, and Crank.
--
Dan Allen
Software Consultant / Author of Seam in Action / Committer on the JBoss Seam project
Magir Nrave replied on Fri, 2008/02/01 - 4:53pm
Rick Hightower replied on Sat, 2008/02/02 - 1:47am
in response to:
Dan Allen
I do a lot of consulting and I talk to a lot of decision makers. There are a lot of decision makers (guys who hold the purse strings) out there who feel similar to the way Carlos does (I am personally on the fence). So if it is a misconception, it seems to be common one. So how do you think JBoss can dispel this?
Dan Allen replied on Sun, 2008/02/03 - 2:24am
in response to:
Rick Hightower
Now that is a great question! As with all misinformation, the first step is admitting that it exists. Okay, we admit it exists. Next, you have to focus on your interop. JBoss hired a Seam developer specifically for this purpose. The Seam reference documentation is getting beefed up with a whole bunch of sections on running Seam on the various application servers. With that started, the next big step is building a great community. My mouth is zippered because I don't want to be the spoiler on this, but just know that before you are kissing your loved one on the cheek this Valentine's day, there is going to be a long awaited announcement in this area. Finally, you just keep making the software better, because at the end of the day, we really do just pick the best software out there.
On a side note, I really like what you are doing with Crank and I do hope that Seam developers have full opportunity to take advantage of the work you have done there. Integrating infrastructure like that into Seam is not difficult and worth it, as Seam offers a wide-range of integrations that act as a nice complement.
--
Dan Allen
Software Consultant / Author of Seam in Action / Committer on the JBoss Seam project
John Denver replied on Sun, 2008/02/03 - 6:08am
Spring Forever!. The best technology happened to Java, Changed the way on how to write middleware. I use Wicket+Spring also SpringMVC+Spring and sometimes JSF+Spring depend the project.Also Spring 2.5 include many annotations that you can do the same thing as seam with JSF and Spring and you don't need a heavyweight appserver.
Really EJB3 and Seam came to late why this projects didn't exist in 2003, Now is Spring Time!.
Magir Nrave replied on Sun, 2008/02/03 - 7:59am
in response to:
Dan Allen
Now that is a great question! As with all misinformation, the first step is admitting that it exists. Okay, we admit it exists.
[/quote]
It's amazing how the very next comment (by Sidewinder) proves this
Rick Hightower replied on Mon, 2008/02/04 - 1:30am
in response to:
Dan Allen
Thanks. I was not sure many folks noticed it. Good to hear.
Seam seems to have a lot of features.
Jim Hazen replied on Mon, 2008/02/04 - 3:30pm
in response to:
Rick Hightower
[quote=rhightower]So how do you think JBoss can dispel this?[/quote]
I think there are a few things that JBoss needs to be aware of/address.
1. Spring was first and it's already in production. Most development shops are already using Spring to some degree. They sold Spring by painting EJB as a heavyweight devil and Spring as a framework that was "lightweight" enough to do everything.
2. Developers are not dissapointed with Spring. Where there were a lot of developers thirsty for an alternative to EJB 2.x, there are much fewer actively looking to replace Spring. Spring development is active. There is room for improvement in every framework and Spring has actively evolved, introduced new beneficial improvements and made itself easier to use. There's a worry that Spring/Seam will leap frog each other release after release and since it's too difficult/costly to constatly switch from one to the other, shops stick which what they're running (Spring).
3. Selling Seam by selling JSF/EJB/JPA is a problem. While it's true that Seam makes using these great technologies easier, many assume the converse, that you need to use these technologies in order to use Seam. I hear time and time again that there's no point in Seam because we're using WebWork/Spring/Hibernate already. Is the JSF/EJB/JPA stack better? If so it'll need to be demonstrably better, a mere 10-15% better won't justify a migration.
--
IMHO Seam's Spring integration is the best thing it has going for it in terms of a selling point. Selling Seam as the best Spring plugin ever could greatly increase adoption. Spring is far too entrenched, and honestly, too good to be thrown out en mass. There isn't enough wrong with it to toss it.
However JBoss could/should argue that there are more things that could be better with Spring, enter Seam. Seam as a Spring plugin gives Spring developers some excellent new features. In the end both JBoss and Interface 21 have the common goal of delivering frameworks and tools to help Java developers. I have respect for both camps and could care less about brand loyalty. Seam's Spring support caught my eye because it demonstrated JBoss' willingness to achive this goal, giving me the developer the best tools regardless of camp or philosophy.
Once Spring users start to use and experience the wonders of their new Seam plugin, introducing them to active injection, conversation scope, the benefits of UI component driven and event driven development with JSF, ..., should be a much easier task. From there developers have added choice. They can leverage the Seam features that present added value to their project on a feature by feature basis. They can mix in Seam over time without any fear of "loosing Spring".
Personally I already like Seam (for its great JSF and JPA support), however I'm beginning to realize that fighting against Spring is a loosing battle. Once in the door, Seam my very well change things from within. At this point though I don't really care if Seam completely replaces Spring in my projects as long as I have the opportunity to use the Seam features that will help me now. I can't use those features without getting Seam in house as an "approved framework" and I can't do that without selling it as a way of decreasing our Spring development time while in no way replacing or ripping out Spring.
Rick Hightower replied on Mon, 2008/02/04 - 4:22pm
in response to:
Jim Hazen
Jim,
Thanks for your insight. You, David and Andy have given me a lot to think about.
How do you think Seam does against Apache Orchestra?
How do you think Seam Security does against Acegi?
It seems that Spring has mostly ignored JSF. There is very little support for JSF (I know what support is there and its not much). Unless you count Spring WebFlow, which.... another topic.
Jim Hazen replied on Mon, 2008/02/04 - 5:52pm
in response to:
Rick Hightower
I haven't taken a look at Apache Orchestra. From comments here it sounds like it implements some Seam conversation context features. But since I already have Seam I haven't looked at Orchestra.
Frankly we liked Acegi because of its tight and transparent integration into CAS our enterprise single sign-on provider. Now that we're moving away from CAS the value of Acegi (or really any security framework) has diminished for me. I don't need security driven display, or domain object filtering enforced at the application tier. Down the road if I do end up evaluating a security framework again, I'll reach for whatever most simply exposes the features I'm looking for.
I will say though that I personally would take a look at Seam Security first. When JBoss decides to do something they tend to carry the concept throughout the framework. Where as in my experience Spring as been good at providing additional granuals that a developer could composite if they wanted to, but tend to leave things at that level; able to be bolted on if so desired but not tightly woven out of the box. More and more I just want things to work out of the box and I'd like for core themes (like security) to be baked in and handled consistently throughout the range of core functions.
If for example if I had a web service being called by a remoted JSON object that had to choose an appropriate workflow based on parameter values and the transparent security credentials of the caller (as defined by the web user they were logged in as). I'm more confident that Seam would have a prepackaged solution than Spring. It may be that both frameworks can accomplish the same thing, but for whatever reason I assume I'll have to jump through fewer hoops to do it with Seam.
Rick Hightower replied on Tue, 2008/02/05 - 5:16am
in response to:
Jim Hazen
Rick Hightower replied on Wed, 2008/02/06 - 8:12pm
in response to:
Dan Allen
It was nice meeting you Dan. I hope your book does well.
BTW There are a few books on Seam already... what do you think your book has that the other book's don't? what is the differentiator for your book?
Rainer Eschen replied on Fri, 2008/02/15 - 1:51am
Rick, we're using Crank for some time now, but without the JSF integration from the Crank examples. We preferred ICEfaces and had no time to port this. So, at the moment we don't get Crank-paged tables and the like. We use ICEfaces plus its Facelets integration, but Spring-managed backing beans and Acegi, Crank on Hibernate. We came from a EJB 2.x/3.x environment and used Crank to trim our architecture. Do you think we should have a look at Seam to ease our JSF development?
Springsteam Blog - Next Generation Java Development
|
http://java.dzone.com/news/seam-201-released
|
CC-MAIN-2014-41
|
refinedweb
| 4,419
| 71.95
|
On 9/19/2010 1:37 PM, mafeusek at gmail.com wrote: > > Hallo Group Members. From time to time I see in python code following > notation that (as I believe) extends namespace of MyClass. No, it does not affect MyClass, just the instance dict. > class MyClass: > def __init__(self): > self.__dict__["maci"]=45 Have you seen exactly this usage? > > If the class has a .__setattr__ method, the first bypasses that method, the second results in it being called. The direct __dict__ access is most useful within a .__setattr__ method to avoid infinite recursion. > myCl = MyClass() > print myCl.maci -- Terry Jan Reedy
|
https://mail.python.org/pipermail/python-list/2010-September/587628.html
|
CC-MAIN-2016-44
|
refinedweb
| 101
| 72.12
|
Depth First Search (DFS)
Authors: Siyong Huang, Benjamin Qi
Contributors: Andrew Wang, Jason Chen
Recursively traversing a graph.
From the second resource:
Depth-first search (DFS) is a straightforward graph traversal technique. The algorithm begins at a starting node, and proceeds to all other nodes that are reachable from the starting node using the edges of the graph.
Depth-first search always follows a single path in the graph as long as it finds new nodes. After this, it returns to previous nodes and begins to explore other parts of the graph. The algorithm keeps track of visited nodes, so that it processes each node only once.
Application - Connected ComponentsApplication - Connected Components
Focus Problem – try your best to solve this problem before continuing!
A connected component is a maximal set of connected nodes in an undirected graph. In other words, two nodes are in the same connected component if and only if they can reach each other via edges in the graph.
In the above focus problem, the goal is to add the minimum possible number of edges such that the entire graph forms a single connected component.
Solution - Building RoadsSolution - Building Roads
Solution
Pro Tip
Some problems that can be solved with DFS, such as Comfortable Cows, may be more easily solved with a queue (described in the BFS module).
ProblemsProblems
Application - Graph Two-ColoringApplication - Graph Two-Coloring
Focus Problem – try your best to solve this problem before continuing!
Graph two-coloring refers to assigning a boolean value to each node of the graph, dictated by the edge configuration. The most common example of a two-colored graph is a bipartite graph, in which each edge connects two nodes of opposite colors.
In the above focus problem, the goal is to assign each node (friend) of the graph to one of two colors (teams), subject to the constraint that edges (friendships) connect two nodes of opposite colors. In other words, we need to check whether the input is a bipartite graph and output a valid coloring if it is.
Solution - Building TeamsSolution - Building Teams
The idea is that we can arbitrarily label a node and then run DFS. Every time we visit a new (unvisited) node, we set its color based on the edge rule. When we visit a previously visited node, check to see whether its color matches the edge rule.
C++
#include <cstdio>#include <vector>const int MN = 1e5+10;int N, M;bool bad, vis[MN], group[MN];std::vector<int> a[MN];void dfs(int n=1, bool g=0)
Java
Warning!
Because Java is so slow, an adjacency list using lists/arraylists results in TLE. Instead, the Java sample code uses the edge representation mentioned in the optional block above.
import java.io.*;import java.util.*;public class BuildingTeams{static InputReader in = new InputReader(System.in);static PrintWriter out = new PrintWriter(System.out);public static final int MN = 100010;public static final int MM = 200010;
ProblemsProblems
Module Progress:
Join the USACO Forum!
Stuck on a problem, or don't understand a module? Join the USACO Forum and get help from other competitive programmers!
|
https://usaco.guide/silver/dfs?lang=cpp
|
CC-MAIN-2022-40
|
refinedweb
| 519
| 53.31
|
Recently, tests.” Tad is a practitioner and proponent of Test-After Development. When you practice Test-After Development, you write application code first and then you write a unit test that tests the application code.
From the perspective of someone who practices Test-Driven Development, this gets things backwards. I believe that it is an essential part of Test-Driven Development that you must write your unit test before writing any application code. Why does it matter?
Test-Driven Development is first and foremost an application design methodology. If you write your unit tests after you write your application code, then you are not driving the design of your application with your unit tests. In other words, Test-After Development ignores the Driven in Test-Driven Development.
In order to support Test-Driven Development, the ASP.NET MVC framework needs to support two things: testability and incremental design (what Martin Fowler calls Evolutionary Design). If you are only interested in Test-After Development, then you will ignore this second requirement to the detriment of those of us who are interested in true Test-Driven Development.
Let’s consider a concrete scenario: building a forums application.
Building a Forums Application with Test-Driven Development
Here are the steps that I would follow to build a forums application by using Test-Driven Development:
1. Write a list of user stories that describe what the forums application should do. These user stories should be non-technical (the type of thing that a customer would write).
2. Pick a user story and express the user story in a unit test.
3. Write just enough code to pass the unit test. In other words, do the simplest thing that could possibly work to pass the unit test.
4. Consider refactoring my code to improve the design of my application. I can fearlessly refactor because my code is covered by unit tests (see RefactorMercilessly).
5. Repeat steps 2 – 3 until I have completed the application (keeping in mind that the user stories might change over the course of the process of writing the application).
So, I might start with a list of user stories that look like this:
1. Can see all of the forum posts
2. Can create a new forum post
3. Can reply to a forum post
And, I would express the requirement embodied in the first user story with a unit test that looks like this:
[TestMethod] public void CanListForumPosts() { // Arrange var controller = new ForumController(); // Act var result = (ViewResult)controller.Index(); // Assert var forumPosts = (ICollection)result.ViewData.Model; CollectionAssert.AllItemsAreInstancesOfType(forumPosts, typeof(ForumPost)); }
This unit test verifies that invoking the Index() action on the Forum controller class returns a collection of forum posts. Currently, this unit test fails (I can’t even compile it) because I have not created a ForumController or ForumPost class.
Following good Test-Driven Development design methodology, at this point, I am only allowed to write enough code to make this unit test pass. And, I should make the test pass in the easiest and simplest way possible (I’m not allowed to go off and write a massive forums library however tempting that might be).
To make this test pass, I need to create a ForumsController class and a ForumPost class. Here’s the code for the ForumsController class:
using System.Collections.Generic; using System.Web.Mvc; using Forums.Models; namespace Forums.Controllers { public class ForumController : Controller { // // GET: /Forum/ public ActionResult Index() { var forumPosts = new List<ForumPost>(); return View(forumPosts); } } }
Notice how simple the Index() method is. The Index() method simply creates a collection of forum posts and returns it.
From the perspective of good software design, this controller is horrible. I’m mixing responsibilities. My Data access code should go in a separate class. And, even worse, this controller doesn’t actually do anything useful at the moment.
However, from the perspective of Test-Driven Development, this is exactly the right way to initially create the Forums controller. Test-Driven Development enforces incremental design. I am only allowed to write enough code to pass my unit tests.
Test-Driven Development forces developers to focus on writing the code that they need right now instead of writing code that they might need in the future. Two of the important guiding principles behind Test-Driven Development are “Keep It Simple, Stupid” (KISS) and “You Ain’t Gonna Need It” (YAGNI) (see Wikipedia and C2).
Eventually, after repeating the cycle of writing a unit test and writing just enough code to pass the test, you will start to notice duplication in your code. At that point, you can refactor your code to improve the design of your code. You will be able to refactor your code fearlessly because your code is covered by unit tests.
The important point here is that the design of your application should be driven by your unit tests. You don’t start with design principles and create an application. Instead, you incrementally improve the design of your application after each cycle of test and code.
Building a Forums Application with Test-After Development
A proponent of Test-After Development takes a very different approach to the process of building an application. Someone who practices Test-After Development starts by writing application code and then writes a unit test after the application code is written. More to the point, a proponent of Test-After Development makes all of their design decisions up front.
The crucial difference between Test-Driven Development and Test-After Development is a difference in belief about the importance of incremental design. Practitioners of Test-Driven Development take baby steps in improving the design of an application. Practitioners of Test-After Development attempt to implement good design from the very start.
Here are the steps that a practitioner of Test-After Development would take to building a forums application:
1. Create a list of user stories.
2. Consider the best design for the application (create separate controller and repository classes).
3. Write application code that follows the design.
4. Write unit tests for the code.
5. Repeat steps 2 – 4 until the forums application is completed.
Unlike someone who practices Test-Driven Development, a proponent of Test-After Development would start by creating separate Forums controller and repository classes.
For example, the Forums controller would look like this:
using System.Web.Mvc; using TADApp.Models; namespace TADApp.Controllers { public class ForumsController : Controller { private IForumsRepository _repository; public ForumsController() :this(new ForumsRepository()){} public ForumsController(IForumsRepository repository) { _repository = repository; } public ActionResult Index() { var forumPosts = _repository.ListForumPosts(); return View(forumPosts); } } }
And, the repository class would look like this:
using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace TADApp.Models { public interface IForumsRepository { IEnumerable<ForumPost> ListForumPosts(); } public class ForumsRepository : IForumsRepository { private ForumsDBEntities _entities = new ForumsDBEntities(); #region IForumsRepository Members public IEnumerable<ForumPost> ListForumPosts() { return _entities.ForumPostSet.ToList(); } #endregion } }
Next, the proponent of Test-After Development would create a unit test for the Forums controller that looks like this:
[TestMethod] public void CanListForumPosts() { // Arrange var mockRepository = new Mock<IForumsRepository>(); mockRepository.Expect(r => r.ListForumPosts()).Returns(new List<ForumPost>()); var controller = new ForumsController(mockRepository.Object); // Act var result = (ViewResult)controller.Index(); // Assert var forumPosts = (ICollection)result.ViewData.Model; CollectionAssert.AllItemsAreInstancesOfType(forumPosts, typeof(ForumPost)); }
This unit test mocks the Forums repository (by mocking the IForumsRepository interface) and verifies that the Forums controller returns a set of forum posts.
Unit Tests versus TDD Tests
One place where the proponent of Test-Driven Development and the proponent of Test-After Development strongly differ is on the subject of unit tests. I disagree with Tad about the purpose — and the correct way to write — unit tests.
When I practice Test-Driven Development, I start with a test and then I write just enough code to pass the test. I use the tests as a safety net for change. In particular, I use the tests as a safety net so I can fearlessly refactor my application code to improve the design of my application.
When using Test-Driven Development to create the forums application, my first test verified that the Forums controller returns a list of forum posts. I would keep that test even after I refactor the design of my application to migrate my data access logic into a separate repository class. I need the original test to verify that I haven’t broken my original application code when refactoring my application to have a better design.
My unit tests flow directly from the user stories. After I add a unit test, I almost never remove it. I might refactor my unit tests to prevent code duplication in my tests. However, I don’t change what the unit tests are tests for.
A proponent of Test-After Development, in contrast, is constantly changing their tests. When Tad rewrites his application logic, Tad rewrites his unit tests. Tad’s unit tests are driven by his application design.
From the very beginning, Tad would create a separate Forums controller class and repository class. He would create distinct sets of unit tests for the Forums controller and the repository class. When Tad refactors his application to improve the design of his application, Tad rewrites his unit tests.
Suppose, for example, that both Tad and I decided to add support for validation to the Forums application. If someone submits a forum post with an empty Title, we both want to display a validation error message.
I would take the approach of testing whether or not the forums controller returns a validation error message in ModelState when I attempt to create an invalid forum post. My unit test would look something like this:
); }
This test verifies that a validation error message is included in model state when you attempt to create a new forum post without supplying a subject. Regardless of how I end up refactoring my application (for example, to use a separate validation service layer), I would keep this unit test to verify that my application continues to satisfy the requirement expressed by the user story.
Tad, on the other hand, would never create a test that verifies whether or not the Forums controller returns a validation error message. Tad would argue that it is not the responsibility of a controller to perform validation. The responsibility of a controller is to control application flow.
Tad would write a unit test for his validation logic. However, the nature of his unit tests would be dependent on the architectural design of his application. If Tad uses a separate service layer to contain his validation logic, then he would write unit tests that verify the behavior of the service layer. If Tad uses validator attributes to perform validation, then he would write unit tests that verify the presence of the expected validator attributes.
Tad would argue that my unit tests aren’t really unit tests at all. Over time, as the design of my application evolves, my unit tests start to resemble functional (or acceptance) tests. They are really verifying the outputs of the application given a certain input. My unit tests are independent of the application design.
I would agree with Tad, but I would argue that the tests that you write when performing Test-Driven Development have a different purpose than standard unit tests. A TDD test, in contrast to a unit test, does not necessarily test a separate unit of code. Instead, a TDD test is used to test “little areas of responsibility, and that could be a part of a class or it could be several classes together” (see Martin Fowler).
This is not to say that a TDD test is the same as an acceptance test. An acceptance test is used to test an application end-to-end (with the database and UI hooked up). A TDD test, on the other hand, is not an end-to-end test. A TDD test does not have external dependencies and it is designed to be executed very fast. A TDD test is used to test whether a particular requirement derived from a user story has been satisfied (see Uncle Bob).
From the perspective of Test-Driven Development, the purpose of unit tests is to drive the design of an application. A unit test tells me what application code I need to write next. For example, I don’t know how I will implement my validation logic when I create a unit test. I should not be making these design decisions up front. My test tells me what I am allowed to do and what I must do. The unit test provides me with the minimum and maximum criterion for success.
My primary objection to Tad’s approach to building applications is that it forces premature design decisions. Tad makes design decisions first and then creates his unit tests. I create my unit tests first and then create my design (see Jeff Langr). Tad’s approach does not allow for an Evolutionary Approach to design.
Conclusion
So why should any of this matter? The ASP.NET MVC framework was designed to be highly testable. Therefore, it should keep proponents of both Test-Driven Development and Test-After Development happy. Right?
The point of this blog entry is to claim that the ASP.NET MVC framework needs to support more than testability to support Test-Driven Development. To enable Test-Driven Development, the ASP.NET MVC framework was designed to support both testability and incremental design. From the perspective of a practitioner of Test-Driven Development, if a framework does not enable you to get from point A to point B by taking baby design steps, then there is no way to get to a well designed application at all.
Right in time. Just what i needed. 🙂
A good distinction between TAD and TDD – but I fail to see how MVC makes a choice here. From what I can tell, Phil and team devoted a massive amount of time to keeping this framework as testable as possible.
In terms of incremental design – you bring up validators:
>>> If someone submits a forum post with an empty Title, we both want to display a validation error message.
Good test. You then bring how you would test it:
>>>I would take the approach of testing whether or not the forums controller returns a validation error message in ModelState when I attempt to create an invalid forum post
… and that’s not a good Unit Test. You’re not actually testing that a forum post cannot have an empty title – you’re testing whether the Controller and ModelState work.
Taking your query literally:
>>> If someone submits a forum post with an empty Title, we both want to display a validation error message
This is *not* the purview of Unit Testing, nor does it have anything to do with incremental design. This is something that’s a bit tough to deal with (TAD or TDD) in ASP.NET but you can do it with a tool like WatiN or Watir.
To me the core of the issue isn’t that ASP.NET MVC “doesn’t support incremental design” – it’s what you choose to incrementally design. In face I’ll go so far as to say “what does ASP.NET MVC have to do with your unit tests right now anyway”?
Design your model – focus on your validations in whichever layer you choose. You can crank out ViewModel’s even! And when you’re ready – layer on the web app. All your testable stuff is handled and you won’t need to worry about logic in the Controller or ModelState because, as you very well mention – they don’t belong there anyway :).
Hi Rob,
No one is suggesting that the MVC framework is not highly testable – that is why we all love it. The point of this blog post is to challenge Tad’s assumption that there is no difference between test-driven development and Test-After Development except when you write your tests. Also, I believe that you should unit test any application functionality that could break (unless testing would take too long)
Well, to be honest, I’m the one of the ‘Test-After’. Thanks for your article, I’ll come to the ‘Test Driven’!
Test-after folks commonly make this mistake when comparing TDD/TAD. You hit the nail on the head, it’s about driving design. However in your post you are writing code test *exactly* as they would look if you were doing TAD.
If you were doing TDD, you probably wouldn’t have any controllers yet. You’d start with an entity. Then a repository. Then you might introduce another concept, such as validation.
Tying these to a UI is another aspect.
Good unit tests test things in isolation. This is why testing that your controller puts stuff in ModelState is not such a great idea. Sure we want to make sure this happens eventually, but this is not the stuff from which you drive good design.
Don’t get me wrong, I applaud your effort in bring TDD to more widespread understanding and adoption, but this post seems to just be proving the TAD folks point.
Stephen, thanks for the post. The TDD/TAD distinction can be nuanced at first, but once people get the hang of TDD, it yields far greater benefits beyond TAD. TAD is just about testing. TDD is testing, too, but also design, analysis, examination of design choices, experience from an API consumer perspective, and so much more.
Rob and Ben both bring up good points and it may seem like nitpicking. Both are experienced practitioners and have found flaws with your approach. Please do not take their criticisms as attacks, but rather them seeing you as someone they can have a higher level conversation with. Basically, you’ve reached Level 1, now let’s talk about Level 2 and higher!
Keep up the effort and don’t stop learning. I believe TDD really can make anyone a better developer, but it requires some discipline and rigor which is hard to learn on one’s own without an experienced practitioner to help someone through the tough spots. It’s hard to teach these practices via a blog post. Live evens and one-on-one pairing sessions are usually the most effective.
The trick is… how do you capture and share that with other people en masse? That’s where we have the hardest time.
Intresting Post Stephen 🙂 Looking forward to seeing all the responses and then blog posts that spawn from this.
@Ben: I’m not so sure i agree with everything in your reply. I’m 150% behind the ‘driving design’ comment, so we can put that aside. I want to talk about your suggestion that controllers shouldn’t be handled yet, but entities and repository.
AFAIK, Stephen is engineering an asp.net mvc _website_. So far he hasn’t really thought about his persistence (aka repository/ies). Nor how this will be modelled. He’s thinking about a website. So he’s generated his stories / scenario’s and picks the first one and starts with that. Ok, lets show a list of posts. and off he goes, baby steps to get the first unit test going, driven by his design of his end goal -> a website.
Now I’m not necessarily saying that’s the best or right way to do something. I’m still a bit undecided personally, on what to ‘start with’. But I can understand his journey based upon his personal design-direction.
Traditionally, I would create the stories and scenarios, then delve deeper and say how this is going to be model’d and then even look at the persistence of this. Even more common or traditional is how people start with database design BEFORE any website code or design. I personally feel this way is flawed .. and I too used to do this years ago – I’m guessing a majority of us might have.
For myself, i’m still not sure what is the best approach to start with. Like, i have my website stories and scenarios. ie. my finite domain. Now do i start to model this domain? Do i go as far as modelling a repository, even if it’s just a fake test repo (ie. in memory lists)? Or is the ‘better/more practical’ approach to wait to do that stuff a bit later and get to there, _when u need it_. So, if i was going to have a resource (i nearly said ‘page’ but that is sooooo 90’s/early 00’s/web forms and we’ve evolved beyond that) that lists blog posts, lets start with that -> and a controller is the key to that, right? We’re trying to list blog posts on an asp.net mvc app. scenario is /forum/index or /forum/list . I don’t see anything about a database or how posts are modelled.
I’m just trying to understand all this stuff – not personally attack you or anyone else, so please don’t misread my words as anything mean or rude .. if anything it’s the complete opposite. 🙂
At least we’re all starting to think about unit testing, even if it’s before or after .. at least it’s part of the equation.
Thank. Gawd. For. ASP.NET MVC.
-PK-
Ah, thanks for the clarification of the term “unit test”. In our safety-critical software environment we call your TDD unit tests, “requirement-based tests” (RBT). These tests verify the required *functionality* or behavior of the app/function. DO-178B is the gov’t standard that spells all this out as well as what integration test are and what they need to test.
I have struggled with my push for TDD in my division as when one hears one must write unit tests first, their jaws drop and give me the “are you insane” glare. To them, unit test are super, low-level test verifying the rigor and robustness of the implementation. Writing these types of tests first would me absurd…as you point out in your posting under the Tad scenario. As you point out, the big value is in thewriting of RBT ahead of code to give you the safety net and drive design.
Thanks again.
I would love it if people would just have tests period so we had the opportunity to argue TAD versus TDD. 😉
I am definitely on the side of TDD, especially if you have a good set of use cases to start with. TDD forces you to think of the behaviors before you actually code them. I am becomming a bit more a fan of BDD, as well, although I do not have a BDD framework right now to truly take it the full nine yards.
As for TAD. When combined with proper refactoring, you can end up with the same endpoint as TDD. TDD, at least in every case I can think of, will get you there faster, but the person who states “the only difference is when the tests are written” may have a point if he is, in fact, refactoring to a proper design. Note, however, that the endpoint is the same … not the journey … nor the time spent getting there.
Great article.
I’ve been playing with MVC I have Vs 2008 standard edition I installed XUNIT for testing
it’s a nightmare are there any tutorials on how to make it work or on how to work with it in MVC
I keep getting this error
Error 1 The type or namespace name ‘HttpContextBase’ could not be found (are you missing a using directive or an assembly reference?)ProjectsNerdDinnerNerdDinner.TestsRoutesRouteFacts.cs 101 29 NerdDinner.Tests
I dont think any of the learning material I’ve
came across is much use if you cant do the tests every thing seems to center around TDD
what would the work around be It would be nice to use VS for testing and not some open source bug infested softwear
@PK where you’re headed is something that DDD tries to tackle a bit (in my very small knowledge of it). FIrst – I agree that we overthink things. It’s the tool – it lets us :).
At the same time in terms of “where to start” – start small! Know that you can change course – rename your tests – expand – whatever! The only thing you should “know” when doing TDD is what’s in front of you in the form of requirements.
So – in Stephen’s case – he laid out a set of User Stories that are interaction-focused:
1. Can see all of the forum posts
2. Can create a new forum post
3. Can reply to a forum post
The thing about these is that they don’t describe the behavior of the application (more to the point – the “title is required” behavior under test isn’t part of the story here… ). In that I think we have the conceptual break. What might be a better place to start (in getting to your question) is pushing your client to understand the behavior of the thing you’re trying to build:
1) The Forum consists of Posts, written by Users who want to ask each other questions.
2) A Post has a Title and Date which must be filled out, and an Author with an Email that are also required.
3) Duplicate posts (same body and text) are not allowed
4) Authors cannot post more than 1 post every 4 minutes (to avoid spam).
In this we’re focusing on behaviors and the responsibility of creating this list falls on you asking the right questions so you can model the right behaviors. In this small list we probably have mess of tests. Where to get started? Write the first sentence out as a test title, knowing you will change it.
Ideally you can keep the reviews to the conversation level and use something like Balsamiq to keep your client happy WRT visuals. Ideally you pull as much behavioral knowledge out and test it – with a growing Model that does what you client wants it to do (sans visuals). Once that’s done – plugging in the UI is trivial.
PS – it’s fine to think of your app from the outside-in. I pushed this idea in my MIX 09 talk (thinking like a scripter). I believe in this since you’re trying to deliver an experience and not battleship gray, lifeless forms. That said – there is a point when you start that, and in my mind it’s after you work up your model :).
Hello Stephen,
thanks for the post on this issue, which is very important in TDD projects.
I have had the chance to work with TDD. We had a team of developers who are new to the .NET Framework.
To explain to them to write the test before the code, was really wiered to them.
Here’s how we introduced TDD to the team (critics are more than welcome):
Instead, we actually explained to them, to think of a test as a way to conceive the method, class or function we create.
It helps us think about how we want to use our code, it also helps us understand the code we write, and avoid too much untested copy and paste.
With this in mind, some team members started calling, asking for help on C#, but the nice thing, is they would ask for help on the unit test first, as they see, that, every technical lead and project manager, is in favor of this method.
One drawback though. We had an issue using Libraries like Entreprise Library from Microsoft. Alhtough Ent. Lib. is United Tested, team some team members would say it’s too complex to use such a library with TDD.
In my opinion, writing and testing a data access layer, requires a lot of care, as simple as it might seem, consequences can be tuff, as it is a heavily used layer.
To stay away from conflicts, we kind of cheated. I took an open source data access library for .net 2.0 :), I had used it in many projects, wrote unit tests for the main methods we need, and started our Data Access layer based on the code we used. Everyone was happy, but no one actually asked me where the code comes from, because I understood it and explained it well to them. Then I actually mentionned where the code came from.
What about the cost, concerning the projet, and the timeline ?
Do we have to rewrite everytime, a data access layer, it will definitely push us towards copy and paste ?
Stephen, thanks for this post. It allows for a great discussion.
I’m a big believer in the incremental design and you are quite right when you say that TDD simplifies the process. I having done test after several times I had indeed the need to go back to the tests to change them in order to comply with changes in the architecture. If I had done higher level tests as you propose this wouldn’t have happened.
Having said that I still find very hard to push TDD in a lot of companies, the initial time it takes to right the tests often scares managers as the can’t see that this time will be far smaller than the time developers are going to spend of fixing bugs. What I try to do is convince people to do any amount of testing (even if it’s TAD) in order for them to see the benefits of unit testing. I don’t think that TAD is completely wrong, it’s just a different thing than TDD.
I do agree with you that the MVC framework allow for TDD because that is where many people are heading, and if you are going to do TDD you should do it the right way or don’t call it TDD at all 🙂
Phil and the team are doing a great job and this kind of discussion only adds to the process. I can’t tell you how good it is to see that you guys are having these kind of discussions at Microsoft. It shows how much you care about the future of MVC.
Good job guys!
@rick1 — It sounds like you are missing the reference to the System.Web.Abstractions.dll in your unit test project. Select the Test project, select the menu option Project, Add Reference and add the reference from under the .NET tab. Make sure that you have downloaded and installed Service Pack 1 and the ASP.NET MVC framework. You can download the ASP.NET MVC framework from.
I’m still working my way through what proper TDD is, but it seems to me the test from your example doesn’t really satisfy the requirement. The requirement is that a user can list all the forum posts, yet you are only testing that a collection of items of the correct type are returned. You are not testing that, in fact, all of the forum posts are listed (Your test would pass, even if the collection was always returned as empty). To truly test this requirement, you would need some sort of pre-condition (ie. a know list of posts), and then verify the returned collection contained this list. At least, this is my understanding of how TDD should function.
Also, you mention that the TAD approach requires rewriting the tests as the application design changes. However, wouldn’t you need to refactor the existing test as your design evolved (ie. when you added a repository to your application design, wouldn’t you want to mock that repository in your existing test)? This is just my understanding of TDD and Unit Testing so far, someone please correct me if I’m way off 😉
@Ryan McIlmoyl — The goal is to create one or more tests that express the intention behind the user story. I would start with the test that checks whether a collection of forum posts was returned. But, this single test might not capture everything that you intended with the original story of being able to see all of the forum posts. For example, you might flesh out the story by saying that the posts should be returned in a certain order, or that you should get at least one post back, and so on. In that case, you would write additional tests to capture these more refined intentions.
@Stephen
Thanks for clarifying that one. I think one of the problems in understanding TDD is that the examples used are often trivial and simple, making it hard to relate to the actual problems we as developers are trying to solve. (Although I understand the point of your post was not an in-depth description of what TDD is, but rather to contrast with the TAD approach).
Another issue I’ve seen with TAD is the same issue we often see with optimizations. They all get put on the pile of stuff to do ‘after’, and then never get done.
@Stephen
Thanks bingo that got rid of all the errors how simple I’m going through your sample chapters of MVC Unleashed when I have spare time I use ASP.NET 3.5 Unleashed every step of the way Great book I couldn’t live without it
I’m realy looking forward to this new book
Thanks for the great work you guys are doing
The pay now or pay later mindset of the TDD folks is rediculous. I know its passe at this point but business people prefer RAD with TAD over TDD. They want to be able to touch and feel something as soon as possible, so they can start making changes. I guess its just the industry you are in. Shooting Rockets into space – TDD. Writing a Business app with constantly changing requirements – TDD costs too much up front.
The community needs to distinguish between Test First and Test After development better. In fact, I think TDD should stand for Test Driven Design. The power of Test First development is that a lot of coding decisions that are somewhat arbitrary become meaningful, hence your design is affected by your tests. For example, I could make a static helper class or use a fancy design pattern like the Command pattern to do the same thing. Which is better? From a testing viewpoint, it is more difficult to unit test static classes, so I go with the the code that makes my testing easier or even doable in some cases. This is a very contrived example, but the gist is that my code is written to be testable from the get go which produces a much different product than throwing tests on at the end. Test first development forces thought about the dependencies in your code to a higher level.
Test first and test after development isn’t slower in the sense that you can rework your code at a much faster rate and it’s in general better written. I wish the stigma that this process is slower would be quantified with some real data.
Rob Conery makes an excellent point and I kind of take it as people define unit tests differently and you have to test what makes sense for your app and your development process.
Thank you for another fine post. I’m very interested in the TDD approach and I think the comments to this post are a great conversation about it. I’d love to see more webcasts where various practitioners of this art (TDD) show their methodologies; quite a mind-opener. I want to design my own projects this way, but I must say it’s close to impossible to do TDD in the office environments I’ve worked in. The leaders/managers will be like, “we need to encorporate these various external libraries and API’s, and this other team has these services we need to use, and the UI has to completely change like so & so, and there’s this stack of action items we need to plow through, and it needs to be done by end-of-month because so & so has already booked a flight to Florida to show the most important client how great it all is”… in the face of that, I’m not at liberty to open up Visual Studio and start typing [TEST] WhenUserTypesValueIntoCityComboBoxAndValueIsNotFoundInListAppropriateCommandsAndActionsBecomeDisabledAndEventFromIncomingSubscribedServiceIsBlockedAndNotAllowedToDisplayModalDialog(). I friggin’ swallow more Mountain Dew and start slamming code down, then turn around, become the user for a few minutes while another developer invokes the external service, hit Ctrl-Shift-Alt-Elbow plus a bunch of additional keystrokes and see if I can break the application. (No, I don’t believe that’s an appropriate methodology, merely a way I’ve been able to stay employed in various scenarios I’ve found myself in.)
Hi Stephen, great article, great effort.
I dislike these types of posts. They come across as evangelical rather than logical. There is a whole lot of hype around TDD at the moment but remember that it is just the current fad at not the magic bullet that everyone wants it to be. There will be another fad around the corner and another and another ad nauseum.
Personally I use TDD for some projects and for others I design up-front, write the code and add the tests. There are benefits to both approaches and there are drawbacks. You are correct that TDD drives the design and this is probably the most important part of TDD. However, sometimes the YAGNI principle that tends to go hand-in-hand with TDD (yes I know it is a generally good principle elsewhere too) causes the TDD practioner to go down the wrong path commercially. Remember too that you don’t necessarily need TDD to drive design. You could perhaps… just do some design.
I worry that with many developers who don’t fully understand the principles, you end up with worse quality code or with people stuck in the refactoring loop and becoming inefficient, ineefective and unproductive. Too often I have seen developers spend ten days developing a simple module that should have taken one day without TDD. I have been guilty of this myself having been trapped in the “must have 100% coverage” mentality. I have also seen people struggle for an hour to test a piece of code that took a minute to write and that they *know* works because it is incredibly simple but awkward to test because of the limitations of the testing tools.
I guess we are just using a fancy new hammer to bash in that screw.
I’m really trying to get on this TDD boat somewhere.
Thanks for the great post, Steve.
Here is my 2 cents: I have been reading about TDD for more than 6 months. And whenever people say the most important thing in TDD is the *Driven* part, I used to say to myself ah.. whatever, I get it. But, I have started *practicing* TDD for a couple of days now and it seems somebody struck me with a sledge hammer, listening to other people talking about TDD just doesn’t cut it, You really have to *experience* it. For anybody who really thinks they *get* TDD and yet don’t practice it I suggest you actually practice TDD for a week, that’s when you’ll actually *get* it. And thanks for the good work Steve.
@Bob Saggett
Totally agree with you.
@Stephen
Great article, thanks for pointing out the difference.
But as Bob said TDD is not the holy grail. Just a good pattern for the moment
First of all,Stephen, thank you for your post. I go along with you on TAD is not TDD.
In fact, I’d go further. As an idea for the future, do you think that it could be possible to create a mechanism to force to develop a test before the application? A silly example, not being able to build the solution if there’s no at least one test associated with every public method.
In my view, the point is that everybody know how to write code directly, and you can see the results immediately. But, writing tests before developing is annoying.
Therefore, Why not to force us to write the test before?
Thanks Stephen for the interesting comparison of TDD and TAD. I found it is very useful information to me.
I want to point out a mistake in your test case for TAD. Under //Assert section, there is no meaning to assert if the Forums controller returned a list of post, because you just did returned an (empty) list on line 6. Instead, you may want to verify the mock’s expectations, mockRepository.VerifyAll(). Therefore, the TAD test case becomes meaningful. It actually confirms that controller.Index() should call into repository.ListForumPosts()). This is a white-box test that verifies behavior. On the other hand, the TDD test case is pretty much a black-box, which verifies the state only.
Will you share your thoughts on white-box vs black-box unit test?
Regardless of anything else it is funny that you disabled comment on the subsequent post.
You deleted comments? Stephen I don’t know what to say. Why write posts like that if you don’t want to hear what people have to say?
Stephen:
Thanks for the comment. I apologize if it sounds like I was criticizing your post here, as that was not the intent. I was trying to build on the idea based on a comment someone had made to me about your article, which I viewed as a misunderstanding of your point.
I really do see the industry obsession with UI as a primary reason for apps being tightly coupled with little Separation of Concern. I also see the focus on testing MVC from the UI down as an impedement to good design in many cases.
Continue to post on your TDD efforts in MVC. I believe it helps get the focus on proper application design.
Peace and Grace,
Greg
@Colin – Disabling comments on the other post was a difficult decision. I found the comments interesting and valuable. However, the last few comments had very little to do with the blog post and I didn’t want the conversation to degenerate. My plan is to re-enable the original comments in a few days.
I’m not sure I agree that a TDD approach means you won’t have to rewrite tests if your application logic changes. I’m currently trying to keep a TDD approach going developing an MVC app, and have had to scrap tons of tests frequently due to both application logic changes and design/refactoring changes. This could well mean I’m not doing TDD correctly- I do suspect I’m ‘cutting corners’ when it comes to incremental design. For example, I write the test first, watch it fail, but then ‘know’ I’ll need to use a repository and just go ahead with that approach to pass the test.
So I suspect my coverage isn’t great, and tests aren’t 100% kosher. But even with this number of tests I find myself worrying about refactoring, renaming, etc. “because I don’t want to have to rewrite the tests.” I usually go ahead with it anyway, but it is something I’ve noticed…
Nice post and good comments.
finally i find this thank for posting this..
@Stephen:
Great post as usual!
I think I’m on board with the benefits of using the test first TDD approach. But what about when you begin to refactor, would you recommend writing tests before doing that?
For example: In the controller there is a Create() method that adds a post. I know this works because I have a test that covers this controller method, so with warm and fuzzies I begin to refactor.
I decide I want to move my validation logic into a service class. Would you:
a) Write a new TDD test for my service class method and then implement that method?
b) Simply move the validation logic from my controller class to my service class (no need to write a new test at all because my original test still passes)?
c) Move the validation logic from my controller class to my service class and then write a new unit test to cover the new service method?
Nice post! It’s nice to see people advocating the design benefits of TDD. This is one of the biggest challenges I have at Microsoft – people see TDD as purely a test technique and argue that tests before or after makes no difference. They are wrong, IMHO.
I would modify your steps to TDD a little bit. It is important to write just enough code to make your code compile AND fail the test. It is important to make the test FAIL as this is a test of your test. You want to see the red, in the red-green-refactor step as validation that the code you write is immplementing the right functionality (seeing the green).
Code coverage is actually another important distinction between test-after and test-first. My line (vs. path) coverage with TDD automatically approaches 100% as I never write any code without having corresponding tests. That is harder to achieve with test-after. I can also use coverage feedback (not the number, but covered lines) in a different way. Coverage is now interesting but not mandatory, but with test-after you really need code coverage to measure how well you are testing. Yet another advantage of writing tests first.
I have written a set of TDD-related blog posts that complement yours quite nicely here:
blogs.msdn.com/…/default.aspx
Thank you for stating the difference!
This is exactly the argue I had with one of the “old school TDD” folks. I care about the design of my code, therefore my specifications are written first, not to have tests as guard dogs to make sure any deviation from the original intent throws exceptions just because an irresponsible team member changes code prior to make observation first.
10x
to be honest too, im same as Jack.. im one of the test-after.. this post is a really great one.. thx for sharing this Stephen.
Thanks to this post!
to be honest too, i’, also same as Jack.. im one of the test-after.. this post is a really great one.. thx for sharing this Stephen.
Helpful post…
Hi Stephen,
Firstly, great post, I’m new to TDD and still ultimately trying to fully comprehend what you should test and when so I hope no one takes my comments too strongly and slates me for it if I’m totally in the wrong. I’ve started a project and having read your post I’ve realised I’m heading down the route of TAD. This got me thinking a great deal and compared the arguments and having thought about TAD I’m not overly concerned that I might be going down the wrong path. It seems to me that it comes down to the preference of the developer and the project on where you start testing.
In my project I already know that I want to split out my layers into a repository and service layer. This is where I’ve started my coding rather from the website which seems to be your preference to start. I don’t think there is anything wrong with either my starting point or yours. Please feel free to correct me if you disagree. In my case because I know where the project is heading and is likely to head/expand in the future it seemed good sense to split from the outset and save me some time rather than refactor out further down the line. That said, I’m still trying to take babysteps and create tests on the repository and then the service before testing my mvc controllers. This way I can ensure I have got adequate tests on each unit in each layer and my user story is covered by a test on the mvc controller to test my expected outcome. I read the Fowler white paper where he commented:
‘If.’
This is what got me thinking it is important to have some plan from the outset as to the design, especially if you have a developer who has a great deal of experience and is likely to see the benefits of using certain design patterns from the outset. It really doesn’t matter if you start testing/developing on the service/repository layer as long as tests are in place to capture any future changes that are likely to unwittingly break your application in the future. That said, I think once I have my tests in place it’s a good opportunity, and a safety net as you rightly point out, that I can go back and refactor to see if I can better my code. In this respect, am I not following a mixed path of TDD and TAD?
Can’t TAD & TDD work together? I’m building my first MVC app using guidance from the ASP.NET MVC Storefront Starter Kit and started down the TDD path. But once I’ve finallized how I’m going to do all of my CRUD screens, I hand off to TAD because the design is solidified and the only real change is what fields I’m validating. This allows me to keep moving forward and get my prototype done without getting bogged down updating hundreds of tests on nearly identical objects. If I had a junior developer on staff, then I’d have him/her banging out my tests. To use the engineering example from above, then TDD comes up with ‘the Dog house’ and then TOD can test out the variations of the dog houses.
Thank you for stating the difference!
This is exactly the argue I had with one of the “old school TDD” folks
Nice post!
I am always late to these “parties”! 🙂
Thanks for keeping developers taking about testing.
I’m looking many code here..:)
but this is great test.
Okay this code have been test after development. Keep it;s online good job
Apropos one of the responses above, in my experience I have found that stories, no matter how well written, fail to capture all the detailed behavior of the features in the application. In particular, the writer of the stories will leave out the constraints, edge cases and negative cases. For example, subject of the blog post is required, body cannot be empty.
One thing I am experimenting with now is to have the testers start at the same time as the developers. After a few days of the sprint (I am following SCRUM), the testers need to have the test cases done. The testers have the mind-set to look at not only the positive “happy” cases but also on the constraints and out-of-the ordinary cases. The QA test cases for a feature, in my opinion, really capture everything the testers can think of how the feature will be used. I then get the developers to review each test case and make sure that their code and their tests capture these cases.
Another issue I am hoping to avoid is developers using the missing details in the story as a crutch for parts of features, especially outside the happy case not working.
My theory is that the test cases is what QA will test the feature with. If each test case is coded and working, there should really be practically zero bugs.
Let’s see how this goes. Would love to hear your comments.
I think it is just a concept rather than approach. Test first and test after development is not slower in the sense that you can re-work your code at a much faster rate and it is in general should be better written. Looking at it in the different view make me laugh a bit. hehe. btw good article though
Ya really i agree with you. Its very useful. Thanks.
Thanks Stephen! Good code, you always giving the best.
Interesting post, especially since I am not sure there really is a holy grail (TDD or TAD). I am somewhere between the TDD and the TAD.
My question is whether you believe that TDD ALWAYS yields the better results. I’m still not conviced it does.
Let’s assume you are implementing a simple application based on ASP.NET MVC which always uses the same design principles (fetch data from repository, run a few business rules, perhaps write some data back to repository, return view).
About 90% of these action methods do not require any new design. So TDD would not have any advantage over TAD when implementing those.
Since I am usually coding slower when using TDD than when using TAD, I see no reason to use TDD in those instances. Doing it nevertheless seems like following the methodology for the sake of the methodology to me, the price being wasted development time.
Don’t get me wrong, I am not saying TDD is worthless. When implementing something non-trivial (in other words, coding something where I am not sure upfront about the design), then TDD is definitely useful. But for implementing data-driven web pages with ASP.NET MVC, I found that this is rarely the case.
Thanks stephen. I look through the code and it definitely help. Just what i was looking for. thanks.
thanks useful script
fweq this is given in attachment. I understand that very well. Thanks.
fweq this is given in attachment. I understand that very well. Thanks. Online education accreditation | Distance education accreditation
thanks useful script International Education accreditation |International Accreditation
Pretty good post. I just found your site and wanted to say that I have really enjoyed browsing your posts.In any case I’ll be subscribing to your blog and I hope you post again soon!
great article.
A concern that pops up when I read about TDD, with the “do the minimum right now” code approach, is how do you get a team of developers to work together, solving common problems just once?
Without a guiding design up front, wouldn’t developers solve comparable problems with new code? I’ve seen too many projects working as a collection of individual developers rather than a team, with the result of a huge amount of duplication. TDD seems to me like it steers you down that path, and it’s a rocky path to hell.
What have I not ‘got’ about TDD?
Nice post and good comments.
Hi..great information here..
i’m see many code here.
and try to understand.. 🙂
Good post! Thanks
Really interesting.
I have read a lot about this on other articles written by other people, but I must admit that you is the best.
Quite an interesting post. It was quite new to me.… ( Custom Logo Design – stationery design – logo graphic design )
nice article.
Termpaper| Term Papers| Term Paper Writing| Term Paper Help|
I dislike these types of posts.
Custom Term Paper| Buy Term Paper| Online Term Paper| UK Term Paper|
f3e It’s lucky to know this, if it is really true. Companies tend not to realize when they create security holes from day-to-day operation.
Monsters Vs Aliens movie download
Moonraker movie download
Morning Light movie download
Musafir movie download
My Best Friend’s Girl movie download
My Bloody Valentine movie download
My Blue Heaven movie download
My Super Ex-Girlfriend movie download
Namastey London movie download
Necessary Evil movie download
Nekromantik movie download
Network movie download
Never Back Down movie download
Never Surrender movie download
New In Town movie download
Next Day Air movie download
Night Train movie download
Nightwatch movie download
Retail Logos | Sports Logos | Consulting Logo
Engineering Logos | Media DF
|
https://stephenwalther.com/archive/2009/04/08/test-after-development-is-not-test-driven-development
|
CC-MAIN-2021-31
|
refinedweb
| 9,354
| 63.09
|
I've been trying to push for PHPTAL use in several projects in the last years but I haven't been very successful. I ended up writing my own version from scratch a couple of months ago, leveraging new PHP 5.3 features and the bundled XML parsers + Tidy for legacy templates.
The main point in implementing my own version was to make it extremely modular to support custom storage (file, string, pdo...), easily extendable namespaces, tales modifiers, pre/post filters and code generation in other languages besides PHP (Javascript). While PHPTAL supports most of that stuff it's showing its age and doesn't makes use of modern PHP's OO features. That's why I target 5.3, so I can tell management that the library is using 'latests technology' and is easily hookable into existing frameworks like ZF or Symfony. Using a BSD like license was also required in some projects. As for integration with IDEs, being an XML format I guess the only thing needed is to implement a DTD and a RelaxNG schema to cover most editors. I also agree on that it needs to be more 'marketting friendly', the home page is an invaluable resource given that the manual is quite complete but it does not sale the product. And a section with tips and tricks (zebra rows, working with javascript, ...) would also help a lot newcomers. Iván On Fri, May 30, 2008 at 5:33 PM, Patrick Burke <[EMAIL PROTECTED]> wrote: > Anton, yes I am a developer of PHPTAL but I'm not on the PHTPAL > developer list. My modifications have been specific to the company I > was implementing it for but I've made more general modifications along > the way, like some memcached/compiled templates trickery I used at the > last place I did a gig for, and I'm ready to make a formal > contribution so the PHPTAL Developers can tell me what's up/what they > think. But even if my own contributions get shot down I'd like to see > development of PHPTAL geared towards the people I have to pitch to: > CTOs, IT Managers, Directors/Producers of Internet Programming, etc. > Just for a while anyways, just until there's undeniable awareness of > PHPTAL in that segment of IT personnel. I think it's only a matter of > adding and promoting a few more "enterprise" features to the project > (I really, reeeeally hate that "enterprise" term but...you probably > know what I mean when I say it so that's why I'm using it). > > I've probably got another couple of months before I get up to speed on > the latest version of PHPTAL and then implement the memcached thingy I > did. Hopefully I'm an "official" PHPTAL developer at that point. > > But do you get what I'm saying about the direction of development in > the short term? It's just that I know too many CTO types that have > actually heard of Smarty but are baffled by this new-fangled PHPTAL > stuff, and they aren't even keen on hearing about it until I start > sneaking in comments like, "See, that problem just simply doesn't ever > happen when you use PHPTAL", or "We wouldn't worry about that > bottleneck if we had something like PHPTAL's system". > > Know what I mean? > > Hew > _______________________________________________ PHPTAL mailing list PHPTAL@lists.motion-twin.com
|
https://www.mail-archive.com/phptal@lists.motion-twin.com/msg00244.html
|
CC-MAIN-2018-39
|
refinedweb
| 566
| 58.42
|
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources.
A useful feature of Seam DAO components is that they can be declaratively instantiated in the Seam components.xml file, so you do not even need to write any data access code. Let’s look at an example for the Person entity bean adopted from previous examples. Since the DAO now manages the entity bean, you no longer need the @Name annotation on the entity bean:
@Entity public class Person implements Serializable { private long id; private String name; private int age; private String email; private String comment; ... Getter and Setter Methods ... }
|
http://my.safaribooksonline.com/book/programming/java/9780137151660/the-seam-crud-application-framework/ch16lev1sec3
|
CC-MAIN-2013-48
|
refinedweb
| 108
| 53.71
|
Google Groups
First steps with Julia, performance tips
Wes McKinney
May 1, 2012 12:36 PM
Posted in group:
julia-dev
hey guys,
I plan to have a tinker with Julia now and then, being mainly a
Python/Cython hacker. Compiled Julia from git master today in advance
of Stefan's talk tonight.
I was curious if Julia does any optimization of array expressions, so
I set up a very simple benchmark: (Note, I am _not_ trolling even if
it seems like it and just looking to understand Julia's JIT and what's
going on)
function test1()
n = 1000000
a = randn(n)
b = randn(n)
c = randn(n)
for i=1:500
result = sum(a + b + c)
end
end
@time test1()
On my machine this takes about 14-15ms per iteration. OK. Time to fire
up IPython and see how NumPy (not actually super optimized, I've had
to reimplement loads of things in NumPy by hand in Cython) does:
In [6]: timeit (a + b + c).sum()
100 loops, best of 3: 11 ms per loop
OK, not bad. Time for NumExpr:
In [13]: import numexpr as ne
In [12]: timeit ne.evaluate('sum(a + b + c)')
100 loops, best of 3: 6.9 ms per loop
So, how much performance is actually left on the table? C function:
double add_things(double *a, double *b, double *c, int n) {
register double result = 0;
register int i;
for (i = 0; i < n; ++i)
{
result += *a++ + *b++ + *c++;
}
return result;
}
Wrapped in Cython and compiled:
cdef extern from "foo.h":
double add_things(double *a, double *b, double *c, int n)
def cython_test(ndarray a, ndarray b, ndarray c):
return add_things(<double*> a.data,
<double*> b.data,
<double*> c.data, len(a))
In [6]: timeit cython_test(a, b, c)
100 loops, best of 3: 2.26 ms per loop
Turns out doing C isn't even really necessary, straight Cython will do:
def cython_test2(ndarray[float64_t] a, ndarray[float64_t] b,
ndarray[float64_t] c):
cdef:
Py_ssize_t i, n = len(a)
float64_t result = 0
for i in range(n):
result += a[i] + b[i] + c[i]
return result
In [5]: timeit cython_test2(a, b, c)
100 loops, best of 3: 2.25 ms per loop
So here's the question: am I doing it wrong? Even in NumExpr above you
can get much better performance in array operations with no true JIT
(it has a VM that tries to eliminate temporaries). But NumExpr is
extremely limited. At minimum I was very surprised that vanilla
Python, temporaries and all, wins out over Julia in this simple
benchmark. Note that the %timeit function disables Python's GC which
may be having an effect in the timings.
cheers and looking forward to tonight's talk,
Wes
Next post
|
https://groups.google.com/forum/?_escaped_fragment_=msg/julia-dev/KDNaN1ZCcck/bLbeqS_ZlBEJ
|
CC-MAIN-2018-13
|
refinedweb
| 461
| 67.28
|
How is data deleted? Aerospike separates the data into two parts: index and value. The index is always stored in DRAM, the value can be stored in either SSD or DRAM (with or without disk for persistence). When a record is deleted, the reference to it is removed from the index. The actual data is not removed from the disk. Another process will find that the data on disk is not being used and reclaim the space.
Note that it is possible for deleted object to reappear. For this to happen the following has to occur:
- The node must be configured to load data from disk. This means that in the file “/etc/aerospike/aerospike.conf” that the variable “cold-start-empty” be set to false for the namespace.
- That data has been deleted, but not yet removed from disk. (i.e. the index entry has been removed). The node has failed.The process “asd” has stopped either due to machine failure or the process was killed.
In this case when the node starts, it will read the data from disk and rebuild the index. Because the data has not been removed from disk, the node will think it is still active and build a new index entry for it. So the deleted object will return. If you know you will be taking down a node, you can prevent deleted data from returning by using the fast restart feature. This will hold the index in memory even when the database process has gone down.
|
https://discuss.aerospike.com/t/how-is-data-deleted/183
|
CC-MAIN-2018-30
|
refinedweb
| 254
| 83.66
|
in reply to Speeding up/parallelizing hundreds of HEAD requests
I'm taking another approach to this problem... based on the comments from theorbtwo. The current code looks like this:
sub gimme_guten_tables {
my ($decoded, $maximum) = @_;
$decoded =~ s,<li>\n(.*?)\n</li>,$1,g;
$decoded =~ s,(.*?)<br><description>.*?</description>,$1,g;
$decoded =~ s,<ul>(.*?)</ul>,$1,g;
$decoded =~ s,<li>(.*?)</li>,$1,g;
$decoded =~ s,<\/?ol>,,g;
$decoded =~ s,<html xmlns:<body><ul>
+,,;
$decoded =~ s,</ul></body></html>\n.*,,;
$decoded =~ s,^\n<a,<a,g;
my @gutenbooks = ($decoded =~ /([^\r\n]+)(?:[\r\n]{1,2}|$)/sg);
my $guten_tables;
my ($link_status, $plkr_type, $html_type, $text_type);
my $count = 1;
for my $line (@gutenbooks[0 .. $maximum-1]) {
if ($line && $line =~ m/(.*?)(?: \((\d+)\))?<\/
+a>/) {
my $splitguten = join('/', split(/ */, $1));
my $clipguten = substr($splitguten, -2, 2, '');
my $readmarks = $3 ? $3 : $1;
my $title = $2;
$title =~ s,by (.*?)</a>,</a> by $1,g;
my %gutentypes = (
plucker => {
'mirror' => "
+er/$1/$1",
'content-type' => 'application/prs.plucker',
'string' => 'Plucker',
'format' => 'pdb'
},
html => {
'mirror' => "
+guten/$1/$1-h/$1-h.htm",
'content-type' => 'text/html',
'string' => 'Marked-up HTML',
'format' => 'html'
},
text => {
'mirror' => "
+plitguten/$1/$1.txt",
'content-type' => 'text/plain',
'string' => 'Plain text',
'format' => 'txt'
},
);
for my $types ( sort keys %gutentypes ) {
my ($status, $type) = test_head($gutentypes{$types}{mirror});
if ($status == 200) {
$gutentypes{$types}{link} =
qq{<a href="$gutentypes{$types}{mirror}">$gutentypes{$t
+ypes}{format}</a>\n};
} else {
$gutentypes{$types}{link} =
qq{<s>$gutentypes{$types}{format}</s>};
}
}
$guten_tables .= qq{<tr>
<td width="40" align="center">$count</td>
<td width="40" align="right">$readmarks</td>
<td width="500">
<a href="">$title</a>
</td>
<td align="center">$gutentypes{plucker}{link}</td>
<td align="center">$gutentypes{html}{link}</td>
<td align="center">$gutentypes{text}{link}</td>
</tr>\n};
$count++;
}
}
$guten_tables =~ s,\&,\&,g;
$guten_tables =~ s,>\n\s+<,><,g;
return $guten_tables;
}
sub test_head {
my $url = shift;
my $ua = LWP::UserAgent->new();
$ua->agent('Mozilla/5.0 (Windows; U; Windows NT 5.1;) Firefox/2.0.0
+.6');
my $request = HTTP::Request->new(HEAD => $url);
my $response = $ua->request($request);
my $status = $response->status_line;
my $type = $response->header('Content-Type');
my $content = $response->content;
$status =~ m/(\d+)/;
return ($1, $type);
}
[download]
In this code, I'm taking an array, @gutenbooks, splitting out the etext id ($1) and the etext title ($2), and creating a hash of the 3 different formats of that work (pdb, html, txt).
For each link I create, I pass it through test_head(), and check to see if it returns a '200' status or not. If the link is a '200' (i.e. exists, and is valid), I create a clickable link to it. If the link is NOT '200', then I don't link to it (i.e. I don't create a link that the user can click, to get a 404 or missing document).
What I'd like to try to implement, is a way to take all of the links at once, pass them into some sub, and parallelize the HEAD check across them and return answers based on that check.
But here is where I'm stuck...
I have no experience with LWP::Parallel, LWP::ParallelUA, LWP::Parallel::ForkManager and the like (passing references, callbacks, etc.)
Can some monk give me a strong nudge in the right direction?
The docs for these modules assume I am just statically definiing the urls I want to check... and I can't do that; everything will be coming out of a dynamic, ever-changing array.
Thanks.
You could add two lines to your code above to achieve your goal.
...
async{
...
}
...
[download]
Of course, a complete solution would add a few more lines in order to terminate slow or absent mirrors. And a couple (2 or 3) more to share the results of the asynchronous calls with the main thread of the code.
The total absence of the word "threads" from your question and responses suggests that you will not consider such a solution...and I've gotton out of the habit of expending time producing and testing solutions that will likely simple be ignored. But for the problem you are trying to solve, threads is the simplest, fastest, easiest to understand solution.
It is also the case that I am not currently in a position to offer a tested solution, and unfortunate that even those here that do not dismiss threads as a viable solution, rarely seem to offer code.
C'est la v.
|
http://www.perlmonks.org/index.pl/jacques?node_id=639482
|
CC-MAIN-2018-17
|
refinedweb
| 732
| 56.45
|
Web 2.0, Meet JavaScript 2.0
Well I suppose it's an undeniable fact about us programmer-types - every now and then we just can't help but get excited about something really nerdy. For me right now, that is definitely JavaScript 2.0. I was just taking a look at the proposed specifications and I am really, truly excited about what we have coming.OP!
Had to start with this one - it's so big it had to be first. Introducing actual classes and interfaces into the language is likely the most radical change that will come with JavaScript 2.0. Technically speaking, OOP will be nothing new for JavaScript. JavaScript already has objects and even offers full support for inheritance through the prototype chain - however, prototypal inheritance in JavaScript is tricky business and can often produce unexpected results (especially for those accustomed to classical inheritance in languages such as Java and C#).
Not only is JavaScript introducing classes into the language,
I'll be honest and admit that I wasn't sure if the error gets thrown on assignment, or when you attempt to read. For now, examples in the documentation are sparse, but either way this will be a handy feature. The mechanism that makes this new operator possible is union types (also new). I won't be delving into those in this article, but they would definitely be worth reading up on at the original source.Real Namespaces
JavaScript developers have long been implementing namespaces by stuffing everything into a single global object. While this is not a bad convention (and it's much better than cluttering the global namespace), the reality is that it abuses the purpose of objects for the sake of simulating pseudo-namespaces. Well, have a guilty conscience no more, because now you have real bonafide namespaces that are actually made for being, well,));
Even in this short, fanciful example you can see how your code can start to take on a highly organized and logical structure - much like many of the server side languages you may have worked with. Personally, I think this is one of the strongest improvements found in the specifications.Conclusion
Well, needless to say, JavaScript 2.0 is shaping up to be a devastatingly awesome improvement. The specifications go on for about 40 pages of size 12 font, so I'm not even going to try and provide a complete overview. But as I've said, everything I've mentioned above can be found in the proposed language overview (PDF) - and there's several more goodies to be found in there as well. Thanks for reading!
- Login or register to post comments
- 2991 reads
- Printer-friendly version
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
kkkkkkkkkkkkkkk... replied on Wed, 2008/03/19 - 5:23pm
Thanks for the write-up, this is really exciting to me.
I have a feeling that JS, if widely-adopted from the outset, could drive web applications into a whole new realm. Although JS 1.x has always been a very powerful language, its always been hindered by its lack of support for some key features found in other languages (real classes, namespaces, constants, etc), and I'm very glad to see these issues being addressed. Currently, many of these problems are addressed in an ad-hoc manner that is many times inelegant, wordy, confusing, and prone to implementation problems.
I just hope that A) backwards compatibility will be a non-issue, and B) adoption is universal and full.
Jeremy Martin replied on Wed, 2008/03/19 - 5:29pm
in response to: kenman
|
http://css.dzone.com/news/web-20-meet-javascript-20
|
crawl-002
|
refinedweb
| 615
| 59.64
|
ATTR, VOP_SETATTR — get and set attributes on a file or directory
#include <sys/param.h>
#include <sys/vnode.h>
int
VOP_GETATTR(struct vnode *vp, struct vattr *vap, struct ucred *cred);
int
VOP_SETATTR(struct vnode *vp, struct vattr *vap, struct ucred *cred);.
VOP_GETATTR() returns 0 if it was able to retrieve the attribute data via
*vap, otherwise an appropriate error is returned. VOP_SETATTR() returns
zero if the attributes were changed successfully, otherwise an
appropriate error is returned.
[EPERM] The file is immutable.
[EACCES] The caller does not have permission to modify the file
or directory attributes.
[EROFS] The file system is read-only.
VFS(9), vnode(9), VOP_ACCESS.
|
http://gnu.wiki/man9/VOP_GETATTR.9freebsd.php
|
CC-MAIN-2018-22
|
refinedweb
| 107
| 51.95
|
Commit e8f3010f authored byBrowse files
Committed by
Catalin Marinas
arm64/efi: isolate EFI stub from the kernel proper
Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but actually, since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position dependent executable linked at a high virtual offset. So instead, separate the contents of libstub and its dependencies, by putting them into their own namespace by prefixing all of its symbols with __efistub. This way, we have tight control over what parts of the kernel proper are referenced by the stub. 139 additions and 20 deletions
|
https://gitlab.flux.utah.edu/xcap/xcap-capability-linux/-/commit/e8f3010f7326c00368dbc057bd052bec80dfc072
|
CC-MAIN-2021-25
|
refinedweb
| 143
| 51.52
|
Advanced Namespace Tools blog 26 February 2017
Implementing /srv Namespaces Part Three
Even though the patch for private /srv namespaces was working right, there were a couple details that I felt weren't quite right. One was the calls to srvclose that happened during process exit after the sgrp data structure was nilled out. The other was the fact that the base sgrp for process 1 was created on its first access to srv. I also felt when working in a private /srv namespace that I sometimes wanted to be able to share something back to a process in a different srvgroup, but without a shared /srv, the only way to do that was via /net listeners.
Setting Up a srv group for Process 1
This requires making a change to /sys/src/pc/main.c, and similarly for other arches. It's just a matter of adding the expected code to userinit() in the same place that pgrp, egrp, fgrp, and rgrp are set.
p->sgrp = smalloc(sizeof(Sgrp)); p->sgrp->ref = 1;
Setting this up here means that it will no longer wait until process 1 accesses the srv device to assign this information, which means that branches could be trimmed from rfork handling in sysproc.c. The downside to doing this here is that it also needs to be added to pc64/main.c and similarly for main.c in every arch. At the moment, I only provide the modified configuration files for pc and pc64, so I followed that example and only provide the modified main.c in those subdirs of the frontmods dir. Anyone who decides to use ANTS on a different arch will need to add those lines to their arch's main.c.
Rearranging sgrp = nil during Process Exit
The problematic sequence of events happened during pexit() in proc.c. I had added cleanup of the srv group pointer in similar fashion to how other similar resources were cleared:
/* nil out all the resources under lock (free later) */ qlock(&up->debug); fgrp = up->fgrp; up->fgrp = nil; egrp = up->egrp; up->egrp = nil; rgrp = up->rgrp; up->rgrp = nil; sgrp = up->sgrp; up-); if(sgrp != nil) closesgrp(sgrp); if(dot != nil) cclose(dot); if(pgrp != nil) closepgrp(pgrp);
The issue is that when closefgrp(fgrp) is called, it may be closing file descriptors from /srv, but the process sgrp has already been nilled out. This is what caused calls to srvclose within devsrv with up->sgrp as nil. The solution was to change the sequencing and wait to nil sgrp until after closefgrp() completes:
/* nil out all the resources under lock (free later) */ qlock(&up->debug); fgrp = up->fgrp; up->fgrp = nil; egrp = up->egrp; up->egrp = nil; rgrp = up->rgrp; up-); /* sgrp is nilled out here because closefgrp may need srvclose */ qlock(&up->debug); sgrp = up->sgrp; up->sgrp = nil; qunlock(&up->debug); if(sgrp != nil) closesgrp(sgrp); if(dot != nil) cclose(dot); if(pgrp != nil) closepgrp(pgrp);
With this change made, the srvprocset() function I had added to devsrv to guard against nil pointers in sgrp became unnecessary. It was no longer being hit from srvinit() because process 1 was created with an sgrp, and it was no longer being hit from srvclose() during process exit because the sgrp pointer was maintained until after closefgrp().
Bringing Back a Global srv with devzrv
Using rfork V to enter a clean /srv namespace had been working as designed, but the absence of a global srv felt restricting in some cases. A sub-environment may wish to share pieces of its namespace via srvfs or provide execution resources via hubfs, and with no global srv available, this would require using aux/listen1 to share resources exclusively via /net.
The solution for maximum flexibility is to provide a duplicate #s device under the name of #z which maintains the previous /srv behavior and doesn't split after rfork V. Amusingly enough, I had already prepared this as a patch months ago when first investigating independent srvs. Making a copy of the srv device under a different name was trivial: two functions needed a different public name (zrvname and zrvrenameuser) and they needed to be invoked as appropriate. For instance, srvname is called by devproc.c when printing information from the namespace file:
if(strcmp(cm->to->path->s, "#M") == 0){ srv = srvname(cm->to->mchan); if(srv == nil) srv = zrvname(cm->to->mchan); i = snprint(buf, nbuf, "mount %s %s %s %s\n", flag, srv==nil? cm->to->mchan->path->s : srv, mh->from->path->s, cm->spec? cm->spec : ""); free(srv); }
Note the use of the kernel-internal-label #M for the mount driver. What this code is doing is checking if a channel comes from the mount device, and if so, it asks devsrv to tell it the name of the service which is providing that chan. All I added was the additional check of zrvname if the first check of srvname returns nil. I can only admire the fearless use of the ternary within the snprint by the previous coder.
|
http://doc.9gridchan.org/blog/170226.srv.implement.pt3
|
CC-MAIN-2017-22
|
refinedweb
| 848
| 59.03
|
Hi,
I would like to initialize a zero loss tensor which will be used to accumulate losses dynamically based on some condition. Sometimes, there will not be any loss accumulation and in this case, when I backpropagate, it throws the following run time error
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Sample code looks like this:
import torch # some inputs x = torch.rand(10, 3) # some targets y = torch.rand(10, 1) # some parameter W = torch.nn.Parameter(torch.rand(3, 1)) # total loss total_loss = torch.tensor([0.0]) # some condition if torch.rand(1) > 0.5: loss = torch.sum((torch.matmul(x, W) - y) ** 2) total_loss += loss total_loss.backward()
So, the question is how to initialize this zero loss tensor for handling this?
|
https://discuss.pytorch.org/t/how-to-initialize-zero-loss-tensor/86888
|
CC-MAIN-2022-21
|
refinedweb
| 132
| 58.89
|
[~Solved]How to ignore the system's DPI setting?
- Asperamanca
I had a puzzling effect in a GraphicsScene, where I use QGraphicsTextItem. There are two items which show the same data. One of them shows a bigger font that the other.
!(Two texts)!
(The plain texts are different in the screenshot to make the items more easily distinguishable, but the effect occurs even if the text is identical)
The solution lies in the system's DPI setting. When creating the QFont for a newly inserted item, Qt seems to recognize the system's DPI setting ("large fonts") and creates a font that looks slightly bigger.
The second item is instead created from data read from an XML stream. The data itself is identical, but the QFont seems to be created in a context where the system's DPI setting cannot be recognized. Hence it's displayed smaller.
The problem is that for this graphical representation, I absolutely cannot have the system's DPI setting fool around with my object sizes. They must look the same in relation to each other (e.g. a text in relation to a nearby graphic), regardless of the local system settings.
How do I tell Qt to ignore the system's DPI setting?
Hello,
I was wondering if you found any solution as to ignoring DPI settings. Thanks again, I have similar issue.
Thanks
- Asperamanca
Right, I completely forgot about that thread.
My solution is to set the font size in pixels (setPixelSize). I calculate the desired pixel size according to a "point size at 96 dpi", which is easy:
@int pointSizeToPixelSize(const int pointSize)
{
return ((pointSize * 4) / 3);
}@
"4 / 3" is a simplification of "96 / 72", so for any other dpi, just calculate "dpi / 72"
Thank you, it worked.
|
https://forum.qt.io/topic/24674/solved-how-to-ignore-the-system-s-dpi-setting
|
CC-MAIN-2018-26
|
refinedweb
| 295
| 63.9
|
Fraction Simplifier [TUT] [C++] [Python!!]
INTRADUCKTION
Hello Hello, it is I, the famous Whippingdot!! Thank you for the roses, thank you. Anyway, this is a tutorial on creating a fraction simplifier. This is going to be a
three four(maybe) part series of creating calculators. It gets harder and longer as time goes but...who cares. We will conquer all calculators. BTW I will remove the '[Python!!]' from the title if you want me to, as it is not a Python tutorial, but a C++(I put that Python thing there cause more people wud look at this 🤓🤓). This was supposed to be a four-part series of python calculators but @elipie created a hate post on python tutorials. Should we start...I think we should 😁😁. Also, if you didn't know, this is a tutorial on creating a fraction simplifier in C++. A fraction simplifier is a program that finds the most simplified form of a fraction you input.
Note: One thing I forget to say later is that in C++ you end almost all executable lines with a semi-colon(this excludes function's end curly parenthesis, #include, etc.)
Prerequisites: You should know what a fraction is and what a numerator is(the top of a fraction) and what a denominator is(the bottom of a fraction)
NOICE STOWART
Let us start this. Ok, first, look at the program. We are going to run along each line of it. I would just tell you the meaning of each line, but that wouldn't be a great tutorial would it. So let us start from scratch. I actually created this calculator in Python first but recently(a few mins ago) I converted it to C++(it was easy). I hope the people who came for the Python tag stayed cause C++ is similar to Python so you will understand this.
JUS STOORT ALRAEDY
EEEUUU(BOX)STEEM
Ok, first, at the start of C++ programs in which we need to 'print' something out or take an 'input' I always put
#include <iostream>. What this does is it includes the 'iostream' library. 'iostream' stands for 'input-output stream'. Using this library we can now use the commands 'cout' and 'cin' which mean 'c output' and 'c input' respectively. C++ is derived from C(a programming language) and is basically C but enhanced(some people would say bloated) so that is why it is 'c output' and 'c input'. There are other things you can include to get different ways to input and output but I use 'iostream' cause I think it is the easiest way.
UBING NOMESPOCE YES TEE DEE SEBI-CODON
I used to put
using namespace std; at the start of my program too but I stopped doing it when someone on repl.it told me not to use it. I don't exactly remember their reason why but on the internet the reason is:
Some people had said that is a bad practice to include the using namespace std in your source files because you're invoking from that namespace all the functions and variables. When you would like to define a new function with the same name as another function contained in the namespace std you would overload the function and it could produce problems due to compile or execute. It will not compile or executing as you expect.
This basically means that if you put
using namespace std; at the start, it would include all functions from the namespace meaning if you create a function in your program with the wrong name(meaning a name which is defined in the namespace) even by mistake the program will have the output you did not expect. It might use the function from the namespace and not the function your created meaning your whole program goes 'kaboom'.
The replacement we use for
using namespace std;(cause we do need some functions from it) is that we only put
using std::functionname; for each function. If we need the cout function(which is in the namespace) we would put
using std::cout; in the program. This is what we did for lines 3-4:
using std::cout; using std::cin; using std::endl;
DEPINITOON POR TOSE TREE FUNCSOONS
cout is the function for outputting(if you couldn't guess) cin is inputting and endl is a function we use in cout at the end of the line. It means 'go to the next line in the console'.
WEAR CUD GEET
ELECTRICUTED EXOCUTEED
The place where you put all your code for the program and all is inside the main function. The main function is defined right before the code. In C++ you define functions using the variable type of the function followed by the name with parentheses(for arguments) and curly parentheses'{}'. Inside the curly parentheses is the code so that is usually why people write it like this:
int main() { // This is a comment(meaning the stuff in here doesn't run). The code is put in these curly parentheses }
but you can also write it like this:
int main() { // Code here }
or like this:
int main() {// Code here}
People don't really do the last type as that makes everything squished together making it look very bad and you can't really understand the code. I used to use the second method but many people used the first one so I switched.
Also, one thing to note is that arguments in a function are things which the function uses later. When you call a function you have to put arguments in the parentheses. The arguments have to be the same type as defined by the function. The main function does not have to be called as it is called when you run the program. In any IDE you use it is the same, the main function you never call, but it has to be called main. The main function used to need to have arguments in the parentheses but nowadays you don't need to put those arguments.
The code by now:
#include <iostream> using std::cout; using std::cin; using std::endl; int main() { }
TE
CALL OF DUTY COD
Ok, so now inside the main function we have
int hcf = 0; on the eighth line. The 'int' is the variable type. I forgot to explain this for the main function but it means that the type of the variable is an integer. For the main function, it means the main function returns an integer(that is why at the last line you have
return 0; which is not needed nowadays, but it is good practice). We are saying the hcf variable is an integer and it is equal to 0. This is for now and it will change with the future code. The next two lines define other variables called 'numerator' and 'denominator' which we will get from the user in the next lines.
TE NOIXT LIONS
Here we have:
cout << "Enter The Numerator: "; cin >> numerator;
We are first outputting to the console with
cout. The thing that is outputted has to be followed by two less than symbols. There is no meaning to this that I know, and it is just how the cout function was made. We are outputting "Enter The Numerator: " but without using
endl after as we want to get the input on the same line, not on a new line. The next line has the
cin which takes an input, and again here we follow it by two greater than symbols from what I know have no meaning. We then put the variable name that we want to have the input stored in, which in this case is the numerator integer variable we defined earlier.
cin automatically goes to a new line after taking the input so we don't need to use
endl here too.
After this, we do the same thing but with the denominator so there is not anything to explain.
TE EBEN MOAR NOIST LIONS
Ok so this is WAAAAY more complicated:
for (int counter = 2; counter <= numerator && counter <= denominator; counter += 1) { if (numerator % counter == 0 && denominator % counter == 0) { hcf = counter; } }
So, first, we are using a for loop. What a for loop is, is it does something a specified amount of times or for the number of letters in a word or stuff like that. You can also use a while loop(runs while something is true) here but I prefer a for loop as it means fewer lines. After putting for we have parentheses in which we put the statement. Inside the parentheses, we first create a new variable called counter. We say it is an integer and it is equal to two, and then we put a semi-colon closing that statement. Next, we tell the for loop to run while the variable counter we just created is less than or equal to(<=) numerator and(the double &, '&&') the variable counter(again) is less than or equal to(<=) the denominator. We then have a semi-colon to end that statement and then we have the last statement in the parentheses which is
counter += 1 which means
counter = counter + 1 which basically increases the value of the counter by one each time the for loop runs. The first time though the counter does not get incremented by one. Inside the curly braces for the for loop(the code to be executed while the condition in the for loop is true) we have an if statement. This means 'if' something is true do the code inside it. The if statement starts with if and then has parentheses. Inside the parentheses, there is a statement which means 'if the remainder(% is the remainder operator) of the numerator divided by the counter is 0 and(&&) the remainder of the denominator divided by the counter is 0, do the code inside my curly parentheses'. I hope you understood that. Inside the curly parentheses, it says
hcf = counter; as to find the hcf(highest common factor) of two numbers you need to find the number which divides both numbers equally, and in this case, it is counter whenever the if statement is true. Now this for loop will continue running until the counter is more than the numerator or the denominator. By this time you would've gotten the highest number which divides both the numerator and denominator equally.
YEND OB 'POG'RAM
Now we have the last few lines:
if (hcf != 0) { numerator /= hcf; denominator /= hcf; } cout << "The most simplified form of your fraction is: " << numerator << "/" << denominator << endl;
This is another 'if' statement underneath the for loop. It says 'if' hcf is not equal to(!=) 0 then do the code in the curly parentheses. We have this if statement as if both numbers do not have a common factor other than one, the hcf will stay 0 as defined earlier as we kept the counter's starting value as 2. If you change the counter's starting value to 1 then we don't need this if statement and we can directly write the code inside of it without this if statement. Some of the code inside of it is
numerator /= hcf; which means
numerator = numerator / hcf. The next line is the same thing as this one and so it means the same thing. This is basically dividing the top and bottom of the fraction by the hcf, which results in the simplest form of the fraction.
After that, we have a final cout statement which outputs some text, has less than symbols(that is what you use instead of + which is in Python), has 'numerator' which means it outputs the value of numerator, more less than symbols, a slash so it looks like a fraction, more less than symbols, 'denominator' which outputs the value of the variable 'denominator', more less than symbols, and endl, which ends the line and makes the console go to the next line.
We then have the
return 0; we talked about earlier and that is all.
EN' COD
#include <iostream> using std::cout; using std::cin; using std::endl; int main() { int hcf = 0; int numerator = 0; int denominator = 0; cout << "Enter The Numerator: "; cin >> numerator; cout << "Enter The Denominator: "; cin >> denominator; for (int counter = 2; counter <= numerator && counter <= denominator; counter += 1) { if (numerator % counter == 0 && denominator % counter == 0) { hcf = counter; } } if (hcf != 0) { numerator /= hcf; denominator /= hcf; } cout << "The most simplified form of your fraction is: " << numerator << "/" << denominator << endl; // Not Needed: return 0; }
BOI BOI
Thank you for reading this full post(if you did read it) and I hope I helped you. I hope this helped you learn C++ and made you want to learn more. Hope you guys and galls have a good day. Tell me if there is a mistake in the post, like if I forgot to put a semi-colon in the explaining bits or something. That would be really helpful. I really hope you take an interest in C++ as it is a really
mean rude hard irritating furious confusing nice and useful language to learn. Bye bye now!!
BTW i forgot to tell you but i was going to write a post in the share section with a bunch of rubbish for my 200 cycle special but i realized i wud REALLY get hated on so I decided this wud be enough(I was going to do this anyway). See chu loiter.
P.S. @Codemonkey51 please make your program say I am not a spammer just cause I said cycles
once twice. Change it ploise, bye!!
Tank o, abso oi em goeeng tvo moik me langage troonlooter wen me oonbanned(I am banned right now, that is why I didn't make a proper program for my cycle special) @zplusfour
👀👀 i am banned from coding mainly(i am also banned from most internet but welp...who cares) @FlaminHotValdez
@Whippingdot like ur parents don't allow u or u legit have the internet blocked or sumth
My parents banned me from coding but they said do not use the computer other than for school but i am still using it here cause they didn't completely block the internet @FlaminHotValdez
Till my exams are over(it was supposed to be for 4 months but my dad reduced it. It ends next week) @FlaminHotValdez
Ah yes. Also elipie's rant was on the basic python tutorial, and this is for doing something specific in python that not many people have done before, so yeah.
Eh people learning C++ is better than people learning Python(almost everyone knows it). Also check out my first meme:
@FlaminHotValdez
|
https://replit.com/talk/learn/Fraction-Simplifier-TUT-C-Python/117946
|
CC-MAIN-2022-21
|
refinedweb
| 2,446
| 66.88
|
This is your resource to discuss support topics with your peers, and learn from each other.
04-27-2010 02:48 AM - edited 04-27-2010 03:21 AM
Hello,
I have the following classes:
public class QueueScreen extends MainScreen{}
public class GetQueue {
QueueScreen screen;
public GetQueue(QueueScreen screen){
this.screen = screen;
}
}
The GetQueue class ultimately updates the UI of the QueueScreen by adding adding a new instance of the same ButtonField to the screen multiple times with screen.add(). What I am looking to do is when one of those buttons has focus, add a particular Menu Item with makeMenu() from within the GetQueue class.
To re-iterate with an example: With the GetQueue class, I am adding the same button to the QueueScreen many times (and uniquely identifying the button with setCookie()). Each time one of those buttons is highlighted, I want to add a particular menu item to QueueScreen.
I have tried adding a makeMenu() method in QueueScreen and then accessing the method from getQueue. This did not work, however. The only potential way I can think of is to use screen.addMenuItem() when the ButtonField has focus, and then removeMenuItem when the ButtonField does not have focus. I could not get this to work either, because each button is essentially the same (only differing in its cookie).
What is the best way to accomplish this? Thanks.
Solved! Go to Solution.
04-27-2010 04:05 AM
I like this idea.
"use screen.addMenuItem() when the ButtonField has focus"
Why doesn't this work?
04-30-2010 04:34 PM
I was able to get it working. My mistake was that I was trying to manipulate the focus() and unfocus() in my actual buttonfield class. By overriding the buttonfield in the class its implemented in did the trick.
Thanks!
04-30-2010 05:07 PM
I appreciate it, but do I deserve kudos for telling you I like your idea?!
Anyway, glad you got it going....
|
https://supportforums.blackberry.com/t5/Java-Development/Trouble-invoking-makeMenu-from-a-separate-non-Screen-class/m-p/494211
|
CC-MAIN-2016-36
|
refinedweb
| 329
| 72.76
|
import "go.uber.org/multierr"
Package multierr allows combining one or more errors together.
Errors can be combined with the use of the Combine function.
multierr.Combine( reader.Close(), writer.Close(), conn.Close(), )
If only two errors are being combined, the Append function may be used instead.
err = multierr.Append(reader.Close(), writer.Close())
This makes it possible to record resource cleanup failures from deferred blocks with the help of named return values.
func sendRequest(req Request) (err error) { conn, err := openConnection() if err != nil { return err } defer func() { err = multierr.Append(err, conn.Close()) }() // ... }
The underlying list of errors for a returned error object may be retrieved with the Errors function.
errors := multierr.Errors(err) if len(errors) > 0 { fmt.Println("The following errors occurred:") }
Errors returned by Combine and Append MAY implement the following interface.
type errorGroup interface { // Returns a slice containing the underlying list of errors. // // This slice MUST NOT be modified by the caller. Errors() []error }
Note that if you need access to list of errors behind a multierr error, you should prefer using the Errors function. That said, if you need cheap read-only access to the underlying errors slice, you can attempt to cast the error to this interface. You MUST handle the failure case gracefully because errors returned by Combine and Append are not guaranteed to implement this interface.
var errors []error group, ok := err.(errorGroup) if ok { errors = group.Errors() } else { errors = []error{err} }
Append appends the given errors together. Either value may be nil.
This function is a specialization of Combine for the common case where there are only two errors.
err = multierr.Append(reader.Close(), writer.Close())
The following pattern may also be used to record failure of deferred operations without losing information about the original error.
func doSomething(..) (err error) { f := acquireResource() defer func() { err = multierr.Append(err, f.Close()) }()
Combine combines the passed errors into a single error.
If zero arguments were passed or if all items are nil, a nil error is returned.
Combine(nil, nil) // == nil
If only a single error was passed, it is returned as-is.
Combine(err) // == err
Combine skips over nil arguments so this function may be used to combine together errors from operations that fail independently of each other.
multierr.Combine( reader.Close(), writer.Close(), pipe.Close(), )
If any of the passed errors is a multierr error, it will be flattened along with the other errors.
multierr.Combine(multierr.Combine(err1, err2), err3) // is the same as multierr.Combine(err1, err2, err3)
The returned error formats into a readable multi-line error message if formatted with %+v.
fmt.Sprintf("%+v", multierr.Combine(err1, err2))
Errors returns a slice containing zero or more errors that the supplied error is composed of. If the error is nil, the returned slice is empty.
err := multierr.Append(r.Close(), w.Close()) errors := multierr.Errors(err)
If the error is not composed of other errors, the returned slice contains just the error that was passed in.
Callers of this function are free to modify the returned slice.
Code:
err := multierr.Combine( nil, // successful request errors.New("call 2 failed"), errors.New("call 3 failed"), ) err = multierr.Append(err, nil) // successful request err = multierr.Append(err, errors.New("call 5 failed")) errors := multierr.Errors(err) for _, err := range errors { fmt.Println(err) }
Output:
call 2 failed call 3 failed call 5 failed
Package multierr imports 6 packages (graph) and is imported by 91 packages. Updated 2018-11-02. Refresh now. Tools for package owners.
|
https://godoc.org/go.uber.org/multierr
|
CC-MAIN-2018-51
|
refinedweb
| 588
| 50.53
|
by Oliver Choy
Created
November 21, 2012
There are times when custom namespace is needed in a system for organization and management purposes. Without registering the namespace with CRX, properties with custom namespace would not be accepted. In this blog post I will talk about two ways of registering a namespace in CRX.
To illustrate, let’s take a look at the behavior of CRX without registering any namespace. Let me go ahead and enter a property that has namespace in it:
Upon saving, I would get the following error:
Now let’s proceed with registering the namespace. There are two ways of doing this:
Register namespace via CRX Console
- Namespace can be added via Node Type Administration in CRX Console.
- In the Node Type Administration window, click on “Namespaces” which is located at far right of the toolbar.
- At the bottom of the Namespaces window, click on “New”.
- Enter the URI and the Namespace mapping and click Ok. And you should see the namespace added:
- Voila! It’s that easy. And now you can add the property again with the registered namespace:
Register custom namespace via CND file
- Namespace can also be registered via a CND file. The CND file can be deployed with any CRX packages (install folder, or via the package manager).
- Once the package is installed on CRX, any namespaces in CND files found inside the package would be registered automatically.
- Here’s the content of the CND file:
- That’s it! It’s nothing more than a mapping=uri pair.
|
http://blogs.adobe.com/contentmanagement/2012/11/21/how-to-add-custom-namespace-in-crx/
|
CC-MAIN-2017-34
|
refinedweb
| 255
| 64.2
|
I could not see a gaussian filter in the python imaging library, but its simple enough to write one…
import ImageFilter
from PIL import Image
from numpy import *
def gaussian_grid(size = 5):
"""
Create a square grid of integers of gaussian shape
e.g. gaussian_grid() returns
array([[ 1, 4, 7, 4, 1],
[ 4, 20, 33, 20, 4],
[ 7, 33, 55, 33, 7],
[ 4, 20, 33, 20, 4],
[ 1, 4, 7, 4, 1]])
"""
m = size/2
n = m+1 # remember python is 'upto' n in the range below
x, y = mgrid[-m:n,-m:n]
# multiply by a factor to get 1 in the corner of the grid
# ie for a 5x5 grid fac*exp(-0.5*(2**2 + 2**2)) = 1
fac = exp(m**2)
g = fac*exp(-0.5*(x**2 + y**2))
return g.round().astype(int)
class GAUSSIAN(ImageFilter.BuiltinFilter):
name = "Gaussian"
gg = gaussian_grid().flatten().tolist()
filterargs = (5,5), sum(gg), 0, tuple(gg)
im = Image.open('/home/rcjp/tmp/test.png')
im1 = im.filter(GAUSSIAN)
im1.save('/home/rcjp/tmp/testfiltered.png')
Advertisements
Thanks. I am using a normal pdf to disperse a 2d spatial population at each time step in a grid map. This is close to what i need.
Jon
Comment by Jon Allen — August 7, 2009 @ 1:55 pm
As of 1.1.5 there appears to be Gaussian Blur in the PIL, but I don’t think it’s documented. I wrote a quick post about it:
Regards,
Comment by Aaron Fay — May 28, 2011 @ 6:39 pm
|
https://rcjp.wordpress.com/2008/04/02/gaussian-pil-image-filter/
|
CC-MAIN-2017-26
|
refinedweb
| 255
| 65.22
|
Prevent Indexing Duplicates¶
When indexing documents, it is common to have duplicate documents received by the search system. One can either remove the duplicates before sending the duplicates to Jina or leave it to Jina for handling the duplicates.
To prevent indexing duplicates, one needs to add
_unique for the
uses_before option. For example,
Python API¶
from jina.flow import Flow from jina.proto import jina_pb2 doc_0 = jina_pb2.Document() doc_0.text = f'I am doc0' doc_1 = jina_pb2.Document() doc_1.text = f'I am doc1' def assert_num_docs(rsp, num_docs): assert len(rsp.IndexRequest.docs) == num_docs f = Flow().add( uses='NumpyIndexer', uses_before='_unique') with f: f.index( [doc_0, doc_0, doc_1], output_fn=lambda rsp: assert_num_docs(rsp, num_docs=2))
Under the hood, the configuration yaml file, :file:
executors._unique.yml, under the :file:
jina/resources is used. The yaml file is defined as below
YAML spec¶
!DocIDCache with: index_path: cache.tmp requests: on: [SearchRequest, TrainRequest, IndexRequest, ControlRequest]: - !RouteDriver {} IndexRequest: - !TaggingCacheDriver with: tags: is_indexed: true - !FilterQL with: lookups: {tags__is_indexed__neq: true}
jina.executors.indexers.cache.DocId.
In Jina, the document ID is by default generated a new hexdigest based on the content of the document. The hexdigest is calcuated with blake2b algorithm. By setting
override_doc_id=True, users can also use customized document ids with Jina client and add
tags to map to their unique concepts.
Warning
When setting
override_doc_id=True, a customized id is only acceptable if
it is a hexadecimal string
it has an even length
Warning
Be careful when using _unique keyword as a cache executor, it will not set any workspace where to store actual data and it will use as workspace the folder where it runs, which may not be where the actual indexers store their data which can be inconvenient. If you want to store the cache in a specific workspace while keeping the same functionality, just copy the yaml description under jina/resources/executors._unique.yml and add the desired workspace under metas.
!DocIDCache with: index_path: cache.tmp metas: name: cache workspace: $WORKSPACE ...
|
https://docs.jina.ai/master/chapters/prevent_duplicate_indexing/index.html
|
CC-MAIN-2021-04
|
refinedweb
| 333
| 50.84
|
I am trying to create a loop in Python with numpy that will give me a variable "times" with 5 numbers generated randomly between 0 and 20. However, I want there to be one condition: that none of the differences between two adjacent elements in that list are less than 1. What is the best way to achieve this? I tried with the last two lines of code, but this is most likely wrong.
for j in range(1,6):
times = np.random.rand(1, 5) * 20
times.sort()
print times
da = np.diff(times)
if da.sum < 1: break
Since you are using numpy, you might as well use the built-in functions for uniform random numbers.
def uniform_min_range(a, b, n, min_dist): while True: x = np.random.uniform(a, b, size=n) np.sort(x) if np.all(np.diff(x) >= min_dist): return x
It uses the same trial-and-error approach as the previous answer, so depending on the parameters the time to find a solution can be large.
|
https://codedump.io/share/F77wAOnULgT3/1/constraining-random-number-generation-in-python
|
CC-MAIN-2017-17
|
refinedweb
| 172
| 76.72
|
Submission + - BBC iPlayer to stream for Linux & Mac users->
Link to Original Source
.
*sigh* Charon (my server box) was up and down last night. What I initially put down to hardware failure actually appears to be driver/kernel related. Under heavy load the wifi driver managed to hang the networking stack requiring (since the box is headless) a hard-restart.
More wodges of code now. This nicely shows the autoboxing of Scalars and actually does something useful that you
namespace PerlEmbedExamples {
using System;
using Perl;
Well I've been playing with embedding Perl within C# and have managed to write the first 'useful' program. The following is a re-write of the LWP simple test program in C#.
namespace PerlEmbedExamples {
using System;
using Perl;...
Well, not much of an entry for the weekend
On Saturday Jennie and I got up early and went around bits of the National Science week stuff here with my parents and their French friends. All very nice and very tiring. Still my mother hinted at some exciting financial arrangements so things could be good on that front.
Gig last naight at Homerton went OK. It was odd working with people we haven't worked with before - they are so unreliable. Right in the middle of '3 words' there was the biggest block I have ever come accross
Computers are useless. They can only give you answers. -- Pablo Picasso
|
http://slashdot.org/~rjw57/tags/dupe
|
CC-MAIN-2015-35
|
refinedweb
| 233
| 64.51
|
This article is the next step in Catharsis documented tutorial. Catharsis is a web-application framework gathering best-practices, using ASP.NET MVC (preview 5), NHibernate 2.0. All needed source code you can find here.
At this chapter you can find summary of the Entity layer. From my experience, what I always wanted and expected from tutorial - give users (colleagues) the quick description of what could not be seen - easily seen from the first observation. If there is something what make a sense as a whole, explain it; what could be easily read from code do not describe, and force developers to examine...
The first step to efficiently work with Catharsis is to use her Guidance. If you’re not experienced with MS Guidance, than do take your time, and examine it. At least install Catharsis.Guidance and try to create your own solution. Next step must be observation of class generator abilities. Right-Click on any Project library and in the context menu you'll see the picture of a cat and quick description of available action.
Try it; try it; try it. Your tests and (hopefully) next few chapters will help you to get quickly in.
Catharsis.Guidance can help you create whole infrastructure for handling any entity (Entity means Person, Contract, Client, Car, Table, Goods item...). If you'd like to work in steps, you can use class generators on every layer as well. Let’s have a closer look at these Guidance wizards.
Person
Contract
Client
Car
Table
Goods item
(Re)Create all entity’s infrastructure menu item can be found on projects Entity.dll and Web.dll.
(Re)Create objects on the current layer could be accessible from any project (right-click).
At this chapter we'll discuss Entity tier, which is about plain objects. Guidance generator for this project will produce 2 files with two classes. The first is ‘Entity.cs’ file containing persistent entity with its properties. The second is searching object, which could be used and filled on UI, and next on data layer used to filter list of searched entities.
Objects in Entity.dll are (even have to be) plain, method-less. Their ancestors provide some basic needed functionality for handling. It is built-in and you don't have to care when creating new entity. You can concern only on the business case. There are two main base objects. They have some similar behavior:
base
The simpler one is Persistent which is derived from. Catharsis.Entity.PersistentObjects.SimplePersistentObject. It comes from Billy McCafferty SharpArchitecture 0.7.3 (Catharsis uses many great already created things and is also proud to inform about it) Every such an object provides this property and methods:
Persistent
Catharsis.Entity.PersistentObjects.SimplePersistentObject
public virtual int ID { get; protected set; }
public abstract string ToDisplay();
public virtual bool IsTransient();
public override bool Equals(object obj);
public override int GetHashCode();
// protected
protected abstract string GetDomainObjectSignature();
Property ID will be the database unique key. Catharsis uses that value as the main and only indicator on UI (when navigating from the list into the detail, object ID is used etc.). All entities has ID and its type is always int, by default. That behavior can be changed if needed. You can also change auto-incrementing generator to some Guid based one, to make more difficult for attackers gues which number will come next. It is upon you.
ID
int
The ‘indicator-method‘ IsTransient() can help you to decide between already persisted and newly created objects (If ID is not set yet object is transient).
IsTransient()
There are some methods needed to distinguishing among entities (of the same type) GetDomainObjectSignature(), GetHashCode(), Equals(). The first one must be implemented in every derived class (its abstract) and all together then can help you decide if instances are equal, nevertheless one of them is transient (without DB yet provided ID).
GetDomainObjectSignature()
GetHashCode()
Equals()
When implementing abstract GetDomainObjectSignature() you should return string based on properties(y) which together represent unique business key. In typical case it will be the ‘Code’ property.
GetDomainObjectSignature()
Persistent does not implement the IComparable interface (at current version). If you need such, or any other functionality, which all of your object should share, Persistent class is the right place to implement it. Encapsulation is the pillar of OOP, but the usage is not so often as it should be ...
IComparable
In cases you need to display entity and you don't want solve which property to use, Catharsis enforce you to provide ToDisplay() method. On UI layer it then could be simply used in combo-boxes, lists etc. And what’s more the object.ToString() method can still serve you for other purposes.
ToDisplay()
object.ToString()
The second base class (and more powerful one) is Tracked derived from Catharsis.Entity.PersistentObjects.TrackedPersistentObject. Its main purpose is to extend Persistent and provide tracking functionality. The list of implemented methods:
Tracked
Catharsis.Entity.PersistentObjects.TrackedPersistentObject
LifeCondition Condition { get; }
bool MakeAlive();
bool MakeExpired();
bool IsReadOnlyMode { get; }
bool IsTrackedMode { get; }
IList<IChangeSet> Changes { get; set; }
IChangeSet NewChange();
IChangeSet AddChange(IChangeSet changeSet);
DateTime ValidFrom { get; set; }
DateTime ValidTo { get; set; }
DateTime? ToDate { get; set; }
As you can see, all of these built-in features are concerned on changes tracking. They follow Tracking PAX design pattern (to see the power of Catharsis for storing changes see Chapter XIII - Tracking, PAX design pattern). If you’ll decide to derive from this base, you’ll get tracking functionality without any other needed coding. Any change on ‘Alive’ object is stored. Therefore can be simply restored by changing Historical DateTime property. That's another Catharsis framework feature.
PAX
DateTime
We should also mention TrackedPair base class, which will help you to handle collections on Tracked objects. For example, there are AppRole and AppUser objects built-in Catharsis. Any AppRole instance can contain any AppUser instance in its inner collection (AppUser.AppRoles and the opposite AppRole.AppUsers). Items in these collections are time-dependent. It means that User ‘Radim’ can be in role ‘Admin’ from beginning of the year 2007 till the end of 2008. End even then from June 2010 till he will leave … (DateTime positive infinity). The time stamp for being in role could be crucial (for example security issues, auditing, proving …).
TrackedPair
AppRole
AppUser
AppUser.AppRoles
AppRole.AppUsers
DateTime
There is lot of functionality built-in Catharsis to allow you handle pair collections as simple as it could be. The best way to understand is to examine AppRole and AppUser implementation.
Your own new entity class is derived from one of above uncovered bases. It should contain any property and should NOT contain any method. The example comes from (and will follow) previous chapters:
public class Person : Persistent
{
public virtual string Code { get; set; }
public virtual string SecondName { get; set; }
public virtual string FirstName { get; set; }
public override string ToDisplay()
{
return SecondName + " " + FirstName;
}
protected override string GetDomainObjectSignature()
{
return Code;
}
}
The amount of properties is not limited, for tutorial purposes are three enough. The first (up to 3) string type can be created for you when using Guidance.
string
The more you are experienced with NHiberante, the more you are familiar with unlimited potentialities for searching. You can create as many nested Criteria as you (your customers) need to satisfy your (their) needs. If Contract has Supplier, which has Subject, which has Address, which has City … you can easily ask for ‘Contracts’ from ‘Prague’ coming supplier’s.
Criteria
Supplier
Subject
Address
City
supplier
To use these features you have to collect filtering criteria from application user. Catharsis provides separate UI handler for every Entity - searching screen. Whatever you want to allow user to use as filter, you should provide on that screen. Filled ‘form’ is then very simply (please examine provided source code) bind to Search object, which can be send to Dao and converted to filtering criteria.
Search
Dao
Searching object is good example to prove that Catharsis is a solid OOP framework …
We’ve started with guidance and described what Entity-tier is about. Let’s resume the generator usage. Create new folder in Entity project called People (if not exists), and right click on it.
People
As shown on the picture, select the item (Re)Create ‘Entity’ class. On the next two pages of the wizard you have to fill in the entity name: Person. That name will be used for the file name as well. And you can get up to 3 properties of string type.
The last input expects the namespace – by default it is the folders name (People). If you are on the project level, the Root namespace suggestion will appear. If you do not change it, then Guidance will create items on the root level of the project - not in the Root directory nor in the Root namespace (It comes from Guidance demand for unempty value for the namespace) Next page allows you to check if Entity will be Tracked (see Chapter 13 Tracking, PAX design pattern) or not. You can also decide if Entity should be handled as CodeList value. For example Country, Currency (built-in Catharsis) or your own values could be handled as the CodeList values.
namespace
Root
CodeList
The difference in their behavior comes from two assumptions. The set of possible values is limited and not so large to be displayed in combo-box or even radio-button set. And that values are not changing in time (at least not so often). Other good example could be your own CodeList for evaluating: bad, neutral, good.
The advantage of CodeList objects comes from built-in localization for these objects. CodeLists can be adjusted or even translated and therefore displayed to user in friendly form, despite of the fact that are handled and stored by ‘ID’ or ‘Code’ without obvious meaning (1 – bad, 2 – neutral, 3-good).
CodeLists
Code
Click ‘Finish’. Two new files will be added to the Entity project (or they will replace old ones – be careful, all your previous changes will be overwritten).
Both generated files (classes) are only the starting point which could help you concern on the business case. Now come into play your customer needs and requirements.
You have to implement as many properties (not only string type) as newly created object needs to satisfy customer’s needs. And also as the collection of application entities will growth the same could be applied to the searching object. New properties will extend the filtering power of your searching screen.
For example amount values could be filtered by properties ‘EqualOrLess’ and ‘Greater’, the same goes for DateTime values, etc…
How to persist Entity uncovers Chapter VIII - Data layer.
Enjoy Catharsis.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
AppRoleAppUserPair
ITimeDependent
RoleId
UserId
ValidFrom
ValidTo
xyAppRoleAppUser
AppRoleAppUserId
AppUserId
AppRoleId
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
http://www.codeproject.com/Articles/29616/Web-Application-Framework-Catharsis-Part-VII-Entit?msg=2756898
|
CC-MAIN-2015-22
|
refinedweb
| 1,832
| 55.54
|
Sailfin in detail : Part 2 annotated servlets
By prasads on Feb 11, 2009
There are two ways in which we can define a SIP Servlet to be specific in a SIP Application. One of them is the traditional way of defining it in the sip.xml using the
> servlet <element and the other one is to use the @SipServlet annotation..
This blog takes the example of an application and explains how @SipServlet annotation can be used alongwith the servlet-mapping mechanism and also explains the use the @SipApplication annotation.
@SipServlet.
name
used to specify the name . In the absence of this attribute the class name ( not he Fully Qualified , but just the plain class name ) is taken as the servlet-name.
applicationName.
description
Description for the servlet
loadOnStartup
This maps to the load-on-startup element in the sip.xml and specifies the order in which the servlet has to be loaded when the application is starting
All these attributes are optional. Now, lets look at this sample application which has two SIP Servlets, one of which is annotated and the other one is defined in the sip.xml. The annotated servlet has a mapping in the sip.xml.
The snippet below shows the definition of the RegisterServlet defined by an annotation.
The snapshot below shows the sip.xml for defining the servlet-mapping for the RegisterServlet., as name="Registrar" then we would need to mention the the servlet-name in the servlet-mapping element as "Registrar".
@SipApplication annotation
Now, lets take a look at how the @SipApplication annotation can be used. This annotation is a package level annotation and is defined in a package-info.java file. The package-info.java looks like this :
@javax.servlet.sip.annotation.SipApplication(
name="AnnotatedApp",
sessionTimeout=30,
distributable=true)
package net.java.servlet;
Note the package definition at the end of the file. This is the package for which this annotation is defined and all Sip Servlets within this package would be part of the SIP Application defined by this annotation. If there are other servlets in other packages that need to be added to this application , then the user needs to specify the name of the application in the applicationName attribute of the @SipServlet annotation. For example,
package com.example;
@SipServlet(applicationName="AnnotatedApp")
public class MySipServlet extends SipServlet {
adds the above defined SIP Servlet to the SIP Application named AnnotatedApp , even though it belongs to another package.
The @SipApplication annotation has the following attributes :
name
states the name of the application
displayName
maps to the < display-name > element in sip.xml
description
maps to the < description > element in sip.xml
distributable
maps to the < distributable > element in sip.xml
smallIcon
maps to the < small-icon > element in sip.xml
largeIcon
maps to the < large-icon > element in sip.xml
proxyTimeout
maps to the < proxy-timeout > element in sip.xml
sessionTimeout
maps to the < session-timeout > element in sip.xml
mainServlet
maps to the < main-servlet > element in sip.xml
The mainServlet element is used to specify a single servlet as the main-servlet and this will be the servlet invoked for all requests and the servlet defined as the main-servlet is responsible for delegating the incoming requests. In case the main-servlet is invoked the servlet-mapping mechanism is not used.
But that calls for another blog post !!
|
https://blogs.oracle.com/prsad/entry/sailfin_in_detail_part_2
|
CC-MAIN-2015-27
|
refinedweb
| 557
| 56.25
|
I have found this Python function for testing whether or not a number is prime; however, I cannot figure out how the algorithm works.
def isprime(n):
"""Returns True if n is prime"""
if n == 2: return True
if n == 3: return True
if n % 2 == 0: return False
if n % 3 == 0: return False
i = 5
w = 2
while i * i <= n:
if n % i == 0:
return False
i += w
w = 6 - w
return True
Let's start with the first four lines of the function's code:
def isprime(n): if n == 2: return True if n == 3: return True if n % 2 == 0: return False if n % 3 == 0: return False
The function tests to see if
n is equal to 2 or 3 first. Since they are both prime numbers, the function will return
True if
n is equal to either.
Next, the function tests to see if
n is divisible by 2 or 3 and returning
False if either is true. This eliminates an extremely large amount of cases because half of all numbers above two are not primes - they are divisible by 2. The same reason applies to testing for divisibility by 3 - it also eliminates a large number of cases.
The trickier part of the function is in the next few lines:
i = 5 w = 2 while i * i <= n: if n % i == 0: return False i += w w = 6 - w return True
First,
i (or index) is set to 5. 2 and 3 have already been tested, and 4 was tested with
n % 2. So, it makes sense to start at 5.
w is set to 2.
w seems to be an "incrementer". By now, the function has tested for all even numbers (
n % 2), so it would be faster to increment by 2.
The function enters a
while loop with the condition
i * i <= n. This test is used because every composite number has a proper factor less than or equal to its square root. It wouldn't make sense to test numbers after the square root because it would be redundant.
In the
while loop, if
n is divisible by
i, then it is not prime and the function returns
False. If it is not,
i is incremented by the "incrementer"
w, which, again, is faster.
Perhaps the trickiest part of the function lies in the second-to-last line:
w = 6 - w. This causes the "incrementer"
w to toggle between the values 2 and 4 with each pass through loop. In cases where
w is 4, we are bypassing a number divisible by 3. This is faster than remaining at 2 because the function already tested for divisibility by both 2 and 3.
Finally, the function returns
True. If the function hasn't detected any cases where
n is divisible by something, then it must be a prime number.
|
https://codedump.io/share/3M8Ex0Irqg2F/1/why-does-this-prime-test-work
|
CC-MAIN-2017-30
|
refinedweb
| 480
| 76.05
|
New iOS App for the OW – Beta testers wanted
- SeeTheInvisible last edited by
Hey OW riders,
I have some good news for you: A new app for the Onewheel with very useful alarming features is about to be released to the Apple app store.
As this app provides features which are critical when it comes to riders safety, I decided to start a short beta test phase with 5 experienced everyday riders for the next few days. The duration of this phase is not defined, it will end when the feedback from beta testers tells me that everything is working fine :).
So if you want to be a beta tester, there are only a few conditions which have to be met:
- You ride your OW everyday (at least one battery charge a day)
- You like to give feedback about features, bugs and whatever you think
- Your up to now riding experience exceeds 200 miles (or 300 km)
- You really want to have „WheelBuddy” :)
Let’s come to WheelBuddy’s initial feature set (I‘m also going to post a short video on Youtube about the app, but this is going happen whithin the next few days):
Safety related (Sound and message alarming – also when the app is in background)
- Regeneration warning: The regeneration warning tells you when you have to burn battery charge before continuing your ride downwards – otherwise it will shut down and you’re going to eat on whatever you ride
- Low battery warning: Well, this is self explanatory, it warns you before your Onewheel kicks you off due to a low battery shut down
Misc
- Distance warning: This message suits perfectly when riding a flat track, then it tells you when you pass the point of no return (50% battery charge left)
- Autoconnect: you can select one Onewheel to which the app should automatically connect (works also when the app is in background)
- Alias for your Onewheel: This is useful to distinguish your board from others, when riding in a group (Besides it’s also nice to give your Onewheel a name :))
- Control the Light
- Select the riding mode
- See your Onewheel’s movements live on the display (with foot pad sensor display)
- Fully charged notification: this one is useful to know when it’s time to ride again after getting a charge
- Swich between imperial and metric units
Displayed values
- Battery percentage
- Current speed
- Top speed (of the trip)
- Odometer (trip & total)
- Current Consumption (Ah)
- Current Regeneration (Ah)
This is just the initial set of features, there are many more features plannend like logging and displaying trip data, predicting the range according to your riding profile and much more.
So if you want be a beta tester and you meet the criteria I mentioned above, just drop me an email with your iPhone’s UDID at andreas.huss@c-thermal.com and you’ll get the app within next few hours.
How to find your device’s UDID:
Remember, only the first 5 riders will get beta access, but for the others: don’t worry we’ll do exhaustive testing and you’ll get the perfectly working app soon available on the app store :)
Cheers & have a good night,
Andi
Awesome I can't wait for the finished product.
@SeeTheInvisible amazing.. Well done
Amazeballs. Looks sweet
Great work. I am excited to use it.
Yes!!! This is great news! Wish I could beta test, but just picked up my ow this week.
hey buddy,
congratulations - those features seem to be perfect with all the relevant data on just one screen, but what I really do appreciate are those alarms which could save me from looking at my iphone each 5 minutes to avoid a desaster.;-)
I do fullfill the milage and usage criterias, with two onewheels (556miles and 120miles) and live on the top of a hill, which means that those alarms would be perfect for me.
also you do have to count me in since compatriots have to be threated prior;-)
sent you a mail with the details and hope to hear from you soon!
- SeeTheInvisible last edited by
Great to hear that you like it.
In the meanwhile I made a short video of the app:
@SeeTheInvisible can't wait for this. Well done.
This app looks amazing! Can't wait until it's available. And I had no idea the Onewheel could detect left-to-right tilt too! Pretty cool.
I have now already had about 4 rides using the app - the last two of those rides together with Mr.SeeTheInvisible who introduced me to his awesome tracks yesterday in the evening!;-)
So alltough I have to admit that I am probably biased, I want to share my experience with the app:
First of all: I really love that thing for getting that damn phone out of my hand and let me just focus on the experience of the ride, still beeing able to know all relevant information in an accoustic way (intuitive sounds by the way!)
Since I rode on different terrain than usually, I did not yet had the chance to really test the regeneration overcharge alarm, so I can not yet say if this works as it should.
The turnaround warning is very useful for me, since I often do trips to explore new tracks and here it is really nice to have an acoustic signal, to indicate that it could be time to turn around if you don´t want to carry. Of course this is only a very rough indication, because it depends on the terrain (up/down), but especially in the streets it is very usefull.
the "battery low" acoustic indicator for me is one of my favorite use cases.
A lot of my rides are estimated in a way that I start with 100% and return with something between 3% and 7%. It is obviously that when I vary my riding style or exploring some side-tracks, it is always risky in terms of stranding somewhere without any battery left. Because of this I frequently had to montor the original OW app when I was getting close to low battery. Now that I have this configureable alarm, my "WheelBuddy" just tells me when it is time to stopy playing and carefully try to get back - all with the phone in my pocket where it belongs too.
The "fully charged" alert is funny. If you are close enough to the charging onehweel, it makes a motivating sound when fully charged to let you know its finished. Happend just once to me.
Connection: works perfect. It is always connected as soon as I am close to my onewheel. It automatically reconnects (in the background) when I lost connection, because I e.g. walked away
Switching between both Onehweels is fine and easy, but I would like to also give an alias to more than my primary OWs to be easily able to distinguish between several OWs when riding in a group (don´t want to remember all those serial numbers)
The screens user interface: For me this is all I need.
I am not into playing with a lot of folderol in an app, I just want to have the most important parameters all on one screen and this is just exactly what it does. Now that I am finally not forced to do so anymore, I never want to look at my phone during the ride. With your app, the only reason for me to look at the screen during a ride is, when I want to see my current trips top speed - a nice feature which I think could be even evolved - more to that later on.
Bugs:
I had one bug when swiching of the onwheel during my trip. One of the alarms then notified me at a wrong level - please correct this, because I really have to rely on those alarms, like I normally do. That´s one of the main added values your app provides to me.
It happened to me once, that I had the wheel spinning fast on a terrain with a lot of gravel. This led to an top-speed of about 35 km/h which was definitly wrong. Since those stats stay for the whole trip, I therefore loose my "real" top speed. Maybe you could detect/calculate somehow if the wheel is spinning without traction (->unrealistic acceleration?) and then just drop those false values?
Suggestions:
I would love to have one more additional "critical level" alarm. My use case is the following: I will set the "low battery" alarm to around 15%, this indication will leave enough energy to let me try to find my shortest way back home. I would then configure the "critical level" alarm so somewhere around 3%, so I know I have to take it slow because the OW might switch off at any time (in case I do not feel the empty-battery-pushback, because of riding too hard). For me this would be very helpfull.
Additional "trip top speed" motivation - just a funny idea, definitly not a must have
For me it was interesting to check the top speed every now and then, to see if I can increase it. I think this feature could be extended. Maybe there could also be an accoustic short notification which triggers each time, you have a new top speed, starting above an (configureable) speed level. For example: I configure my "irresponsible top speed motivation level" to 23km/h. The first time I exceed this level, there is a short (intuitive/motivating) sound which indicates that a new trip top speed has been reached. Whenever a new trip top speed has been established, the sound comes again and so on...this way again I wouldn´t have to look at the app during the ride. I would then just check the last trip top speed in the app after the ride.
For future development a separate statistics screen would be nice, where I can check the rides stats after the ride (history of speed, incline, km´s, ...)
eventually in the future even with the possibility to share/compare it with friends or the community (eg "rider board" with top speed of the day, most miles per day, ...)
As mentioned above, I already love my "WheelBuddy" and I will never ever use the original app again;-)
In my opinion you just have to fix that bug and you can already release it. New features later on are welcome, but this app is allready by far better then the original one.
Thanks a lot!!
Got the app yesterday, so far it does what its suppose to, and does it great.
Im also an complete idiot.. I asked for features that was already there, Im just used to have in app settings, not under iphone settings ^^ So that was "fun"....
I was wondering if the one wheel logs when pushback occurs i think this would be very helpful for crash reporting in the new bad ass app.
Also I would like to see a speed warning so you would know your close to push back activation.
- Aaron Broward FL last edited by
Can't wait to get my hands on it, let me know if you want a good tester, even though I don't have 200 miles... I'm good at beta testing :)
@itwire said in New iOS App for the OW – Beta testers wanted:
@BadWolf how did you download it?
thx
Thru SeeTheInvisible as a beta tester =)
@SeeTheInvisible Hello! I would like to try Wheel Buddy if you still have room for another tester. I have two OW's with 500 and 200 miles. Thank you.
@dcosmos said in New iOS App for the OW – Beta testers wanted:
@SeeTheInvisible Hello! I would like to try Wheel Buddy if you still have room for another tester. I have two OW's with 500 and 200 miles. Thank you.
Mail him as instructed :)
Thanks. I need to get The UDID also
me test--me test!
|
https://community.onewheel.com/topic/3834/new-ios-app-for-the-ow-beta-testers-wanted
|
CC-MAIN-2021-04
|
refinedweb
| 2,006
| 72.29
|
Is it normal? Time Scale is set to 4, Fixed Timestep to 0.1 and Maximum Allowed Time to 5. I've tried to also change these values, didn't work. The ball itself is moving only because of the platform. Update isn't overused.
This is my script of rotation with mouse:
public class MouseRotate : MonoBehaviour {
public float speed = 5.0F;
void Update() {
if (Application.platform == RuntimePlatform.IPhonePlayer) {
Vector3 dir = Vector3.zero;
dir.x = speed * -Input.acceleration.y;
dir.z = speed * Input.acceleration.x;
if (dir.sqrMagnitude > 1)
dir.Normalize ();
transform.Rotate (dir.x, 0, dir.z);
} else {
float h = speed * Input.GetAxis ("Mouse X");
float v = speed * Input.GetAxis ("Mouse Y");
transform.Rotate (v, 0, -h);
}
}
}
So i really don't understand why it dissapears.
I would lower your time scale to start with, it is running your game four times faster than normal.
When i do so, it starts moving through walls and i don't know why. Colliders are set, without using mesh collider.
Also, show us the colliders of the maze walls.
Have you tried Time Scale 1, Fixed Time Step 0.1 or 0.2 and changing Collision Detection as Continuous or Continuous Dynamic on the rigidbody as described here
Answer by MrSoad
·
Nov 08, 2014 at 12:58 AM
Also what scale is the ball, place it next to a std Unity sphere and let us see its relative size. It may be that you have everything a little bit too small which could cause issues like in your video, thanks
Answer by FairGamesProductions
·
Nov 08, 2014 at 12:46 AM
So, from the video, it looks like the ball is not disappearing, it's "falling" through the platform mesh.
Set your timescale to 1, and re-make the video while showing us the editor view (not the player view) with the ball selected, so we can see the balls collider.
That should tell us a little more about what's.
Truck moves sideways when using transform.forward
1
Answer
Raycast/Non-Physics Collider Discrepancy
0
Answers
Instantiate object with joints to moving object
0
Answers
Collision of a moving object with a standard 3rd person controller
1
Answer
how to make a moving object
2
Answers
|
https://answers.unity.com/questions/826472/object-dissapears-when-moving-fast.html
|
CC-MAIN-2019-35
|
refinedweb
| 374
| 68.47
|
laserlight: i will post the info when i go home, the MinGW is in my home computer.
although outdated (per se Elysia) in office i use MSVS 2003 .Net.
Printable View
laserlight: i will post the info when i go home, the MinGW is in my home computer.
although outdated (per se Elysia) in office i use MSVS 2003 .Net.
Yeah. You are right Elysia.
My job requirements make me use Qt 4.1 when there is 4.3, SCons 0.96 (Yuck!) MSVS 2003 etc. etc.
At least it is reasonably standards compliant, unlike MSVC6 or MSVC7.At least it is reasonably standards compliant, unlike MSVC6 or MSVC7.Quote:
In other words: they force you to use it?
Hehe, I noticed that SCons 0.98 was released on 31 March.Hehe, I noticed that SCons 0.98 was released on 31 March.Quote:
SCons 0.96 (Yuck!)
laserlight this is for you:
g++:
my system:my system:Code:
Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
Configured with: ../gcc-3.4.5/configure --with-gcc --with-gnu-ld --with-gnu-as -
-host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --
enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shar
ed --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --ena
ble-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-sync
hronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.5 (mingw special)
Code:
OS Name Microsoft Windows XP Professional
Version 5.1.2600 Service Pack 2 Build 2600
OS Manufacturer Microsoft Corporation
System Name INTEL
System Manufacturer INTEL_
System Model D845GBV2
System Type X86-based PC
Processor x86 Family 15 Model 2 Stepping 7 GenuineIntel ~1799 Mhz
BIOS Version/Date Intel Corp. RG84510A.86A.0028.P15.0302260937, 2/26 INTEL\Administrator
Time Zone Pacific Standard Time
Total Physical Memory 512.00 MB
Available Physical Memory 236.27 MB
Total Virtual Memory 2.00 GB
Available Virtual Memory 1.96 GB
Page File Space 864.94 MB
Page File C:\pagefile.sys
With 512MB of RAM and built-in graphics, you may not have enough memory to compile large projects. gcc and g++ are pretty memory hungry at times.
If anything else, a bit more memory will allow the system to cache more of the include files and such.
--
Mats
thanks. actually maybe it depends on system load.
this program compiled in 2 seconds only:
Code:
#include <iostream>
using namespace std;
int main()
{
cout << "Hello G++!\n";
return 0;
}
Surely 512mb RAM is plenty.
Manav: maybe theres somethin eating up all your resources, either that or you have a very old pc.
My laptop specs are:
512mb ram.
Celeron underclocked @ 630mhz
280mhz bus.
No GPU: leeches off ram instead.
OS: Xubuntu
But g++ will still compile pretty much any small project for me in less time than is noticable.
512 MB is not plenty :p
It's pretty poor these days. You can't have much running with 512 MB and expect a fast system...
Don't Insult my new laptop Elysia :devil:
On the plus side its only about 9" big and has a solid state HD that boots into xfce in around 30 seconds.
Tbh I find 512mb ram more than enough for most of the things I do with a computer. I can still run FF with a load of tabs open, pidgin, the terminal, and gedit all at once with no paging. What more should I need?
LOL. We had this discussion before Elysia. Let it go. I think you were proved wrong back then. It's a matter of what you do with your computer. 512MB Ram can be plenty, yes. It IS plenty on my case. I honestly don't feel at all the need to put in some more.
|
http://cboard.cprogramming.com/tech-board/101057-gplusplus-very-slow-when-compared-cl-2-print.html
|
CC-MAIN-2013-48
|
refinedweb
| 641
| 77.53
|
This page contains an archived post to the Java Answers Forum made prior to February 25, 2002.
If you wish to participate in discussions, please visit the new
Artima Forums.
why do women always say UPS guys are sexy?
Posted by Chin Loong on November 20, 2001 at 6:11 AM
1) What classpath i should set?
both. set your classpath to pkg1 AND pkg2
2) Tell me the Package statements for both the classes
for SUPERCLASS:package pkg1;
public class SUPERCLASS {...}
for SUBCLASS:package pkg2;
import pkg1;
public class SUBCLASS extends SUPERCLASS {...}
*or*package pkg2;
public class SUBCLASS extends pkg1.SUPERCLASS {...}
3) Also tell me the commands to run subclass being in pkg2
??? i don't understand what u mean..
|
https://www.artima.com/legacy/answers/Nov2001/messages/182.html
|
CC-MAIN-2017-51
|
refinedweb
| 121
| 66.94
|
I posted about Giving a character a new identity (by giving it some secondary weight).
Now that post, while true, only tells part of the story.
Now I am going to tell the other part....
Take the following code and you may be able to see where I am going before you even look at the results:'));
The results? They will be:
-1-1-10-1-1-10
-1-1-10-1-1-10
So what's the problem? Why does System.String.IndexOf(Char) behave differently than System.String.IndexOf(String), System.Globalization.CompareInfo.IndexOf(String, Char), and System.Globalization.CompareInfo.IndexOf(String, String), anyway?
Well, setting aside my disdain for all of the System.String shortcuts to globalization functionality that makes the real linguistics features of the System.Globalization namespace that much harder for developers both inside and outside of Microsoft to find (never mind the additional confusion about the confusing and incomplete flags they add), there is the fact that the System.String "shortcut" methods often contain actual shortcuts to try to be more performant, to try to keep from calling the "slower" globalization methods.
So this particular issue can be looked at as an over-optimization, a case where developers assumed that they would not need to call the "slower" method in this situation.
Were they wrong?
Well, in my view, yes. All of these shortcut methods are just plain bad if they ever do anything other than call the real methods in the System.Globalization namespace. Anything else makes for less maintainable code that requires modifying multiple bits if there are ever changes or problems to fix, and it is harder for testers to track all of these different places to verify correct behavior in.
Of course now I suppose it would be in some people's minds a breaking change to fix the errant method.
So let's make it more interesting and raise the stakes:'));
The results here? You know in this "Swedish "A-Ring" case?
-1-1-1-1-1-1-10
-1-1-1-1-1-1-10
So, that over-optimization is causing behavior differences in strings that are canonically equivalent in Unicode, to wit LATIN SMALL LETTER A WITH RING ABOVE versus LATIN SMALL LETTER A + COMBINING RING ABOVE.
And that is a bug, suggesting that just taking out this over-optimization case might be in everyone's best interests....
(Using the Swedish or Japanese results above is not required; it just makes the weirdness look worse. The bug is there either way)
This post brought to you by å (U+00e5, a.k.a. LATIN SMALL LETTER A WITH RING ABOVE)
If you would like to receive an email when updates are made to this post, please register here
RSS
Actually, my opinion goes the other way: the System.String "shortcuts" should never have called the System.Globalisation methods.
It's not really a matter of "optimization" vs. "user expectations" because there really ARE some cases where you want the code-point behaviour.
Doing it that way would make System.String a simple "array of code-points" and the methods on it work that way. The System.Globalization methods are the actual lingustics methods.
Ah well, too late for all this anyway :-)
Hello Dean,
If people want the code point behavior, then the ORDINAL methodology is available. These shortcuts are actually designed to work linguistically, they just don't always do so....
<<make System.String a simple "array of code-points">>
Two opinions on this kind of things:
1. All functions handling one character should be removed. Completely. From .NET, Win32 API, C++. Because doing anything on one character is a problem: search, compare, changing case, you name it. Everything should be done on strings, return strings, and so on.
It is the only way to get correct linguistic results.
2. The storage should be separated from the string itself. You need access to the code points, you access the storage explicitly.
Then you will be able to do stuff like this:
string str = "\u0061\u030a";
str.length(); // gives you linguistic info
str.storage.length(); // gives you storage info (code points)
The storage is locale-independent, the string is not. And the intention is always clear.
Ok, some more thinking on what can go wrong is needed, but these are the general ideas.
Well, actually, we use a hybrid approach:
1) For most purposes we use your #1
2) For NLS collation functions that take an LCID, we do #2 plus (we include other constructs like sort elements).
Mihai: I don't think I disagree, specifically. My suggestion would just have been that the System.String class be your "string.storage" and SOMETHING ELSE be the linguistic stuff.
I guess that's just a product of where I usually work, though. Most of my string manipulation stuff (in my day job) come from manipulating email and SMS messages, both of which are predominately ASCII-based (at least, at the level I work on - the raw protocols). If most of my work was on web pages, or a text editor or something then I suppose I'd go with your suggestion...
You can't please everybody :-)
The only problem with THAT idea (which by the way there are people on the BCL team who would have preferred that approach in retrospect) is that there would be no linguistic support in the vast majority of apps.
And I just can't be a complerte fan of that sort of approach.... :-)
<<My suggestion would just have been that the System.String class be your "string.storage" and SOMETHING ELSE be the linguistic stuff.>>
Technically, it does not matter how you call things.
But for the perception of the one reading the code, it does (think #define BOOL int, before bool was standard).
A string is something containing text, and text is associated with linguistic properties in one's mind.
And since System.String has stuff like ToUpper, it is already "too dirty" to be a plain storage (because ToUpper is a locale-sensitive operation).
So I really think that String *is* the right thing for linguistic behavior.
<<Well, actually, we use a hybrid approach>>
Maybe in the implementation. But the idea was to make this explicit, for all programmers to see, not just an internal representation thing.
When I see str.length() and str.storage.length(), the intention becomes instantly clear, without even reading the doc.
It is probably too late to do this, without breaking backward compatibility. And I was also talking about C++, which is outside MS control :-)
And the idea was philosophical anyway. I don't really expect that <<All functions handling one character should be removed. Completely. From .NET, Win32 API, C++.>>
Who am I, who's going to listen to me? :-D
So I can't tell whether you think the anomaly I mentioned in this post about System.String.IndexOf(Char) is a bug to be fixed or a backcompat issue to be left alone? :-)
Bug :-)
Since String is a linguistic thing, I would expect linguistic behavior.
If you ever expose something like System.String.Storage.IndexOf(Char), then that should work on coding units.
What I think would make sense (at least for me :-)
string st2 = "\u0061\u030a";
// Linguistic behavior ci.IndexOf(st2, "a"); // -1
// Remove as per rule #1 ci.IndexOf(st2, 'a'); // undefined API error
// Linguistic behavior st2.IndexOf("a"); // -1
// Remove as per rule #1 st2.IndexOf('a'); // undefined API error
// Add this API, with non-linguistic behavior // and not affected by CultureInfo // working on coding units st2.Storage.IndexOf("a"); // 0
// DO NOT add this API, // because non-linguistic behavior // in a CultureInfo context is dumb ci.Storage.IndexOf("a"); // undefined API error
<<System.String.IndexOf(Char) is a bug to be fixed or a backcompat issue to be left alone?>>
I think I did not answer the question.
It is clear it is a bug, but to fix, or not to fix, this is the question? Sounds like you are trying to push me in a corner :-)
Well, if it can be fixed without breaking compatibility, then yes, fix it :-)
Check with Raymond Chen :-D
Sting str="C:\Documents and Settings\asriv5\Desktop\Login.jsp";index i=str.lastIndexOf("\"); // not working why ???
plz give me the sol...
anupam
|
http://blogs.msdn.com/michkap/archive/2007/02/17/1701561.aspx
|
crawl-002
|
refinedweb
| 1,389
| 66.23
|
Hello, I am trying to create a program that plays a game that has four different coloured blocks (red, green, blue, and yellow). The computer hides three of these coloured blocks from the user and then the user will try to guess the colours and order of the blocks. After display the colour of the three hidden blocks, the computer will display how many are correct and how many colours are in the right position. Based on the following information, the user can make another guess and so on until the user has determined the correct order and colour of the blocks. We have to use the following code - checkColoursCorrect(), checkPositionsCorrect(), randomWholeNumber() and newGame() which will write the code to generate three unique numbers from 1 to 4 for the block numbers. Here is what I have so far:
package guesscolorblocks; import javax.swing.JOptionPane; import java.lang.String; public class GuessColorBlocks { private static int colours = 0, positions = 0; /** * */ public static void main(String[] args) { // Declare variables int userSelection; //Set a button text in the array Object[] options = new String[] {"Yes", "No"}; //Display the menu to allow the user to click an option int menu = JOptionPane.showOptionDialog(null, "Welcome to the Guess the Blocks Game!" + "\nReady to start?:", "Red, Green, Blue, or Yellow", JOptionPane.YES_NO_OPTION, JOptionPane.PLAIN_MESSAGE, null, options, options[0]); if (menu == JOptionPane.YES_OPTION) { playGame(); } else if (menu == JOptionPane.NO_OPTION) { System.exit(0); } } public static void playGame () { String firstGuess = "", secondGuess = "", thirdGuess = ""; int red, blue, green, yellow; String inputfirstGuess = JOptionPane.showInputDialog("Enter your first guess: (Red, Green, Blue, Yellow):"); String inputsecondGuess = JOptionPane.showInputDialog("Enter your second guess: (Red, Green, Blue, Yellow):"); String inputthirdGuess = JOptionPane.showInputDialog("Enter your third guess: (Red, Green, Blue, Yellow):"); int computerSelection = (int) ((Math.random() * 4) + 1); if (firstGuess.equalsIgnoreCase("red")) { firstGuess = "red"; } else if (firstGuess.equalsIgnoreCase("green")){ firstGuess = "green"; } else if (firstGuess.equalsIgnoreCase("blue")){ firstGuess = "blue"; } else if (firstGuess.equalsIgnoreCase("yellow")){ firstGuess = "yellow"; } if (secondGuess.equalsIgnoreCase("red")) { secondGuess = "red"; } else if (secondGuess.equalsIgnoreCase("green")){ secondGuess = "green"; } else if (secondGuess.equalsIgnoreCase("blue")){ secondGuess = "blue"; } else if (secondGuess.equalsIgnoreCase("yellow")){ secondGuess = "yellow"; } if (thirdGuess.equalsIgnoreCase("red")) { thirdGuess = "red"; } else if (thirdGuess.equalsIgnoreCase("green")){ thirdGuess = "green"; } else if (thirdGuess.equalsIgnoreCase("blue")){ thirdGuess = "blue"; } else if (thirdGuess.equalsIgnoreCase("yellow")){ thirdGuess = "yellow"; } } }
|
http://www.javaprogrammingforums.com/object-oriented-programming/15992-guessing-game.html
|
CC-MAIN-2016-18
|
refinedweb
| 371
| 51.95
|
url_audio_stream 1.0.0+3
url_audio_stream #
Dart plugin to live stream audio URLs. The package will accept both HTTP and HTTPs URLs for streaming. Specifics will be discussed below for native designs, limitations, and implementations. Any help would be greatly appreciated if possible!
Usage #
Add the dependency
dev_dependencies: url_audio_stream:
Import the package into your dart file
import 'package:url_audio_stream/url_audio_stream.dart';
Functions and usage
AudioStream stream = new AudioStream(""); stream.start(); stream.pause(); stream.resume(); stream.stop();
Android #
The Android MediaPlayer was used for audio streaming over HTTP/HTTPS. Refer to the Android for information about the MediaPlayer. The player uses setAudioAttributes method for setting up the MediaPlayer for API levels over 26. Anything under that API level will use setAudioStreamType method, which was deprecated in API level 26. Due to this adaption, the flutter compiler will give a message that the plugin is using a deprecated method.
HTTP Streams #
Android requires an edit to your android manifest to allow connection to non-HTTP sources, follow this link to edit the manifest for clear text traffic.
iOS #
The Swift AVPlayer was used for the implementation over HTTP/HTTPS. Refer to the Apple site for information about the AVPlayer. The player was designed in Swift 5 and requires a change in the Info.plist if you need to HTTP stream. According to the Apple article, iOS SDK 4.0+ is required for streaming.
HTTP Streams #
For the clear text traffic, a change will need to be done in XCode on the Runner.xcworkspace file. The NSAppTransportSecurity flag will need to be changed. It is recommended that you add an exception to the site you are streaming, rather than allowing all HTTP traffic. You can follow this StackOverflow link for changing or adding domains for streaming.
url_audio_stream #
1.0.0+1 #
Android #
- Added an ability to stream on all APIs that Flutter can run on
iOS #
- iOS can stream in the background
1.0.0+2 #
- Changed the homepage reference
1.0.0+3 #
- API comments for iOS and Android
- A detailed description was added to pubspec.yaml
- Flutter formatting was applied to the url_audio_stream.dart file
url_audio_stream_example #
Demonstrates how to use the url_audio_stream: url_audio_stream: _audio_stream/url_audio.
|
https://pub.dev/packages/url_audio_stream
|
CC-MAIN-2019-51
|
refinedweb
| 364
| 58.89
|
This is the second module in our series to help you learn about Python and its use in machine learning (ML) and artificial intelligence (AI).
Now that you know some of the basics of Python, which were discussed in the first module, we can go a bit deeper, with the lists and tuples data structures and see how to work with them.
A list is a collection of items. Lists are mutable: you can change their elements and their size. Thus they’re similar to a List<T> in C#, an ArrayList<T> in Java, and an array in JavaScript.
List<T>
ArrayList<T>
You can assign a list like this, and then access its elements by their zero-based index:
foo = [1, 2, True, "mixing types is fine"]
print(foo[0]) # 1
foo[0] = 3
print(foo[0]) # 3
The append method adds an element at the end of the list. The insert method places an element at an index you specify:
append
insert
foo = [1, 2, 3]
foo.append(4)
print(foo) # [1, 2, 3, 4]
foo.insert(0, 0.5)
print(foo) # [0.5, 1, 2, 3, 4]
To remove an element at an index, use the del keyword:
del
del foo[2]
print(foo) # [0.5, 1, 3, 4]
A tuple is another type of collection of items. Tuples are similar to lists, but they’re immutable. A tuple gets assigned like this:
foo = 1, 2, True, "you can mix types, like in lists"
You'll often see tuples formatted as (1, 2, "a"), with parentheses. Parentheses around tuple values are used to help with readability or if needed because of the context. For example, 1, 2 + 3, 4 means something different than (1, 2) + (3, 4)! The first expression returns a tuple (1, 5, 4) while the second returns (1, 2, 3, 4).
(1, 2, "a")
1, 2 + 3, 4
(1, 2) + (3, 4)
(1, 5, 4)
(1, 2, 3, 4)
Obtaining a value from a tuple works in the same way as from a list, foo[index], with index denoting the zero-based index of the element. You can see that tuples are immutable if you try to change one of the elements:
foo[index],
index
foo[0] = 3 # will raise a TypeError
That would work fine for a list, but not for a tuple.
width="602px" alt="Image 2" data-src="/KB/AI/5270746/image002.png" class="lazyload" data-sizes="auto" data->
A tuple also doesn't have the append, remove, and some other methods.
You can also return tuples from functions, and this is a common practice:
def your_function():
return 1, 2
This returns a tuple (1, 2).
(1, 2)
If you want a tuple with only one element, put a comma after that element:
foo = 1,
Python's indices are more powerful than I've demonstrated so far. They offer some functionality that doesn’t exist in C#, Java, and the like. An example is negative indices, in which -1 refers to the last element, -2 refers to the second-last element, and so on.
my_list = [1, 2, 3]
print(my_list[-1]) # 3
This works on both lists and tuples.
Also, you can take a slice of a list or a tuple by specifying the index of the starting, ending, or both starting and ending elements of the slice. This generates a new list or tuple with a subset of the elements. Here are a few examples to demonstrate:
my_list = [0, 1, 2, 3, 4, 5]
print(my_list[1:2]) # [1, 2]
print(my_list[2:]) # [2, 3, 4, 5]
print(my_list[:2]) # [0, 1]
print(my_list[0:4:2]) # [0, 2]
print(my_list[-3:-1]) # [3, 4]
print(my_list[::-1]) # [5, 4, 3, 2, 1, 0]
The slice notation is [start:stop:step]. If start remains empty, it's 0 by default. If end remains empty, it means the end of the list. The :step notation is optional. So ::-1 means "from 0 to the end of the list with step -1" and thus returns the list reversed.
[start:stop:step]
start
end
:step
::-1
Slices will never raise IndexErrors. When going out of range, they just return an empty list.
IndexErrors
Imagine you have a tuple (or a list) with a known number of elements, three for example. And suppose you'd rather have three distinct variables, one for each tuple element.
Python offers a feature called destructuring (or unpacking) to break up a collection with a single line:
my_tuple = 1, 2, 3
a, b, c = my_tuple
Now a = 1, b = 2, and c = 3.
a = 1
b = 2
c = 3
This also works for lists:
my_list = [1, 2, 3]
a, b, c = my_list
This is very useful when dealing with functions that return tuples, and there are plenty of these in the Python ecosystem, as well as when dealing with AI-related libraries.
You're probably familiar with three kinds of loops: for, foreach, and while. Python only offers while and foreach loops (which it does with a for keyword!). No worries, though. As we'll see later, it's very easy to create a loop that behaves exactly like a for loop.
for
Here’s a Python loop that iterates over a list:
fruits = ["Apple", "Banana", "Pear"]
for fruit in fruits:
print(fruit)
You can also iterate over tuples:
fruits = "Apple", "Banana", "Pear"
for fruit in fruits:
print(fruit)
Generally, you can use a for loop on every iterator. Iterators, and how you can create your own, will be discussed in more depth in later articles.
If you want a C-style for loop rather than a foreach loop, you can loop over the result of the range function, which returns an iterator over a range:
range
for i in range(10):
print(i)
The last printed number will be 9. This is equivalent to the following C snippet:
for (int i = 0; i < 10; i++) {
Console.WriteLine(i);
}
The range function offers more than just counting from zero up to a given number. You can specify a different starting number using range(x, 10), where x will be the first array element. You can specify the step size using a third argument, such as range(0, 10, 2).
range(x, 10),
x
range(0, 10, 2)
Creating a range that counts from high to low goes like this: range(10, 0, -1). The first element will now be 10 and the last will be 1. Indeed, range(0, 10) is not the reverse of range(10, 0, -1), because the second argument won’t be included in the range.
range(10, 0, -1)
range(0, 10)
A while loop in Python looks very similar to what you already know:
while condition:
# code
Python also offers break and continue statements that work exactly like the ones in C#, Java, JavaScript, and many other languages.
while True:
if input() == "hello":
break
In this module, we looked at lists and tuples in Python, and learned about indexing, destructuring, and loops. In the next article, we'll talk about generators and classes.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
|
https://codeproject.freetls.fastly.net/Articles/5270746/Python-Tuples-Lists-Destructuring-and-Loops?pageflow=FixedWidth
|
CC-MAIN-2021-49
|
refinedweb
| 1,200
| 68.3
|
While relational (comparison) of the multiple numbers we picked match the winning numbers. In a lottery with 6 numbers, this would involve 6 comparisons, all of which have to be true. Other times, we need to know whether any one of multiple conditions is true. For example, we may decide to skip work today if we’re sick, or if we’re too tired, or if won the lottery in our previous example. This would involve checking whether any of 3 comparisons is true.
Logical operators provide us with this capability to test multiple conditions.
C++ provides us with 3 logical operators:
Logical NOT
You have already run across the logical NOT unary operator in section 2.6 --
Reminder::
This way, x == y will be evaluated first, and then logical NOT will flip the boolean result.
Rule: If logical NOT is intended to operate on the result of other operators, the other operators and their operands need to be enclosed in parenthesis.
Rule: It’s a good idea to always use parenthesis to make your intent clear -- that way, you don’t even have to remember the precedence rules. does not equal 1, the conditional must be false, so y++ never gets evaluated! Thus, y will only be incremented if x evaluates to 1, which is probably not what the programmer intended!, XOR cannot be short circuit evaluated. Because of this, making an XOR operator out of logical OR and logical AND operators is challenging. However, you can easily mimic logical XOR using the not equals operator (!=):
This can be extended to multiple operands as follows:
Note that the above XOR patterns only work if the operands are booleans (not integers). If you want this to work with integers, you can static_cast them to bools.
If you need a form of XOR that works with non-boolean operands, you can use this slightly more complicated form:
Quiz
Evaluate the following:
1) (true && true) || false
2) (false && true) || true
3) (false && true) || false || true
4) (5 > 6 || 4 > 3) && (7 > 8)
5) !(7 > 6 || 3 > 4)
Quiz answers
Note: in the following answers, we “explain our work” by showing you the steps taken to get to the final answer. The steps are separated by a => symbol. For example “(true || false) => true” means we evaluated “(true || false)” to arrive at the value “true”.
1) Show Solution
2) Show Solution
3) Show Solution
4) Show Solution
5) Show Solution
Hi Alex,
There is one thing I don’t get. You wrote
I assume that the XOR of more booleans is true whenever (and only when) exactly one of the booleans are true. Am I right?
But this dos not seem true to me, even with only three booleans. In fact if
then
But, perhaps, I just got wrong what is the XOR of multi booleans. So,
would just mean
which means: if c is true then a and b are either both false or either both true, if c is false then one of a and b is true and the other is false (in order for the statement to be true), which in turn means that exactly one of the three is false or they are all true. I don’t know in which situations this expression might be useful. Especially with more than three variables the logical table becomes quite complex and hard to understand.
Thanks.
XOR of multiple booleans evaluates to true whenever an odd number of the inputs are true, and false whenever an even number of inputs are true. XOR is rarely used.
Cool! Thank you.
Hi,I am a beginner of C++.How do I start..? What Basics should a beginner must know before his coding..?
Start at the beginning of the tutorial. It will tell you everything you need to know.
Is
evaluated to
or to
? thanks.
Operator != evaluates from left to right, so (a != b != c != d) evaluates as (((a != b) != c) != d).
Typo… you put "then" instead of "than" in the statement "and also whether x is less then 20"
And sorry about my English but I’m Argentinian so I speak Spanish.
Have a nice day and your tutorial is amazing!!!
Thanks, I’ve fixed the typo. Appreciate you pointing it out.
i want to write a program where i just want to pick out cars which are black and has a brand name of toyota… now i am facing the problem that where will i declare the flag of black and toyota..
although i know this can be also done with out flag… just want to know
[/code]
#include <iostream>
int main()
{
std::cout<< "enter colour";
int colour;
std::cin>>colour;
std::cout<< "enter brand";
int brand;
std::cin>>brand;
if(colour==1 && brand==0)
{
black= 1; //colour flag
brand toyota=0; //brand flag
std::cout<<"pick yhe car"<< std::endl;
}
else
{
std::cout<<"pick another"<<std::endl;
}
return 0;
}
Alex and others please help
This evaluates as
which isn’t what you want. You need to “distribute the variable”, like so:
(Note: I changed your logical ORs to logical ANDs as well)
Thanks again Alex
Might be a good idea to say how you actually type a logical or operator (Press alt gr + that three character key underneath esc)
How you type | depends on the layout of your keyboard. In the US it’s shift-\.
When should the logical NOT be used versus the not equal to?
You should use != when you want to check if two things are not equal. You should use ! if you want to invert a result.
It’s better to use != than ! (==) because it’s both clearer what your intent is, and it’s one operator instead of two.
Alex,
great work with these tutorials. I have learned so much in such a little time. Anyhow, I had this idea that it may be cool to combine all of your Rules onto one page (each rule is linked back to the section for context reference and such). anyhow… just thinking out loud here!
Yes, this is on my to-do list. 🙂 Thanks for reminding me that I need to get around to actually doing it. 🙂
How to decide the operator precedence in a complex logical expression like this.
if(a > b && c != 0 || d <= a != 0)
We talk about precedence in lesson 3.1 -- Precedence and associativity.
Alex,
Is it true that the comparison operator == can be evaluated as bool true or false? The reason for asking this is does the comparison operator have an effect on the logical or operator. In the program above you wrote ( if ( value == 0 ll value == 1) ). If I’m not mistaken if a user enters a value of 1 or above the left side of the or is false and the right side is always true. So the if statement still evaluated as true. However if the user entered a 2, 3, or 4 . The wording in the cout statements would be wrong. A negative number would have the same effect. Wouldn’t it?
Yes, operator== evaluates its operands and returns true if they are identical, and false otherwise. If the user entered a value of 2, the left hand side would evaluate to false, and the right hand side would evaluate to false, and false || false is false. The provided statement will only evaluate to true if value evaluates to 0 or 1.
Here is a simplified way to understanding how the logical NOT is distributed using De Morgan’s law. This is under the assumption that the operators mentioned below are all boolean true.
In the following condition:
(x && y)
If one operand is true, and one false, then both are false. Both x and y have to be true.
If you don’t want the overall condition to be true, i.e. !(x && y), you have to make sure that only one of the operands is false, i.e. make sure either x is false (!x), or y is false (!y).
This way the condition will always evaluate to false. Which would be written as: !x || !y
-------------
(x || y)
Only one operand has to be true.
If you don’t want the overall condition to be true, i.e. !(x || y), you have to make sure that both operands are false, i.e. make sure x is false (!x) and y is false (!y).
This way the condition will always evaluate to false. Which would be written as !x && !y
I hope this helps anybody trying to understand this!
Edit: This is under the assumption that the *operands mentioned below (i.e. x and y) are all boolean true.
Hello, can you tell me why this works as expected, i.e. a 0 return, assuming polje.at(x).at(y) is a ‘0’ or a ‘>’
but this does not (returns 1):
Is it because (‘0’ || ‘>’) evaulates to 1 and polje.at(x).at(y) is not a char code 1? Am I right?
Yes, you are correct.
Thank you for the quick answer.
<u>Typo</u>
Hey Alex,
Above, isn’t the first ‘Short circuit evaluation’ in bold the sub-heading for the section that follows? Currently it is wrapped at the end of the paragraph before it. May seem silly, but it makes it trickier to read. Or is it a browser-specific issue? (I’m using Safari 8.0.3)
Thanks again for all these great stuff. 🙂
It was a formatting error. It’s fixed now, thanks for pointing it out!
int x = 5;
int y = 7;
if (x == y)
cout << "x does not equal y";
else
cout << "x equals y";
Why do I get this: x equals y
Your logic is inverted. You check if x is equal to y, and if so, you print that they’re not equal (and vice-versa).
In other words, your output statements need to be switched.
Hey Alex! (This does not concern this chapter)
Just want to know if I can skip a few chapters?
Like I I don’t want to study the entire 4th section, can I skip that and move on to 5th? Or if I do this will section 5th be the most difficult for me to understand?
Depends on the lesson. Some lessons are super important, others are more situationally useful. Some are one-offs, others build on concepts over multiple lessons.
#include <iostream>
int main()
{
using namespace std ;
cout << "Enter a number: " ;
int value;
cin >> value ;
if ( value > 10 && value <.
The variable closed in brackets is a variable that was never declared in the program. Variable declared in the program was named value (not nValue). I think you should update the paragraph (next to the program) in the logical AND operator section.
One typo:
Look at the example of logical AND. You left the character ‘r’ (r is missing from "Your") in your else statement. Sorry again for my bad english.
I fixed the typos. Thanks for pointing them out!
Typos.
"Logical not (NOT) is often used in conditionals"
"This is known as short circuit evaluation, and it is done primary (primarily) for optimization purposes." (User ‘Cyrus’ pointed this out)
"C++ doesn’t provide an (a) logical XOR operator."
In the following example (under ‘Quiz answers’), shouldn’t you use || instead of | for the OR operators?
"For example ‘(true | false) => true’ means we evaluated ‘(true | false)’ to arrive at the value ‘true’."
In solutions 4 and 5, you mistakenly wrote ‘&’ instead of ‘>’:
"(5 & 6 || 4 & 3) && (7 & 8) =>"
"!(7 & 6 || 3 & 4) =>"
Alex, thanks for bearing so patiently with all my critiques! That takes some serious humility.
You are right on all counts. Good eye! Thanks.
Those who have checked out De Morgan’s Law on Wikipedia, the colored region that’s important in the Venn diagram is the light blue region.
For those who’d like to understand De Morgan’s Law, I found that it doesn’t work using AND and OR logic. It’s only understandable if you use UNION(for AND) and INTERSECTION(for OR) logic.
If you’d like to understand the logic behind it, this is the best video I found explaining De Morgan’s Law on YouTube:
De Morgan’s Laws
Doesn’t C++ provide the XOR operator through ^ ?
NVM that’s Bitwise XOR, not Logical XOR!
You might want to update the link to De Morgan’s law:
Thanks! Updated.
I came up with a pretty good analogy to help me understand the logical NOT's effect on the logical AND and OR:
Door = statement
Open Door = true statement
Closed Door = false statement
Logical AND:
1) You are walking through a long corridor. There are many doors in your path. In order to reach the other side (final statement=true), all the doors have to be open.
Logical OR:
2) You are standing in a large room. There are many doors leading out of the room. In order to leave the room (final statement=true), at least one of these doors has to be open.
Now, when you add in the NOT:
1) If you didn't reach the other side, it means that at least one of the doors in your path was closed: The first one OR the second one OR the third one etc.
2) If you didn't leave the room, it means that all the doors were closed: the first one AND the second one AND the third one etc.
You can clearly see how NOT turned the AND into OR and OR into AND.
I’m sorry, but your quiz section just looks like a gigantic headache to me.
I thought I knew what it was I had to figure out, but when I checked the answers it made me more confused.
I changed the == symbol to a => symbol to (hopefully) better indicate that the quiz answers are showing you a progression of steps to get to the final answer.
eg. (true | false) => true
means we evaluated “(true | false)” to arrive at the value “true”.
Haha. pakyu all.
#include
int main()
{
using namespace std;
cout <> nValue;
if (nValue == 0 || nValue == 1)
cout << "You picked 0 or 1" << endl;
else
cout << "You did not pick 0 or 1" << endl;
return 0;
}
In your this program if I enter any letter , the output comes that I have picked 0 or 1………
can anyone explain me this….
ulors. i dont understand your comment. ../..
please alex i don’t understand about right and left shift please exp)
Hi Alex and team,
Application of DeMorgan’s Law is NOT explained in the answer of the last quiz question.
Shouldn’t it be like this ?
!(7 > 6 || 3 > 4) == !(true || false)
== (!true && !false) == (false && true) == false
Correction: The application of De Morgan’s law is NOT necessary in the answer of the last quiz question.
De Morgan’s law is only necessary if you wish to remove the parentheses from the expression. Both your’s and Alex’s answers are correct..
^ this.
I don’t understand the quiz at all. More explanation of the working there would be really helpful. I feel that I understood everything leading up to the quiz, but the format of the quiz was unexpected.
It confused me at first as well. The equality operator is stating that all the preceding (since the previous equality operator) is equivalent to the following. The first problem’s answer could be read like this.
(true && true) || false
// (true && true): If one true is true, then both trues are true, so the above statement means the same as (or ==)
true || false
// Either true or false must be true, so the above statement evaluates as (or ==)
true
If it helps to clarify, ((true && true) || false) == (true || false) == (true). Basically, the goal is to simplify the expression as far as possible, noting the simplifications with ==. Hope that helps!
This also confused me. Even after I read what you wrote, Grimercy, it helped only a little bit. I still didn’t understand the other questions. So while I drove to work I prayed to God. Then when I arrived home from work I understood everything.
First of all, in order to understand how to evaluate the questions we can refer to the tables Alex mentioned in this subchapter. I will rewrite them below.
The logical OR operator is used to test whether either of two conditions is true. If either the left operand evaluates to true, or the right operand evaluates to true, the logical OR operator returns true.
Logical OR (operator ||)
false || false == false
false || true == true
true || false == true
true || true == true
false || false == false
false || true == true
true || false == true
true || true == true
The logical AND operator is used to test whether both conditions are true. If both conditions are true, logical AND returns true. Otherwise, it returns false.
Logical AND (operator &&)
(false && false) == false
(false && true) == false
(true && false) == false
(true && true) == true
(false && false) == false
(false && true) == false
(true && false) == false
(true && true) == true
After reiterating the information in the tables from logical OR, and logical AND, we can now answer the questions.
1) (true && true) || false
== true || false
== true
1) (true && true) || false
== true || false
== true
2) (false && true) || true
== false || true
== true
3) (false && true) || false || true
== false || false || true
== false || true
== true
4) (5 > 6 || 4 > 3) && (7 > 8)
== (false || true) && false
== true && false
== false
Question 5 can be evaluated as is:
5) !(7 > 6 || 3 > 4)
== !(true || false)
== !(true)
== false
Or it can be evaluated by removing the parentheses and using De Morgan’s law:
5) !(7 > 6 || 3 > 4)
== !(true || false)
== !true && !false
== false && true
== false
As you can see, the result is exactly the same.
5) !(7 > 6 || 3 > 4)
== !(true || false)
== !(true)
== false
5) !(7 > 6 || 3 > 4)
== !(true || false)
== !true && !false
== false && true
== false
Heyyy, look at me! I have a nice, polished, streak-free new account! Check out the shine on this baby. 🙂.
This is known as short circuit evaluation, and it is done
for optimization purposes.
Should be PRIMARILY :]
so how do you get these symbols in Code::Blocks
…you type them, just like anywhere else. they’re on your keyboard
Can you use the logical operators in do…while loops? I try to check an user-entered integer if it’s between 1 and 10. I use the && operator, but I can’t get it to work properly.
Here’s the code:
Whatever value I enter, the program always goes to the next part. If I check either
or.
Hey Ronnie,
I just thought about your XOR question and came to this solution (since their is no XOR to use directly in c++)
(!america && russia) || (america && !russia)
lets just take a logic look behind this:
So basically with only 2 conditions it is easy to handle. How did you manage to pick that example?! 🙂
wish you all a nice weekend
Florian?
C++ doesn’t provide an exclusive or operator. I’ve added an example of how to mimic XOR behavior using operator!= into the lesson.
Thank you for providing such an excellent example for mimicing logical XOR behavior using the != operator.
Hello Alex,
Your workaround for the missing xor can only deal with booleans. To fix this use:
The ! operator will make sure the operands are bool before the != gets evaluated. This way it will work on anything that can be cast to bool. Note that xor has the same result when all operands are inverted.
That’s kind of neat. While not likely to be used very often, neither is XOR in general. I’ve added it to the lesson.
This only works if there is an equal number of operands.
If you have an equal number of operands and an odd number of them are true, there also must be an odd number of false operands. On the other hand, if you have an odd number of operands and an equal number of them are true, there must be an odd number of false operands.
Double negation (i.e. (!!a != !!b != !!c)) or static casts work for both odd and equal number of operands.
It appears you are correct. I’ve updated the lesson to use static_cast instead, since doing a double-negation likely has more performance impact.
Since you updated the lesson to use static_cast, you should get rid of the sentence "In this form, the logical NOT operator is used to convert the operand to an inverse boolean value. However, XOR evaluates the same way when all operands have been inverted, so this does not impact the result".
Thanks -- sentence removed!”.
Short circuit evaluation is guaranteed in C++ with the fundamental data types. However, it’s worth noting that if you override the || or && operator for your own classes, short circuit evaluation is not performed when used in those cases. For that reason, operators || and && normally aren’t overridden.
Name (required)
Website
|
http://www.learncpp.com/cpp-tutorial/36-logical-operators/
|
CC-MAIN-2017-22
|
refinedweb
| 3,487
| 73.88
|
How to Check if a Variable Exists in Python
In this article, we will learn to check the existence of variables in Python. We will use some built-in functions in Python and some custom codes including exceptions of the functions as well. Let's first have a quick look over what are variables and then how many types of variables are there in Python.
What is Variable in Python?
Variables are containers and reserved memory locations for storing data values. Variables store data that can be used when evaluating an expression. Variables in Python can store any type of data or value say, integer type, string type, float type, or a boolean value, etc. There is no need to mention the type of the variable while defining it in the program. In the Python programming language, it is necessary for the variables to be defined before they are used in any function or in the program.
Variable Example
x = 3
x is a variable of integer type becuase it holds integer value. No data type like int is used before the variable name.
In Python, all variables are expected to be defined before use. The None object is a value you often assign to signify that you have no real value for a variable, as shown below.
try: x except NameError: x = None
Then it’s easy to test whether a variable is bound to None or not.
if x is None: some_fallback_operation( ) else: some_operation(x)
Python Variable Exist or Not?
Python doesn’t have a specific function to test whether a variable is defined, since all variables are expected to have been defined before use, even if we initially assigned the variable to None object. Attempting to access a variable that hasn’t previously been defined raises a
NameError exception. This NameError exception can be handled with a try/except statement, as you can do for any other Python exception.
We can use
try-except block to check the existence of a variable that is not defined earlier before use.
try: myVarexcept NameError: # Do something.
Instead of ensuring that a variable is initialized like we see above that variable was assigned none value, you may prefer to test whether it’s defined where you want to use it. Let us see the example below.
try: x except NameError: some_fallback_operation( ) else: some_operation(x)
Now, Python brings two functions
and
locals()
globals()
to overcome this situation. These two functions will help in checking whether the two variables i.e. local variable and global variable exists or not.to overcome this situation. These two functions will help in checking whether the two variables i.e. local variable and global variable exists or not.
Checking Local Variable Existence in Python
To check the existence of a variable locally we are going to use the
locals() function to get the dictionary of the current local symbol table. It returns true if the variable is found in the local entry system else returns false.
def func(): # defining local variable local_var = 0 # using locals() function for checking existence in symbol table is_local = "local_var" in locals() # printing result print(is_local) # driver code func()
TRUE
Checking Global Variable Existence in Python
Now, To check the existence of a variable globally we are going to use the
globals() function to get the dictionary of the current global symbol table.
def func(): # defining variable global_var = 0 # using globals() function check if global variable exist is_global = "global_var" in globals() # printing result print(is_global) # driver code func()
FALSE
This way the programmer can use the locals() and globals() function provided by Python language to check the existence of the variable. The program will print TRUE if the variable exists and FALSE if the variable does not exist.
Conclusion
In this article, we learned to check the existence of variables in Python by using two built-in functions such as
locals() and
globals(). We used some custom codes as well. We learned about exceptions too. For example, we used None object and assigned it to a new variable, and checked what exceptions we are getting using examples.
|
https://www.studytonight.com/python-howtos/how-to-check-if-a-variable-exists-in-python
|
CC-MAIN-2022-21
|
refinedweb
| 685
| 61.56
|
table of contents
NAME¶
alarm - set an alarm clock for delivery of a signal
SYNOPSIS¶
#include <unistd.h>
unsigned int alarm(unsigned int seconds);
DESCRIPTION¶
alarm() arranges for a SIGALRM signal to be delivered to the calling process in seconds seconds.
If seconds is zero, any pending alarm is canceled.
In any event any previously set alarm() is canceled.
RETURN VALUE¶
alarm() returns the number of seconds remaining until any previously scheduled alarm was due to be delivered, or zero if there was no previously scheduled alarm.
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, SVr4, 4.3BSD.
NOTES¶¶
gettimeofday(2), pause(2), select(2), setitimer(2), sigaction(2), signal(2), timer_create(2), timerfd_create(2), sleep(3), time(7)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
|
https://dyn.manpages.debian.org/testing/manpages-dev/alarm.2.en.html
|
CC-MAIN-2022-21
|
refinedweb
| 154
| 66.23
|
fix
I previously reported:
I still have this issue. As I can't find the way add new comment I open new bug.
Look for history of 'maaP.h':
revision 1.22
date: 2003/10/26 13:03:24; author: cheusov; state: Exp; lines: +10 -2
checking for presence of getopt(3) and SIZEOF_VOID_P define
@@ -103,9 +109,11 @@
# include <getopt.h>
#else <<=== else if !!!
#if !HAVE_GETOPT
-extern int getopt( int, char **, char * );
+extern int getopt( int, char * const *, const char * );
extern int optind;
extern char *optarg;
+#else
+# include <unistd.h>
#endif
#endif
I don't search history fully but currently this file have no "else if":
#if HAVE_GETOPT_H
# include <getopt.h>
#endif
#if !defined(HAVE_GETOPT)
So I decide return to old logic (which seems logical to me). See patch.
fix.
cheap north face jackets.
north face jackets sale
|
https://sourceforge.net/p/dict/bugs/23/
|
CC-MAIN-2017-17
|
refinedweb
| 139
| 77.94
|
Physics::Unit - Manipulate physics units and dimensions..
This page.
Describes the Scalar class and all of the type-specific classes that derive from Scalar.
Describes the command-line utility that is included with this module.
Table of all of the units predefined in the unit library, alphabetically by name.
Tables listing all the units in the unit library, grouped by type.
Describes some implementation details for the Unit module.
Implementation details for the Scalar module. is a unit of area, but
square mega meter is a unit of distance (equal to 10^12 meters).
Square or cube the next thing on the line
Square or cube the previous thing on the line.
^or
**
Exponentiation (must be to an integral power)
Any amount of whitespace between units is considered a multiplication
Multiplication or division'.
This is the approximate grammar used by the parser.
expr : term | term '/' expr | term '*' expr | term 'per' expr term : factor | term <whitespace> factor factor : primary | primary '**' integer primary : number | word | '(' expr ')' | 'square' primary | 'sq' primary | 'cubic' primary | primary 'squared' | primary 'cubed':
A few unit names and abbreviations had to be changed in order to avoid name conflicts. These are:
By default, this module exports nothing. You can request all of the functions to be exported as follows:
use Physics::Unit ':ALL';
Or, you can just get specific ones. For example:
use Physics::Unit qw( GetUnit ListUnits );.
This function defines new prefixes. For example:
InitPrefix('gonzo' => 1e100, 'piccolo' => 1e-100);
From then on, you can use those prefixes to define new units, as in:
$beautification_rate = new Physics::Unit('5 piccolosonjas / hour'); class method can be used to create new, named Unit objects. Units created with
InitUnit must have a name, however, whereas
new can).
Returns a list of all Unit names known, sorted alphabetically.
Returns a list of all the quantity types known to the library, sorted alphabetically.
Returns the number of base dimension units.
Returns the Unit object corresponding to a given type name, or undef if the type is not recognized. value of those Units would be rendered invalid when these names are removed. above).
This method returns one of:
undef
no type was found to match the unit's dimensionality
in the special case where the unit is a named prefix
the prototype unit for this type name matches the unit's dimensionality'
Returns the primary name of the Unit. If this Unit has no names, then
undef.
Returns the shortest name of the Unit. If this Unit has no names,
undef.
Returns a list of names that can be used to reference the Unit. Returns the empty list if the Unit is unnamed.
Returns the string that was used to define this Unit. Note that if the Unit has been manipulated with any of the arithmetic methods, then the
def method will return
undef, since the definition string is no longer a valid definition of the Unit.
Produces a string representation of the Unit, in terms of the base Units. For example:
print GetUnit('calorie')->expanded, "\n"; # "4184 m^2 gm s^-2".
Get or set the Unit's conversion factor (magnitude). If this is used to set a Unit's factor, then the Unit object must be anonymous.
Returns the number which converts this Unit to another. The types of the Units must match. For example:
print GetUnit('mile')->convert('foot'), "\n"; # 5280
Multiply this object by the given Unit. This will, in general, change a Unit's dimensionality, and hence its type.
Replaced a Unit with its reciprocal. This will, in general, change a Unit's dimensionality, and hence its type.'
Raises a Unit to an integral power. This will, in general, change its dimensionality, and hence its type.
Add a Unit, which must be of the same type.
Replace a Unit with its arithmetic negative.
Subtract a Unit, which must be of the same type..
This returns 1 if the two Unit objects have the same type and the same magnitude.
Here are some other modules that might fit your needs better than this one:.
|
http://search.cpan.org/~klortho/Physics-Unit-0.53/lib/Physics/Unit.pm
|
CC-MAIN-2017-04
|
refinedweb
| 674
| 66.13
|
(For more resources on Python, see here.)
So let's get on with it!
Installation prerequisites
Before we jump in to the main topic, it is necessary to install the following packages.
Python
In this article, we will use Python Version 2.6, or to be more specific, Version 2.6.4. It can be downloaded from the following location:
Windows platform
For Windows, just download and install the platform-specific binary distribution of Python 2.6.4.
Other platforms
For other platforms, such as Linux, Python is probably already installed on your machine. If the installed version is not 2.6, build and install it from the source distribution. If you are using a package manager on a Linux system, search for Python 2.6. It is likely that you will find the Python distribution there. Then, for instance, Ubuntu users can install Python from the command prompt as:
$sudo apt-get python2.6
Note that for this, you must have administrative permission on the machine on which you are installing Python.
Python Imaging Library (PIL)
We will learn image-processing techniques by making extensive use of the Python Imaging Library (PIL) throughout this article. PIL is an open source library. You can download it from. Install the PIL Version 1.1.6 or later.
Windows platform
For Windows users, installation is straightforward—use the binary distribution PIL 1.1.6 for Python 2.6.
Other platforms
For other platforms, install PIL 1.1.6 from the source. Carefully review the README file in the source distribution for the platform-specific instructions. Libraries listed in the following table are required to be installed before installing PIL from the source. For some platforms like Linux, the libraries provided in the OS should work fine. However, if those do not work, install a pre-built "libraryName-devel" version of the library. For example, for JPEG support, the name will contain "jpeg-devel-", and something similar for the others. This is generally applicable to rpm-based distributions. For Linux flavors like Ubuntu, you can use the following command in a shell window.
$sudo apt-get install python-imaging
However, you should make sure that this installs Version 1.1.6 or later. Check PIL documentation for further platform-specific instructions. For Mac OSX, see if you can use fink to install these libraries. See for more details. You can also check the website or Darwin ports website to see if a binary package installer is available. If such a pre-built version is not available for any library, install it from the source.
The PIL prerequisites for installing PIL from source are listed in the following table:
PyQt4
This package provides Python bindings for Qt libraries. We will use PyQt4 to generate GUI for the image-processing application that we will develop later in this article. The GPL version is available at:.
Windows platform
Download and install the binary distribution pertaining to Python 2.6. For example, the executable file's name could be 'PyQt-Py2.6-gpl-4.6.2-2.exe'. Other than Python, it includes everything needed for GUI development using PyQt.
Other platforms
Before building PyQt, you must install SIP Python binding generator. For further details, refer to the SIP homepage:.
After installing SIP, download and install PyQt 4.6.2 or later, from the source tarball. For Linux/Unix source, the filename will start with PyQt-x11-gpl-.. and for Mac OS X, PyQt-mac-gpl-... Linux users should also check if PyQt4 distribution is already available through the package manager.
Summary of installation prerequisites
(For more resources on Python, see here.)
Reading and writing images
To manipulate an existing image, we must open it first for editing and we also require the ability to save the image in a suitable file format after making changes. The Image module in PIL provides methods to read and write images in the specified image file format. It supports a wide range of file formats.
To open an image, use Image.open method. Start the Python interpreter and write the following code. You should specify an appropriate path on your system as an argument to the Image.open method.
>>>import Image
>>>inputImage = Image.open("C:\\PythonTest\\image1.jpg")
This will open an image file by the name image1.jpg. If the file can't be opened, an IOError will be raised, otherwise, it returns an instance of class Image.
For saving image, use the save method of the Image class. Make sure you replace the following string with an appropriate /path/to/your/image/file.
>>>inputImage.save("C:\\PythonTest\\outputImage.jpg")
You can view the image just saved, using the show method of Image class.
>>>outputImage = Image.open("C:\\PythonTest\\outputImage.jpg")
>>>outputImage.show()
Here, it is essentially the same image as the input image, because we did not make any changes to the output image.
Time for action – image file converter
With this basic information, let's build a simple image file converter. This utility will batch-process image files and save them in a user-specified file format.
To get started, download the file ImageFileConverter.py from the Packt website
. This file can be run from the command line as:
python ImageConverter.py [arguments]
Here, [arguments] are:
- --input_dir: The directory path where the image files are located.
- --input_format: The format of the image files to be converted. For example, jpg.
- --output_dir: The location where you want to save the converted images.
- --output_format: The output image format. For example, jpg, png, bmp, and so on.
The following screenshot shows the image conversion utility in action on Windows XP, that is, running image converter from the command line.
Here, it will batch-process all the .jpg images within C:\PythonTest\images and save them in png format in the directory C:\PythonTest\images\OUTPUT_IMAGES.
The file defines class ImageConverter. We will discuss the most important methods in this class.
- def processArgs: This method processes all the command-line arguments listed earlier. It makes use of Python's built-in module getopts to process these arguments. Readers are advised to review the code in the file ImageConverter.py in the code bundle of this book for further details on how these arguments are processed.
- def convertImage: This is the workhorse method of the image-conversion utility.
1 def convertImage(self):
2 pattern = "*." + self.inputFormat
3 filetype = os.path.join(self.inputDir, pattern)
4 fileList = glob.glob(filetype)
5 inputFileList = filter(imageFileExists, fileList)
6
7 if not len(inputFileList):
8 print "\n No image files with extension %s located \
9 in dir %s"%(self.outputFormat, self.inputDir)
10 return
11 else:
12 # Record time before beginning image conversion
13 starttime = time.clock()
14 print "\n Converting images.."
15
16 # Save image into specified file format.
17 for imagePath in inputFileList:
18 inputImage = Image.open(imagePath)
19 dir, fil = os.path.split(imagePath)
20 fil, ext = os.path.splitext(fil)
21 outPath = os.path.join(self.outputDir,
22 fil + "." + self.outputFormat)
23 inputImage.save(outPath)
24
25 endtime = time.clock()
26 print "\n Done!"
27 print "\n %d image(s) written to directory:\
28 %s" %(len(inputFileList), self.outputDir)
29 print "\n Approximate time required for conversion: \
30 %.4f seconds" % (endtime - starttime)
Now let's review the preceding code.
- Our first task is to get a list of all the image files to be saved in a different format. This is achieved by using glob module in Python. Line 4 in the code snippet finds all the file path names that match the pattern specified by the local variable fileType. On line 5, we check whether the image file in fileList exists. This operation can be efficiently performed over the whole list using the built-in filter functionality in Python.
- The code block between lines 7 to 14 ensures that one or more images exist. If so, it will record the time before beginning the image conversion.
- The next code block (lines 17-23) carries out the image file conversion. On line 18, we use Image.open to open the image file. Line 18 creates an Image object. Then the appropriate output path is derived and finally the output image is saved using the save method of the Image module.
What just happened?
In this simple example, we learned how to open and save image files in a specified image format. We accomplished this by writing an image file converter that batch-processes a specified image file. We used PIL's Image.open and Image.save functionality along with Python's built-in modules such as glob and filter.
Now we will discuss other key aspects related to the image reading and writing.
Creating an image from scratch
So far we have seen how to open an existing image. What if we want to create our own image? As an example, it you want to create fancy text as an image, the functionality that we are going to discuss now comes in handy. Later in this book, we will learn how to use such an image containing some text to embed into another image. The basic syntax for creating a new image is:
foo = Image.new(mode, size, color)
Where, new is the built-in method of class Image. Image.new takes three arguments, namely, mode, size, and color. The mode argument is a string that gives information about the number and names of image bands. Following are the most common values for mode argument: L (gray scale) and RGB (true color). The size is a tuple specifying dimensions of the image in pixels, whereas, color is an optional argument. It can be assigned an RGB value (a 3-tuple) if it's a multi-band image. If it is not specified, the image is filled with black color.
Time for action – creating a new image containing some text
As already stated, it is often useful to generate an image containing only some text or a common shape. Such an image can then be pasted onto another image at a desired angle and location. We will now create an image with text that reads, "Not really a fancy text!"
- Write the following code in a Python source file:
1 import Image
2 import ImageDraw
3 txt = "Not really a fancy text!"
4 size = (150, 50)
5 color = (0, 100, 0)
6 img = Image.new('RGB', size, color)
7 imgDrawer = ImageDraw.Draw(img)
8 imgDrawer.text((5, 20), txt)
9 img.show()
- Let's analyze the code line by line. The first two lines import the necessary modules from PIL. The variable txt is the text we want to include in the image. On line 7, the new image is created using Image.new. Here we specify the mode and size arguments. The optional color argument is specified as a tuple with RGB values pertaining to the "dark green" color.
- The ImageDraw module in PIL provides graphics support for an Image object. The function ImageDraw.Draw takes an image object as an argument to create a Draw instance. In output code, it is called imgDrawer, as used on line 7. This Draw instance enables drawing various things in the given image.
- On line 8, we call the text method of the Draw instance and supply position (a tuple) and the text (stored in the string txt) as arguments.
- Finally, the image can be viewed using img.show() call. You can optionally save the image using Image.save method. The following screenshot shows the resultant image.
What just happened?
We just learned how to create an image from scratch. An empty image was created using the Image.new method. Then, we used the ImageDraw module in PIL to add text to this image.
Reading images from archive
If the image is part of an archived container, for example, a TAR archive, we can use the TarIO module in PIL to open it and then call Image.open to pass this TarIO instance as an argument.
Time for action – reading images from archives
Suppose there is an archive file images.tar containing image file image1.jpg. The following code snippet shows how to read image1.jpg from the tarball.
>>>import TarIO
>>>import Images
>>>fil = TarIO.TarIO("images.tar", "images/image1.jpg")
>>>img = Image.open(fil)
>>>img.show()
What just happened?
We learned how to read an image located in an archived container.
Have a go hero – add new feature to the image file converter Modify the image conversion code so that it supports the following new functionality, which:
- Takes a ZIP file containing images as input
- Creates a TAR archive of the converted images
(For more resources on Python, see here.)
Basic image manipulations
Now that we know how to open and save images, let's learn some basic techniques to manipulate images. PIL supports a variety of geometric manipulation operations, such as resizing an image, rotating it by an angle, flipping it top to bottom or left to right, and so on. It also facilitates operations such as cropping, cutting and pasting pieces of images, and so on.
Resizing
Changing the dimensions of an image is one of the most frequently used image manipulation operations. The image resizing is accomplished using Image.resize in PIL. The following line of code explains how it is achieved.
foo = img.resize(size, filter)
Here, img is an image (an instance of class Image) and the result of resizing operation is stored in foo (another instance of class Image). The size argument is a tuple (width, height). Note that the size is specified in pixels. Thus, resizing the image means modifying the number of pixels in the image. This is also known as image re-sampling. The Image.resize method also takes filter as an optional argument. A filter is an interpolation algorithm used while re-sampling the given image. It handles deletion or addition of pixels during re-sampling, when the resize operation is intended to make image smaller or larger in size respectively. There are four filters available. The resize filters in the increasing order of quality are NEAREST, BILINEAR, BICUBIC, and ANTIALIAS. The default filter option is NEAREST.
Time for action – resizing
Let's now resize images by modifying their pixel dimensions and applying various filters for re-sampling.
- Download the file ImageResizeExample.bmp from the Packt website. We will use this as the reference file to create scaled images. The original dimensions of ImageResizeExample.bmp are 200 x 212 pixels.
- Write the following code in a file or in Python interpreter. Replace the inPath and outPath strings with the appropriate image path on your machine.
1 import Image
2 inPath = "C:\\images\\ImageResizeExample.jpg"
3 img = Image.open(inPath)
4 width , height = (160, 160)
5 size = (width, height)
6 foo = img.resize(size)
7 foo.show()
8 outPath = "C:\\images\\foo.jpg"
9 foo.save(outPath)
- The image specified by the inPath will be resized and saved as the image specified by the outPath. Line 6 in the code snippet does the resizing job and finally we save the new image on line 9. You can see how the resized image looks by calling foo.show().
- Let's now specify the filter argument. In the following code, on line 14, the filterOpt argument is specified in the resize method. The valid filter options are specified as values in the dictionary filterDict. The keys of filterDict are used as the filenames of the output images. The four images thus obtained are compared in the next illustration. You can clearly notice the difference between the ANTIALIAS image and the others (particularly, look at the flower petals in these images). When the processing time is not an issue, choose the ANTIALIAS filter option as it gives the best quality image.
1 import Image
2 inPath = "C:\\images\\ImageResizeExample.jpg"
3 img = Image.open(inPath)
4 width , height = (160, 160)
5 size = (width, height)
6 filterDict = {'NEAREST':Image.NEAREST,
7 'BILINEAR':Image.BILINEAR,
8 'BICUBIC':Image.BICUBIC,
9 'ANTIALIAS':Image.ANTIALIAS }
10
11 for k in filterDict.keys():
12 outPath= "C:\\images\\" + k + ".jpg"
13 filterOpt = filterDict[k]
14 foo = img.resize(size, filterOpt)
15 foo.save(outPath)
- The resize functionality illustrated here, however, doesn't preserve the aspect ratio of the resulting image. The image will appear distorted if one dimension is stretched more or stretched less in comparison with the other dimension. PIL's Image module provides another built-in method to fix this. It will override the larger of the two dimensions, such that the aspect ratio of the image is maintained.
import Image
inPath = "C:\\images\\ResizeImageExample.jpg"
img = Image.open(inPath)
width , height = (100, 50)
size = (width, height)
outPath = "C:\\images\\foo.jpg"
img.thumbnail(size, Image.ANTIALIAS)
img.save(outPath)
- This code will override the maximum pixel dimension value (width in this case) specified by the programmer and replace it with a value that maintains the aspect ratio of the image. In this case, we have an image with pixel dimensions (47, 50). The resultant images are compared in the following illustration.
It shows the comparison of output images for methods Image.thumbnail and Image.resize.
What just happened?
We just learned how image resizing is done using PIL's Image module, by writing a few lines of code. We also learned different types of filters used in image resizing (re-sampling). And finally, we also saw how to resize an image while still keeping the aspect ratio intact (that is, without distortion), using the Image.thumbnail method.
Rotating
Like image resizing, rotating an image about its center is another commonly performed transformation. For example, in a composite image, one may need to rotate the text by certain degrees before embedding it in another image. For such needs, there are methods such as rotate and transpose available in PIL's Image module. The basic syntax to rotate an image using Image.rotate is as follows:
foo = img.rotate(angle, filter)
Where, the angle is provided in degrees and filter, the optional argument, is the image-re-sampling filter. The valid filter value can be NEAREST, BILINEAR, or BICUBIC. You can rotate the image using Image.transpose only for 90-, 180-, and 270-degree rotation angles.
Time for action – rotating
- Download the file Rotate.png from the Packt website. Alternatively, you can use any supported image file of your choice.
- Write the following code in Python interpreter or in a Python file. As always, specify the appropriate path strings for inPath and outPath variables.
1 import Image
2 inPath = "C:\\images\\Rotate.png"
3 img = Image.open(inPath)
4 deg = 45
5 filterOpt = Image.BICUBIC
6 outPath = "C:\\images\\Rotate_out.png"
7 foo = img.rotate(deg, filterOpt)
8 foo.save(outPath)
- Upon running this code, the output image, rotated by 45 degrees, is saved to the outPath. The filter option Image.BICUBIC ensures highest quality. The next illustration shows the original and the images rotated by 45 and 180 degrees respectively—the original and rotated images.
- There is another way to accomplish rotation for certain angles by using the Image.transpose functionality. The following code achieves a 270-degree rotation. Other valid options for rotation are Image.ROTATE_90 and Image.ROTATE_180.
import Image
inPath = "C:\\images\\Rotate.png"
img = Image.open(inPath)
outPath = "C:\\images\\Rotate_out.png"
foo = img.transpose(Image.ROTATE_270)
foo.save(outPath)
What just happened?
In the previous section, we used Image.rotate to accomplish rotating an image by the desired angle. The image filter Image.BICUBIC was used to obtain better quality output image after rotation. We also saw how Image.transpose can be used for rotating the image by certain angles.
Flipping
There are multiple ways in PIL to flip an image horizontally or vertically. One way to achieve this is using the Image.transpose method. Another option is to use the functionality from the ImageOps module . This module makes the image-processing job even easier with some ready-made methods. However, note that the PIL documentation for Version 1.1.6 states that ImageOps is still an experimental module.
Time for action – flipping
Imagine that you are building a symmetric image using a bunch of basic shapes. To create such an image, an operation that can flip (or mirror) the image would come in handy. So let's see how image flipping can be accomplished.
- Write the following code in a Python source file.
1 import Image
2 inPath = "C:\\images\\Flip.png"
3 img = Image.open(inPath)
4 outPath = "C:\\images\\Flip_out.png"
5 foo = img.transpose(Image.FLIP_LEFT_RIGHT)
6 foo.save(outPath)
- In this code, the image is flipped horizontally by calling the transpose method. To flip the image vertically, replace line 5 in the code with the following:
foo = img.transpose(Image.FLIP_TOP_BOTTOM)
- The following illustration shows the output of the preceding code when the image is flipped horizontally and vertically.
- The same effect can be achieved using the ImageOps module. To flip the image horizontally, use ImageOps.mirror, and to flip the image vertically, use ImageOps.flip.
import ImageOps
# Flip image horizontally
foo1 = ImageOps.mirror(img)
# Flip image vertically
foo2 = ImageOps.flip(img)
What just happened?
With the help of example, we learned how to flip an image horizontally or vertically using Image.transpose and also by using methods in class ImageOps. This operation will be applied later in this book for further image processing such as preparing composite images.
Capturing screenshots
How do you capture the desktop screen or a part of it using Python? There is ImageGrab module in PIL. This simple line of code will capture the whole screen.
img = ImageGrab.grab()
Where, img is an instance of class Image.
However, note that in PIL Version 1.1.6, the ImageGrab module supports screen grabbing only for Windows platform.
Time for action – capture screenshots at intervals
Imagine that you are developing an application, where, after certain time interval, the program needs to automatically capture the whole screen or a part of the screen. Let's develop code that achieves this.
- Write the following code in a Python source file. When the code is executed, it will capture part of the screen after every two seconds. The code will run for about three seconds.
1 import ImageGrab
2 import time
3 startTime = time.clock()
4 print "\n The start time is %s sec" % startTime
5 # Define the four corners of the bounding box.
6 # (in pixels)
7 left = 150
8 upper = 200
9 right = 900
10 lower = 700
11 bbox = (left, upper, right, lower)
12
13 while time.clock() < 3:
14 print " \n Capturing screen at time %.4f sec" \
15 %time.clock()
16 screenShot = ImageGrab.grab(bbox)
17 name = str("%.2f"%time.clock())+ "sec.png"
18 screenShot.save("C:\\images\\output\\" + name)
19 time.sleep(2)
- We will now review the important aspects of this code. First, import the necessary modules. The time.clock() keeps track of the time spent. On line 11, a bounding box is defined. It is a 4-tuple that defines the boundaries of a rectangular region. The elements in this tuple are specified in pixels. In PIL, the origin (0, 0) is defined in the top-left corner of an image. The next illustration is a representation of a bounding box for image cropping; see how left, upper and right, lower are specified as the ends of a diagonal of rectangle.
Example of a bounding box used for image cropping.
- The while loop runs till the time.clock() reaches three seconds. Inside the loop, the part of the screen bounded within bbox is captured (see line 16) and then the image is saved on line 18. The image name corresponds to the time at which it is taken.
- The time.sleep(2) call suspends the execution of the application for two seconds. This ensures that it grabs the screen every two seconds. The loop repeats until the given time is reached.
- In this example, it will capture two screenshots, one when it enters the loop for the first time and the next after a two-second time interval. In the following illustration, the two images grabbed by the code are shown. Notice the time and console prints in these images.
The preceding screenshot is taken at time 00:02:15 as shown dialog. The next screenshot is taken after 2 seconds, at wall clock time, 00:02:17.
The two screenshots captured by the screenshot code at two seconds time interval. Notice the time and console print in the screenshots
What just happened?
In the preceding example, we wrote a simple application that captures the screen at regular time intervals. This helped us to learn how to grab a screen region using ImageGrab.
Cropping
In previous section, we learned how to grab a part of the screen with ImageGrab. Cropping is a very similar operation performed on an image. It allows you to modify a region within an image.
Time for action – cropping an image
This simple code snippet crops an image and applies some changes on the cropped portion.
- Download the file Crop.png from Packt website. The size of this image is 400 x 400 pixels. You can also use your own image file.
- Write the following code in a Python source file. Modify the path of the image file to an appropriate path.
import Image
img = Image.open("C:\\images\\Crop.png")
left = 0
upper = 0
right = 180
lower = 215
bbox = (left, upper, right, lower)
img = img.crop(bbox)
img.show()
- This will crop a region of the image bounded by bbox. The specification of the bounding box is identical to what we have seen in the Capturing screenshots section. The output of this example is shown in the following illustration.
Original image (left) and its cropped region (right).
What just happened?
In the previous section, we used Image.crop functionality to crop a region within an image and save the resultant image. In the next section, we will apply this while pasting a region of an image onto another.
Pasting
Pasting a copied or cut image onto another one is a commonly performed operation while processing images. Following is the simplest syntax to paste one image on another.
img = img.paste(image, box)
Here image is an instance of class Image and box is a rectangular bounding box that defines the region of img, where the image will be pasted. The box argument can be a 4-tupleError: Reference source not found or a 2-tuple. If a 4-tuple box is specified, the size of the image to be pasted must be same as the size of the region. Otherwise, PIL will throw an error with a message ValueError: images do not match. The 2-tuple on the other hand, provides pixel coordinates of the upper-left corner of the region to be pasted.
Now look at the following line of code. It is a copy operation on an image.
img2 = img.copy(image)
The copy operation can be viewed as pasting the whole image onto a new image. This operation is useful when, for instance, you want to keep the original image unaltered and make alterations to the copy of the image.
Time for action – pasting: mirror the smiley face!
Consider the example in earlier section where we cropped a region of an image. The cropped region contained a smiley face. Let's modify the original image so that it has a 'reflection' of the smiley face.
- If not already, download the file Crop.png from the Packt website.
- Write this code by replacing the file path with appropriate file path on your system.
1 import Image
2 img = Image.open("C:\\images\\Crop.png")
3 # Define the elements of a 4-tuple that represents
4 # a bounding box ( region to be cropped)
5 left = 0
6 upper = 25
7 right = 180
8 lower = 210
9 bbox = (left, upper, right, lower)
10 # Crop the smiley face from the image
11 smiley = img.crop(bbox_1)
12 # Flip the image horizontally
13 smiley = smiley.transpose(Image.FLIP_TOP_BOTTOM)
14 # Define the box as a 2-tuple.
15 bbox_2 = (0, 210)
16 # Finally paste the 'smiley' on to the image.
17 img.paste(smiley, bbox_2)
18 img.save("C:\\images\\Pasted.png")
19 img.show()
- First we open an image and crop it to extract a region containing the smiley face. This was already done in section Error: Reference source not found'Cropping'. The only minor difference you will notice is the value of the tuple element upper. It is intentionally kept as 25 pixels from the top to make sure that the crop image has a size that can fit in the blank portion below the original smiley face.
- The cropped image is then flipped horizontally with code on line 13.
- Now we define a box, bbox_2, for pasting the cropped smiley face back on to the original image. Where should it be pasted? We intend to make a 'reflection' of the original smiley face. So the coordinate of the top-right corner of the pasted image should be greater than or equal to the bottom y coordinate of the cropped region, indicated by 'lower' variable (see line 8) . The bounding box is defined on line 15, as a 2-tuple representing the upper-left coordinates of the smiley.
- Finally, on line 17, the paste operation is performed to paste the smiley on the original image. The resulting image is then saved with a different name.
- The original image and the output image after the paste operation is shown in the next illustration.
The illustration shows the comparison of original and resulting images after the paste operation.
What just happened?
Using a combination of Image.crop and Image.paste, we accomplished cropping a region, making some modifications, and then pasting the region back on the image.
Summary
We learned a lot in this article about basic image manipulation.
Specifically, we covered image input-output operations that enable reading and writing of images, and creation of images from scratch. With the help of numerous examples and code snippets, we learned several image manipulation operations. Some of them are:
- How to resize an image with or without maintaining aspect ratio
- Rotating or flipping an image
- Cropping an image, manipulating it using techniques learned earlier in the article, and then pasting it on the original image
- Creating an image with a text
- We developed a small application that captures a region of your screen at regular time intervals using ImageGrab.
Further resources on this subject:
- A Python Multimedia Application: Thumbnail Maker [Article]
- Python 3 Object Oriented Programming [Book]
- Objects in Python [Article]
- Python 3: When to Use Object-oriented Programming [Article]
- Python 3 Object Oriented Programming: Managing objects [Article]
|
https://www.packtpub.com/books/content/python-image-manipulation
|
CC-MAIN-2015-14
|
refinedweb
| 5,132
| 59.9
|
Hello all. I am new to C programming and was wondering if it is possible to convert an integer value to char or string. I checked a book, but only found documentation to convert from string to integer. Thanks in advance..
A char is an integer type already.sprintf
--!
int to char is done by simply casting:
int i = 10;
char c;
c = i;
Remember that chars can hold 256 numbers only. As for string, just use sprintf, like sprintf(buffer, "%d", i);, where buffer is a char array and i an integer. Remember to make the buffer as large as necessary.
--RB光子「あたしただ…奪う側に回ろうと思っただけよ」Mitsuko's last words, Battle Royale
If you want to convert from an integer to a string with a specified radix, use itoa().
itoa()
Thanks for the help. If I were adding the numeric value to the end of an existing string, would the format be:
char buffer[16} = "<some characters>";int place;char c;c = sprintf(buffer, "%d", place);?
[/edit] I tried itoa, got an undefined reference to itoa.[edit/]
Nah, just:
char buffer[16} = "<some characters>";
int place = 10;
sprintf(buffer, "%d", place);
That worked. Thanks LennyLen and X-G.
[\edit] Oops, closed the thread too soon. It appears to be overwriting what is in buffer, as opposed to adding place to the end. Any ideas?[\edit]
I tried itoa, got an undefined reference to itoa
You need to add the following to the top of your code:
#include <stdlib.h>
<stdlib.h> was included. Is it recent? Maybe my compiler does not support it yet.
It's not new, but it's not an ANSI C funtion either, though most compilers support it. Which are you using?
The following compiled fine for me using GCC.
I am using gcc 4.0.2. I copied the test program you submitted and compiled, still having problems with the itoa function. Are you coding in C or C++? I coded in C.
I compiled as C. I'm currently using GCC 3.4.2, so perhaps it's been removed in later versions.
New rule: no creating new programming forum threads until you read the thread list!
Is this week the printf assignment at the college that you three attend?
Not that I'm being hostile or anything.
-- Tomasu: Every time you read this: hugging!
Ryan Patterson - <>
CGamesPlay: My origional question was how to convert an int to char or string. To get back on topic, I tried sprintf, the new characters overwrote what was in the target string. Is there any way to add the characters to the end of the string, or did I do something wrong?
#include <stdio.h>
#include <string.h>
char buffer[20], buffer2[20];
int main() {
n = 25;
strcpy(buffer, "stuff");
sprintf(buffer2, "%d", n);
strcat(buffer, buffer2);
return 0;
}
That was it. Thanks.
|
https://www.allegro.cc/forums/thread/588784/631054
|
CC-MAIN-2016-36
|
refinedweb
| 475
| 76.62
|
15 August 2008 14:16 [Source: ICIS news]
LONDON (ICIS news)--Black Sea prilled urea prices have weakened $20-25/tonne (€13-17/tonne) on the back of limited buying activity, traders said on Friday.
With Indian buyers yet to re-emerge and the holiday/harvest season evident in the northern hemisphere, urea trade has been extremely slim, market participants said.
As a result, the key benchmark ?xml:namespace>
“
Traders Agrofertrans (AFT) sold 6,000 tonnes of prilled urea to Turkish buyers Gubretas for second-half August shipment.
The price was reported in the $840s/tonne CFR (cost and freight) including 180 days’ credit, which netted back to $785/tonne FOB (free on board) Yuzhny.
As a result of limited activity and this AFT sale, Black Sea prices were pegged at $785-800/tonne FOB Yuzhny by global chemical market intelligence service ICIS pricing.
A week previously,
Despite softening, Yuzhny prices had not dropped substantially as traders bidding for larger quantities have been unable to obtain offers below $800-805/tonne FOB.
The main Ukrainian producers were holding out for prices above this level.
“When they [
(
|
http://www.icis.com/Articles/2008/08/15/9149427/black-sea-urea-weakens-20-25t-in-quiet-market.html
|
CC-MAIN-2014-15
|
refinedweb
| 186
| 50.06
|
bos@grastorpsik.se changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RESOLVED |REOPENED
Resolution|FIXED |
--- Comment #7 from bos@grastorpsik.se 2010-08-09 22:41:40 EDT ---
This bug (namespace polluting) still exists in 2.3.6 (out-of-the-box
configuration) when using httpd-autoindex.conf, even when disabling autoindex
for a particular site:
[Tue Aug 10 04:36:10.406428 2010] [notice] [pid 15584:tid 16384] SIGHUP
received. Attempting to restart
Digest: cleaning up shared memory
[Tue Aug 10 04:36:11.244445 2010] [notice] [pid 15584:tid 16384] Digest:
generating secret for digest authentication ...
[Tue Aug 10 04:36:11.244642 2010] [notice] [pid 15584:tid 16384] Digest: done
[Tue Aug 10 04:36:12.095625 2010] [notice] [pid 15584:tid 16384] Apache/2.3.6
(Unix) DAV/2 PHP/5.3.0 configured -- resuming normal operations
[Tue Aug 10 04:36:12.095956 2010] [notice] [pid 15584:tid 16384] Command line:
'/usr/bin/httpd'
[Tue Aug 10 04:36:16.946550 2010] [error] [pid 27512:tid 32771] [client
192.168.1.66:1146] File does not exist: /usr/apache2/icons/pdf_s.png, referer:
The file exists in <docroot>/icons/pdf_s.png but Apache tries to access it in
<wwwroot>. If autoindex is disabled is commented out the icon in question is
accessed without problems. It is also accessed if the Alias-line in
httpd-autoindex.conf is commented out, but this yields broken icons for the
FancyIndexing option.
So, I'll still vote for an Unalias-command because atleast I want autoindex /
FancyIndexing available on one site and disabled on another.
--
Configure bugmail:
------- You are receiving this mail because: -------
You are the assignee for the bug.
---------------------------------------------------------------------
To unsubscribe, e-mail: bugs-unsubscribe@httpd.apache.org
For additional commands, e-mail: bugs-help@httpd.apache.org
|
http://mail-archives.apache.org/mod_mbox/httpd-bugs/201008.mbox/%3C201008100241.o7A2fj2s019999@thor.apache.org%3E
|
CC-MAIN-2017-13
|
refinedweb
| 297
| 53.68
|
XUL::Gui - render cross platform gui applications with firefox from perl
version 0.63
this module is under active development, interfaces may change.
this code is currently in beta, use in production environments at your own risk
use XUL::Gui; display Label 'hello, world!'; # short enough? remove "Label" for bonus points'), );
this module exposes the entire functionality of mozilla firefox's rendering engine to perl by providing all of the
XUL and
HTML tags as functions and allowing you to interact with those objects directly from perl. gui applications created with this toolkit are cross platform, fully support CSS styling, inherit firefox's rich assortment of web technologies (browser, canvas and video tags, flash and other plugins), and are even easier to write than
HTML .
gui's created with this module are event driven. an arbitrarily complex (and runtime mutable) object tree is passed to
display , which then creates the gui in firefox and starts the event loop.
display will wait for and respond to events until the
quit function is called, or the user closes the window.
all of javascript's event handlers are available, and can be written in perl (normally) or javascript (for handlers that need to be very fast such as image rollovers with onmouseover or the like). this is not to say that perl side handlers are slow, but with rollovers and fast mouse movements, sometimes there is mild lag due to protocol overhead.
this module is written in pure perl, and only depends upon core modules, making it easy to distribute your application. the goal of this module is to make all steps of gui development as easy as possible. XUL's widgets and nested design structure gets us most of the way there, and this module with its light weight syntax, and 'do what i mean' nature hopefully finishes the job. everything has sensible defaults with minimal boilerplate, and nested design means a logical code flow that isn't littered with variables. please send feedback if you think anything could be improved.
just like in
HTML, you build up your gui using tags. all tags (
XUL tags,
HTML tags, user defined widgets, and the
display function) are parsed the same way, and can fit into one of four templates:
HR() <hr />
B('some bold text') <b>some bold text<b/>
in the special case of a tag with one argument, which is not another tag, that argument is added to that tag as a text node. this is mostly useful for HTML tags, but works with XUL as well. once parsed, the line
B('...') becomes
B( TEXT => '...' ). the special
TEXT attribute can be used directly if other attributes need to be set:
FONT( color=>'blue', TEXT=>'...' ).
Label( value=>'some text', style=>'color: red' ) <label value="some text" style="color: red;" />
Hbox( id => 'mybox', pack => 'center', Label( value => 'hello' ), BR, B('world') ) <hbox id="mybox" pack="center"> <label value="hello" /> <br /> <b>world</b> </hbox>
as you can see, the tag functions in perl nest and behave the same way as their counterpart element constructors in
HTML/XUL . just like in
HTML , you access the elements in your gui by
id . but rather than using
document.getElementById(...) all the time, setting the
id attribute names an element in the global
%ID hash. the same hash can be accessed using the
ID(some_id) function.
my $object = Button( id => 'btn', label => 'OK' ); # $ID{btn} == ID(btn) == $object
the ID hash also exists in javascript:
ID.btn == document.getElementById('btn')
due to the way this module works, every element needs an
id , so if you don't set one yourself, an auto generated
id matching
/^xul_\d+$/ is used. you can use any
id that matches
/\w+/
Tk's attribute style with a leading dash is supported. this is useful for readability when collapsing attribute lists with
qw//
TextBox id=>'txt', width=>75, height=>20, type=>'number', decimalplaces=>4; TextBox qw/-id txt -width 75 -height 20 -type number -decimalplaces 4/;
multiple 'style' attributes are joined with ';' into a single attribute
all
XUL and
HTML objects in perl are exact mirrors of their javascript counterparts and can be acted on as such. for anything not written in this document or XUL::Gui::Manual, developer.mozilla.com is the official source of documentation:
any tag attribute name that matches
/^on/ is an event handler (onclick, onfocus, ...), and expects a
sub {...} (perl event handler) or
function q{...} (javascript event handler).
perl event handlers get passed a reference to their object and an event object
Button( label=>'click me', oncommand=> sub { my ($self, $event) = @_; $self->label = $event->type; })
in the event handler,
$_ == $_[0] so a shorter version would be:
oncommand => sub {$_->label = pop->type}
javascript event handlers have
event and
this set for you
Button( label=>'click me', oncommand=> function q{ this.label = event.type; })
any attribute with a name that doesn't match
/^on/ that has a code ref value is added to the object as a method. methods are explained in more detail later on.
use XUL::Gui; # is the same as use XUL::Gui qw/:base :util :pragma :xul :html :const :image/; the following export tags are available: :base %ID ID alert display quit widget :tools function gui interval serve timeout toggle XUL :pragma buffered cached delay doevents flush noevents now :const BLUR FILL FIT FLEX MIDDLE SCROLL :widgets ComboBox filepicker prompt :image bitmap bitmap2src :util apply mapn trace zip :internal genid object realid tag :all (all exports) :default (same as with 'use XUL::Gui;') :xul (also exported as Titlecase) Action ArrowScrollBox Assign BBox Binding Bindings Box Broadcaster BroadcasterSet Browser Button Caption CheckBox ColorPicker Column Columns Command CommandSet Conditions Content DatePicker Deck Description Dialog DialogHeader DropMarker Editor Grid Grippy GroupBox HBox IFrame Image Key KeySet Label ListBox ListCell ListCol ListCols ListHead ListHeader ListItem Member Menu MenuBar MenuItem MenuList MenuPopup MenuSeparator Notification NotificationBox Observes Overlay Page Panel Param PopupSet PrefPane PrefWindow Preference Preferences ProgressMeter Query QuerySet Radio RadioGroup Resizer RichListBox RichListItem Row Rows Rule Scale Script ScrollBar ScrollBox ScrollCorner Separator Spacer SpinButtons Splitter Stack StatusBar StatusBarPanel StringBundle StringBundleSet Tab TabBox TabPanel TabPanels Tabs Template TextBox TextNode TimePicker TitleBar ToolBar ToolBarButton ToolBarGrippy ToolBarItem ToolBarPalette ToolBarSeparator ToolBarSet ToolBarSpacer ToolBarSpring ToolBox ToolTip Tree TreeCell TreeChildren TreeCol TreeCols TreeItem TreeRow TreeSeparator Triple VBox Where Window Wizard WizardPage :html (also exported as html_lowercase) A ABBR ACRONYM ADDRESS APPLET AREA AUDIO B BASE BASEFONT BDO BGSOUND BIG BLINK BLOCKQUOTE BODY BR BUTTON CANVAS CAPTION CENTER CITE CODE COL COLGROUP COMMENT DD DEL DFN DIR DIV DL DT EM EMBED FIELDSET FONT FORM FRAME FRAMESET H1 H2 H3 H4 H5 H6 HEAD HR HTML I IFRAME ILAYER IMG INPUT INS ISINDEX KBD LABEL LAYER LEGEND LI LINK LISTING MAP MARQUEE MENU META MULTICOL NOBR NOEMBED NOFRAMES NOLAYER NOSCRIPT OBJECT OL OPTGROUP OPTION P PARAM PLAINTEXT PRE Q RB RBC RP RT RTC RUBY S SAMP SCRIPT SELECT SMALL SOURCE SPACER SPAN STRIKE STRONG STYLE SUB SUP TABLE TBODY TD TEXTAREA TFOOT TH THEAD TITLE TR TT U UL VAR VIDEO WBR XML XMP
constants:
FLEX flex => 1 FILL flex => 1, align =>'stretch' FIT sizeToContent => 1 SCROLL style => 'overflow: auto' MIDDLE align => 'center', pack => 'center' BLUR onfocus => 'this.blur()' each is a function that returns its constant, prepended to its arguments, thus the following are both valid: Box FILL pack=>'end'; Box FILL, pack=>'end';
if you prefer an OO interface, there are a few ways to get one:
use XUL::Gui 'g->*'; # DYOI: draw your own interface
g (which could be any empty package name) now has all of XUL::Gui's functions as methods. since draw your own interface does what you mean (
dyoidwym ), each of the following graphic styles are equivalent:
g->*, g->, ->g, install_into->g.
normally, installing methods into an existing package will cause a fatal error, however you can add
! to force installation into an existing package
no functions are imported into your namespace by default, but you can request any you do want as usual:
use XUL::Gui qw( g->* :base :pragma );
to use the OO interface:
g->display( g->Label('hello world') ); # is the same as XUL::Gui::display( XUL::Gui::Label('hello world') ); use g->id('someid') or g->ID('someid') to access the %ID hash the XUL tags are also available in lc and lcfirst: g->label == XUI::Gui::Label g->colorpicker == XUL::Gui::ColorPicker g->colorPicker == XUL::Gui::ColorPicker the HTML tags are also available in lc, unless an XUL tag of the same name exists
if you prefer an object (which behaves exactly the same as the package 'g'):
use XUL::Gui (); # or anything you do want my $g = XUL::Gui->oo; # $g now has XUL::Gui's functions as methods
if you like all the OO lowercase names, but want functions, draw that:
use XUL::Gui qw( ->main:: ); # ->:: will also export to main:: # '::' implies '!' display label 'hello, world';
display LIST
display starts the http server, launches firefox, and waits for events.
it takes a list of gui objects, and several optional parameters:
debug (0) .. 6 adjust verbosity to stderr silent (0) 1 disables all stderr status messages trusted 0 (1) starts firefox with '-app' (requires firefox 3+) launch 0 (1) launches firefox, if 0 connect to skin 0 (1) use the default 'chrome://global/skin' skin chrome 0 (1) chrome mode disables all normal firefox gui elements, setting this to 0 will turn those elements back on. xml (0) 1 returns the object tree as xml, the gui is not launched perl includes deparsed perl event handlers delay milliseconds delays each gui update cycle (for debugging) port first port to start the server on, port++ after that otherwise a random 5 digit port is used mozilla 0 (1) setting this to 0 disables all mozilla specific features including all XUL tags, the filepicker, and any trusted mode features. (used to implement Web::Gui)
if the first object is a
Window , that window is created, otherwise a default one is added. the remaining objects are then added to the window.
display will not return until the the gui quits
see
SYNOPSIS , XUL::Gui::Manual, XUL::Gui::Tutorial, and the
examples folder in this distribution for more details
quit
shuts down the server (causes a call to
display to return at the end of the current event cycle)
quit will shut down the server, but it can only shut down the client in trusted mode.
serve PATH MIMETYPE DATA
add a virtual file to the server
serve '/myfile.jpg', 'text/jpeg', $jpegdata;
the paths
qw( / /client.js /event /ping /exit /perl ) are reserved
object TAGNAME LIST
creates a gui proxy object, allows run time addition of custom tags
object('Label', value=>'hello') is the same as Label( value=>'hello' )
the
object function is the constructor of all proxied gui objects, and all these objects inherit from
[object] which provides the following methods.
objects and widgets inherit from a base class
[object] that provides the following object inspection / extension methods. these methods operate on the current data that XUL::Gui is holding in perl, none of them will ever call out to the gui
->has('item!') returns attributes or methods (see widget for details) ->attr('rows') lvalue access to $$self{A} attributes ->child(2) lvalue access to $$self{C} children it only makes sense to use attr or child to set values on objects before they are written to the gui ->can('update') lvalue access to $$self{M} methods ->attributes returns %{ $$self{A} } ->children returns @{ $$self{C} } ->methods returns %{ $$self{M} } ->widget returns $$self{W} ->id returns $$self{ID} ->parent returns $$self{P} ->super returns $$self{ISA}[0] ->super(2) returns $$self{ISA}[2] ->extends(...) sets inheritance (see widget for details)
these methods are always available for widgets, and if they end up getting in the way of any javascript methods you want to call for gui objects:
$object->extends(...) # calls the perl introspection function $object->extends_(...) # calls 'object.extends(...)' in the gui $x = $object->_extends; # fetches the 'object.extends' property $object->setAtribute('extends', ...); # and setting an attribute
or at runtime:
local $XUL::Gui::EXTENDED_OBJECTS = 0; # which prevents object inheritance # in the current lexical scope $object->extends(...); # calls the real javascript 'extends' method assuming that it exists
tag NAME
returns a code ref that generates proxy objects, allows for user defined tag functions
*mylabel = tag 'label'; \&mylabel == \&Label
ID OBJECTID
returns the gui object with the id
OBJECTID . it is exactly the same as
$ID{OBJECTID} and has
(*) glob context so you don't need to quote the id.
Label( id => 'myid' ) ... $ID{myid}->value = 5; ID(myid)->value = 5; # same
widget {CODE} HASH
widgets are containers used to group tags together into common patterns. in addition to grouping, widgets can have methods, attached data, and can inherit from other widgets
*MyWidget = widget { Hbox( Label( $_->has('label->value') ), Button( label => 'OK', $_->has('oncommand') ), $_->children ) } method => sub{ ... }, method2 => sub{ ... }, some_data => [ ... ]; # unless the value is a CODE ref, each widget # instance gets a new deep copy of the data $ID{someobject}->appendChild( MyWidget( label=>'widget', oncommand=>\&event_handler ) );
inside the widget's code block, several variables are defined:
variable contains the passed in $_{A} = { attributes } $_{C} = [ children ] $_{M} = { methods } $_ = a reference to the current widget (also as $_{W}) @_ = the unchanged runtime argument list
widgets have the following predefined (and overridable) methods that are synonyms / syntactic sugar for the widget variables:
$_->has('label') ~~ exists $_{A}{label} ? (label=>$_{A}{label}) : () $_->has('label->value') ~~ exists $_{A}{label} ? (value=>$_{A}{label}) : () $_->has('!label !command->oncommand style') ->has(...) splits its arguments on whitespace and will search $_{A}, then $_{M} for the attribute. if an ! is attached (anywhere) to an attribute, it is required, and the widget will croak without it. in scalar context, if only one key => value pair is found, ->has() will return the value. otherwise, the number of found pairs is returned $_->attr( STRING ) $_{A}{STRING} # lvalue $_->attributes %{ $_{A} } $_->child( NUMBER ) $_{C}[NUMBER] # lvalue $_->children @{ $_{C} } $_->can( STRING ) $_{M}{STRING} # lvalue $_->methods %{ $_{M} }
most everything that you would want to access is available as a method of the widget (attributes, children, instance data, methods). since there may be namespace collisions, here is the namespace construction order:
%widget_methods = ( passed in attributes, predefined widget methods, widget methods and instance data, passed in methods );
widgets can inherit from other widgets using the ->extends() method:
*MySubWidget = widget {$_->extends( &MyWidget )} submethod => sub {...};
more detail in XUL::Gui::Manual
alert STRING
open an alert message box
prompt STRING
open an prompt message box
filepicker MODE FILTER_PAIRS
opens a filepicker dialog. modes are 'open', 'dir', or 'save'. returns the path or undef on failure. if mode is 'open' and
filepicker is called in list context, the picker can select multiple files. the filepicker is only available when the gui is running in 'trusted' mode.
my @files = filepicker open => Text => '*.txt; *.rtf', Images => '*.jpg; *.gif; *.png';
trace LIST
carps
LIST with object details, and then returns
LIST unchanged
function JAVASCRIPT
create a javascript event handler, useful for mouse events that need to be very fast, such as onmousemove or onmouseover
Button( label=>'click me', oncommand=> function q{ this.label = 'ouch'; alert('hello from javascript'); if (some_condition) { perl("print 'hello from perl'"); } }) $ID{myid} in perl is ID.myid in javascript
to access widget siblings by id, wrap the id with
W{...}
interval {CODE} TIME LIST
perl interface to javascript's
setInterval() . interval returns a code ref which when called will cancel the interval.
TIME is in milliseconds.
@_ will be set to
LIST when the code block is executed.
timeout {CODE} TIME LIST
perl interface to javascript's
setTimeout() . timeout returns a code ref which when called will cancel the timeout.
TIME is in milliseconds.
@_ will be set to
LIST when the code block is executed.
XUL STRING
converts an XML XUL string to
XUL::Gui objects. experimental.
this function is provided to facilitate drag and drop of XML based XUL from tutorials for testing. the perl functional syntax for tags should be used in all other cases
gui JAVASCRIPT
executes
JAVASCRIPT in the gui, returns the result
passing a reference to a scalar or coderef as a value in an object constructor will create a data binding between the perl variable and its corresponding value in the gui.use XUL::Gui; my $title = 'initial title'; display Window title => \$title, Button( label => 'update title', oncommand => sub { $title = 'title updated via data binding'; } );
a property on a previously declared object can also be bound by taking a reference to it:display Label( id => 'lbl', value => 'initial value'), Button( label => 'update', oncommand => sub { my $label = \ID(lbl)->value; $$label = 'new value'; } )
this is just an application of the normal bidirectional behavior of gui accessors:for (ID(lbl)->value) { print "$_\n"; # gets the current value from the gui $_ = 'new'; # sets the value in the gui print "$_\n"; # gets the value from the gui again }
the following functions all apply pragmas to their CODE blocks. in some cases, they also take a list. this list will be
@_ when the CODE block executes. this is useful for sending in values from the gui, if you don't want to use a
now {block}
this module will automatically buffer certain actions within event handlers. autobuffering will queue setting of values in the gui until there is a get, the event handler ends, or
doevents is called. this eliminates the need for many common applications of the
buffered pragma.
flush
flush the autobuffer
buffered {CODE} LIST
delays sending all messages to the gui. partially deprecated (see autobuffering)
buffered { $ID{$_}->value = '' for qw/a bunch of labels/ }; # all labels are cleared at once
cached {CODE}
turns on caching of gets from the gui
now {CODE}
execute immediately, from inside a buffered or cached block, without causing a buffer flush or cache reset. buffered and cached will not work inside a now block.
delay {CODE} LIST
delays executing its CODE until the next gui refresh
useful for triggering widget initialization code that needs to run after the gui objects are rendered. the first element of
LIST will be in
$_ when the code block is executed
noevents {CODE} LIST
disable event handling
doevents
force a gui update cycle before an event handler finishes
mapn {CODE} NUMBER LIST
map over n elements at a time in
@_ with
$_ == $_[0]
print mapn {$_ % 2 ? "@_" : " [@_] "} 3 => 1..20; > 1 2 3 [4 5 6] 7 8 9 [10 11 12] 13 14 15 [16 17 18] 19 20
zip LIST of ARRAYREF
%hash = zip [qw/a b c/], [1..3];
apply {CODE} LIST
apply a function to a copy of
LIST and return the copy
print join ", " => apply {s/$/ one/} "this", "and that"; > this one, and that one
toggle TARGET OPT1 OPT2
alternate a variable between two states
toggle $state; # opts default to 0, 1 toggle $state => 'red', 'blue';
bitmap WIDTH HEIGHT OCTETS
returns a binary .bmp bitmap image.
OCTETS is a list of
BGR values
bitmap 2, 2, qw(255 0 0 255 0 0 255 0 0 255 0 0); # 2px blue square
for efficiency, rather than a list of
OCTETS , you can send in a single array reference. each element of the array reference can either be an array reference of octets, or a packed string
pack "C*" => OCTETS
bitmap2src WIDTH HEIGHT OCTETS
returns a packaged bitmap image that can be directly assigned to an image tag's src attribute. arguments are the same as
bitmap()
$ID{myimage}->src = bitmap2src 320, 180, @image_data;
# access attributes and properties $object->value = 5; # sets the value in the gui print $object->value; # gets the value from the gui # the attribute is set if it exists, otherwise the property is set $object->_value = 7; # sets the property directly # method calls $object->focus; # void context or $object->appendChild( H2('title') ); # any arguments are always methods print $object->someAccessorMethod_; # append _ to force interpretation # as a JS method call
in addition to mirroring all of an object's existing javascript methods / attributes / and properties to perl (with identical spelling / capitalization), several default methods have been added to all objects
->removeChildren( LIST )
removes the children in
LIST , or all children if none are given
->removeItems( LIST )
removes the items in
LIST , or all items if none are given
->appendChildren( LIST )
appends the children in
LIST
->prependChild( CHILD, [INDEX] )
inserts
CHILD at
INDEX (defaults to 0) in the parent's child list
->replaceChildren( LIST )
removes all children, then appends
LIST
->appendItems( LIST )
append a list of items
->replaceItems( LIST )
removes all items, then appends
LIST
create dropdown list boxes
items => [ ['displayed label' => 'value'], 'label is same as value' ... ] default => 'item selected if this matches its value' also takes: label, oncommand, editable, flex styles: liststyle, popupstyle, itemstyle getter: value
too many changes to count. if anything is broken, please send in a bug report.
some options for display have been reworked from 0.36 to remove double negatives
widgets have changed quite a bit from version 0.36. they are the same under the covers, but the external interface is cleaner. for the most part, the following substitutions are all you need:
$W --> $_ or $_{W} $A{...} --> $_{A}{...} or $_->attr(...) $C[...] --> $_{C}[...] or $_->child(...) $M{...} --> $_{M}{...} or $_->can(...) attribute 'label onclick' --> $_->has('label onclick') widget {extends ...} --> widget {$_->extends(...)}
export tags were changed a little bit from 0.36
thread safety should be better than in 0.36
currently it is not possible to open more than one window, hopefully this will be fixed soon
the code that attempts to find firefox may not work in all cases, patches welcome
for the TextBox object, the behaviors of the "value" and "_value" methods are reversed. it works better that way and is more consistent with the behavior of other tags.
Eric Strom,
<asg at cpan.org>
please report any bugs or feature requests to
bug-xul-gui at rt.cpan.org , or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
the mozilla development team
this program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License.
see for more information.
|
http://search.cpan.org/~asg/XUL-Gui-0.63/lib/XUL/Gui.pm
|
CC-MAIN-2018-09
|
refinedweb
| 3,752
| 52.43
|
The QtXtWidget class allows mixing of Xt/Motif and Qt widgets. More...
#include <QtXtWidget>
This class is obsolete. It is provided to keep old source code working. We strongly advise against using it in new code.
Inherits QWidget.
The QtXtWidget class allows mixing of Xt/Motif and Qt widgets.
QtXtWidget acts as a bridge between Xt and Qt. When utilizing old Xt widgets, it can be a QWidget based on a Xt widget class. When including Qt widgets in an existing Xt/Motif application, it can be a special Xt widget class that is a QWidget. See the constructors for the different behaviors.
This class is unsupported and has many known problems and limitations. It is provided only to keep existing source working; it should not be used in new code. These problems will not be fixed in future releases.
Below is an incomplete list of known issues:
Constructs a QtXtWidget with the given name and parent of the special Xt widget class known as "QWidget" to the resource manager.
Use this constructor to utilize Qt widgets in an Xt/Motif application. The QtXtWidget is a QWidget, so you can create subwidgets, layouts, and use other Qt features.
If the managed parameter is true and parent is not null, XtManageChild is used to manage the child; otherwise it is unmanaged.
Constructs a QtXtWidget of the given widget_class called name.
Use this constructor to utilize Xt or Motif widgets in a Qt application. The QtXtWidget looks and behaves like an Xt widget, but can be used like any QWidget.
Note that Xt requires that the top level Xt widget is a shell. This means that if parent is a QtXtWidget, any kind of widget_class can be used. However, if there is no parent, or the parent is just a normal QWidget, widget_class should be something like topLevelShellWidgetClass.
The args and num_args arguments are passed on to XtCreateWidget.
If managed is true and parent is not null, XtManageChild is used to manage the child; otherwise it is unmanaged.
Destroys the QtXtWidget.
Activates the widget. Implements a degree of focus handling for Xt widgets.
Returns true if the widget is the active window; otherwise returns false.
Reimplemented to produce the Xt effect of getting focus when the mouse enters the widget. The event is passed in e.
Reimplemented from QWidget.
Returns the underlying Xt widget.
|
http://doc.qt.nokia.com/solutions/4/qtmotifextension/qtxtwidget.html
|
crawl-003
|
refinedweb
| 392
| 67.65
|
run java program
how to run java program i have jar file.
in jar i have so many classes.in one of file i want modify .after modifying file i want replce in jar.
please help me to how to generate class and how to replace to execute or run the program. For ex, If you have the following java program
java not run on my pc
java not run on my pc i have installed java in pc but when i run and compile any program the massage is display that 'javac' is not recognized as an internal or external command
java run time error - Java Beginners
java run time error i am getting error "main" java.lang.NullPointerException" in my program, iam sending code actually motive of program.......
please correct my program, tell me where the problem occurs and what should
run a java application - Java Beginners
run a java application Hai Deepak,
I ve set my environment...;
CLASSPATH= C:\Program Files\Apache Tomcat 4.0\common\lib\servlet.jar;
JAVA_HOME=C:\Program Files\Java;
path=C:\Program Files\Java\jdk1.6.0_06\bin;
My
How Jdbc program can be run??
created successfully and
when i compiled this program then it compiled successfully ,but when i run it ,it gives following error->
C:\Program Files\Java...How Jdbc program can be run?? import java.sql.*;
import java.util.
How to run program in new opened cmd
How to run program in new opened cmd Hello i have made a program in java to open command prompt, then how to run my left of the program in the new opened command prompt
java run time error - Java Beginners
java run time error when i compile my program it didnt show any error in my program.but i cant run my program, if i run my program means..." java.lang.ArrayIndexOutOfBoundsException: 0
at Armstrong.main(Armstrong.java:22)
Java Result: 1
java program
java program write an error.jsp , which will be called in case there will be any run time error in the system , page crash
how to run jdbc program in linux-ubuntu?
how to run jdbc program in linux-ubuntu? how to run jdbc program in linux-ubuntu?
hi
firstly download the jdk on your system using... to compile java program
javac prog.java
java prog
Java run time polymorphism
Java run time polymorphism What is run-time polymorphism or dynamic method dispatch;
java
and flexible in nature. The most significant feature of Java is to run a program easily from one computer system to another.
* Java works on distributed... for possible errors, as Java compilers are able to detect many error problem
How to Write a Calculator Program in Java?
How to Write a Calculator Program in Java?
In this Java Tutorial you will learn how to write a Calculator program in Java in easy steps. Calculator program..., subtraction, multiplication or division.
In this example of writing program in Java
Java Program
and the number of cows and bulls for each word, your program should be able to work out...Java Program Problem Statement
You are required to play a word-game with a computer. The game proceeds as follows
The computer 'thinks' of a four
Compiling a Java program
Java NotesCompiling a Java program
Turning a Java source program into an object program takes
a couple of steps. Assuming that you are using the popular... this, but that isn't normally done.
Run it with:
java Greeting
Java Program - Java Beginners
Java Program Write a program that demonstrates the use of multithreading with the use of three
counters with three threads defined for each. Three... with the step of 100.
Assign proper priority to the threads and then java program - Java Beginners
about the second line...
i have made my program but not able to click... a java program well sir, i just wanna ask you something regarding that if the question say .....write a program to check whether a number is even
java packge program - Java Beginners
java packge program my question is
how to compile and run java package program in the linux OS
i need a quick response about array and string...
i need a quick response about array and string... how can i make a dictionary type using array code in java where i will type the word then the meaning will appear..please help me..urgent applet run time error - Applet
java applet run time error Hi,
Im new to java applet.please help me. i have create a MPEG movie player in applet. when i run that program...
{
Player player = null;
/*String location="";
MediaLocator
Making Tokens of a Java Source Code
simplest way. Here after reading this lesson, you will be able to break java... class object into it which will read the name of
the java file at run time. Now...:\Java Tutorial>
Download
this program
how to run applet - Applet
how to run applet
Hi everybody
i am using connecting jdbc in applet program. this is executed successfully with appletviewer command
>...://
Java Program - Java Beginners
Java Program Hi I'm having trouble with this problem, I keep getting errors.
Write a program GuessGame.java that plays the game ?guess the number? as follows: Your program chooses the number to be guessed by selecting
The quick overview of JSF Technology
;
The quick Overview of JSF Technology
This section gives you an overview of Java
Server Faces technology, which simplifies the development of Java
Based web applications. In this JSF
java program - Java Beginners
java program ahm... i will use a table, text field and a button in java... a user will input a data to be search in the table.. after searching...[]) {
SwingUtilities.invokeLater(new Runnable() {
public void run
hello .. still doesn't run - Java Beginners
hello .. still doesn't run Iam still having a prblem in running this problem
errors are:
can not resolve symbol
import.util.Scanner
class Scanner
another error is
can not resolve symbol
method setname(java.lang.String
java
book) but i replaced with localhost in lookup method since iam running in stand alone application.Can any one help me run this program please...java can anyone tell me how to compile and run rmi in netbeans(6.9.1
java program for
java program for java program for printing documents,images and cards
Java code to run unix command - Java Beginners
Java code to run unix command Hi,
I need a java code to connect to a remote unix server and run a unix command.
Please help.
Regards,
Pratyush
Java Program
Java Program A Java Program that print the data on the printer but buttons not to be printed
Quick introduction to web services
Quick introduction to web services
... program.
Web Services allows different applications to talk to each other...
talk to java web services and vice versa. So, Web services are used
Java Thread : run() method
Java Thread : run() method
In this section we are going to describe run() method with example in java thread.
Thread run() :
All the execution code is written with in the run() method. This method is
part of the Runnable interface
java iam working for grid environment with consistency. if there is 2 or 3 files and a person accessing one file and at that time no one must... source code for this
How to run java swing application from jar files .
How to run java swing application from jar files . Hello Sir,
I developed a java swing application .Now i want to execute it as .exe... the main class program will exit" from "java virtual machine".
Plz help me
java program
java program write a program to print
1234
567
89
10
run ttime error - Java Beginners
run ttime error how to compile a client server application using RMI ? Hi friend,
I am sending you a link. This link will help you. Please visit for more information. java program to compute area of a circle.square,rectangle.triangle,volume of a sphere ,cylinder and perimeter of cube using method over riding
Making Tokens of a Java Source Code
C:\Java Tutorial>java TokenizingJavaSourceCode
Please enter a java file name: TokenizingJavaSourceCode.java
Number of tokens = 158
C:\Java Tutorial>_
Download
this
My first Java Program
My first Java Program I have been caught with a practical exam to do the following:
Write a program that takes input from a user through... You may not "cut off" any words in the middle
I can be able to code step 1
java GUI program - Java Beginners
java GUI program java program that creates the following GUI, when...() {
public void run() {
createAndShowGUI...://
Thanks.
Amardeep program to find the difference between sum of the squares and the square of the sums of n numbers
java program
java program write a program to create text area and display the various mouse handling events
java program
java program voter or non voter display
java program
java program write a java program to read a file which hold email address validate email address tohave formate @.* and replace all .com email address
|
http://roseindia.net/tutorialhelp/comment/42691
|
CC-MAIN-2014-15
|
refinedweb
| 1,519
| 64
|
statement
- Q: Does importing all classes in a package make my object file (.class or .jar) larger?
A: No, import only tells the compiler where to look for symbols.
- Q: Is it less efficient to import all classes than only the classes I need?
A: No. The search for names is very efficient so there is no effective difference.
-).
- Q:
Java 5 added an import static option that allows static variables (typically constants) to be referenced without qualifying them with a class name. For example, after
import static java.awt.Color;
It would then be possible to write
Color background = RED;
instead of
Color background = Color.RED;
Adding this "feature" wasn't the best idea because it leads to name pollution and confusion about which class constants come from. Even Sun (see References below) basically advises not to use it!
References
- Static Import,.
|
http://www.fredosaurus.com/notes-java/language/10basics/import.html
|
CC-MAIN-2021-49
|
refinedweb
| 143
| 65.83
|
On Fri, Jan 04, 2013 at 08:01:02PM -0200, Eduardo Habkost wrote: > This is a cleanup that tries to solve two small issues: > > - We don't need a separate kvm_pv_eoi_features variable just to keep a > constant calculated at compile-time, and this style would require > adding a separate variable (that's declared twice because of the > CONFIG_KVM ifdef) for each feature that's going to be enabled/disable > by machine-type compat code. > - The pc-1.3 code is setting the kvm_pv_eoi flag on cpuid_kvm_features > even when KVM is disabled at runtime. This small incosistency in > the cpuid_kvm_features field isn't a problem today because > cpuid_kvm_features is ignored by the TCG code, but it may cause > unexpected problems later when refactoring the CPUID handling code. > > This patch eliminates the kvm_pv_eoi_features variable and simply uses > CONFIG_KVM and kvm_enabled() inside the enable_kvm_pv_eoi() compat > function, so it enables kvm_pv_eoi only if KVM is enabled. I believe > this makes the behavior of enable_kvm_pv_eoi() clearer and easier to > understand. > >> > > Changes v2: > - Coding style fix > --- > target-i386/cpu.c | 8 +++++--- > 1 file changed, 5 insertions(+), 3 deletions(-) > > diff --git a/target-i386/cpu.c b/target-i386/cpu.c > index 82685dc..e6435da 100644 > --- a/target-i386/cpu.c > +++ b/target-i386/cpu.c > @@ -145,15 +145,17 @@ static uint32_t kvm_default_features = (1 << KVM_FEATURE_CLOCKSOURCE) | > (1 << KVM_FEATURE_ASYNC_PF) | > (1 << KVM_FEATURE_STEAL_TIME) | > (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT); > -static const uint32_t kvm_pv_eoi_features = (0x1 << KVM_FEATURE_PV_EOI); > #else > static uint32_t kvm_default_features = 0; > -static const uint32_t kvm_pv_eoi_features = 0; > #endif > > void enable_kvm_pv_eoi(void) > { > - kvm_default_features |= kvm_pv_eoi_features; > +#ifdef CONFIG_KVM You do not need ifdef here. > + if (kvm_enabled()) { > + kvm_default_features |= (1UL << KVM_FEATURE_PV_EOI); > + } > +#endif > } > > void host_cpuid(uint32_t function, uint32_t count, > -- > 1.7.11.7 -- Gleb.
|
https://www.redhat.com/archives/libvir-list/2013-January/msg00261.html
|
CC-MAIN-2014-15
|
refinedweb
| 270
| 50.06
|
AWS Adventures: Infrastructure as Code and Microservices (Part 3)
AWS Adventures: Infrastructure as Code and Microservices (Part 3)
It's testing time! Now that the basics are set up in AWS, it's time to make sure the pieces work. We'll run through unit tests, integration tests, and plenty more.
Join the DZone community and get the full member experience.Join For Free
Even. More. Tests. Now that you've got yourself underway, we're going to test like there's no tomorrow. Manual tests, unit tests, integration tests, we're going to test this like crazy and make sure our bases are covered.
But first, a bit of housekeeping...
Step 6: Delete Your Lambda Function
Your function is created. If you want to update code in it, you could simple make a new zip file and call updateFunctionCode. However, to make things truly immutable and atomic, meaning each individual aspect of our Lambda we can test and update individually, we’ll just delete the whole thing.
Remember, we’re not treating our server like a nice camera. Instead, we purchase a disposable one at the local drugstore/apothecary, and if it breaks, we get a new one instead of waisting time debugging a $9.25 single use electronic. This is important for deploying specific versions of code. If you deploy a git tag called “2.1.6”, but you later update code, you’ve negated the whole point of using a specific git tag since it’s not really 2.1.6, but your own version. If something goes wrong, you know for sure (mostly) that’s that version of the code and not your modification.
In build.test.js, import the non-existent deleteFunction:
const { listFunctions, createFunction, deleteFunction } = require('./build');
Add a mock method to our mockLambda:
deleteFunction: (params, cb) => cb(undefined, {})
And a mock method to our mockBadLambda:
deleteFunction: (params, cb) => cb(new Error('boom'))
And finally our two tests:
describe('#deleteFunction', ()=> { it('should delete our lambda if there', (done)=> { deleteFunction(mockLamba, (err)=> { _.isUndefined(err).should.be.true; done(); }); }); it('should not delete our lambda if it', (done)=> { deleteFunction(mockBadLambda, (err)=> { err.should.exist; done(); }); }); });
If our Lambda works, we get no error. The call gives you an empty Object back which is worthless, so we just bank on “no error is working code”. Let’s write the implementation. In build.js, put in the following code above your module.exports:
const deleteFunction = (lambda, callback)=> { var params = { FunctionName: FUNCTION_NAME }; lambda.deleteFunction(params, (err, data)=> { if(err) { // log("lambda::deleteFunction, error:", err); return callback(err); } // log("lambda::deleteFunction, data:", data); callback(undefined, data); }); };
And then add to your module.exports:
module.exports = { listFunctions, createFunction, deleteFunction };
Cool, now re-run npm test:
Let’s give her a spin. Hardcode
deleteFunction(lambda, ()=>{});at the very bottom, run
node build.js, then log into the AWS Console for Lambda, and you should no longer see ‘datMicro’ (or whatever you called it) in the left list.
Step 7: Making Testable Code by Testing It
There are a few more steps to go in making our Lambda fully functional with the API Gateway. However, we can at this point test her out in the AWS Console. That means we can test her out in JavaScript, too. Let’s take a look at the original Lambda function code:
exports.handler = (event, context, callback) => { const response = { statusCode: '200', body: JSON.stringify({result: true, data: 'Hello from Lambda'}), headers: { 'Content-Type': 'application/json', } } callback(null, response); };
A few problems with this handler. First, it’s not testable because it doesn’t return anything. Second, it doesn’t really take any inputs of note. Let’s do a few things. We’ll add some unit tests, a function that always returns true, and a random number function.
Always True
Create an index.test.js file, and add this code as a starting template:
const expect = require("chai").expect; const should = require('chai').should(); const _ = require('lodash'); const { alwaysTrue } = require('./index'); describe('#index', ()=> { describe('#alwaysTrue', ()=> { it('is always true', ()=> { alwaysTrue().should.be.true; }); }); });
Modify your package.json to point to this test for now:
"scripts": { "test": "mocha index.test.js", ... },
Now run npm test. Hopefully, you get something along the lines of:
To make it pass, create the predicate:
const alwaysTrue = ()=> true;
Then export at the very bottom:
module.exports = { alwaysTrue };
Re-run your npm test, and it should be green:
module.exports = { alwaysTrue };
Testing random numbers is hard. For now, we’ll just verify the number is within the range we specified. In index.test.js import the new, non-existent, function:
const { alwaysTrue, getRandomNumberFromRange } = require('./index');
And a basic test, as we’re not handling bounds or typing checks for now:
describe('#getRandomNumberFromRange', ()=> { it('should give a number within an expected range', ()=> { const START = 1; const END = 10; const result = getRandomNumberFromRange(START, END); _.inRange(result, START, END).should.be.true; }); });
Re-run your tests and it should fail (or perhaps not even compile).
Now implement the function in index.js:
const getRandomNumberFromRange = (start, end)=> { const range = end - start; let result = Math.random() * range; result += start; return Math.round(result); };
And export her at the bottom:
module.exports = { alwaysTrue, getRandomNumberFromRange };
Re-run your tests and she should be green:
Lastly, let’s rework our main Lambda function to always return a value, respond to a test, and become its own function as we’ll manually add it to the module.exports in a bit. In index.test.js, import the handler:
const { alwaysTrue, getRandomNumberFromRange, handler } = require('./index');
And write the first test that expects it to return a response. Since we aren’t a typed language, we’ll create a loose one via a couple predicates to determine if it’s “response like”.
const responseLike = (o)=> _.isObjectLike(o) && _.has(o, 'statusCode') && _.has(o, 'body');
And the test:
describe('#handler', ()=> { it('returns a response with basic inputs', ()=> { const result = handler({}, {}, ()=>{}); responseLike(result).should.be.true; }); });
For now, the response is always an HTTP 200. We can add different ones later. Re-run your tests and she should fail:
Now let’s modify the function signature of our handler from:
exports.handler = (event, context, callback) =>
to:
const handler = (event, context, callback) =>
Move her above the module.exports and then add her to the exports. Final Lambda should look like this:
const alwaysTrue = ()=> true; const getRandomNumberFromRange = (start, end)=> { const range = end - start; let result = Math.random() * range; result += start; return Math.round(result); }; const handler = (event, context, callback) => { const response = { statusCode: '200', body: JSON.stringify({result: true, data: 'Hello from Lambda'}), headers: { 'Content-Type': 'application/json', } } callback(null, response); }; module.exports = { alwaysTrue, getRandomNumberFromRange, handler };
Our Lambda will return random numbers in the response based on the range you give it. We’ll have to create some predicates to ensure we actually get numbers, they are within range, and then return error messages appropriately. Our response in our handler will start to be different based on if someone passes in good numbers, bad numbers, bad data, or if it’s just a test. So, we’ll need to make him dynamic. Finally, we’ll add a flag in the event to make integration testing easier.
First, the litany of predicates for input checking. You’ll need 2 helper functions to make this easier. Create a new JavaScript file called predicates.js, and put this code into it:
const _ = require('lodash'); const validator = (errorCode, method)=> { const valid = function(args) { return method.apply(method, arguments); }; valid.errorCode = errorCode; return valid; } const checker = ()=> { const validators = _.toArray(arguments); return (something)=> { return _.reduce(validators, (errors, checkerFunction)=> { if(checkerFunction(something)) { return errors; } else { return _.chain(errors).push(checkerFunction.errorCode).value(); } }, []) }; }; module.exports = { validator, checker };
Now, let’s test the new, (soon to be) parameter checked handler in a few situations. At the top of index.test.js, import the handler function:
const { alwaysTrue, getRandomNumberFromRange, handler } = require('./index');
Let’s add a new, more brutal negative test where we pass nothing:
it('passing nothing is ok', ()=> { const result = handler(); responseLike(result).should.be.true; });
Let’s look at the responses and ensure we’re failing because of bad parameters, specifically, a malformed event. One test for a good event, one for a missing end, and one for our echo statement. Since the response is encoded JSON, we create a predicate to parse it out and check the result:
const responseSucceeded = (o)=> { try { const body = JSON.parse(o.body); return body.result === true; } catch(err) { return false; } }; // ... it('succeeds if event has a start and end', ()=> { const response = handler({start: 1, end: 10}, {}, ()=>{}); responseSucceeded(response).should.be.true; }); it('fails if event only has start', ()=> { const response = handler({start: 1}, {}, ()=>{}); responseSucceeded(response).should.be.false; }); it('succeeds if event only has echo to true', ()=> { const response = handler({echo: true}, {}, ()=>{}); responseSucceeded(response).should.be.true; });
None of those will pass. Let’s make ’em pass. Open index.js, and put in the predicates first. Import her up at the top:
// Note: the below only works in newer Node, // not the 4.x version AWS uses // const { validator, checker } = require('./predicates'); const predicates = require('./predicates'); const validator = predicates.validator; const checker = predicates.checker;
Then below put your predicate helpers:
// predicate helpers const eventHasStartAndEnd = (o) => _.has(o, 'start') && _.has(o, 'end'); const eventHasTestEcho = (o) => _.get(o, 'echo', false); const isLegitNumber = (o) => _.isNumber(o) && _.isNaN(o) === false
These check the event for both a start and end property, or an echo. Lodash _.isNumber counts NaN as a number, even though NaN stands for “Not a Number” and is a Number per the ECMAStandard because “design by committee”. I wrangle the insanity by writing my own predicate that… you know… makes sense: isLegitNumber.
We’ll use them to build our argument predicates:
// argument predicates const legitEvent = (o)=> _.some([ eventHasStartAndEnd, eventHasTestEcho ], (predicate) => predicate(o) ); const legitStart = (o) => isLegitNumber(_.get(o, 'start')); const legitEnd = (o) => isLegitNumber(_.get(o, 'end'));
Now we have a lot of functions to verify if our event is acceptable. However, if it’s not acceptable, we don’t know why. Worse, users of your Lambda, both you in 2 weeks when you forgot your code, and other API consumers, won’t have any clue what they did either without cracking open the CloudWatch logs + your code and attempting to debug it.
We’ll take this a step further by using those validator and checker functions we imported above. Second, the validators:
// validators const validObject = validator('Not an Object.', _.isObjectLike); const validEvent = validator('Invalid event, missing key properties.', legitEvent); const validStart = validator('start is not a valid number.', legitStart); const validEnd = validator('end is not a valid number.', legitEnd);
These functions are normal, they just take advantage of JavaScript and just about everything being a dynamic Object. That first parameter, the string error message, you can store on the function, so if it returns false, you know WHY it returned false. The checkers will accumulate those errors using a reduce function. Third, the checkers:
// checkers const checkEvent = checker(validObject, validEvent); const checkStartAndEnd = checker(validStart, validEnd);
Two more predicates and we’re done. All Lambdas are required to have at least 1 response to not blow up. However, you and I know code either works or it doesn’t. There is middle ground, sure, but for simple stuff, it’s black and white. We’ll break those out into two predicates for creating our HTTP responses of errors:
const getErrorResponse = (errors)=> { return { statusCode: '500', body: JSON.stringify({result: false, error: errors.join('\n')}), headers: { 'Content-Type': 'application/json', } }; };
And success:
const getResponse = (data)=> { return { statusCode: '200', body: JSON.stringify({result: true, data}), headers: { 'Content-Type': 'application/json', } } };
Armed with our predicates, we can have a flexible handler, and if something blows up, we will know why. Let’s break her down into 5 steps:
const handler = (event, context, callback) => { if(_.isNil(callback) === true) { return getErrorResponse(['No callback was passed to the handler.']); } ...
We’re ok with no event and context, but no callback!? That’s crazy talk. Here’s your t3h boom.
const errors = checkEvent(event); if(errors.length > 0) { callback(new Error(errors.join('\n'))); return getErrorResponse(errors); }
If our event isn’t legit (either and echo, or having start and end numbers), we send the array of errors we get back to whoever triggered us in an error callback. Instead of “I didn’t work”, they’ll have a fighting chance of knowing why since we sent them the validation messaging.
Quick Security Note
I should point out AWS walks the line of being secure and not giving you verbose errors while sometimes giving you what they can without compromising security to help you debug as a developer. You’ll note that my checkers tend to be verbose in the hope they’ll help whoever made a mistake. However, as things scale, you must be careful not to expose public information, or reveal too much about what you DON’T validate. I’m not a security guy, I don’t have the answers beyond lots of peer review of code, automated quality checks, and automated security scanning. You’ll note the obvious of not throwing stack traces back to the client. You can see those in CloudWatch if you wish.
if(event.echo === true) { const echoResponse = getResponse('pong'); callback(undefined, echoResponse); return echoResponse; }
That’s for our future integration tests. It’s easier if our remote code is aware she’ll be pinged to see if she’s alive and well. We test for it to ensure it doesn’t negatively affect others. You can see the work that went into validating it as well as ensuring it played nice with others, yet still supported the ability to be tested without actually doing real work that could lead to leaky state.
const startEndErrors = checkStartAndEnd(event); if(startEndErrors.length > 0) { callback(new Error(startEndErrors.join('\n'))); return getErrorResponse(startEndErrors); }
Finally, we check to ensure if we’re going to do the random number generation, we have what we need from the event to do so, else, blow up and explain why. The real work is the end of the function:
const start = _.get(event, 'start'); const end = _.get(event, 'end'); const randomNumber = getRandomNumberFromRange(start, end); const response = getResponse(randomNumber); callback(undefined, randomNumber); return response; };
Now re-running your tests should result in them all passing:
Manual Test
One last manual test you can do as well is simply run her in the node REPL. In the Terminal, type “node” and hit enter.
Then import your index module by typing
lambda =require('./index') and hitting enter:
You’ll see our three functions we exposed. AWS only cares about your handler, so let’s manually test ours with some bogus data. Type
lambda.handler() and hit enter, and you should see a 500 response:
Now let’s mirror our unit test by using, and give it some good inputs via
handler({echo: true}, {}, ()=>{}); to get a basic 200 response:
You can Control + C twice to get out of Node.
Skills. Unit tests work, and a manual test works. Now you can be assured if you upload to AWS and she breaks, it’s them not you. Yes, your problem, but thankfully your self-esteem shall remain intact. Remember, part of programming is deflecting blame to others, backing it up with fancy terms like “I have lots of TDD code coverage”, then fixing “their” problem and looking like a hero.
Deploy Testable Code to AWS To Test There
Speaking of AWS, let’s redeploy and test our more testable code up on AWS. This’ll be a common task you do again and again by testing code locally, then re-deploying to test it on AWS. We’ll suffer through it for a bit so we appreciate the automating of it later.
For now, let’s adjust your makezip script in package.json to add our new files. We have to add predicates.js and our libraries which are in node_modules:
"makezip": "zip -r -X deploy.zip index.js predicates.js node_modules",
We’ll hardcode our build script, for now, to destroy our stack first, then recreate it with whatever deploy.zip it finds locally. Open up build.js, and at the bottom, let’s chain together our deleteFunction & createFunction:
deleteFunction(lambda, (err, data)=>{ log("deleteFunction"); log("err:", err); log("data:", data); createFunction(lambda, fs, (err, data)=> { log("createFunction"); log("err:", err); log("data:", data); }); });
You may get an error the first time since no function may be up there to delete and that’s ok. We’re creating and fixing one thing at a time. For now, it’s good enough if she creates your zip file and uploads it to your newly created Lambda function. We’re not differentiating between dependencies and development dependencies in node_modules, so your deploy.zip will be quite large, and may take more time to upload now that she’s not just under 1kb of text.
Run npm run deletezip, then npm run makezip, then node build… or just:
npm run deletezip && npm run makezip && node build
Log into your AWS Console and under Services choose Lambda. You should see your function in the list (they’re often sorted by newest up top). Notice she’s 4+ megs, t3h lulz. #myFLAFilesWereBigRaR #backInTheDay
Click it, and let’s test it. You’ll see a big blue button at the top called “Test”. Click it. It should blow up with our custom blow up message:
Hot, let’s see if she correctly responds to our manual integration test. Click “Actions” and “Configure Test Event”. Here, you can basically make up your own event JSON to test your Lambda and it’ll run on AWS infrastructure. Ours is pretty simple, echo true. When done click Save and Test.
Now be careful; sometimes this window has a glitch where it’ll save “echo”: “true” instead of “echo”: true. The “true” String is not the same as the true Boolean that we want. All goes well, you’ll see:
DAT PONG! Last manual test, let’s generate a random number. Again click Actions and Configure Test Event, and replace the fixture with:
{ "start": 1, "end": 10 }
Save and Test…
I got a 5, what’d you get?
Until Next Time...
That's all for now! We dived in and tested our functions, our network communication, and plenty more. Next time, we'll build a command line tool for some easy of use. After that, get ready for blue green deployments.
Published at DZone with permission of Jesse Warden , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/aws-adventures-infrastructure-as-code-and-microservices-part-3
|
CC-MAIN-2020-16
|
refinedweb
| 3,135
| 58.18
|
Introduction
Seaborn is one of the most widely used data visualization libraries in Python, as an extension to Matplotlib. It offers a simple, intuitive, yet highly customizable API for data visualization.
In this tutorial, we'll take a look at how to plot a Bar Plot in Seaborn. Seaborn
Plotting a Bar Plot in Seaborn is as easy as calling the
barplot() function on the
sns instance, and passing in the categorical and continuous variables that we'd like to visualize:
import matplotlib.pyplot as plt import seaborn as sns sns.set_style('darkgrid') x = ['A', 'B', 'C'] y = [1, 5, 3] sns.barplot
sns.barplot().
This results in a clean and simple bar graph:
Though, more often than not, you'll be working with datasets that contain much more data than this. Sometimes, operations are applied to this data, such as ranging or counting certain occurences.
Whenever you're dealing with means of data, you'll have some error padding that can arise from it. Thankfully, Seaborn has us covered, and applies error bars for us automatically, as it by default calculates the mean of the data we provide.
Let's import the classic Titanic Dataset and visualize a Bar Plot with data from there:
import matplotlib.pyplot as plt import seaborn as sns # Set Seaborn style sns.set_style('darkgrid') # Import Data titanic_dataset = sns.load_dataset("titanic") # Construct plot sns.barplot(x = "sex", y = "survived", data = titanic_dataset) plt.show()
This time around, we've assigned
x and
y to the
sex and
survived columns of the dataset, instead of the hard-coded lists.
If we print the head of the dataset:
print(titanic_dataset.head())
We're greeted with:
survived pclass sex age sibsp parch fare ... 0 0 3 male 22.0 1 0 7.2500 ... 1 1 1 female 38.0 1 0 71.2833 ... 2 1 3 female 26.0 0 0 7.9250 ... 3 1 1 female 35.0 1 0 53.1000 ... 4 0 3 male 35.0 0 0 8.0500 ... [5 rows x 15 columns]
Make sure you match the names of these features when you assign
x and
y variables.
Finally, we use the
data argument and pass in the dataset we're working with and from which the features are extracted from. This results in:
Plot a Horizontal Bar Plot in Seaborn
To plot a Bar Plot horizontally, instead of vertically, we can simply switch the places of the
x and
y variables.
This will make the categorical variable be plotted on the Y-axis, resulting in a horizontal plot:
import matplotlib.pyplot as plt import seaborn as sns x = ['A', 'B', 'C'] y = [1, 5, 3] sns.barplot(y, x) plt.show()
This results in:
Going back to the Titanic example, this is done in much the same way:
import matplotlib.pyplot as plt import seaborn as sns titanic_dataset = sns.load_dataset("titanic") sns.barplot(x = "survived", y = "class", data = titanic_dataset) plt.show()
Which results in:
Change Bar Plot Color in Seaborn
Changing the color of the bars is fairly easy. The
color argument accepts a Matplotlib color and applies it to all elements.
Let's change them to
blue:
import matplotlib.pyplot as plt import seaborn as sns x = ['A', 'B', 'C'] y = [1, 5, 3] sns.barplot(x, y, color='blue') plt.show()
This results in:
Or, better yet, you can set the
palette argument, which accepts a wide variety of palettes. A pretty common one is
hls:
import matplotlib.pyplot as plt import seaborn as sns titanic_dataset = sns.load_dataset("titanic") sns.barplot(x = "embark_town", y = "survived", palette = 'hls', data = titanic_dataset) plt.show()
This results in:
Plot Grouped Bar Plot in Seaborn
Grouping Bars in plots is a common operation. Say you wanted to compare some common data, like, the survival rate of passengers, but would like to group them with some criteria.
We might want to visualize the relationship of passengers who survived, segregated into classes (first, second and third), but also factor in which town they embarked from.
This is a fair bit of information in a plot, and it can easily all be put into a simple Bar Plot.
To group bars together, we use the
hue argument. Technically, as the name implies, the
hue argument tells Seaborn how to color the bars, but in the coloring process, it groups together relevant data.
Let's take a look at the example we've just discussed:
import matplotlib.pyplot as plt import seaborn as sns titanic_dataset = sns.load_dataset("titanic") sns.barplot(x = "class", y = "survived", hue = "embark_town", data = titanic_dataset) plt.show()
This results in:
Now, the error bars on the Queenstown data are pretty large. This indicates that the data on passengers who survived, and embarked from Queenstown varies a lot for the first and second class.
Ordering Grouped Bars in a Bar Plot with Seaborn
You can change the order of the bars from the default order (whatever Seaborn thinks makes most sense) into something you'd like to highlight or explore.
This is done via the
order argument, which accepts a list of the values and the order you'd like to put them in.
For example, so far, it ordered the classes from the first to the third. What if we'd like to do it the other way around?
import matplotlib.pyplot as plt import seaborn as sns titanic_dataset = sns.load_dataset("titanic") sns.barplot(x = "class", y = "survived", hue = "embark_town", order = ["Third", "Second", "First"], data = titanic_dataset) plt.show()
Running this code results in:
Change Confidence Interval on Seaborn Bar Plot
You can also easily fiddle around with the confidence interval by setting the
ci argument.
For example, you can turn it off, by setting it to
None, or use standard deviation instead of the mean by setting
sd, or even put a cap size on the error bars for aesthetic purposes by setting
capsize.
Let's play around with the confidence interval attribute a bit:
import matplotlib.pyplot as plt import seaborn as sns titanic_dataset = sns.load_dataset("titanic") sns.barplot(x = "class", y = "survived", hue = "embark_town", ci = None, data = titanic_dataset) plt.show()
This now removes our error bars from before:
Or, we could use standard deviation for the error bars and set a cap size:
import matplotlib.pyplot as plt import seaborn as sns titanic_dataset = sns.load_dataset("titanic") sns.barplot(x = "class", y = "survived", hue = "who", ci = "sd", capsize = 0.1, data = titanic_dataset) plt.show()
Conclusion
In this tutorial, we've gone over several ways to plot a Bar Plot using Seaborn and Python. We've started with simple plots, and horizontal plots, and then continued to customize them.
We've covered how to change the colors of the bars, group them together, order them and change the confidence interval.
If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python:
Data Visualization in Python
.
|
https://stackabuse.com/seaborn-bar-plot-tutorial-and-examples/
|
CC-MAIN-2021-17
|
refinedweb
| 1,156
| 65.73
|
Hi I need to send data from a CSV file, one line at a time via RS232 when requested from robot controller. I was thinking the easiest way would be to use SD card with the CSV file and use RS232 shield. Not sure how to write the sketch to just send one line at a time each time a message is recieved from robot. Can anyone help please? :)
Not sure how to write the sketch to just send one line at a time each time a message is recieved from robot.
First, you need to determine when the robot is requesting information, versus requesting a coffee or smoke break.
Then, you need to define what constitutes “one line”. Data on an SD card is stored sequentially. You stop reading when you have encountered an end-of-record marker. When that occurs depends on what you have defined for an end-of-record marker.
Once you have a line to send, and know that the robot has asked for data, sending it is trivial.
Serial.print(theLine);
Thanks for your help. when it's ready, the robot sends out a request in the form SHIFT 1 CR LF on the serial port. it then waits for a response that should be SHIFT X, Y, Z, ?X, ?Y, ?Z CR where the values are taken from my data file. the next request should get the next line from the data file and so on. It was easy with Basic, I used the readline function but I'm not sure how do it with the Arduino.
It was easy with Basic, I used the readline function but I'm not sure how do it with the Arduino.
Which SD library are you using? What code have you written? Do you know how to read anything from the SD card?
I think I worked out how to get the data and send it as a string, using the code below. I’m waiting to receive a RS232 shield before I can try for real.
Not sure if I’ve gone about it the right way but the code does read one line at a time when it gets a message from the serial port.
It seems to work with a single byte to read from serial (I used 48 which is “0”) but can not get to work with a string for the request from robot.
Any suggestions?
byte inByte = 0;
String dataString = “SHIFT”;
String dString = “”;
#include <SD.h>
const int chipSelect = 4;
void setup()
{
Serial.begin(9600);
// Serial.print(“Initializing SD card…”);
pinMode(10, OUTPUT);
if (!SD.begin(chipSelect)) {
Serial.println(“Card failed, or not present”);
return;
}
// Serial.println(“card initialized.”);
File dataFile = SD.open(“datalog.txt”);
if (dataFile) {
while (dataFile.available()) {
dString = (dataFile.read());
dataString = dataString + dString;
if (dString == 10) dataString = “SHIFT”;
if (dString ==13) {
Serial.println(dataString);
do inByte = Serial.read(),
delay(10);
while (inByte != 48);
}
}
dataFile.close();
}
else {
Serial.println(“error opening datalog.txt”);
}
}
void loop()
{
}
do inByte = Serial.read(), delay(10); while (inByte != 48);
Would you mind explaining what you think this code is doing? Especially the comma there.
Why do you want to wait any time before checking again?
It seems to work with a single byte to read from serial (I used 48 which is "0") but can not get to work with a string for the request from robot.
You are only reading (and discarding) individual characters. If you want to deal with strings, you need to collect the data in an array, keeping the array NULL terminated. Or, you could collect the data in a String.
|
https://forum.arduino.cc/t/how-to-send-position-data-to-industrial-robot/69537
|
CC-MAIN-2022-21
|
refinedweb
| 602
| 75.91
|
In this notebook, we'll describe, implement, and test some simple and efficient strategies for sampling without replacement from a categorical distribution.
Given a set of items indexed by $1, \ldots, n$ and weights $w_1, \ldots, w_n$, we want to sample $0 < k \le n$ elements without replacement from the set.
Theory¶
The probability of the sampling without replacement scheme can be computed analytically. Let $z$ be an ordered sample without replacement from the indices $\{1, \ldots, n\}$ of size $0 < k \le n$. Borrowing Python notation, let $z_{:t}$ denote the indices up to, but not including, $t$. The probability of $z$ is $$ \mathrm{Pr}(z) = \prod_{t=1}^{k} p(z_t \mid z_{:t}) \quad\text{ where }\quad p(z_t \mid z_{:t}) = \frac{ w_{z_t} }{ W_t(z) } \quad\text{ and }\quad W_t(z) = \sum_{i=t}^n w_{z_{t}} $$
Note that $w_{z_t}$ is the weight of the $t^{\text{th}}$ item sampled in $z$ and $W_t(z)$ is the normalizing constant at time $t$.
This probability is evaluated by
p_perm (below), and it can be used to test that $z$ is sampled according to the correct sampling without replacement process.
def p_perm(w, z): "The probability of a permutation `z` under the sampling without replacement scheme." n = len(w); k = len(z) assert 0 < k <= n wz = w[np.array(z, dtype=int)] W = wz[::-1].cumsum() return np.product(wz / W)
def swor_numpy(w, R): n = len(w) p = w / w.sum() # must normalize `w` first, unlike Gumbel version U = list(range(n)) return np.array([np.random.choice(U, size=n, p=p, replace=0) for _ in range(R)])
Heap-based sampling¶
Using heap sampling, we can do the computation in $\mathcal{O}(N + K \log N)$. It's possible that shrinking the heap rather than leaving it size $n$ could yield an improvement. The implementation that I am using is from my Python arsenal.
from arsenal.maths.sumheap import SumHeap def swor_heap(w, R): n = len(w) z = np.zeros((R, n), dtype=int) for r in range(R): z[r] = SumHeap(w).swor(n) return z
def swor_gumbel(w, R): n = len(w) G = np.random.gumbel(0,1,size=(R,n)) G += np.log(w) G *= -1 return np.argsort(G, axis=1)
Efraimidis and Spirakis (2006)'s algorithm, modified slightly to use Exponential random variates for aesthetic reasons. The Gumbel-sort and Exponential-sort algorithms are very tightly connected as I have discussed in a 2014 article and can be seen in the similarity of the code for the two methods.
def swor_exp(w, R): n = len(w) E = -np.log(np.random.uniform(0,1,size=(R,n))) E /= w return np.argsort(E, axis=1)
import numpy as np, pylab as pl from numpy.random import uniform from arsenal.maths import compare, random_dist R = 50_000 v = random_dist(4) methods = [ swor_numpy, swor_gumbel, swor_heap, swor_exp, ] S = {f.__name__: f(v, R) for f in methods}
from collections import Counter from arsenal.maths.combinatorics import permute def counts(S): "empirical distribution over z" c = Counter() m = len(S) for s in S: c[tuple(s)] += 1 / m return c D = {name: counts(S[name]) for name in S} R = {} n = len(v) for z in permute(range(n)): R[z] = p_perm(v, z) for d in D.values(): d[z] += 0 # Check that p_perm sums to one. np.testing.assert_allclose(sum(R.values()), 1)
for name, d in sorted(D.items()): compare(R, d).show(title=name);
Comparison: n=24 norms: [0.336428, 0.337442] zero F1: 1 pearson: 0.999762 spearman: 0.99913 Linf: 0.00428132 same-sign: 100.0% (24/24) max rel err: 0.105585 regression: [0.995 0.000]
Comparison: n=24 norms: [0.336428, 0.337414] zero F1: 1 pearson: 0.999894 spearman: 0.998261 Linf: 0.0025007 same-sign: 100.0% (24/24) max rel err: 0.118721 regression: [0.995 0.000]
Comparison: n=24 norms: [0.336428, 0.336196] zero F1: 1 pearson: 0.999919 spearman: 0.997391 Linf: 0.00188318 same-sign: 100.0% (24/24) max rel err: 0.118791 regression: [1.001 -0.000]
Comparison: n=24 norms: [0.336428, 0.336499] zero F1: 1 pearson: 0.999856 spearman: 0.998261 Linf: 0.00253601 same-sign: 100.0% (24/24) max rel err: 0.126029 regression: [1.000 0.000]
from arsenal.timer import timers T = timers() R = 50 for i in range(1, 15): n = 2**i #print('n=', n, 'i=', i) for _ in range(R): v = random_dist(n) np.random.shuffle(methods) for f in methods: name = f.__name__ with T[name](n = n): S = f(v, R = 1) assert S.shape == (1, n) # some sort of sanity check print('done')
done
fig, ax = pl.subplots(ncols=2, figsize=(12, 5)) T.plot_feature('n', ax=ax[0]) fig.tight_layout() T.plot_feature('n', ax=ax[1]); ax[1].set_yscale('log'); ax[1].set_xscale('log');
T.compare()
swor_exp is 1.5410x faster than swor_gumbel (median: swor_gumbel: 4.92334e-05 swor_exp: 3.19481e-05) swor_exp is 1.1082x faster than swor_heap (median: swor_heap: 3.54052e-05 swor_exp: 3.19481e-05) swor_exp is 10.4478x faster than swor_numpy (median: swor_numpy: 0.000333786 swor_exp: 3.19481e-05)
Remarks:
The numpy version is not very competitive. That's because it's uses a less efficient base algorithm that is not optimized for sampling without replacement.
The heap-based implementation is pretty fast. It has the best asymptotic complexity if the sample size is less then $n$.
That said, the heap-based sampler is harder to implement than the Exp and Gumbel algorithm, and harder to vectorize, unlike Exp and Gumbel.
The difference between the Exp and Gumbel tricks is just that the Gumbel trick takes does a few more floating-point operations. In fact, as I pointed out in a 2014 article, the Exp-sort and Gumbel-sort tricks produced precisely the same sample if we use the same random seed.
I suspect that the performance of both the Exp and Gumbel methods could be improved with a bit of implementation effort. For example, currently, there are some unnecessary extra temporary memory allocations. These algorithms are also trivial to parallelize. The real bottleneck is the random variate generation time.
|
https://timvieira.github.io/blog/post/2019/09/16/algorithms-for-sampling-without-replacement/
|
CC-MAIN-2021-10
|
refinedweb
| 1,048
| 70.6
|
I thought the suspend scene actuator is supposed to pause a scene, including all logic operations within that scene. Is this wrong? I have a script that keeps running after I pause the scene, which is causing an undesirable effect. Can anyone confirm what is actually paused with the suspend scene actuator?
Sounds your Python code is executed by an object in an active (non-suspended) scene.
I thought about that, however there are only 2 scenes (main scene and HUD scene). I pause the main scene and add the HUD overlay scene, however the mouse look scripts in the main scene stay active so when I exit my HUD (menu) my cameras have moved with the mouse. I just did a simple test to ensure the scene pauses, which it does. Then I deactivated the overlay scene to test if the scripts are running while paused, which they don’t. It seems having the overlay scene causes several scripts in my main scene to stay active. I would appreciate if you could take a quick look and let me know if you spot something I am missing. Press “e” to open/close the HUD (menu), move the mouse, then close the menu. It may be a moot point though as I am trying to integrate the new mouse actuator in the 2.72 build.
I managed to get this template updated to use the mouse actuator in 2.72, but it still has the same problem. The main scene, which is definitely paused, still updates mouse position causing the undesirable effect. Is this a possible bug?
How do I pause?.
Press “e” to pause and open the menu..
Thank you for the help, I will try these solutions on the mouse look scripts. If I understand, I need a way to save the cursor position before pausing and set that position before resuming.
Though I did notice that the mouse actuator in the 2.72 test build does something similar. Though the main scene is paused, moving the mouse causes the camera to “skip” to the new mouse position when you resume the scene. If this can be considered a bug, I will report it.
Add a small script to the same thing that triggers the resume scene, and flag it for high-priority (ie the flag on the actuator)
import bge bge.logic.mouse.position = (0.5, 0.5)
Thanks Geoff, this works exactly as I had hoped. In hindsight, I realize I didn’t need to save the position, just reset it to the screen center. Thanks again.
|
https://blenderartists.org/t/blender-2-72-scene-and-mouse-actuator/621481
|
CC-MAIN-2020-40
|
refinedweb
| 431
| 73.27
|
ConfigFile
Overview¶
Some applications need variable configurations.
For example, The configuration of account information for Twitter...
It's not good for publish if I implemented a program with a account information for Twitter, and it's not secure.
So, in this document we will show you a configuration file helper class for local file system on mbed.
You can use a variable configuration with it.
Basic concept¶
- A configuration set consist from a key and a value.
How to use it?¶
Configuration file¶
We need a configuration file.
- A comment line started by #.
- Empty lines have not meaning.
- Key=Value based configuration.
- A space character in a side of a key and a value have a meaning.
Configurtion file example
# # Configuration file for mbed. # MyKey1=This is a value for key1. MyKey2=Value 2 Message1 = This is a test message no.1 Message2 = This is a test message no.2 Message3 = This is a test message no.3
Reading¶
This example codes read a file named input.cfg on mbed local file system.
#include "mbed.h" #include "ConfigFile.h" LocalFileSystem local("local"); ConfigFile cfg; int main(void) { char *key = "MyKey"; char value[BUFSIZ]; /* * Read a configuration file from a mbed. */ if (!cfg.read("/local/input.cfg")) { error("Failure to read a configuration file.\n"); } /* * Get a configuration value. */ if (cfg.getValue(key, &value[0], sizeof(value))) { printf("'%s'='%s'\n", key, value); } }
- Ignore comments in a configuration file.
- Ignore empty lines in a configuration file.
- Keep spaces in side of keys and values.
Writing¶
This example codes write a configuration to a file.
#include "mbed.h" #include "ConfigFile.h" LocalFileSystem local("local"); ConfigFile cfg; int main(void) { /* * Set a configuration value. */ if (!cfg.setValue("MyKey", "TestValue")) { error("Failure to set a value.\n"); } /* * Write to a file. */ if (!cfg.write("/local/output.cfg")) { error("Failure to write a configuration file.\n"); } }
Library API¶
Example application¶
» Import this programConfigFile_TestProgram
A test program for ConfigFile library.
References¶
Last modified 26 Nov 2010, by 3 comments.
3 comments:
Hi! This is a handy tool but I have a problem with it: I can't change the config file with an text editor once it is on the mbed. I guives strange errors like file in use. I deliberately put the config file object into it's own scope so the desructor would be called after I've read all my keys: <<code>> { ConfigFile cfg; if (!cfg.read("/local/serv.cfg")) { error("Failure to read a configuration file.\n"); } if (cfg.getValue .... } <</code>> Any idea why? Thanks a lot for the help! Edit: To answer myself: Seems like there was a strange problem with a broken file in the mbed filesystem. It works well after changing the name of the configfile. And I've checked the source: the file gets closed at the end of the read call, so my {..} block was useless ...
Please login to post comments.
|
http://mbed.org/cookbook/ConfigFile
|
CC-MAIN-2013-20
|
refinedweb
| 484
| 62.14
|
0
I am in beginning Java and using Dr. Java to write my programs. I am trying to write an array that user input sets up the length of the array(#of students) that character grades are stored in. It is suppose to ask if you want a full print out, to update a students grade, or to exit. please let me exactly what is that I am doing wrong, I did include the error codes it is throwing, making it impossible to test accurately.
Thank you for your help:
import java.io.*; // import needed files import java.util.*; // import more needed files public class Grades{ public static void main (String []args){ int m, sid, asid, y, id, counter; //declare int variables SimpleInput keyboard = new SimpleInput (); // for user input System.out.println("Greetings, welcome to the class grade tracking system. "); System.out.println("How many students are in the class?"); id = keyboard.nextInt();// first user imput asked to set up length of array System.out.println("What would you like to do next? Please pick an option from the following menu."); char grade[]= new char [id]; //declare array for (int j=0; j<=1; j++){ //beginning of menu loop { System.out.println("Menu"); System.out.println("----"); System.out.println("1) Print Gradebook"); System.out.println("2) Update Grade"); System.out.println("3) Exit"); for (int i=0; i<=1; i++){ // beginning of grade input loop m = keyboard.nextInt(); if (m == 2);{ System.out.println("Enter student id to update:"); sid = keyboard.nextInt(); if (sid < 1);System.out.println("Invalid ID. Please try again."); i++;} if (sid >=1);{ asid = sid - 1; System.out.println("Enter new grade for "+sid+":"); char g = keyboard.nextChar(); //('cannot find symbol error keeps appearing...') grade[asid] = 'g'; //stores letter grade in appropriate place in Array System.out.print("New Grade for student # " + sid + "is " +g+ "."); { while(sid <1 || sid > id);{ System.out.println("Invalid ID. Please try again."); //incorrect int throws error j++;} // sends user back to menu to try again. } if (m > 3 || m < 1) System.out.println("Invalid choice. Please try again.");} j++;} //sends user back to menu to try again. if (m == 1);{ //variable m might not have been initialized for (y = 0;y<=grade.length;y++); //create output loop for student and grade printout int x= y + 1; //no one likes to be called student zero System.out.println("Student" + x + "has a current grade of " +grade[y]);{ //prints student and letter grade. } if(m == 3);{ //(throws error 'variable m might not have been initialized') // if three is selected in main menu, program is exited. System.out.println("Exiting Gradebook...Goodbye!"); } } }}}}
|
https://www.daniweb.com/programming/software-development/threads/429107/basic-array
|
CC-MAIN-2017-43
|
refinedweb
| 439
| 61.02
|
oath_base32_decode man page
oath_base32_decode — API function
Synopsis
#include <oath.h>
int oath_base32_decode(const char * in, size_t inlen, char ** out, size_t * outlen);
Arguments
- const char * in
input string with base32 encoded data of length inlen
- size_t inlen
length of input base32 string in
- char ** out
pointer to output variable for binary data of length outlen, or NULL
- size_t * outlen
pointer to output variable holding length of out, or NULL
Description
Decode a base32 encoded string into binary data.
Space characters are ignored and pad characters are added if needed. Non-base32 data are not ignored but instead will lead to an OATH_INVALID_BASE32 error.
The in parameter should contain inlen bytes of base32 encoded data. The function allocates a new string in *out to hold the decoded data, and sets *outlen to the length of the data.
If out is NULL, then *outlen will be set to what would have been the length of *out on successful encoding.
If the caller is not interested in knowing the length of the output data out, then outlen may be set to NULL.
It is permitted but useless to have both out and outlen NULL.
Returns
On success OATH_OK (zero) is returned, OATH_INVALID_BASE32 is returned if the input contains non-base32 characters, and OATH_MALLOC_ERROR is returned on memory allocation errors.
Since
1.12.
|
https://www.mankier.com/3/oath_base32_decode
|
CC-MAIN-2018-22
|
refinedweb
| 218
| 58.32
|
a .java file must be compiled into a .class file with the javac compiler. once you have the .class files, you can run those.
are you using the windows "run program" dialog box? if so, that's wrong - or at least, i've never heard of anyone doing it that way. Usually, once you have compiled the file.java into file.class, from a command line, you do this:
java file
Can you explain more clearly what you are doing?
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
public class hello
{
public static void main(String[]args)
{
System.out.println("Hello,World");
}
}
After typing this, I went to tools and then Compile Java, I clicked on this and it did compile, when I tried to run the program again using Run Java Application, I get the response "the process cannot access the file because it is being used by another process".
Also, when I do compile the file hello, it does not create a .class file.
I hope this makes sense, sorry if there is confusion.
Thank you for the help!
Originally posted by matt van:
How do you run the javac compiler? Also, how do a run a program from the command line? ...
See this Hello World tutorial (for Windows), which provides a step-by-step process for using the javac and java commands.
java.lang.NoClassDefFoundError: HelloWorld
Exception in thread "main"
Tool completed with exit code 1
To run the application you can use Tools --> Run Java Application (or Ctrl-2). I have never run into the error you describe in all the years I have used TextPad. In fact, I can't even make it produce that error when I open the .java and/or .class file in another application. [Tools --> Run ... is for something else entirely.]
So despite the appearance that you are using TextPad Tools properly, I am unable to help you resolve your issue.
Whatever editor the students are using (BlueJ, IntelliJ, Eclipse, TextPad, NotePad, DOS edit) they should send you their .java file in plain ascii text. You can compile their .java file with whatever method you choose (TextPad, command line or whatever) and likewise run the application after it is compiled using TextPad, command line or whatever. No problem. We do that all the time in the CattleDrive course here at JavaRanch. Make sure they are not sending you the .class file.
[ January 04, 2007: Message edited by: Marilyn de Queiroz ]
JavaBeginnersFaq
"Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
"Tool completed successfully"
in the result window, but TextPad will return to the .java file that you compiled.
After you successfully compile the .java file into a .class file (which you can see in the same directory as the .java file by using Windows Explorer), you should be able to "Run Java Application" and see some results in the result window (if it prints something using System.out.println).
The .java file should not be in the same directory that TextPad is installed in. I usually keep my .java files in a directory named "java" (i.e. C:\java\)
Double check that Configure --> Preferences --> Tools --> Run Java Application --> "Capture output" is checked.
JavaBeginnersFaq
"Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
[ January 05, 2007: Message edited by: marc weber ]
Originally posted by matt van:
Sorry about the last post [removed by mw]...
You can edit/delete your own posts by clicking on the paper/pencil icon. (Note to the curious: This was nothing "bad." It just looks like it got posted in mid-composition.)
So are you able to compile and run from the command line? If so, then I think we should move this thread to the IDE forum -- but only after we've verified that Java is correctly installed to work from the command line.
To the second message, what would I be looking for to kill the non-essential processes in task manager. Sorry this is taking so long for me to figure out and thank you for continuing to try and help me.
Originally posted by matt van:
... I tried to run from the command line and it does not seem to work...
Tell us exactly what steps you followed (starting from where you saved the .java file, and exactly what commands you entered), and where the problem occurred, including any error messages.
If you can copy and paste your command prompt session, that would be helpful.
[ January 11, 2007: Message edited by: matt van ]
Originally posted by Fred Rosenberger:
Can you explain more clearly what you are doing?
He is using Textpad. I started using it while reading Core Java by Cay S. Horstmann. Dr. Horstmann holds:
Textpad is.
"The differential equations that describe dynamic interactions of power generators are similar to that of the gravitational interplay among celestial bodies, which is chaotic in nature."
Originally posted by marc weber:
Close TextPad and follow the Hello World tutorial (for Windows). Tell us how these steps work for you.
Textpad will compile Java from a menu item within the application, compiling whatever source.java code file is open in the window.
"The differential equations that describe dynamic interactions of power generators are similar to that of the gravitational interplay among celestial bodies, which is chaotic in nature."
It should, but in this case it seems to be hanging on something. I think Textpad basically just uses a .bat file to issue the commands. If we can test the process by manually typing the commands, maybe we'll see what the problem is.
Originally posted by marc weber:
It should, but in this case it seems to be hanging on something. I think Textpad basically just uses a .bat file to issue the commands. If we can test the process by manually typing the commands, maybe we'll see what the problem is.
If so, I had this problem or something similar and fixed it by removing the compile java command using the delete command, then used the add java compile commnand to replace it. The command began working again.
Textpad does seem to use batch files, it clutters up the directory with these. I also experienced a system hang due to a setting noted as capture output and the way this command works in conjunction with the batch files.
Something along this line of thought is noted in the help files.
[ January 12, 2007: Message edited by: Nicholas Jordan ]
"The differential equations that describe dynamic interactions of power generators are similar to that of the gravitational interplay among celestial bodies, which is chaotic in nature."
Originally posted by matt van:
...I ended up saving my documents and using the recovery disk. I downloaded textpad and now the program works fine. Sorry for all of the confusion...
Wow, I'm glad you got it worked out!
|
https://coderanch.com/t/405766/java/problems-running-programs-Textpad
|
CC-MAIN-2017-39
|
refinedweb
| 1,175
| 66.03
|
On Wed, Nov 15, 2000 at 12:30:53PM -0500, Adam C Powell IV wrote: > Hello, > > I have a package which depends on atlas-dev for non-PPC, and lapack-dev > for PPC, (atlas doesn't build on PPC because of a compiler bug). > > I noticed that freeamp has arches in Build-Depends, e.g. nasm [i386], > but putting this in Depends: for a binary package results in an error. > > Is there any way I can do this? You'll probably need to generate the depends at build-time. Something like this: ifeq($(DEB_BUILD_ARCH),powerpc) dh_gencontrol -p<pkg> -- -DDepends="lapack-dev" else dh_gencontrol -p<pkg> -- -DDepends="atlas-dev" endif Then just ommit this dep from the control file, and it will be added at build-time. Ben -- -----------=======-=-======-=========-----------=====------------=-=------ / Ben Collins -- ...on that fantastic voyage... -- Debian GNU/Linux \ ` bcollins@debian.org -- bcollins@openldap.org -- bcollins@linux.com ' `---=========------=======-------------=-=-----=-===-======-------=--=---'
|
https://lists.debian.org/debian-mentors/2000/11/msg00078.html
|
CC-MAIN-2017-09
|
refinedweb
| 145
| 58.08
|
Re: IAR linker: too smart
- From: David Brown <david@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: 17 May 2006 17:54:11 +0200
Dirk Zabel wrote:
Hi,
I am using the IAR toolchain for our Renesas M16C based project. The device uses a lot of configuration data which are stored within an eeprom. In order to define the placement, I have something like this:
eeprom.c:
#pragma dataseg="E2PROM_DATA"
#define extern /* empty */
#include "eeprom.h"
#pragma dataseg=default
/* EOF */
and
eeprom.h:
extern __no_init unsigned char MBUS_exist;
extern __no_init char E2_VERSION_TEXT[20];
extern __no_init unsigned char CAN_exists;
extern __no_init unsigned char MBUS_exists;
/* and so on. */
/* EOF */
Any code which uses the preprocessor to redefine a keyword like "extern" is incorrect code. It doesn't matter whether it works or not - it's still so bad that any experienced reviewer would reject it outright.
I tell the linker to place the E2PROM_DATA segment where the eeprom is mapped into the address space and can use the data from other c sources by including eeprom.h
BUT
The linker throws away all variables in eeprom.h which are not used anywhere in the projet (there is NO OPTIMIZATION selected!!). This is really bad, since I need a stable address mapping for all configuration data; there might be parameters which may be needed in later software versions; sometimes I build testversions which do not contain all parts. I allways need the variables at the same adresses.
I did already asked the IAR support, but did not get a usable answer (they told me to use volatile, but this does not change anything).
IAR support are not famed for being helpful - on the other hand, their tools are well known for making correct and efficient code. The compiler (and linker) is doing exactly the right thing - if you declare variables that are never used, then the compiler and/or linker is free to remove them - they have no effect on the running of your program. You might think that the declarations have an effect on the addressing in eeprom - but that is not true. The order you declare or define data has no defined effect on the order in memory. For many targets, compilers will re-order the data for better alignment or padding. So even if you manage to persuade the tools to keep the unused data, you are getting a false sense of security - a new compilation could re-arrange the data.
The easiest way to get the effect you are looking for is to define a single structure for all the eeprom-based data, and ensure that it is linked to one specific address (solution "a" below). If you want to access struct members without having to add "s." in front of them, you could use #defines to avoid it (although it is almost certainly best to correct the old code rather than add hacks around it).
Solution (b) would not work (as explained above). Solution (c) is perfectly reasonable (although more effort than (a)), and solution (d) is again possible, but lots of effort to do well.
I see the following approaches:.
a) define a big struct which contains all variables and put that struct into E2PROM_DATA
b) put dummy references to all variables into my project
c) define the variables in some assembler source.
d) the IAR toolchain allows to put variables at numerically known addresses, I could place every variable at some pre-calculated address
a) I would have to change all code referring to (say) E2_VERSION_TEXT to s.E2_VERSION_TEXT (where s is the name of the struct variable). Since my project contains lot of old code, this is not very good.
b) ugly, blows up my project with "useless" code (the linker is quite "smart", you have to DO something with a variable or the code is optimized away).
c) ugly and dangerous, I have to keep the assembler definitions and c declarations in sync.
d) IMHO address calculations should be done by tools, not by programmes due to possible errors.
Has anyone some better idea?
Greetings
Dirk Zabel
- Follow-Ups:
- Re: IAR linker: too smart
- From: bob
- References:
- IAR linker: too smart
- From: Dirk Zabel
- Prev by Date: Re: Windows Students Embedded Challenge 2006
- Next by Date: Re: Cypress USB FX device and Windows standby mode issues
- Previous by thread: Re: IAR linker: too smart
- Next by thread: Re: IAR linker: too smart
- Index(es):
|
http://coding.derkeiler.com/Archive/General/comp.arch.embedded/2006-05/msg00933.html
|
CC-MAIN-2017-17
|
refinedweb
| 734
| 67.69
|
A widget that shows one of two icons depending on its state. More...
#include <Wt/WIconPair>
A widget that shows one of two icons depending on its state.
This is a utility class that simply manages two images, only one of which is shown at a single time, which reflects the current 'state'.
The widget may react to click events, by changing state.
This widget does not provide styling, and can be styled using inline or external CSS as appropriate. The image may be styled via the
<img> elements.
Construct an icon pair from the two icons.
The constructor takes the URL of the two icons. When
clickIsSwitch is set
true, clicking on the icon will switch state.
Sets the state to 0 (show icon 1).
Sets the state to 1 (show icon 2).
Returns the current state.
|
http://www.webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WIconPair.html
|
CC-MAIN-2017-51
|
refinedweb
| 138
| 85.28
|
extern volatile unsigned long timer0_overflow_count;unsigned long hpticks (void){ return (timer0_overflow_count << 8) + TCNT0;}void loop() { int t1, t2; t1 = hpticks() * 4; t2 = hpticks() * 4; while (1) { if ((t2 - t1) >= pulse_interval) { process_pulse(); t1 = hpticks() * 4; } t2 = hpticks() * 4; }}
I'm pretty sure the hardware UART does not rely on interrupts and it would be fine to use it in an interrupt handler. Even if you were using software serial, you could simply set a flag in your interrupt handler and then from your main code, frequently check if the flag is set and if so, send the packet and clear the flag. Thought that would basically be what it's already done (mills is pretty much a flag being set by an interrupt handler).
I think Oracle missed out on his morning coffee before posting the above
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy
|
http://forum.arduino.cc/index.php?topic=41125.msg299537
|
CC-MAIN-2015-11
|
refinedweb
| 171
| 60.18
|
Red Hat Bugzilla – Bug 5310
Apache needs MULTIPLE_GROUPS option?
Last modified: 2008-05-01 11:37:51 EDT
I installed the Apache 1.3.6 package and modified the
configuration to run under a new user I created named
"httpd". httpd's primary group is also named "httpd", and
then it is a member of the "video" group as well.
From /etc/group:
video:x:401:httpd,admin
I have a CGI that I want to make executable only by the
"video" group...
-r-xr-x--- 1 root video 77 Sep 22 11:56
test.cgi
...however Apache will refuse to execute it. I get the
following error message in /etc/httpd/logs/error_log:
Wed Sep 22 12:18:48 1999] [error] [client 127.0.0.1] file
permissions deny server execution:
/video/tools/htdocs/test.cgi
It works fine if I chgrp it to "httpd".
test.cgi, by the way, contains the following:
#!/bin/sh
echo "Content-Type: text/plain"
echo ""
echo -n "id -a: "
id -a
It outputs...
id -a: uid=16(httpd) gid=16(httpd)
groups=16(httpd),401(video)
...so I know that httpd truly is a member of the group and
_should_ have permission to execute the script chgrp'd to
video.
Looking through the sources, I can see that
modules/standard/mod_cgi.c is calling ap_can_exec() from
ap/util.c, which checks the uid and gid of the file against
the current user and group. There is support for
supplementary groups, but it's wrapped in #ifdef
MULITPLE_GROUPS .. #endif statements.
I assume this means that Apache needs to be recompiled with
the MULTIPLE_GROUPS option?
This feature has enough additional security implications that we do not want it
turned on by default. This is also why it isn't documented in the apache
documentation nor supported as a configuration-time option.
You may recompile your apache and #define the preprocessor directive in httpd.h
if you need this feature.
|
https://bugzilla.redhat.com/show_bug.cgi?id=5310
|
CC-MAIN-2018-34
|
refinedweb
| 324
| 67.86
|
Automated unit testing became very popular in the Java world and then marched victoriously into the .NET territory, thanks to an excellent tool called nUnit.
However, nUnit has one serious limitation: it works only with managed code. Good old C++ is not going anywhere, and we, C++ programmers, also want to enjoy the wonders of nice and easy automated unit testing. GenTestAsm is the tool that makes it happen. It allows you to write unit tests in (unmanaged) C++, and then run them in nUnit.
GenTestAsm
When it comes to unit testing C++ code, there are essentially three choices:
Not doing unit testing at all is a very risky approach. The code becomes brittle and the risk of making changes is too high. TUT is a nice tool, but it does not provide a GUI test runner like nUnit. Also, having to switch between two different tools for managed and unmanaged code looks like a nuisance.
Therefore, I concentrated on the last approach - finding a way to run C++ tests in nUnit.
The general battle plan was as follows:
[Test]
Sadly, Win32 does not provide out-of-the box API for enumerating DLL exports. Fortunately, the format of DLL files is publicly available from Microsoft. I extract the list of exports by opening the executable file and analyzing the bytes. It is a little tedious, but not a very complex task. The biggest annoyance is that the PE file format uses relative virtual memory addresses (RVAs) instead of file offsets. This is great when the file is loaded in memory, but requires constant recalculations when working with the file on disk..
CSharpCodeProvider a certain prefix (by default UnitTest). Other exports are ignored.
GenTestAsm
UnitTest
The next problem is how to handle test failures. In the test's return value. Unmanaged tests must have the signature:
BSTR Test();
Return value of NULL means success, anything else means failure, and the returned string is the error message. I chose BSTR over regular char*, because BSTR has well-defined memory management rules, and .NET runtime knows how to free it.
NULL
string
BSTR
char*
Returning BSTR from the C++ test is nice, but it makes writing a test a little difficult. The author of the test must make sure that unhandled C++ exceptions don't escape the test. He must also format the error message and convert it to BSTR. If this were done by hand in each and every test, the code would become too verbose to be practical. Let's take a trivial test in C#:
// C#
public void CalcTest()
{
Assert.AreEqual( 4, Calculator.Multiply(2,2) );
}
and see how an equivalent test in C++ would look like:
// C++
__declspec(dllexport)
BSTR CalcTest()
{
try
{
int const expected = 4;
int actual = Calculator::Multiply(2,2);
if (expected != actual)
{
std::wostringstream msg;
msg << "Error in " << __FILE__ << " (" << __LINE__ << "): "
<< "expected " << expected << ", but got " << actual;
return SysAllocString( msg.str().c_str() );
}
}
catch (...)
{
return SysAllocString("Unknown exception");
}
return NULL;
}
This is too much boiler plate code. We need a support library here.
With the help of a tiny #include file we can squeeze our C+ test back to 3 lines of code:
#include
// C++
#include <span class="code-string">"TestFramework.h"</span>
TEST(CalcTest)
{
ASSERT_EQUAL( 4, Calculator::Multiply(2,2) );
}
TestFramework.h defines TEST macro that encapsulates the details of exception handling and BSTR conversion. It also defines a couple of ASSERT macros such as ASSERT_EQUAL.
TEST
ASSERT
ASSERT_EQUAL.
LoadLibrary()
GetProcAddress()
Marshal.GetDelegateForFunctionPointer()
If something must be loaded forever, let it not.
FreeLibrary()
// C++
typedef BSTR (*TestFunc)();
extern "C"
__declspec(dllexport)
BSTR __cdecl RunTest( LPCSTR dll, LPCSTR name )
{
HMODULE hLib = LoadLibrary(dll);
if (hLib == NULL) return SysAllocString(L"Failed to load test DLL");
TestFunc func = (TestFunc)GetProcAddress(hLib, name);
if (func == NULL) return SysAllocString(L"Entry point not found");
BSTR result = func();
FreeLibrary(hLib);
return result;
}
I put the thunk DLL as a resource into GenTestAsm.exe, and it is always written alongside generated managed assembly. Having two additional DLL files hanging around is a little annoying, but it is better than being unable to recompile your code.
GenTestAsm creates C# source code of the managed test assembly and then compiles it using .NET Framework C# compiler. The test assembly references nunit.framework.dll. The location of this DLL is specified in the gentestasm.exe.config file as follows:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<add key="nUnit.Reference"
value="C:\Program Files\NUnit 2.2\bin\nunit.framework.dll" />
</appSettings>
</configuration>
If you use nUnit for .NET 2.0, GenTestAsm may have difficulties working with it. You might get the following error when creating your managed assembly:
fatal error CS0009: Metadata file
'c:\Program Files\NUnit-Net-2.0 2.2.8\bin\nunit.framework.dll'
could not be opened -- 'Version 2.0 is not a compatible version.'
This error occurs because GenTestAsm is a .NET 1.1 application, and by default uses .NET 1.1 C# compiler (when it is available). This compiler cannot reference an assembly created for a newer version of the Framework. To work around this problem, we must force GenTestAsm to use .NET 2.0 libraries, including .NET 2.0 C# compiler. This is achieved by adding a supportedRuntime element to the configuration file:
supportedRuntime
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<add key="nUnit.Reference"
value="C:\Program Files\NUnit-Net-2.0 2.2.8\bin\nunit.framework.dll" />
</appSettings>
<startup>
<supportedRuntime version="v2.0.50727"/>
</startup>
</configuration> a more uniform approach to unit testing of managed and unmanaged code. The same tool is used to run the tests, and test syntax is.
|
http://www.codeproject.com/Articles/16066/GenTestAsm-Run-Your-C-Tests-in-nUnit?msg=1813674
|
CC-MAIN-2015-06
|
refinedweb
| 938
| 59.09
|
This class allows to include a set of layers in a database-side transaction, provided the layer data providers support transactions and are compatible with each other. More...
#include <qgstransaction.h>
This class allows to include a set of layers in a database-side transaction, provided the layer data providers support transactions and are compatible with each other.
Only layers which are not in edit mode can be included in a transaction, and all layers need to be in read-only mode for a transaction to be committed or rolled back.
Layers cannot only be included in one transaction at a time.
When editing layers which are part of a transaction group, all changes are sent directly to the data provider (bypassing the undo/redo stack), and the changes can either be committed or rolled back on the database side via the QgsTransaction::commit and QgsTransaction::rollback methods.
As long as the transaction is active, the state of all layer features reflects the current state in the transaction.
Edits on features can get rejected if another conflicting transaction is active.
Definition at line 47 of file qgstransaction.h.
Definition at line 89 of file qgstransaction.cpp.
Definition at line 84 of file qgstransaction.cpp.
Add layer to the transaction.
The layer must not be in edit mode. The transaction must not be active.
Definition at line 94 of file qgstransaction.cpp.
Begin transaction The statement timeout, in seconds, specifies how long an sql statement is allowed to block QGIS before it is aborted.
Statements can block, depending on the provider, if multiple transactions are active and a statement would produce a conflicting state. In these cases, the statements block until the conflicting transaction is committed or rolled back. Some providers might not honour the statement timeout.
Definition at line 133 of file qgstransaction.cpp.
Commit transaction.
All layers need to be in read-only mode.
Definition at line 151 of file qgstransaction.cpp.
Creates a transaction for the specified connection string and provider.
Definition at line 29 of file qgstransaction.cpp.
Creates a transaction which includes the specified layers.
Connection string and data provider are taken from the first layer
Definition at line 51 of file qgstransaction.cpp.
Executes sql.
Roll back transaction.
All layers need to be in read-only mode.
Definition at line 177 of file qgstransaction.cpp.
Definition at line 84 of file qgstransaction.h.
|
https://api.qgis.org/2.8/classQgsTransaction.html
|
CC-MAIN-2020-34
|
refinedweb
| 398
| 58.18
|
Zig is an open-source programming language designed for robustness, optimality, and clarity. Zig is aggressively pursuing its goal of overthrowing C as the de facto language for system programming. Zig intends to be so practical that people find themselves using it even if they dislike it.
This is a massive release, featuring 6 months of work and changes from 36 different contributors.
I tried to give credit where credit is due, but it's inevitable I missed some contributions as I had to go through 1,345 commits to type up these release notes. I apologize in advance for any mistakes.
Special thanks to my patrons who provide financial support. You're making Zig sustainable.
Stack Traces on All Targets
Zig uses LLVM's debug info API to emit native debugging information on all targets. This means that you can use native debugging tools on Zig code, for example:
- MSVC on Windows
- lldb on MacOS
- gdb and valgrind on Linux
In addition, Zig's standard library can read its own native debug information. This means that crashes produce stack traces, and errors produce Error Return Traces.
MacOS
This implementation is able to look at the executable's own memory to find out where the .o files are, which have the DWARF info.
Windows
Thanks to Sahnvour for implementing the PE parsing and starting the effort to PDB parsing. I picked up where he left off and finished Windows stack traces.
Thanks to Zachary Turner from the LLVM project for helping me understand the PDB format. I still owe LLVM some PDB documentation patches in return.
Similar to on MacOS, a Windows executable in memory has location information pointing to a .pdb file which contains debug information.
Linux
Linux stack traces worked in 0.2.0. However now
std.debug.dumpStackTrace & friends use
ArenaAllocator backed by
DirectAllocator. This has the downside of failing to print a stack trace when the system is out of memory, but for the more common use case when the system is not out of memory, but the debug info cannot fit in
std.debug.global_allocator, now stack traces will work. This is the case for the self hosted compiler. There is a proposal to
mmap() debug info rather than using
read().
See also Compatibility with Valgrind.
zig fmt
Thanks to Jimmi Holst Christensen's dilligent work, the Zig standard library now supports parsing Zig code. This API is used to implement
zig fmt, a tool that reformats code to fit the canonical style.
As an example,
zig fmt will change this code:
test "fmt" { const a = []u8{ 1, 2, 3, 4, 5, 6, 7 }; switch (0) { 0 => {}, 1 => unreachable, 2, 3 => {}, 4...7 => {}, 1 + 4 * 3 + 22 => {}, else => { const a = 1; const b = a; }, } foo(a, b, c, d, e, f, g,); }
...into this code:
test "fmt" { const a = []u8{ 1, 2, 3, 4, 5, 6, 7, }; switch (0) { 0 => {}, 1 => unreachable, 2, 3 => {}, 4...7 => {}, 1 + 4 * 3 + 22 => {}, else => { const a = 1; const b = a; }, } foo( a, b, c, d, e, f, g, ); }
It does not make any decisions about line widths. That is left up to the user. However, it follows certain cues about when to line break. For example, it will put the same number of array items in a line as there are in the first one. And it will put a function call all on one line if there is no trailing comma, but break every parameter into its own line if there is a trailing comma.
Thanks to Marc Tiehuis, there are currently two editor plugins that integrate with
zig fmt:
zig fmt is only implemented in the self-hosted compiler, which is not finished yet, so in order to use it one must follow the README instructions to build the self-hosted compiler from source.
The implementation of the self-hosted parser is an interesting case study of avoiding recursion by using an explicit stack. It is essentially a hand-written recursive descent parser, but with heap allocations instead of recursion. When Jimmi originally implemented the code, we thought that we could not solve the unbounded stack growth problem of recursion. However, since then, I prototyped several solutions that provide the ability to have recursive function calls without giving up statically known upper bound stack growth. See Recursion Status for more details.
Automatic formatting can be disabled in source files with a comment like this:
test "this is left alone" { }
zig fmt is written using the standard library's event-based I/O abstractions and
async/
await syntax, which means that it is multi-threaded with non-blocking I/O. A debug build of
zig fmt on my laptop formats the entire Zig standard library in 2.1 seconds, which is 75,516 lines per second. See Concurrency Status for more details.
zig run
zig run file.zig can now be used to execute a file directly.
Thanks to Marc Tiehuis for the initial implementation of this feature. Marc writes:
On a POSIX system, a shebang can be used to run a zig file directly. An example shebang would be
#!/usr/bin/zig run. You may not be able pass extra compile arguments currently as part of the shebang. Linux for example treats all arguments after the first as a single argument which will result in an 'invalid command'.
Note: there is a proposal to change this to
zig file.zig to match the interface of other languages, as well as enable the common pattern
#!/usr/bin/env zig.
Zig caches the binary generated by
zig run so that subsequent invocations have low startup cost. See Build Artifact Caching for more details.
Automated Static Linux x86_64 Builds of Master Branch
Zig now supports building statically against musl libc.
On every master branch push, the continuous integration server creates a static Linux build of zig and updates the URL to redirect to it.
In addition, Zig now looks for libc and Zig std lib at runtime. This makes static builds the easiest and most reliable way to start using the latest version of Zig immediately.
Windows has automated static builds of master branch via AppVeyor.
MacOS static CI builds are in progress and should be available soon.
Pointer Reform
During this release cycle, two design flaws were fixed, which led to a chain reaction of changes that I called Pointer Reform, resulting in a more consistent syntax with simpler semantics.
The first design flaw was that the syntax for pointers was ambiguous if the pointed to type was a
type. Consider this 0.2.0 code:
const assert = @import("std").debug.assert; comptime { var a: i32 = 1; const b = &a; @compileLog(@typeOf(b)); *b = 2; assert(a == 2); }
This works fine. The value printed from the
@compileLog statement is
&i32. This makes sense because
b is a pointer to
a.
Now let's do it with a
type:
const assert = @import("std").debug.assert; comptime { var a: type = i32; const b = &a; @compileLog(b); *b = f32; assert(a == f32); }
$ zig build-obj test.zig | &i32 test.zig:6:5: error: found compile log statement @compileLog(b); ^ test.zig:7:5: error: attempt to dereference non-pointer type 'type' *b = f32; ^
It doesn't work in 0.2.0, because the
& operator worked differently for
type than other types. Here,
b is the type
&i32 instead of a pointer to a type which is how we wanted to use it.
This prevented other things from working too; for example if you had a
[]type{i32, u8, f64} and you tried to use a for loop,
it crashed the compiler because internally a for loop uses the
&
operator on the array element.
The only reasonable solution to this is to have different syntax for the address-of operator and the pointer operator, rather than them both being
&.
So pointer syntax becomes
*T, matching syntax from most other languages such as C. Address-of syntax remains
&foo, again matching common address-of syntax such as in C. This leaves one problem though.
With this modification, the syntax
*foo becomes ambiguous with the syntax for dereferencing. And so dereferencing syntax is changed to a postfix operator:
foo.*. This matches post-fix indexing syntax:
foo[0], and in practice ends up harmonizing nicely with other postfix operators.
The other design flaw is a problem that has plagued C since its creation: the pointer type doesn't tell you how many items there are at the address. This is now fixed by having two kinds of pointers in Zig:
*T- pointer to exactly one item.
- Supports deref syntax:
ptr.*
[*]T- pointer to unknown number of items.
- Supports index syntax:
ptr[i]
- Supports slice syntax:
ptr[start..end]
Tmust have a known size, which means that it cannot be
c_voidor any other
@OpaqueType().
Note that this causes pointers to arrays to fall into place, as a single-item pointer to an array acts as a pointer to a compile-time known number of items:
*[N]T- pointer to N items, same as single-item pointer to array.
- Supports index syntax:
array_ptr[i]
- Supports slice syntax:
array_ptr[start..end]
- Supports len property:
array_ptr.len
Consider how slices fit into this picture:
[]T- pointer to runtime-known number of items.
- Supports index syntax:
slice[i]
- Supports slice syntax:
slice[start..end]
- Supports len property:
slice.len
This makes Zig pointers significantly less error prone. For example, it fixed issue #386, which demonstrates how a pointer to an array in Zig 0.2.0 is a footgun when passed as a parameter. Meanwhile in 0.3.0, equivalent code is nearly impossible to get wrong.
For consistency with the postfix pointer dereference operator, optional unwrapping syntax is now postfix as well:
0.2.0:
??x
0.3.0:
x.?
And finally, to remove the last inconsistency of optional syntax, the
?? operator is now the keyword
orelse. This means that Zig now has the property that all control flow occurs exclusively via keywords.
There is a plan for one more pointer type, which is a pointer that has a null-terminated number of items. This would be the type of the parameter to
strlen for example. Although this will make the language bigger by adding a new type, it allows Zig to delete a feature in exchange, since it will make C string literals unnecessary. String literals will both have a compile-time known length and be null-terminated; therefore they will implicitly cast to slices as well as null-terminated pointers.
There is one new issue caused by Pointer Reform. Because C does not have the concept of single-item pointers or unknown-length pointers (or non-null pointers), Zig must translate all C pointers as
?[*]T. That is, a pointer to an unknown number of items that might be null. This can cause some friction when using C APIs, which is unfortunate because Zig's types are perfectly compatible with C's types, but .h files are unable to adequately describe pointers. Although it would be much safer to translate .h files offline and fix their prototypes, there is a proposal to add a C pointer type. This new pointer type should never be used on purpose, but would be used when auto-translating C code. It would simply have C pointer semantics, which means it would be just as much of a footgun as C pointers are. The upside is that it would make interaction with C APIs go back to being perfectly seamless.
Default Float Mode is now Strict
In response to an overwhelming consensus, floating point operations use Strict mode by default. Code can use @setFloatMode to override the mode on a per-scope basis.
Thanks to Marc Tiehuis for implementing the change.
this was always a weird language feature. An identifier which referred to the thing in the most immediate scope, which could be a module, a type, a function, or even a block of code.
The main use case for it was for anonymous structs to refer to themselves. This use case is solved with a new builtin function, @This(), which always returns the innermost struct or union that the builtin call is inside.
The "block of code" type is removed from Zig, and the first argument of @setFloatMode is removed.
@setFloatMode now always refers to the current scope.
Remove Explicit Casting Syntax
Previously, these two lines would have different meanings:
export fn foo(x: u32) void { const a: u8 = x; const b = u8(x); }
The assignment to
a would give
error: expected type 'u8', found 'u32', because not all values of
u32 can fit in a
u8. But the assignment to
b was "cast harder" syntax, and Zig would truncate bits, with a safety check to ensure that the mathematical meaning of the integer was preserved.
Now, both lines are identical in semantics. There is no more "cast harder" syntax. Both cause the compile error because implicit casts are only allowed when it is completely unambiguous how to get from one type to another, and the transformation is guaranteed to be safe. For other casts, Zig has builtin functions:
- @bitCast - change type but maintain bit representation
- @alignCast - make a pointer have more alignment
- @boolToInt - convert true to 1 and false to 0
- @bytesToSlice - convert a slice of bytes to a slice of another type
- @enumToInt - obtain the integer tag value of an enum or tagged union
- @errSetCast - convert to a smaller error set
- @errorToInt - obtain the integer value of an error code
- @floatCast - convert a larger float to a smaller float
- @floatToInt - obtain the integer part of a float value
- @intCast - convert between integer types
- @intToEnum - obtain an enum value based on its integer tag value
- @intToError - obtain an error code based on its integer value
- @intToFloat - convert an integer to a float value
- @intToPtr - convert an address to a pointer
- @ptrCast - convert between pointer types
- @ptrToInt - obtain the address of a pointer
- @sliceToBytes - convert a slice of anything to a slice of bytes
- @truncate - convert between integer types, chopping off bits
Some are safe; some are not. Some perform language-level assertions; some do not. Some are no-ops at runtime; some are not. Each casting function is documented independently.
Having explicit and fine-grained casting like this is a form of intentional redundancy. Casts are often the source of bugs, and therefore it is worth double-checking a cast to verify that it is still correct when the type of the operand changes. For example, imagine that we have the following code:
fn foo(x: i32) void { var i = @intCast(usize, x); }
Now consider what happens when the type of
x changes to a pointer:
Although we technically know how to convert a pointer to an integer, because we usedAlthough we technically know how to convert a pointer to an integer, because we used
test.zig:2:29: error: expected integer type, found '*i32' var i = @intCast(usize, x); ^
@intCast, we are forced to inspect the cast and change it appropriately. Perhaps that means changing it to
@ptrToInt, or perhaps the entire function needs to be reworked in response to the type change.
Direct Parameter Passing
Previously, it was illegal to pass structs and unions by value in non-
extern functions. Instead one would have to have the function accept a
const pointer parameter. This was to avoid the ambiguity that C programs face - having to make the decision about whether by-value or by-reference was better. However, there were some problems with this. For example, when the parameter type is inferred, Zig would automatically convert to a
const pointer. This caused problems in generic code, which could not distinguish between a type which is a pointer, and a type which has been automatically converted to a pointer.
Now, parameters can be passed directly:
const assert = @import("std").debug.assert; const Foo = struct { x: i32, y: i32, }; fn callee(foo: Foo) void { assert(foo.y == 2); } test "pass directly" { callee(Foo{ .x = 1, .y = 2 }); }
I have avoided using the term "by-value" because the semantics of this kind of parameter passing are different:
- Zig is free to pass the parameter by value - perhaps if it is smaller than some number of bytes - or pass it by reference.
- To the callee, the value appears to be a value and is immutable.
- The caller guarantees that the bytes of the parameter will not change for the duration of the call. This means that it is unsound to pass a global variable in this way if that global variable is mutated by the callee. There is an open issue which explores adding runtime safety checks for this.
Because of these semantics, there's a clear flow chart for whether to accept a parameter as
T or
*const T:
- Use
T, unless one of the following is true:
Now that we have this kind of parameter passing, Zig's implicit cast from
T to
*const T is less important. One might even make the case that such a cast is dangerous. Therefore we have a proposal to remove it.
There is one more area that needs consideration with regards to direct parameter passing, and that is with coroutines. The problem is that if a reference to a stack variable is passed to a coroutine, it may become invalid after the coroutine suspends. This is a design flaw in Zig that will be addressed in a future version. See Concurrency Status for more details.
Note that
extern functions are bound by the C ABI, and therefore none of this applies to them.
Rewrite Rand Functions
Marc Tiehuis writes:
We now use a generic Rand structure which abstracts the core functions from the backing engine.
The old Mersenne Twister engine is removed and replaced instead with three alternatives:
- Pcg32
- Xoroshiro128+
- Isaac64
These should provide sufficient coverage for most purposes, including a CSPRNG using Isaac64. Consumers of the library that do not care about the actual engine implementation should use
DefaultPrng and
DefaultCsprng.
Error Return Traces across async/await
One of the problems with non-blocking programming is that stack traces and exceptions are less useful, because the actual stack trace points back to the event loop code.
In Zig 0.3.0, Error Return Traces work across suspend points. This means you can use
try as the main error handling strategy, and when an error bubbles up all the way, you'll still be able to find out where it came from:
const std = @import("std"); const event = std.event; const fs = event.fs; test "unwrap error in async fn" { var da = std.heap.DirectAllocator.init(); defer da.deinit(); const allocator = &da.allocator; var loop: event.Loop = undefined; try loop.initMultiThreaded(allocator); defer loop.deinit(); const handle = try async<allocator> openTheFile(&loop); defer cancel handle; loop.run(); } async fn openTheFile(loop: *event.Loop) void { const future = (async fs.openRead(loop, "does_not_exist.txt") catch unreachable); const fd = (await future) catch unreachable; }
$ zig test test.zig Test 1/1 unwrap error in async fn...attempt to unwrap error: FileNotFound std/event/fs.zig:367:5: 0x22cb15 in ??? (test) return req_node.data.msg.Open.result; ^ std/event/fs.zig:374:13: 0x22e5fc in ??? (test) return await (async openPosix(loop, path, flags, os.File.default_mode) catch unreachable); ^ test.zig:22:31: 0x22f34b in ??? (test) const fd = (await future) catch unreachable; ^ std/event/loop.zig:664:25: 0x20c147 in ??? (test) resume handle; ^ std/event/loop.zig:543:23: 0x206dee in ??? (test) self.workerRun(); ^ test.zig:17:13: 0x206178 in ??? (test) loop.run(); ^ Tests failed. Use the following command to reproduce the failure: zig-cache/test
Note that this output contains 3 components:
- An error message:
attempt to unwrap error: FileNotFound
- An error return trace. The error was first returned at
fs.zig:367:5and then returned at
fs.zig:374:13. You could go look at those source locations for more information.
- A stack trace. Once the error came back from
openRead, the code tried to
catch unreachablewhich caused the panic. You can see that the stack trace does, in fact, go into the event loop as described above.
It is important to note in this example, that the error return trace survived despite the fact that the event loop is multi-threaded, and any one of those threads could be the worker thread that resumes an async function at the
await point.
This feature is enabled by default for Debug and ReleaseSafe builds, and disabled for ReleaseFast and ReleaseSmall builds.
This is just the beginning of an exploration of what debugging non-blocking behavior could look like in the future of Zig. See Concurrency Status for more details.
New Async Call Syntax
Instead of
async(allocator) call(), now it is
async<allocator> call().
This fixes syntax ambiguity when leaving off the allocator, and fixes parse failure when call is a field access.
This sets a precedent for using
<
> to pass arguments to a keyword. This will affect
enum,
union,
fn, and
align (see #661).
ReleaseSmall Mode
Alexandros Naskos contributed a new build mode.
$ zig build-exe example.zig --release-small
- Medium runtime performance
- Safety checks disabled
- Slow compilation speed
- Small binary size
Alexandros Naskos bravely dove head-first into the deepest, darkest parts of the Zig compiler and implemented an incredibly useful builtin function: @typeInfo.
This function accepts a
type as a parameter, and returns a compile-time known value of this type:
pub const TypeInfo = union(TypeId) { Type: void, Void: void, Bool: void, NoReturn: void, Int: Int, Float: Float, Pointer: Pointer, Array: Array, Struct: Struct, ComptimeFloat: void, ComptimeInt: void, Undefined: void, Null: void, Optional: Optional, ErrorUnion: ErrorUnion, ErrorSet: ErrorSet, Enum: Enum, Union: Union, Fn: Fn, Namespace: void, BoundFn: Fn, ArgTuple: void, Opaque: void, Promise: Promise, pub const Int = struct { is_signed: bool, bits: u8, }; pub const Float = struct { bits: u8, }; pub const Pointer = struct { size: Size, is_const: bool, is_volatile: bool, alignment: u32, child: type, pub const Size = enum { One, Many, Slice, }; }; pub const Array = struct { len: usize, child: type, }; pub const ContainerLayout = enum { Auto, Extern, Packed, }; pub const StructField = struct { name: []const u8, offset: ?usize, field_type: type, }; pub const Struct = struct { layout: ContainerLayout, fields: []StructField, defs: []Definition, }; pub const Optional = struct { child: type, }; pub const ErrorUnion = struct { error_set: type, payload: type, }; pub const Error = struct { name: []const u8, value: usize, }; pub const ErrorSet = struct { errors: []Error, }; pub const EnumField = struct { name: []const u8, value: usize, }; pub const Enum = struct { layout: ContainerLayout, tag_type: type, fields: []EnumField, defs: []Definition, }; pub const UnionField = struct { name: []const u8, enum_field: ?EnumField, field_type: type, }; pub const Union = struct { layout: ContainerLayout, tag_type: ?type, fields: []UnionField, defs: []Definition, }; pub const CallingConvention = enum { Unspecified, C, Cold, Naked, Stdcall, Async, }; pub const FnArg = struct { is_generic: bool, is_noalias: bool, arg_type: ?type, }; pub const Fn = struct { calling_convention: CallingConvention, is_generic: bool, is_var_args: bool, return_type: ?type, async_allocator_type: ?type, args: []FnArg, }; pub const Promise = struct { child: ?type, }; pub const Definition = struct { name: []const u8, is_pub: bool, data: Data, pub const Data = union(enum) { Type: type, Var: type, Fn: FnDef, pub const FnDef = struct { fn_type: type, inline_type: Inline, calling_convention: CallingConvention, is_var_args: bool, is_extern: bool, is_export: bool, lib_name: ?[]const u8, return_type: type, arg_names: [][] const u8, pub const Inline = enum { Auto, Always, Never, }; }; }; }; };
This kicks open the door for compile-time reflection, especially when combined with the fact that Jimmi Holst Christensen implemented @field, which performs field access with a compile-time known name:
const std = @import("std"); const assert = std.debug.assert; test "@field" { const Foo = struct { one: i32, two: bool, }; var f = Foo{ .one = 42, .two = true, }; const names = [][]const u8{ "two", "one" }; assert(@field(f, names[0]) == true); assert(@field(f, names[1]) == 42); @field(f, "one") += 1; assert(@field(f, "on" ++ "e") == 43); }
This has the potential to be abused, and so the feature should be used carefully.
After Jimmi implemented
@field, he improved the implementation of
@typeInfo and fixed several bugs. And now, the combination of these builtins is used to implement struct printing in userland:
const std = @import("std"); const Foo = struct { one: i32, two: *u64, three: bool, }; pub fn main() void { var x: u64 = 1234; var f = Foo{ .one = 42, .two = &x, .three = false, }; std.debug.warn("here it is: {}\n", f); }
Output:
here it is: Foo{ .one = 42, .two = u64@7ffdda208cf0, .three = false }
See std/fmt/index.zig:15 for the implementation.
Now that we have
@typeInfo, there is one more question to answer: should there be a function which accepts a
TypeInfo value, and makes a type out of it?
This hypothetical feature is called
@reify, and it's a hot topic. Although undeniably powerful and useful, there is concern that it would be too powerful, leading to complex meta-programming that goes against the spirit of simplicity that Zig stands for.
Improve cmpxchg
@cmpxchg is removed. @cmpxchgStrong and @cmpxchgWeak are added.
The functions have operand type as the first parameter.
The return type is
?T where
T is the operand type.
New Type: f16
Ben Noordhuis implemented
f16. This is guaranteed to be IEEE-754-2008 binary16 format, even on systems that have no hardware support, thanks to the additions to compiler_rt that Ben contributed. He also added support for
f16 to
std.math functions such as
isnormal and
fabs.
All Integer Sizes are Primitives
Zig 0.2.0 had primitive types for integer bit widths of 2-8, 16, 29, 32, 64, 128. Any number other than that, and you had to use @IntType to create the type. But you would get a compile error if you shadowed one of the above bit widths that already existed, for example with
const u29 = @IntType(false, 29);
Needless to say, this situation was unnecessarily troublesome (#745). And so now arbitrary bit-width integers can be referenced by using an identifier of
i or
u followed by digits. For example, the identifier
i7 refers to a signed 7-bit integer.
u0 is a 0-bit type, which means:
@sizeOf(u0) == 0
- No actual code is generated for loads and stores of this type of value.
- The value of a
u0as always the compile-time known value of
0.
i0 doesn't make sense and will probably crash the compiler.
Although Zig defines arbitrary integer sizes to support all primitive operations, if you try to use, for example, multiplication on 256-bit integers:
test "large multiplication" { var x: u256 = 0xabcd; var y: u256 = 0xefef; var z = x * y; }
Then you'll get an error like this:
LLVM ERROR: Unsupported library call operation!
Zig isn't supposed to be letting LLVM leak through here, but that's a separate issue. What's happening is that normally if a primitive operation such as multiplication of integers cannot be lowered to a machine instruction, LLVM will emit a library call to compiler_rt to perform the operation. This works for up to 128-bit multiplication, for example. However compiler_rt does not define an arbitrary precision multiplication library function, and so LLVM is not able to generate code.
It is planned to submit a patch to LLVM which adds the ability to emit a lib call for situations like this, and then Zig will include the arbitrary precision multiplication function in Zig's compiler_rt.
In addition to this, Zig 0.3.0 fixes a bug where
@IntType was silently wrapping the bit count parameter if it was greater than pow(2, 32).
Improved f128 Support
Marc Tiehuis & Ben Noordhuis solved the various issues that prevented
f128 from being generally useful.
- Fix hex-float parsing. -Marc Tiehuis (#495)
- Add compiler-rt functions to support
f128. -Marc Tiehuis
__floatunditf
__floatunsitf
__floatunsitf
__floatuntitf
__floatuntisf
__trunctfdf2
__trunctfsf2
__floattitf
__floattidf
__floattisf
- Alignment fix and allow rudimentary f128 float printing. -Marc Tiehuis
- Fix f128 remainder division bug. The modulo operation computed rem(b+rem(a,b), b) which produces -1 for a=1 and b=2. Switch to a - b * trunc(a/b) which produces the expected result, 1. -Ben Noordhuis (#1137)
Build Artifact Caching
Zig now supports global build artifact caching. This feature is one of those things that you can generally ignore, because it "just works" without any babysitting.
By default, compilations are not cached. You can enable the global cache for a compilation by using
--cache on:
andy@xps:~/tmp$ time zig build-exe hello.zig real 0m0.414s user 0m0.369s sys 0m0.049.412s user 0m0.377s sys 0m0.038.012s user 0m0.009s sys 0m0.003s
When the cache is on, the output is not written to the current directory. Instead, the output is kept in the cache directory, and the path to it is printed to stdout.
This is off by default, because this is an uncommon use case. The real benefit of build artifact caching comes in 3 places:
- zig run, where it is enabled by default:
andy@xps:~/tmp$ time zig run hello.zig Hello, world! real 0m0.553s user 0m0.500s sys 0m0.055s andy@xps:~/tmp$ time zig run hello.zig Hello, world! real 0m0.013s user 0m0.007s sys 0m0.006s
- zig build, so that your build script only has to build once.
- When building an executable or shared library, Zig must build
compiler_rt.oand
builtin.ofrom source, for the given target. This usually only has to be done once ever, which is why other compilers such as gcc ship with these components already built. The problem with that strategy is that you have to build a special version of the compiler for cross-compiling. With Zig, you can always build for any target, on any target.
So caching these artifacts provides a happy solution.
The cache is perfect; there are no false positives. You could even fix a bug in
memcpy in the system's libc, and Zig will detect that its own code has (indirectly) been updated, and invalidate the cache entry.
If you use
zig build-exe, Zig will still create a
zig-cache directory in the current working directory in order to store an intermediate
.o file. This is because on MacOS, the intermediate .o file stores the debug information, and therefore it needs to stick around somewhere sensible for Stack Traces to work.
Likewise, if you use
zig test, Zig will put the test binary in the
zig-cache directory in the current working directory. It's useful to leave the test binary here so that the programmer can use a debugger on it or otherwise inspect it.
The
zig-cache directory is cleaner than before, however. For example, the
builtin.zig file is no longer created there. It participates in the global caching system, just like
compiler_rt.o. You can use
zig builtin to see the contents of
@import("builtin").
Compatibility with Valgrind
I noticed that valgrind does not see Zig's debug symbols (#896):
pub fn main() void { foo().* += 1; } fn foo() *i32 { return @intToPtr(*i32, 10000000); }
==24133== Invalid read of size 4 ==24133== at 0x2226D5: ??? (in /home/andy/downloads/zig/build/test) ==24133== by 0x2226A8: ??? (in /home/andy/downloads/zig/build/test) ==24133== by 0x222654: ??? (in /home/andy/downloads/zig/build/test) ==24133== by 0x2224B7: ??? (in /home/andy/downloads/zig/build/test) ==24133== by 0x22236F: ??? (in /home/andy/downloads/zig/build/test)
After digging around, I was able to reproduce the problem using only Clang and LLD:
static int *foo(void) { return (int *)10000000; } int main(void) { int *x = foo(); *x += 1; }
If this C code is built with Clang and linked with LLD, Valgrind has the same issue as with the Zig code.
I sent a message to the Valgrind mailing list, and they suggested submitting a bug fix to Valgrind. That's a good idea. I'm a little busy with Zig development though - anybody else want to take a crack at it?
In the meantime, Zig now has a
--no-rosegment flag, which works around the bug. It should only be used for this purpose; the flag will likely be removed once Valgrind fixes the issue upstream and enough time passes that the new version becomes generally available.
$ zig build-exe test.zig --no-rosegment $ valgrind ./test ==24241== Invalid read of size 4 ==24241== at 0x221FE5: main (test.zig:2)
Zig is now on Godbolt Compiler Explorer
Marc Tiehuis added Zig support, and then worked with the Compiler Explorer team to get it merged upstream and deployed.
The command line API that Compiler Explorer uses is covered by Zig's main test suite to ensure that it continues working as the language evolves.
zig init-lib and init-exe
zig init-lib can be used to initialize a zig build project in the current directory which will create a simple library:
$ zig init-lib Created build.zig Created src/main.zig Next, try `zig build --help` or `zig build test` $ zig build test Test 1/1 basic add functionality...OK All tests passed.
Likewise,
zig init-exe initializes a simple application:
$ zig init-exe Created build.zig Created src/main.zig Next, try `zig build --help` or `zig build run` $ zig build run All your base are belong to us.
The main Zig test suite tests this functionality so that it will not regress as Zig continues to evolve.
Concurrency Status
Concurrency is now solved. That is, there is a concrete plan for how concurrency will work in Zig, and now it's a matter of implementing all the pieces.
First and foremost, Zig supports low-level control over hardware. That means that it has atomic primitives:
...and it means that you can directly spawn kernel threads using standard library functions:
const std = @import("std"); const assert = std.debug.assert; const builtin = @import("builtin"); const AtomicRmwOp = builtin.AtomicRmwOp; const AtomicOrder = builtin.AtomicOrder; test "spawn threads" { var shared_ctx: i32 = 1; const thread1 = try std.os.spawnThread({}, start1); const thread2 = try std.os.spawnThread(&shared_ctx, start2); const thread3 = try std.os.spawnThread(&shared_ctx, start2); const thread4 = try std.os.spawnThread(&shared_ctx, start2); thread1.wait(); thread2.wait(); thread3.wait(); thread4.wait(); assert(shared_ctx == 4); } fn start1(ctx: void) u8 { return 0; } fn start2(ctx: *i32) u8 { _ = @atomicRmw(i32, ctx, AtomicRmwOp.Add, 1, AtomicOrder.SeqCst); return 0; }
On POSIX targets, when you link against libc, the standard library uses pthreads; otherwise it uses its own lightweight kernel thread implementation.
You can use mutexes, signals, condition variables, and all those things. Anything you can accomplish in C, you can accomplish in Zig.
However, the standard library provides a higher level concurrency abstraction, designed for optimal performance, debuggability, and structuring code to closely model the problems that concurrency presents.
The abstraction is built on two language features: stackless coroutines and
async/
await syntax. Everything else is implemented in userland.
std.event.Loop creates a kernel thread pool matching the number of logical CPUs. It can then be used for non-blocking I/O that will be dispatched across the thread pool, using the platform-native API:
- Windows - I/O Completion Ports
- MacOS - kqueue
- Linux - epoll
This is a competitor to libuv, except multi-threaded.
Once you have an event loop, all of the
std.event API becomes available to use:
std.event.Channel- Many producer, many consumer, thread-safe, runtime configurable buffer size. When buffer is empty, consumers suspend and are resumed by producers. When buffer is full, producers suspend and are resumed by consumers.
std.event.Future- A value that many consumers can
await.
std.event.Group- A way to
awaitmultiple
asyncoperations.
std.event.Lock- Ensures only one thread gets access to a resource, without blocking a kernel thread.
std.event.RwLock- Same as Lock except allows multiple readers to access data simultaneously.
std.event.fs- File system operations based on
async/
awaitsyntax.
std.event.tcp- Network operations based on
async/
awaitsyntax.
All of these abstractions provide convenient APIs based on
async/
await syntax, making it practical for API users to model their code with maximally efficient concurrency. None of these abstractions block or use mutexes; when an API user must suspend, control flow goes to the next coroutine waiting to run, if any. If no coroutines are waiting to run, the application will sit idly, waiting for an event from the respective platform-native API (e.g. epoll on Linux).
As an example, here is a snippet from a test in the standard library:
async fn testFsWatch(loop: *Loop) !void { const file_path = try os.path.join(loop.allocator, test_tmp_dir, "file.txt"); defer loop.allocator.free(file_path); const contents = \\line 1 \\line 2 ; const line2_offset = 7; try await try async fs.writeFile(loop, file_path, contents); const read_contents = try await try async fs.readFile(loop, file_path, 1024 * 1024); assert(mem.eql(u8, read_contents, contents)); var watch = try fs.Watch(void).create(loop, 0); defer watch.destroy(); assert((try await try async watch.addFile(file_path, {})) == null); const ev = try async watch.channel.get(); var ev_consumed = false; defer if (!ev_consumed) cancel ev; const fd = try await try async fs.openReadWrite(loop, file_path, os.File.default_mode); { defer os.close(fd); try await try async fs.pwritev(loop, fd, [][]const u8{"lorem ipsum"}, line2_offset); } ev_consumed = true; switch ((try await ev).id) { WatchEventId.CloseWrite => {}, WatchEventId.Delete => @panic("wrong event"), } const contents_updated = try await try async fs.readFile(loop, file_path, 1024 * 1024); assert(mem.eql(u8, contents_updated, \\line 1 \\lorem ipsum )); }
You can see that even though Zig is a language with manual memory management that insists on handling every possible error, it manages to be quite high level using these event-based APIs.
Now, there are some problems to solve:
- The way that canceling a coroutine works is currently unsound. - I know how to fix this, but it'll have to be in 0.4.0. Unfortunately it's causing occasional test failures.
- Lack of a guarantee about whether an async function call allocates memory or not. - In theory, there are many cases where Zig should be able to guarantee that an async function call will not allocate memory for the coroutine frame. However in practice, using LLVM's coroutines API, it will always result in an allocation.
- LLVM's coroutines implementation is buggy - Right now Zig sadly is forced to disable optimizations for
asyncfunctions because LLVM has a bug where Mem2Reg turns correct coroutine frame spills back into incorrect parameter references.
- LLVM's coroutines implementation is slow - When I analyzed the compilation speed of Zig, even with optimizations off, LLVM takes up over 80% of the time. And for the zig behavioral tests, even though coroutines are a tiny percent of the code, LLVM's coroutine splitting pass takes up 30% of that time.
And so, the plan is to rework coroutines, without using any of LLVM's coroutines API. Zig will implement coroutines in the frontend, and LLVM will see only functions and structs. This is how Rust does it, and I think it was a strong choice.
The coroutine frame will be in a struct, and so Zig will know the size of it at compile-time, and it will solve the problem of guaranteeing allocation elision - the
async callsite will simply have to provide the coroutine frame pointer in order to create the promise.
This will also be relevant for recursion; stackless function calls do not count against the static stack size upper bound calculation. See Recursion Status for more details.
Self-Hosted Compiler Status
The self-hosted compiler is well underway. Here's a 1 minute demo of the self-hosted compiler watching source files and rebuilding.
The self-hosted compiler cannot do much more than Hello World at the moment, but it's being constructed from the ground up to fully take advantage of multiple cores and in-memory caching. In addition, Zig's error system and other safety features are making it easy to write reliable, robust code. Between stack traces, error return traces, and runtime safety checks, I barely even need a debugger.
Marc Tiehuis contributed a Big Integer library, which the self-hosted compiler is using for integer literals and compile-time math operations.
Writing the self-hosted compiler code revealed to me how coroutines should work in Zig. All the little details and ergonomics are clear to me now. And so before I continue any further on the self-hosted compiler, I will use this knowledge to rework coroutines and solve the problems with them. See Concurrency Status for more details.
As a reminder, even when the self-hosted compiler is complete, Zig will forever be stuck with the stage1 C++ compiler code. See The Grand Bootstrapping Plan for more details.
The self-hosted compiler is successfully sharing some C++ code with the stage1 compiler. For example the libLLVM C++ API wrapper is built into a static library, which then exports a C API wrapper. The self-hosted compiler links against this static library in order to make libLLVM C++ API calls via the C API wrapper. In addition, the Microsoft Visual Studio detection code requires the Windows COM API, which is also C++, and so a similar strategy is used. I think it's pretty neat that the build system builds a static library once and then ends up linking against it twice - one for each of the two compiler stages!
Recursion Status
I've said before that recursion is one of the enemies of perfect software, because it represents a way that a program can fail with no foolproof way of preventing it. With recursion, pick any stack size and I'll give you an input that will crash your program. Embedded developers are all too familiar with this problem.
It's always possible to rewrite code using an explicit stack using heap allocations, and that's exactly what Jimmi did in the self-hosted parser.
On the other hand, when recursion fits the problem, it's significantly more clear and maintainable. It would be a real shame to have to give it up.
I researched different ways that Zig could keep recursion, even when we introduce statically known stack upper bound size. I came up with a proof of concept for @newStackCall, a builtin function that calls a function using an explicitly provided new stack. You can find a usage example in the documentation by following that link.
This works, and it does break call graph cycles, but it would be a little bit awkward to use. Because if you allocate an entire new stack, it has to be big enough for the rest of the stack upper bound size, but in a recursive call, which should be only one stack frame, it would overallocate every time.
So that's why I think that the actual solution to this problem is Zig's stackless coroutines. Because Zig's coroutines are stackless, they are the perfect solution for recursion (direct or indirect). With the reworking of coroutines, it will be possible to put the coroutine frame of an async function anywhere - in a struct, in the stack, in a global variable - as long as it outlives the duration of the coroutine. See Concurrency for more details.
So - although recursion is not yet solved, we know enough to know that recursion is OK to use in Zig. It does suffer from the stack overflow issue today, but in the future we will have a compile error to prevent call graph cycles. And then this hypothetical compile error will be solved by using
@newStackCall or stackless functions (but probably stackless functions). Once recursion is solved, if stackless functions turn out to be the better solution, Zig will remove
@newStackCall from the language, unless someone demonstrates a compelling use case for it.
For now, use recursion whenever you want; you'll know when it's time to update your code.
WebAssembly Status
The pieces for web assembly are starting to come together.
Ben Noordhuis fixed support for
--target-arch wasm32 (#1094).
LLVM merged my patch to make WebAssembly a normal (non-experimental) target. But they didn't do it before the LLVM 7 release. So Zig 0.3.0 will not have WebAssembly support by default, but 0.4.0 will.
That being said, the static builds of Zig provided by ziglang.org have the WebAssembly target enabled.
Apart from this, there appears to be an issue with Zig's WebAssembly linker. Once this is solved, all that is left is to use WebAssembly in real life use cases, to work out the ergonomics, and solve the inevitable issues that arise.
Documentation
The language reference documentation now contains no JavaScript. The code blocks are pre-formatted with
std.zig.Tokenizer. The same is true for these release notes.
The
builtin.zig example code in the documentation is now automatically updated from the output of Zig, so the docs can't get out of date for this.
In addition to the above, the following improvements were made to the documentation:
Standard Library API Changes
std.mem.SplitIteratoris now public
std.math.atan2is now public
std.os.linuxnow makes public all the syscall numbers and syscall functions
std.math.casthandles signed integers
- added
std.zig.parse
- added
std.zig.parseStringLiteral
- added
std.zig.render
- added
std.zig.ast
- added
std.zig.Token
- added
std.zig.Tokenizer
- added
std.io.readLine
- replace
File.existswith
File.access. -Marc Tiehuis
- rename
std.rand.Randto
std.rand.Random
- added common hash/checksum functions. -Marc Tiehuis
- SipHash64, SipHash128
- Crc32 (fast + small variants)
- Adler32
- Fnv1a (32, 64 and 128 bit variants)
- Add Hmac function -Marc Tiehuis
- Added timestamp, high-perf. timer functions -tgschultz
std.os.time.sleep
std.os.time.posixSleep
std.os.time.timestamp
std.os.time.miliTimestamp
std.os.time.Timer
- Added complex number support. -Marc Tiehuis
std.math.complex.Complex
std.math.complex.abs
std.math.complex.acos
std.math.complex.acosh
std.math.complex.arg
std.math.complex.asin
std.math.complex.asinh
std.math.complex.atan
std.math.complex.atanh
std.math.complex.conj
std.math.complex.cos
std.math.complex.cosh
std.math.complex.exp
std.math.complex.log
std.math.complex.pow
std.math.complex.proj
std.math.complex.sinh
std.math.complex.sin
std.math.complex.sqrt
std.math.complex.tanh
std.math.complex.tan
std.math.complex.ldexp_cexp
- Added more slice manipulation functions. Thanks Braedon Wooding for the original PR. (#944)
std.mem.trimLeft
std.mem.trimRight
std.mem.trimRight
std.mem.lastIndexOfScalar
std.mem.lastIndexOfAny
std.mem.lastIndexOf
std.mem.endsWith
- Added
std.atomic.Stack
- Added
std.atomic.Queue
- Added
std.os.spawnThread. It works on all targets. On Linux, when linking libc, it uses pthreads, and when not linking libc, it makes syscalls directly.
- Add JSON decoder. -Marc Tiehuis
std.json.Token
std.json.StreamingParser
std.json.TokenStream
std.json.validate
std.json.ValueTree
std.json.ObjectMap
std.json.Value
std.json.Parser- A non-stream JSON parser which constructs a tree of Value's.
- Added
std.SegmentedList
- Removed functions from
std.Buffer. Instead users should use
std.io.BufferOutStream.
- Removed
std.Buffer.appendFormat
- Removed
std.Buffer.appendByte
- Removed
std.Buffer.appendByteNTimes
- Add arbitrary-precision integer to std. -Marc Tiehuis
std.math.big.Int
std.math.big.Limb
std.math.big.DoubleLimb
std.math.big.Log2Limb
std.os.Dirgains Windows support.
std.os.File.accessno longer depends on shlwapi.dll on Windows.
std.os.path.dirnamereturns null instead of empty slice when there is no directory component. This makes it harder to write bugs. (#1017)
- Reading from a file can return
error.IsDir.
- Added
std.math.floatMantissaBitsand
std.math.floatExponentBits-Marc Tiehuis
std.mem.Allocatorallows allocation of any 0 sized type, not just
void. -Jimmi Holst Christensen.
- Added
std.os.cpuCount
- Added
std.sort.ascand
std.sort.desc-Marc Tiehuis
std.fmt.formatadd
*for formatting things as pointers. (#1285)
std.fmt.formatadd integer binary output format. -Marc Tiehuis (#1313)
- Added
std.mem.secureZero. -Marc Tiehuis
This is identical toThe resulting assembly has been manually verified in --release-* modes.
mem.set(u8, slice, 0)except that it will never be optimized out by the compiler. Intended usage is for clearing secret data.
It would be valuable to test the 'never be optimized out' claim in tests but this is harder than initially expected due to how much Zig appears to know locally. May be doable with
@intToPtr,
@ptrToIntto get around known data dependencies but I could not work it out right now.
std.fmt.formathandles non-pointer struct/union/enums. Adds support for printing structs via reflection. (#1380)
- Many
std.osfile functions no longer require an allocator. They rely on
PATH_MAX, because even Windows, Linux, and MacOS syscalls will fail for paths longer than
PATH_MAX.
- Add
std.crypto.chaCha20IETFand
std.crypto.chaCha20With64BitNonce. -Shawn Landden & Marc Tiehuis
- Add poly1305 and x25519 crypto primitives. -Marc Tiehuis
These are translated from monocypher which has fairly competitive performance while remaining quite simple.Initial performance comparision:
Zig: Poly1305: 1423 MiB/s X25519: 8671 exchanges per second Monocypher: Poly1305: 1567 MiB/s X25519: 10539 exchanges per secondThere is room for improvement and no real effort has been made at all in optimization beyond a direct translation.
- Removed deprecated, unused Windows functions
std.os.windows.CryptAcquireContextA
std.os.windows.CryptReleaseContext
std.os.windows.CryptGenRandom
Thank you contributors!
- Tesla Ice Zhang fixed typos in the Zig grammar documentation and created The IntelliJ IDEA plugin for the Zig programming language
- Jay Weisskopf cleaned up the Zig documentation
- hellerve finished the Mac OS dir entry iterator code
- Raul Leal fixed an undeclared identifier error in readUntilDelimiterBuffer and incorrect number of parameters in readUntilDelimiterAlloc (#877)
- Wander Lairson Costa fixed the build process to find libxml2 and zlib correctly. (#847)
- tgschultz added more linux syscalls and constants to the std lib.
- tgschultz fixed compiler errors around Darwin code.
- Harry Eakins added readability improvements and a bug-fix to the standard library crypto throughput test.
- tgschultz Added DirectAllocator support for alignments bigger than os.page_size on posix systems. (#939)
- Braedon Wooding & Josh Wolfe Added UTF-8 encoding and decoding support. (#954)
- Alexandros Naskos Fixed a bug where comptime was being incorrectly applied across function definition boundaries. (#972)
- Braedon Wooding worked towards unifying the
std.ArrayListand
std.HashMapAPIs regarding iteration. (#981)
- Braedon Wooding added documentation for arg types and error inference.
- tgschultz added custom formatter support to
std.fmt.format.
- isaachier Fixed const-ness of buffer in
std.Buffer.replaceContentsmethod (#1065)
- isaachier Fixed error handling in
std.Buffer.fromOwnedSlice. (#1082)
- Arthur Elliott Added
std.ArrayList.setOrErrorso you can set a value without growing the underlying buffer, with range safety checks.
- marleck55 std/fmt: Use lowercase k for kilo in base 1000 (#1090)
- tgschultz added C string to fmt by using
{s}. (#1092)
- Alexandros Naskos Fixed optional types of zero bit types. (#1110)
- Jay Weisskopf Made
zig versioncompliant with SemVer with regards to the git revision metadata.
- Sahnvour fixed a compilation error on windows introduced by pointer reform.
- Bodie Solomon Fixed zig not finding std lib files on Darwin when the executable is a symlink. (#1117)
- Isaac Hier Fixed the increment operation for the comptime value
-1.
- Isaac Hier Fixed the compiler's internal path joining function when the dirname is empty.
- tgschultz Fixed standard library regressions from updated syntax. (#1162)
- Isaac Hier Improved the compile error for when the RHS of a shift is too large for the LHS. (#1168)
- Jay Weisskopf Fixed version detection for out-of-source builds.
- Isaac Hier Fixed an assertion crash on enum switch values
- wilsonk Fixed a build error in the crypto throughput test (#1211)
- Bas van den Berg Fixed
std.ArrayList.insertand added tests. (#1232)
- tgschultz Added
std.ArrayList.swapRemove. (#1230)
- Eduardo Sánchez Muñoz fixed bad code generated when an extern function returns a small struct. (#1234)
- Bas van den Berg fixed aligned reallocation. (#1237)
- Bas van den Berg improved realloc on fixed buffer allocator. (#1238)
- Wink Saville gave ArrayList tests consistent names. (#1253)
- Wink Saville added
std.ArrayList.swapRemoveOrError. (#1254)
- Jay Weisskopf Fixed minor documentation errors (#1256)
- kristopher tate Added more
std.os.posixconstants.
- kristopher tate Made tests skippable by returning
error.SkipZigTest
- Nathan Sharp Added
std.io.PeekStreamand
std.io.Slicestream. SliceStream is a read-only stream wrapper around a slice of bytes. It allows adapting algorithms which work on InStreams to in-memory data.
PeekStream is a stream wrapper which allows "putting back" bytes into the stream so that they can be read again. This will help make look-ahead parsers easier to write.
- dbandstra added int writing functions to OutStream, and skipBytes function to InStream (#1300)
- dbandstra add SliceOutStream, rename SliceStream to SliceInStream (#1301)
- Matthew D. Steele added "Comments" section to language reference (#1309)
- kristopher tate Windows: Call RtlGenRandom() instead of CryptGetRandom() (#1319)
- kristopher tate Add builtin function @handle() (#1297)
- kristopher tate better support for `_` identifier (#1204, #1320)
- Matthew D. Steele Fix the start-less-than-end assertion in std.rand.Random.range (#1325)
- Matthew D. Steele Fix a type error in std.os.linux.getpid() (#1326)
- Matthew D. Steele Add thread ID support to std.os.Thread (#1316)
- Shawn Landdendoc: @addWithOverflow also returns if overflow occured
- Shawn Landdenadded a red-black tree implementation to std
- Wink Saville fixed @atomicRmw not type checking correctly.
- prazzb Fixed LLVM detection at build time for some linux distros. (#1378)
- tgschultz fixed handling of [*]u8 when no format specifier is set. (#1379)
- Shawn Landden do not use an allocator when we don't need to because of the existance of PATH_MAX
- Raul Leal Allow implicit cast from
*[N]Tto
?[*]T(#1398)
- kristopher tate Added a test for writing u64 integers (#1401)
- tgschultz Fixed compile error when passing enum to fmt
- tgschultz Implemented tagged union support in
std.fmt.format(#1432)
- Raul Leal Allow implicit cast from
*Tand
[*]Tto
?*c_void
- kristopher tate correct version comparison for detecting msvc (fixes #1438)
- kristopher tate allow bytes to be printed-out as hex (#1358)
- Shawn Landden updated incorrect documentation comments (#1456)
- hfcc Added compilation error when a non-float is given to
@floatToInt
- kristopher tate X25519: Fix createPublicKey signature and add test (#1480)
- Sahnvour Fixes a path corruption when compiling on windows. (#1488)
- Bas van den Berg Add capacity and appendAssumeCapacity to ArrayList
- emekoifixed WriteFile segfault
- kristopher tate fixed handling of file paths with spaces in the cache
- Wink Saville fixed build failures of FileOutStream/FileInStream from syntax changes
- emekoifixed compiling on mingw (#1542)
- Raul Leal added builtin functions:
@byteOffsetOfand
@bitOffsetOf.
- Christian Wesselhoeft fixed BufferOutStream import - it is defined in io.zig.
- Wink Saville fixed a typo in a doc comment
- Wink Saville fixed a build issue with GCC 8
- Wink Saville refactored some parsing code in the self-hosted compiler
- Jay Weisskopf improved the help output of the command line interface
Miscellaneous Improvements
- LLVM, Clang, and LLD dependencies are updated to 7.0.0.
- Greatly increased test coverage.
- std.os - getting dir entries works on Mac OS.
- allow integer and float literals to be passed to var params. See #623
- add @sqrt built-in function. #767
- The compiler exits with error code instead of abort() for file not found.
- Add @atomicLoad builtin.
- stage1 compiler defaults to installing in the build directory
- ability to use async function pointers
- Revise self-hosted command line interface
- Add exp/norm distributed random float generation. -Marc Tiehuis
- On linux,
clock_gettimeuses the VDSO optimization, even for static builds.
- Better error reporting for missing libc on Windows. (#931)
- Improved fmt float-printing. -Marc Tiehuis
- Fix errors printing very small numbers
- Add explicit scientific output mode
- Add rounding based on a specific precision for both decimal/exp modes.
- Test and confirm exp/decimal against libc for all
f32values. Various changes to better match libc.
- The crypto throughput test now uses the new
std.os.timemodule. -Marc Tiehuis
- Added better support for unpure enums in tranlate C. -Jimmi Holst Christensen (#975)
- Made container methods that can be
const,
const. -Jimmi Holst Christensen
- Tagged union field access prioritizes members over enum tags. (#959)
std.fmt.formatsupports
{B}for human readable bytes using SI prefixes.
- Zig now knows the C integer sizes for OpenBSD. Thanks to Jan Schreib for this information. (#1016)
- Renamed integer literal type and float literal type to
comptime_intand
comptime_float. -Jimmi Holst Christensen
- @canImplicitCast is removed. Nobody will miss it.
- Allow access of
array.lenthrough a pointer. -Jimmi Holst Christensen
- Optional pointers follow const-casting rules. Any
*T -> ?*Tcast is allowed implicitly, even when it occurs deep inside the type, and the cast is a no-op at runtime.
- Add i128 compiler-rt div/mul support. -Marc Tiehuis
- Add target C int type information for msp430 target. #1125
- Add
__extenddftf2and
__extendsftf2to zig's compiler-rt.
- Add support for zig to compare comptime array values. -Jimmi Holst Christensen (#1167)
- Support
--emitin
testcommand. -Ben Noordhuis (#1175)
- Operators now throw a compiler error when operating on undefined values. -Jimmi Holst Christensen (#1185)
- Always link against compiler_rt even when linking libc. Sometimes libgcc is missing things we need, so we always link compiler_rt and rely on weak linkage to allow libgcc to override.
- Add compile error notes for where struct definitions are. (#1202)
- Add @popCount.
- Cleaner output from zig build when there are compile errors.
- new
builder.addBuildOptionAPI. -Josh Wolfe
- Add compile error for disallowed types in extern structs. (#1218)
- build system: add
-Dskip-releaseoption to test faster. -Andrew Kelley & Jimmi Holst Christensen
- allow
==for comparing optional pointers. #658
- allow implicit cast of undefined to optional
- switch most windows calls to use W versions instead of A. (#534)
- Better anonymous struct naming. This makes anonymous structs inherit the name of the function they are in only when they are the return expression. Also document the behavior and provide examples. (#1243)
- compile error for @noInlineCall on an inline fn (#1133)
- stage1: use os_path_resolve instead of os_path_real to canonicalize imports. This means that softlinks can represent different files, but referencing the same absolute path different ways still references the same import.
- rename
--enable-timing-infoto
-ftime-reportto match clang, and have it print llvm's internal timing info.
- Binary releases now include the LICENSE file.
- Overhaul standard library api for getting random integers. -Josh Wolfe (#1578)
Bug Fixes
- fix incorrect compile error on inferred error set from async function #856
- fix
promise->Tsyntax not parsed #857
- fix crash when compile error in analyzing @panic call
- fix compile time array concatenation for slices #866
- fix off-by-one error in all standard library crypto functions. -Marc Tiehuis
- fix use-after-free in BufMap.set() - Ben Noordhuis #879
- fix llvm assert on version string with git sha -Ben Noordhuis #898
- codegen: fix not putting llvm allocas together
- fix calling convention at callsite of zig-generated fns
- inline functions now must be stored in const or comptime var. #913
- fix linux implementation of self exe path #894
- Fixed looking for windows sdk when targeting linux. -Jimmi Holst Christensen
- Fixed incorrect exit code when build.zig cannot be created. -Ben Noordhuis
- Fix os.File.mode function. -Marc Tiehuis
- Fix OpqaueType usage in exported c functions. -Marc Tiehuis
- Added
memmoveto builtin.o. LLVM occasionally generates a dependency on this function.
- Fix
std.BufMaplogic. -Ben Noordhuis
- Fix undefined behavior triggered by fn inline test
- Build system supports LLVM_LIBDIRS and CLANG_LIBDIRS. -Ben Noordhuis
- The Zig compiler does exit(1) instead of abort() for file not found.
- Add compile error for invalid deref on switch target. (#945)
- Fix printing floats in release mode. -Marc Tiehuis (#564, #669, #928)
- Fix @shlWithOverflow producing incorrect results when used at comptime (#948)
- Fix labeled break causing defer in same block to fail compiling (#830)
- Fix compiler crash with functions with empty error sets. -Jimmi Holst Christensen (#762, #818)
- Fix returning literals from functions with inferred error sets. -Jimmi Holst Christensen (#852)
- Fix compiler crash for
.ReturnTypeand @ArgType on unresolved types. -Jimmi Holst Christensen (#846)
- Fix compiler-rt ABI for x86_64 windows
- Fix extern enums having the wrong size. -Jimmi Holst Christensen (#970)
- Fix bigint multi-limb shift and masks. -Marc Tiehuis
- Fix bigint shift-right partial shift. -Marc Tiehuis
- translate-c: fix typedef duplicate definition of variable. (#998)
- fix comptime code modification of global const. (#1008)
- build: add flag to LLD to fix gcc 8 build. (#1013)
- fix AtomicFile for relative paths. (#1017)
- fix compiler assert when trying to unwrap return type
type. -Jimmi Holst Christensen
- fix crash when evaluating return type has compile error. (#1058)
- Fix Log2Int type construction. -Marc Tiehuis
- fix std.os.windows.PathFileExists specified in the wrong DLL (#1066)
- Fix structs that contain types which require comptime. (#586)
Now, if a struct has any fields which require comptime, such as
type, then the struct is marked as requiring comptime as well. Same goes for unions.
This means that a function will implicitly be called at comptime if the return type is a struct which contains a field of type
type.
- fix assertion failure when debug printing comptime values
- fix @tagName handling specified enum values incorrectly. (#976, #1080)
- fix ability to call mutating methods on zero size structs. (#838)
- disallow implicit casts that break rules for optionals. (#1102)
- Fix windows x86_64 i128 ABI issue. -Marc Tiehuis
- Disallow opaque as a return type of function type syntax. (#1115)
- Fix compiler crash for invalid enums. (#1079, #1147)
- Fix crash for optional pointer to empty struct. (#1153)
- Fix comptime
@tagNamecrashing sometimes. (#1118)
- Fix coroutine accessing freed memory. (#1164)
- Fix runtime libc detection on linux depending on locale. (#1165)
- Fix await on early return when return type is struct.
- Fix iterating over a void slice. (#1203)
- Fix crash on
@ptrToIntof a
*void(#1192)
- fix crash when calling comptime-known undefined function ptr. #880, #1212
- fix
@setEvalBranchQuotanot respected in generic fn calls. #1257
- Allow pointers to anything in extern/exported declarations (#1258) -Jimmi Holst Christensen
- Prevent non-export symbols from clobbering builtins. (#1263)
- fix generation of error defers for fns inside fns. (#878)
- Fixed windows getPos. -Jimmi Holst Christensen
- fix logic for determining whether param requires comptime (#778, #1213)
- Fixed bug in LLD crashing when linking twice in the same process. (#1289)
- fix assertion failure when some compile errors happen
- add compile error for non-inline for loop on comptime type
- add compile error for missing parameter name of generic function
- add compile error for ignoring return value of while loop bodies (#1049)
- fix tagged union initialization with a runtime void (#1328)
- translate-c: fix for loops and do while loops with empty body
- fix incorrectly generating an unused const fn global (#1277)
- Fix builtin alignment type. -Marc Tiehuis (#1235)
- fix handling multiple extern vars with the same name
- fix llvm assertion failure when building std lib tests for macos (#1417)
- fix false negative determining if function is generic
- fix
@typeInfounable to distinguish compile error vs no-payload (#1421, #1426)
- fix crash when var in inline loop has different types (#917, #845, #741, #740)
- add compile error for function prototype with no body (#1231)
- fix invalid switch expression parameter. (#604)
- Translate-c: Check for error before working on while loop body. -Jimmi Holst Christensen (#1445)
- use the sret attribute at the callsite when appropriate. Thanks to Shawn Landden for the original pull request. (#1450)
- ability to
@ptrCastto
*void. (#960)
- compile error instead of segfault for unimplemented feature. (#1103)
- fix incorrect value for inline loop. (#1436)
- compile errors instead of crashing for unimplemented minValue/maxValue builtins
- add compile error for comptime control flow inside runtime block (#834)
- update throughput test to new File API (#1468)
- fix compile error on gcc 7.3.0. Only set -Werror for debug builds, and only for zig itself, not for embedded LLD. (#1474)
- stage1: fix emit asm with explicit output file (#1473)
- stage1: fix crash when invalid type used in array type (#1186)
- stage1 compile error instead of crashing for unsupported comptime ptr cast (#955)
- stage1: fix tagged union with no payloads (#1478)
- Add compile error for using outer scoped runtime variables from a fn defined inside it. (#876)
- stage1: improve handling of generic fn proto type expr. (#902)
- stage1: compile error instead of incorrect code for unimplemented C ABI. (#1411, #1481)
- add support for partial C ABI compatibility on x86_64. (#1411, #1264)
- fix crash when var init has compile error and then the var is referenced (#1483)
- fix incorrect union const value generation (#1381)
- fix incorrect error union const value generation (#1442)
- fix tagged union with only 1 field tripping assertion (#1495)
- add compile error for merging non- error sets (#1509)
- fix assertion failure on compile-time
@intToPtrof function
- fix tagged union with all void payloads but meaningful tag (#1322)
- fix alignment of structs. (#1248, #1052, #1154)
- fix crash when pointer casting a runtime extern function
- allow extern structs to have stdcallcc function pointers (#1536)
- add compile error for non-optional types compared against null (#1539)
- add compile error for
@ptrCast0 bit type to non-0 bit type
- fix codegen for
@intCastto
u0
- fix @bytesToSlice on a packed struct (#1551)
- fix implicit cast of packed struct field to const ptr (#966)
- implementation for bitcasting extern enum type to c_int (#1036)
- add compile error for slice.*.len (#1372)
- fix optional pointer to empty struct incorrectly being non-null (#1178)
- better string literal caching implementation We were caching the ConstExprValue of string literals, which works if you can never modify ConstExprValues. This premise is broken with `comptime var ...`.
So I implemented an optimization in ConstExprValue arrays, where it stores aFurthermore, before a ConstExprValue array is expanded into canonical form, it removes itself from the string literal cache. This fixes the issue, because before an array element is modified it would have to be expanded.
Buf *directly rather than an array of ConstExprValues for the elements, and then similar to array of undefined, it is expanded into the canonical form when necessary. However many operations can happen directly on the
Buf *, which is faster.
- add compile error for casting const array to mutable slice (#1565)
- fix
std.fmt.formatIntto handle upcasting to base int size
- fix comptime slice of pointer to array (#1565)
- fix comptime string concatenation ignoring slice bounds (#1362)
- stage1: unify 2 implementations of pointer deref. I found out there were accidentally two code paths in zig ir for pointer dereference. So this should fix a few bugs. (#1486)
- add compile error for slice of undefined slice (#1293)
- fix @compileLog having unintended side effects. (#1459)
- fix translate-c incorrectly translating negative enum init values (#1360)
- fix comptime bitwise operations with negative values (#1387, #1529)
- fix self reference through fn ptr field crash (#1208)
- fix crash on runtime index into slice of comptime type (#1435)
- fix implicit casting to
*c_void(#1588)
- fix variables which are pointers to packed struct fields (#1121)
- fix crash when compile error evaluating return type of inferred error set. (#1591)
- fix zig-generated DLLs not properly exporting functions. (#1443)
This Release Contains Bugs
Zig has known bugs.
The first release that will ship with no known bugs will be 1.0.0.
Roadmap
- Redo coroutines without using LLVM Coroutines and rework the semantics. See #1363 and #1194.
- Tuples instead of var args. #208
- Well-defined copy-eliding semantics. #287
- Self-hosted compiler. #89
- Get to 100% documentation coverage of the language
- Auto generated documentation. #21
- Package manager. #943
Active External Projects Using Zig
Thank you financial supporters!
Special thanks to those who donate monthly. We're now at $1,349 of the $3,000 goal. I hope this release helps to show how much time I've been able to dedicate to the project thanks to your support.
- Lauren Chavis
- Raph Levien
- Stevie Hryciw
- Andrea Orru
- Harry Eakins
- Filippo Valsorda
- jeff kelley
- Martin Schwaighofer
- Brendon Scheinman
- Ali Anwar
- Adrian Sinclair
- David Joseph
- Ryan Worl
- Tanner Schultz
- Don Poor
- Jimmy Zelinskie
- Thomas Ballinger
- David Hayden
- Audun Wilhelmsen
- Tyler Bender
- Matthew
- Mirek Rusin
- Peter Ronnquist
- Josh Gentry
- Trenton Cronholm
- Champ Yen
- Robert Paul Herman
- Caius
- Kelly Wilson
- Steve Perkins
- Clement Rey
- Eduard Nicodei
- Christopher A. Butler
- Colleen Silva-Hayden
- Wesley Kelley
- Jordan Torbiak
- Mitch Small
- Josh McDonald
- Jeff
- Paul Merrill
- Rudi Angela
- Justin B Alexander
- Ville Tuulos
- shen xizhi
- Ross Cousens
- Lorenz Vandevelde
- Ivan
- Jay Weisskopf
- William L Sommers
- Gerdus van Zyl
- Anthony J. Benik
- Brian Glusman
- Furkan Mustafa
- Le Bach
- Jordan Guggenheim
- Tyler Philbrick
- Marko Mikulicic
- Brian Lewis
- Matt Whiteside
- Elizabeth Ryan
- Thomas Lopatic
- Patricio Villalobos
- joe ardent
- John Goen
- Luis Alfonso Higuera Gamboa
- Jason Merrill
- Andriy Tyurnikov
- Sanghyeon Seo
- Neil Henning
- aaronstgeorge@gmail.com
- Raymond Imber
- Artyom Kazak
- Brian Orr
- Frans van den Heuvel
- Jantzen Owens
- David Bremner
- Veit Heller
- Benoit Jauvin-Girard
- Chris Rabuse
- Jeremy Larkin
- Rasmus Rønn Nielsen
- Aharon sharim
- Stephen Oates
- Quetzal Bradley
- Wink Saville
- S.D.
- George K
- Jonathan Raphaelson
- Chad Russell
- Alexandra Gillis
- Pradeep Gowda
- david karapetyan
- Lewis
- stdev
- Wojciech Miłkowski
- Jonathan Wright
- Ernst Rohlicek
- Alexander Ellis
- bb010g
- Pau Fernández
- Krishna Aradhi
- occivink
- Adrian Hatch
- Deniz Kusefoglu
- Dan Boykis
- Hans Wennborg
- Matus Hamorsky
- Ben Morris
- Tim Hutt
- Gudmund Vatn
- Tobias Haegermarck
- Martin Angers
- Christoph Müller
- Johann Muszynski
- Fabio Utzig
- Eigil Skjæveland
- Harry
- moomeme
- xash
- bowman han
- Romain Beaumont
- Nate Dobbins
- Paul Anderson
- Jon Renner
- Karl Syvert Løland
- Stanley Zheng
- myfreeweb
- Dennis Furey
- Dana Davis
- Ansis Malins
- Drew Carmichael
- Doug Thayer
- Henryk Gerlach
- Dylan La Com
- David Pippenger
- Matthew Steele
- tumdum
- Alex Alex
- Andrew London
- Jirka Grunt
- Dillon A
- Yannik
- VilliHaukka
- Chris Castle
- Antonio D'souza
- Silicon
- Damien Dubé
- Dbzruler72
- McSpiros
- Francisco Vallarino
- Shawn Park
- Simon Kreienbaum
- Gregoire Picquot
- Silicas
- James Haggerty
- Falk Hüffner
- allan
- Ahmad Tolba
- jose maria gonzalez ondina
- Adrian Boyko
- Benedikt Mandelkow
- Will Cassella
- Michael Weber
Thank you Andrea Orru for sending me a giant box of Turkish Delight
|
https://www.tefter.io/bookmarks/47072/readable
|
CC-MAIN-2020-05
|
refinedweb
| 11,502
| 56.66
|
Web server programs
CGI programs are executed by a web server program in response to a user request. Examples of web server programs include Apache ( ) which runs on all major operating systems and Internet Information Server (or IIS) which runs on Microsoft Windows only. Apache is the most popular web server by a very large margin, running 63.22% of over 14 million active Internet domains, with IIS running 26.14% of active domains (figures from for Feb 2002).
Web server programs also serve static HTML format web pages. When the web server detects that a requested URL (Uniform Resource Locator) is a CGI program , then instead of sending the program file as text to the web browser, it loads and runs the program, supplying input if there is any (see below) and sending the output of the program to the browser which requested the URL. The web server will implement a set of rules to decide whether or not a requested file is a CGI program, e.g. from it having a file extension of .pl or .cgi and it being installed in a particular folder or directory, e.g. one named cgi-bin .
CGI program input
A CGI program can obtain input from part of a URL after a ? mark e.g:
This URL can be input by the user, linked from a web page, or generated automatically.
A CGI can also obtain similar input as posted from a user submitted HTML form to the program URL. Here is an example <FORM> tag which specifies a remote CGI program which will process and handle the submitted form data:
<form action="../cgi-bin/responseform.cgi" method="post">
This URL is relative to the location of the static HTML page containing the form, but absolute URLs will work just as well. The CGI program doesn't have to be on the same web site as the form that provides it with input, though it often will be.
CGI program output
In the simplest case, CGI programs can start with a suitable HTTP (HyperText Transport Protocol) header and send their output as plain text. This CGI program is about as simple as they get:
#!C:/python21/python # Simplest Python CGI Program print "Content-type: text/plain\n" print "hello CGI world!"
When installed as python/simple.cgi relative to the site root (in this case ) it displays the plain text: hello CGI world! in the browser.
There are 3 active lines of code in this program.
#!C:/python21/python
Tells a (Windows based) Apache server to interpret the rest of the script using a python interpreter installed in c:\python21\python (note use of Unix/Internet style forward slashes to delimit the path) . The #! characters must be the first 2 characters in the script file, so that the web-server program interprets the rest of the first line as an interpreter path. This first line of the program isn't Python code: it tells the web-server that the rest of the program is Python code. On a Unix type system this line might be #!/usr/bin/python , or the path of the Python interpreter.
print "Content-type: text/plain\n"
This outputs the HTTP header stating that the rest of the output will be plain text. Normally we will be using "Content-type: text/html\n" in order to send formatted HTML output. The HTTP standard expects 2 newlines after HTTP headers (most of the time you only need the one), so we have to specify an extra \n in addition to the one Python print outputs by default.
print "hello CGI world!"
The output of this print statement goes straight to the browser and is displayed directly in the browser window.
Of course for something as simple as this you would probably prefer to use a static file rather than a Python CGI program. We can also get more attractive formatting by making the CGI program create its output in HTML format.
The reason for using CGI programs instead of static HTML files is to generate the information sent to the browser when the request is made by the user using data current at that time. Our next example tells the user the local time at the server at the time the request is serviced.
#!/usr/local/bin/python # Change top line to reflect where Python is installed on your system # Python local time CGI Program import time print "Content-type: text/html\n" print "<html><Head><Title>Hello Python CGI World</Title></Head>" print "<Body><H1>Hello Python CGI World !</H1>" to=time.localtime(time.time()) # time.time() parameter not needed Python >= 2.1 print "<p>The local time on this server is: %s </p>" % time.asctime(to) print "</body></html>"
Pointing our browser at the URL for this CGI gave the output:
The local time on this server is: Mon Mar 18 08:11:38 2002
Requesting the same URL a few seconds later gave an updated time.
The Python cgi module will parse the usual kinds of input to CGI programs. This works for additional information passed within the URL and for form data.
The URL encoding approach is useful where the URL itself is auto generated or for test purposes. For example a coded URL can be sent as part of a CGI generated web page or email. The recipient can confirm subscription to a mailing list simply by clicking on a link within this message. If using an email this can contain a subscription confirmation request with a URL containing a special randomly generated code which is sent to the email address requested, so if the confirmation-required message is suitably worded and the user clicks on the generated confirmation link this establishes:
a. That the person reading email at the address receiving this URL is consenting to join the mailing list.
b. That the person who provided this address to the server using a web form or other means is likely either to be the owner of this email address or to be acting upon their request.
The additional information part of the URL is much used in search engine queries and appears after a question mark (?) within the URL, so this is sometimes called a "query string". This consists of name=value pairs separated from each other using ampersands (&).
The following program: inputfields.cgi uses the FieldStorage class within the cgi module.
#!/usr/local/bin/python # Python CGI Program to get URL or Form data import cgi input=cgi.FieldStorage() print "Content-type: text/html\n" print "<html><Head><Title>CGI Input Fields</Title></Head>" print "<Body><H1> CGI Input Fields</H1>" print "<ul>" for key in input.keys(): print "<li>%s: %s</li>" % (key,input[key].value) print "</ul>" print "</body></html>"
This program is simple and works well enough for now, but we will need to develop it further to handle the situation where a key is used to access more than one value, which can happen with multiple selection form dialog boxes. Note that we need to use the value attribute of the object stored in the dictionary in which the cgi.FieldStorage class stores the form response. This value attribute will be a string if only one value for the relevant key was available.
When called using the URL:
This program displayed the following output as a HTML list:
CGI Form Fields
The same CGI program can be made to give the same output by posting the data using an HTML form. This approach is easier than hand-crafting URLs for applications requiring the end user to input the data to send to the CGI program.
Here is the form:
The following HTML code was used for this form:
<HTML> <HEAD> <TITLE>test form</TITLE> </HEAD> <BODY bgcolor="#ffffff"> <H1>Test Form</H1> <form action="inputfields.cgi" method="post"> <p>Please enter your name. * <INPUT TYPE="text" NAME="Name" SIZE="25" MAXLENGTH="40"></p><p> Please enter your email. * <INPUT TYPE="text" NAME="Email" SIZE="25" MAXLENGTH="40"></p><p> <INPUT TYPE="checkbox" NAME="mailing_list">Please include me on your mailing list</p><p> <INPUT TYPE="submit"> </form> </P> </BODY> </HTML>
and the following program output was obtained:
CGI Input Fields
The first 2 fields were provided by the text form fields, the mailing_list: on entry was provided by the <INPUT TYPE="checkbox" NAME="mailing_list"> form field.
These 2 approaches to handling CGI data input using the form tag method="post" attribute and URL encoding are complementary.
In a real mailing list application we would need to confirm that the address input by the user was correct. If a mischievous user submitted someone else's address, or someone incorrectly typed their address this could result in the wrong person being sent mail using the mailing list and becoming annoyed by this.
To confirm that an email address is correct we will get our CGI program to send a message to the mail address, asking the owner of the address to confirm the subscription or apologising for the error and asking them to ignore the message.
To keep this as simple as possible for now, we will obtain the confirmations using a manual exchange of email. For this, the mailing list owner needs to be emailed the request from the web submission form.
Sending mail from a Python program can be achieved using the smtplib module. Here SMTP stands for Simple Mail Transport Protocol, which is the format of the messages used to relay email over the Internet.
Here is a simple (non CGI) Python program that sends an email:
import smtplib,string fromaddress='webmaster@some_host.net' # make this an address to reply to toaddress='listowner@another.net' # change this to your own address message="Subject: a message from Python smtplib\n\nHello mail world!" server = smtplib.SMTP('localhost') # address of SMTP server server.set_debuglevel(1) # get mail server debug messages server.sendmail(fromaddress, toaddress, message) # send the message server.quit() # close mail server connection
Note the use of 2 newlines (\n\n) between the Subject: header and the body of the mail message. You need a blank line after message headers, so mail client programs can tell where the mail header ends and the message body begins. You could alternatively add ordinary strings together to construct the message, and put a pair of \n newlines between the headers (e.g. From: and Subject: message lines and the message body itself.
Sending debug messages to the console or Python command line is useful for this test example, so we can see if the SMTP server is handling our message correctly. In a real application we would prefer for the mail receipt by the outgoing SMTP server to be quiet, once we know that it is working correctly.
The above script only works if the localhost (address: 127.0.0.1) happens to have an SMTP mail server able to relay outgoing messages running on it. This works on my PC at home. However, it won't work on yours unless you happen to have installed and configured a SMTP mail server to run on it. If you haven't you will need to replace this address with either the numeric Internet address or better, the domain name of your Internet Service Provider's outgoing mail server. Your Internet service provider or network administrator should be able to give you the required address.
Having demonstrated how to generate and send mail automatically, some do's and don'ts must to be stated, so you know how avoid annoying other mail users:
a. Don't send email to people who have not given you their consent to receive these messages. If you do this, your Internet account will be cancelled.
b. Do obtain their positive consent before you put someone onto a mailing list. To do this you can send a single message to a CGI user who has given details on a form saying they want to go onto a mailing list. This can contain an acceptance code for addition to a mailing list which they have to return to indicate consent. Having a confirmation step protects against a mistyped address or malicious use.
c. Don't send forged email. It is easy to send messages with any made-up outgoing address. However, you should only send such to your own mail addresses or to those who know in advance whom the mail is coming from, to educate you or them about how easy this is in practice. Otherwise, only use outgoing addresses which you are entitled to use and to which replies can find you. In many countries forging letters with intent to deceive their recipients is a criminal offence.
d. When setting up automated mail facilities of any kind do give some thought to the effect of mail loops. Consider what could happen if you are a member of a mailing list and set up a badly-configured automatic holiday mail responder which replies to the list posting address that you are temporarily away from your office. Whenever a message from this list is received a message will be sent back out to the list which the list will send back to you, and which your program will reply to etc. until someone breaks this loop. Mail loops like this could subject every other list member to thousands of unwanted messages before someone finds out who you are and throws you off this list.
We will now make a CGI process the data received from our mailing list submission form, and send it in an email to the list owner. We can do this by combining the form processing program with the email sending program:
#!/usr/local/bin/python # Send Mailing list request data to listowner import cgi,smtplib,string # get details from form form=cgi.FieldStorage() def html_header(): # print HTML header to browser print "Content-type: text/html\n" print "<html><Head><Title>CGI Form Submission</Title></Head>" print "<Body><H1> CGI Form Submission</H1>" def make_message(): #Create message from form data fromaddress='webmaster@some.host.net' # give an address for a human reply toaddress='listowner@some.where.else.net' # the address form data is sent to message="From: %s\n" % fromaddress # get one \n automatically here message="Subject: Mailing list subscription request\n" # need an extra \n message+="The following form submission was received:\n" for key in form.keys(): record= "%s: %s\n" % (key,form[key].value) message+=record return (fromaddress,toaddress,message) def send_mail(fromaddress,toaddress,message): # send message to listowner server = smtplib.SMTP('localhost') # create server object server.sendmail(fromaddress, toaddress, message) # send message server.quit() # close mail server connection html_header() (fromaddress,toaddress,message)=make_message() send_mail((fromaddress,toaddress,message) # thank user for input print """<p>Thanks for your request. Your details have been mailed to the list owner.</p>""" print "</body></html>" # end html
This works and is useful and simple, but it is not very robust. For example, if the user accidentally clicks the form submit button before entering valid data, invalid mail will be sent. This application could also send almost any email to the destination address if the CGI program is given data encoded as part of the URL. In a more practical form processing application we will want to check the validity of the data input by the user as much as possible and either give the user a valid confirmation through the browser or an error response asking them to resubmit the data. We should only send email if the form data validates correctly.
The following program can be used to obtain the environment variables which the web-server program makes available to a CGI program. Some of these variables will be the same as if you run the program interactively, while others will be special to the CGI environment.
#!/usr/local/bin/python # Python CGI Program to get host environment import os print "Content-type: text/html\n" print "<html><Head><Title>CGI Host Environment</Title></Head>" print "<Body><H1> CGI Host Environment </H1>" env=os.environ print "<ul>" for key in env.keys(): print "<li>%s: %s</li>" % (key,env[key]) print "</ul>" print "</body></html>"
|
http://bcu.copsewood.net/python/notes8.html
|
CC-MAIN-2017-22
|
refinedweb
| 2,700
| 61.06
|
Use the following steps to prepare your system for the upgrade.
If you are upgrading Ambari as well as the stack, you must know the location of the Nagios servers for that process. Use the Services->Nagios-> Summary panel to locate the hosts on which they are running.
Use the Services view on the Ambari Web UI to stop all services, including all clients, running on HDFS. Do not stop HDFS yet.
Finalize any prior upgrade if you have not done so already.
su $HDFSUSER hadoop namenode -finalize
Create the following logs and other files.
Because the upgrade to 2.0.6 includes a version upgrade of HDFS, creating these logs allows you to check the integrity of the file system post upgrade.
Run
fsckwith the following flags and send the results to a log. The resulting file contains a complete block map of the file system. You use this log later to confirm the upgrade.
su $HDFS_USER hadoop fsck / -files -blocks -locations > /tmp/dfs-old-fsck-1.log
where
$HDFS_USERis the HDFS Service user (by default,
hdfs).
Capture the complete namespace of the filesystem. (The following command does a recursive listing of the root file system.)
su $HDFS_USER hadoop dfs -lsr / > /tmp/dfs-old-lsr-1.log
where
$HDFS_USERis the HDFS Service user (by default,
hdfs).
Create a list of all the DataNodes in the cluster.
su $HDFS_USER hadoop dfsadmin -report > /tmp/dfs-old-report-1.log
where
$HDFS_USERis the HDFS Service user (by default,
hdfs).
Optional: copy all or unrecoverable only data stored in HDFS to a local file system or to a backup instance of HDFS.
Optional: create the logs again and check to make sure the results are identical.
Save the namespace. You must be the HDFS service user to do this and you must put the cluster in Safe Mode.
su $HDFS_USER hadoop dfsadmin -safemode enter hadoop dfsadmin -saveNamespace
Copy the following checkpoint files into a backup directory. You can find the directory by using the Services View in the UI. Select the HDFS service, the Configs tab, in the Namenode section, look up the property NameNode Directories. It will be on your NameNode host.
dfs.name.dir/edits
dfs.name.dir/image/fsimage
dfs.name.dir/current/fsimage
On the JobTracker host, copy
/etc/hadoop/confto a backup directory.
Store the layoutVersion for the NameNode. Make a copy of the file at
where
$dfs.name.dir/current/VERSION
is the value of the config parameter
$dfs.name.dir
NameNode directories. This file will be used later to verify that the layout version is upgraded.
Stop HDFS. Make sure all services in the cluster are completely stopped.
If you are upgrading Hive, back up the Hive database.
Stop Ambari Server. On the Server host:
ambari-server stop
Stop Ambari Agents. On each host:
ambari-agent stop
|
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.0/bk_using_Ambari_book/content/ambari-chap9-1_2x.html
|
CC-MAIN-2015-32
|
refinedweb
| 471
| 68.97
|
would like to merge this to core-8-5-branch. Any objections? Anyone who
>> would like to test it on Solaris 10 and/or OSX?
> Test passes on Solaris 10, threaded and not.
Sadly the news is not so good on OSX.
At least on Mac OS X 10.6.8 (Snow Leopard), the new test
unixforkevent-1.1 hangs on the trunk.
--
| Don Porter Applied and Computational Mathematics Division |
| donald.porter@... Information Technology Laboratory |
| NIST |
|______________________________________________________________________|
On 01/08/2013 16:22, Trevor Davel (Twylite) wrote:
> On 2013/08/01 04:14 PM, Lars Hellström wrote:
>>> The result of this subcommand shall be a string describing what sort of
>>> command /commandName/ is; if no other information is available, the
>>> result shall be *native*.
>> Very minor quibble, but "native" feels rather close to "part of core" to me,
>> which is clearly not the idea here. I understand the reasoning behind it,
>> but wouldn't for example "other" be more to the point? (Yeah, cue bikeshedding.)
> I understood "native" to be "a proc coded in native code" as opposed to
> in Tcl code. Given that any Tcl extension that calls
> Tcl_CreateCommand() will end up with the default description, I'd be
> more comfortable with that default implying 'native / C code' than
> 'other / unknown'.
The notion is that of "implemented in machine-native code" but that's a
bit too windy for a script. :-) It's not unknown, but Tcl (probably) has
nothing else to say about it. Implementation-wise, it's a constant in
exactly one place (plus I don't recall how many tests ;-)); if there's a
consensus on change for another value, I'm happy to alter it.
> An alternative default could be "extension" (implying native code in an
> extension library), assuming that (i) all creation of commands in the
> core is explicitly identified as something non-default; and (ii) core
> C-coded procs have a distinct identifier ... which leads to ...
>
> Bike-shedding: is there any value to distinct identifiers for Tcl core
> commands (e.g. "core" or "builtin") and/or Tcl core commands that
> support byte-coding (e.g. "core-bc")? The best I can imagine is some
> informational use in a debugger, which doesn't sound particularly
> compelling. If desirable then implementation (in Tcl_CreateInterp)
> should be trivial.
It's not actually trivial, since we have calls to Tcl_CreateObjCommand
in quite a few places; it's not just one big table any more. I'm also
not sure if we have created them all at the point when Tcl_CreateInterp
returns. More to the point though, there's nothing to force the type
registration process on anything (and shouldn't be) so the default must
be suitable for returning about, say, the [sqlite3::sqlite3] command. Or
some command in an extension not previously discussed.
That said, I don't know what to do with "is part of Tcl core" and "is
capable of being bytecoded (in some circumstance not described)". There
is no behaviour in Tcl scripts that could usefully be gated on that
information at all; not drawing a deep distinction between the commands
in the core and the commands supplied by users/extensions is a key Tcl
hallmark.
Donal.
On 01/08/2013 15:14, Lars Hellström wrote:
> Why use "::" as separator here, hinting at some connection to namespaced
> names, when in fact the result is nothing of the sort? Expect users to take
> a result of "tk::scrollbar" as a sign that tk::scrollbar is the name of a
> command.
I was thinking by analogy with namespaces and package names (which have
no formal connection).
Donal.
|
https://sourceforge.net/p/tcl/mailman/tcl-core/?viewmonth=201308&viewday=2
|
CC-MAIN-2018-22
|
refinedweb
| 601
| 62.38
|
import java.util.*; public class enrollment{ public static void main(String[]args){ int balance, payment; balance = 20000; String partial = "partial"; String full = "full"; System.out.print("\nEnter Name: "); Scanner st = new Scanner(System.in); String name = st.nextLine(); System.out.print("\nYour Remaining Balance is: "+balance); System.out.print("\nChoose your Payment term(Partial/Full): "); String term = st.nextLine(); if(term == partial){ System.out.print("\nHow much would you like to pay for this quarter: "); Scanner in = new Scanner(System.in); payment = in.nextInt(); balance = balance - payment; System.out.print("\nYour Balance is: "+ balance); if(balance > payment){ System.out.print("\nYou have remaining balance of: " + balance); }else if(balance == payment){ System.out.print("You are already paid"); } } } }I've been doing this for a week, I always tend to search for some alternative solution, but still it doesn't fit to my program, whenever I answer this question:
System.out.print("\nChoose your Payment term(Partial/Full): "); String term = st.nextLine();
it always execute to the end.
I don't know what is wrong about my If condition because I already declare partial as "partial" so I assume that it will take partial and execute the statement. Just need some advice so I can finish the program until to the end at full payment. Any suggestion is deeply appreciated.
|
http://www.dreamincode.net/forums/topic/303110-simple-enrollment-java-program/page__pid__1762972__st__0
|
CC-MAIN-2016-36
|
refinedweb
| 219
| 52.66
|
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video!
gstreamer0.10-ffmpeg
gstreamer0.10-plugins-goodpackages.
At this point... we have a directory with a PHP class inside. And, honestly, we could just move this into its own repository, put it on Packagist and be done! But in that case, it wouldn't be a bundle, it would simply be a library, which is more or less defined as: a directory full of PHP classes.
So what is the difference between a library and a bundle? What does a bundle give is that a library does not? The "mostly-accurate" answer is simple: services. If we only created a library, people could use our classes, but it would be up to them to add configuration to register them as services in Symfony's container. But if we make a bundle, we can automatically add services to the container as soon as our bundle is installed. Sure, bundles can also do a few other things - like provide translations and other config - but providing services is their main super power.
So, we're going to create a bundle. Actually, the perfect solution would be to create a library with only the
KnpUIpsum class, and then also a bundle that requires that library and adds the Symfony service configuration. A good example of this is KnpMenu and KnpMenuBundle.
To make this a bundle, create a new class called
KnpULoremIpsumBundle. This could be called anything... but usually it's the vendor namespace plus the directory name.
Make this extend
Bundle and... that's it! You almost never need to have any logic in here.
To enable this in our app, open
bundles.php and configure it for all environments. I'll remove the
use statement for consistency. Normally, this happens automatically when we install a bundle... but since we just added the bundle manually, we gotta do it by hand.
And, congratulations! We now have a bundle!
So.... what the heck does that give us? Remember: the super-power of a bundle is that it can automatically add services to the container, without the user needing to configure anything. How does that work? Let me show you.
Next to the bundle class, create a new directory called
DependencyInjection. Then, add a new class inside with the same name of the bundle, except ending in
Extension. So,
KnpULoremIpsumExtension. Make this extend
Extension from
HttpKernel. This forces us to implement one method. I'll go to the Code -> Generate menu, or Cmd+N on a Mac, choose "Implement Methods" and select the one we need. Inside, just
var_dump that we're alive and... die!
Now move over and refresh. Yes! It hits our new code!
This is really important. Whenever Symfony builds the container, it loops over all the bundles and, inside of each, looks for a
DependencyInjection directory and then inside of that, a class with the same name of the bundle, but ending in
Extension. Woh. If that class exists, it instantiates it and calls
load(). This is our big chance to add any services we want! We can go crazy!
See this
$container variable? It's not really a container, it's a container builder: something we can add services to.
Right now, our service is defined in the
config/services.yaml file of the application. Delete that! We're going to put a service configuration file inside the bundle instead. Create a
Resources/ directory and another
config/ directory inside: this is the best-practice location for service config. Then, add
services.xml. Yep, I said XML. Wait, don't run away!
You can use YAML to configure your services, but XML is the best-practice for re-usable bundles... though it doesn't matter much. Using XML does have one tiny advantage: it doesn't require the
symfony/yaml component, which, at least in theory, makes your bundle feel a bit lighter.
To fill this in... um, I cheat. Google for "Symfony Services", open the documentation, search for XML, and stop when you find a code block that defines a service. Click the XML tab and steal this! Paste it into our code. The only thing we need to do is configure a single service whose id is the class of the service. So, use
KnpU\LoremIpsumBundle\KnpUIpsum. We're not passing any arguments, so we can use the short XML syntax for now.
But this file isn't processed automatically. Go to the extension class and remove the
var_dump(). The code to load the config file looks a little funny:
$loader = new XmlFileLoader() from the DependencyInjection component. Pass this a
new FileLocator - the one from the
Config component - with the path to that directory:
../Resources/config. Below that, add
$loader->load('services.xml').
Voilà! Refresh the page. It works! When the container builds, the
load() method is called and our bundle adds its service.
Next, let's talk about service id best-practices, how to support autowiring and public versus private services.
|
https://symfonycasts.com/screencast/symfony-bundle/bundle-services
|
CC-MAIN-2020-05
|
refinedweb
| 843
| 68.47
|
Below is a small and easy puzzle on multi-threading, ideal for beginners. And to be honest, title should have been “C Multithreading” since it uses pthreads, and not C++11 for threading.
#include <pthread.h> #include <unistd.h> #include <iostream> using namespace std; int i; void* f1(void*) { for (i = 0; i < 10; i+=2){ cout << i << endl; sleep(1); } return 0; } void* f2(void*) { for (i = 1; i < 10; i+=2){ cout << i << endl; sleep(1); } return 0; } int main() { pthread_t t1, t2; pthread_create(&t1, 0, f1, 0); pthread_create(&t2, 0, f2, 0); pthread_join(t1, 0); pthread_join(t2, 0); }
So what would be the output? In what order numbers will be printed? That’s the small puzzle for you.
To make you really think, I won’t give answer. You are free to share your answer in comments though. And hey running the program without thinking would be considered cheating!
Don’t you cheat!
|
http://vinayakgarg.wordpress.com/
|
CC-MAIN-2014-15
|
refinedweb
| 156
| 74.29
|
In these days of Web 2.0, the line between outdated (and therefore obsolete) and retro (and therefore cool again) can get pretty blurred. Desktop Applications: outdated (unless they’re HTML-based or made by Google). Client/Server: retro (no green-screens please!). Tiered Design: retro (but only if at least two tiers are AJAX/JavaScript-based).
See what I mean? It’s hard to keep up. Greg Ward’s predecessors must have gotten pretty confused along the way as well. Following is a single line from their Web 2.0-based medical application.
public class Patient extends JavascriptStringBuilder
And yes, “Patient” means exactly what you think it does. As does “Javascript.”
|
http://thedailywtf.com/Articles/What,-Me-Layer.aspx
|
crawl-002
|
refinedweb
| 112
| 79.87
|
HASH(9) BSD Kernel Manual HASH(9)
hash - general kernel hashing functions
#include <sys/hash.h> uint32_t hash32_buf(void *buf, size_t len, uint32_t hash); uint32_t hash32_str(void *buf, uint32_t hash); uint32_t hash32_strn(void *buf, size_t len, uint32_t hash); uint32_t hash32_stre(void *buf, int end, char **ep, uint32_t hash); uint32_t hash32_strne(void *buf, size_t len, int end, char **ep, uint32_t hash);
The hash32() functions are used to give a consistent and general inter- face addi- tional termination condition of terminating when they find a character given by end in the string to be hashed. If the argument ep is not NULL, it is set to the point in the buffer at which the hash function terminat- ed hashing.
The hash32() functions return a 32 bit hash value of the buffer or string.; }
free(9), hashinit(9), malloc(9).
The hash functions were first committed to NetBSD 1.6. The OpenBSD ver- sions were written and massaged for OpenBSD 2.3 by Tobias Weingartner, and finally committed for OpenBSD 3.2. MirOS BSD #10-current December 8,.
|
http://mirbsd.mirsolutions.de/htman/i386/man9/hash.htm
|
crawl-003
|
refinedweb
| 175
| 70.43
|
Making Timeline Control for DataGrid In WPF Introduction
In this article we will see how we can make a Timeline control in WPF. Creating A WPF Project
Fire up Visual Studio 2010, create a WPF Application, and name it as
TimelineSample.
Here is the thing, we should build a user control that would display time or
times for a partcular hour. Let's say we have 6:30am, 6:35am and 6:55am as times
for a particular hour 6am. So we should diplay as the following pictorial
notations.
To do the above control, we need to use rectangles for the representation of
Hour and Minute(s). The following is the xaml for doing so.
We have created the following two Brushes to be used respectively.
Now that we have the brushes we can draw the rectangles. The horizontal
rectangle is for Hour and the vertical rectangle(s) for minute(s).
As you see from the above xaml display, we have Minute rectangles Visibility
made as Hidden. Here is the trick for visibility; if we use visibility as
collapsed the space taken by the rectangle is gone and the next rectangle would
take it's place. In order to do that we need a Dependency Property which would
contain the minute(s) as user has to give.
As you see in the above code display, the MinutesProperty is created as a
Dependency Property. It is a type of TimelineControl and it would have the value
as string.
Let's have the Loaded event of the User Control, where we will make the minute
rectangles visible based on the minute list provided.
The User Control is ready. Now let's add a datagrid or listbox to display the
user control.
The issue that we are going to face is the Time Scale, that means when we add
hour columns based on some criteria, if it changes the Time Scale should be
fixed or flexible. It should not repeat the Hours columns that already added to
the data grid.
To solve that issue let's create a custom datagrid control and add two
Dependency Properties, such as Start Time and End Time.
Add a class named TimelineDatagrid and add it to the project.
Use the namespaces displayed below:
Now inherit the DataGrid class and then create two Dependency Properties as
displayed below.
Now, let's add this custom datagrid to the MainWindow by referencing the
Namespace.
And add the DataGrid also customize it's properties as required.
Let's add a class and name it as TimeScale, which would have the properties
required to bind to the DataGrid.
As you see in the above code display, we have the TimeScale class which has the
properties as Day and the Hours in 24Hour format.
As soon as you proceed through this article, you would understand why we have
taken the 24Hour format instead the normal time (am/pm) format.
In the above code display, you could see we are initializing the StartTime and
EndTime of the DataGrid and then we have a List of TimeScale for sample data
purpose.
Now we need to subscribe the Loaded event of the DataGrid to load the customized
columns.
As you see in the above code display, we have subscribed to the Loaded event of
the DataGrid and in the handler, we are passing the StartTime and EndTime values
to a method called LoadColumns.
We have to add DatagridTemplateColumn as the TimelineControl and the First
column for Date as DatagridTextColumn.
Inside the method add the following lines of code to add the DataGridTextColumn.
Add the below condition for StartTime for the timeScale of 12pm to 11pm.
Add the below condition for EndTime is Greater than StartTime.
That's it. Run the application to see the Datagrid Control as TimeLine control.
Hope this article helps.
Visit in for latest articles.Facebook Page:
©2015
C# Corner. All contents are copyright of their authors.
|
http://www.c-sharpcorner.com/uploadfile/dpatra/making-timeline-control-for-datagrid-in-wpf/
|
CC-MAIN-2015-18
|
refinedweb
| 654
| 72.26
|
frules 0.1.0
simple functional fuzzy rules implementation
Frules stands for **fuzzy/funtional rules**. It allows to work easily with
fuzzy rules and variables.
Installation:
pip install frules
## Linguistic variables and expressions
Expression is a core concept in frules. `Expression` class represents subrange
of [linguistic variable] in
fuzzy logic.
Variables in classical math take numerical values. in fuzzy logic, the
*linguistic variables* are non-numeric and are described with expressions.
Expressions map continuous variable like nemerical temperature to its
linguistic counterpart. For example temperature can be described as cold, warm
or hot. There is no strict boundary between cold and warm - this is why this
expressions are fuzzy.
To create new expression we use function that takes numerical value of
contiunous variable and returns *truth value*. Truth value ranges between
0 and 1 - it's a degree of membership of continous value to that linguistic
variable.
```python
from frules.expressions import Expression
#We know that anything over 50 degrees is hot and below 40 is't hot
hot = Expression(lambda x: min(1, max((x - 40) / 10., 0)))
```
This ugly lambda is representation of some fuzzy set. If we take a look how it
behaves, we'll see that it in fact returns 1 for anything over 50, 0 for
anything below 40 and some linear values between 40 and 50:
```python
>>> map(lambda x: {x: min(1, max((x - 40) / 10., 0))}, xrange(35, 55, 2))
[{35: 0}, {37: 0}, {39: 0}, {41: 0.1}, {43: 0.3}, {45: 0.5}, {47: 0.7}, {49: 0.9}, {51: 1}, {53: 1}
```
Using a lot of lambdas in practice makes your code a mess. Fuzzy expressions
described this way are additionally hard to write because of some value
assertions they must satisfy.
This is why we ancapsulate don't use raw functions and encapsulate them with
expressions. Moreover frules provides a bunch of helpers that eases definition
of new expressions. Example of full set of expressions for temperature variable
could look this way:
```python
from frules.expressions import Expression as E
from frules.expressions import ltrapezoid, trapezoid, rtrapezoid
cold = E(ltrapezoid(10, 20), "cold") # anything below 10, more is fuzzy
warm = E(trapezoid(10, 20, 30, 35), "warm") # anything between 20 and 30
hot = E(rtrapezoid(30, 35), "hot") # anything over 35, less is fuzzy
```
Expressions can be reused/mixed using logical operators:
```python
cold_or_hot = cold || warm
not_hot = !hot
```
Optional names will be helpful when we start to work with fuzzy rules.
## Fuzzy rules
Although expressions define linguistic variables, they aren't strictly bound
to any variable. They are rather the adjectives we use to describe something and
their meaning depends strictly on context. Both *person* and *data* could
be *big* but this particular adjective has slighlty different meaning in each
case.
`Rule` objects bounds continous variable with expressions. Rules also can also
be evaluated to see how true they are for given continous input.
```
>>> from frules.rules import Rule
>>> is_hot = Rule(temperature=hot)
>>> is_hot.eval(temperature=5)
0.8
```
Rules can be mixed using logical operators (`&` and `|`) to create more
sophisticated rules that allow fuzzy reasoning:
```python
from frules.expressions import Expression as E
from frules.rules import Rule as R
from frules.expressions import ltrapezoid, trapezoid, rtrapezoid
# age expressions
too_young = E(ltrapezoid(16, 18), "too_young")
young = E(trapezoid(16, 18, 25, 30), "young")
old = - (too_young && young)
# height expressions
tall = E(rtrapezoid(165, 180), "tall")
short = E(ltrapezoid(165, 180), "short")
# yes expression
yes = E(lambda yes: float(yes), "yes") # converts bool to float
# rules
is_hot = R(age=young, height=tall) # equvalent to R(age=young) & R(height=tall)
is_chick = - R(has_penis=yes)
should_date = is_hot & is_chick
```
Having set such rules we can do some reasoning:
```
>>> shoud_date
((age = young & height = tall) & !has_penis = yes)
>>> should_date.eval(age=17, height=170, has_penis=False) > should_date.eval(age=20, height=170, has_penis=True)
True
>>>
>>> candidates = {
... "c1": {"age": 18, "height": 178},
... "c2": {"age": 20, "height": 175},
... "c3": {"age": 50, "height": 180},
... "c4": {"age": 25, "height": 161},
... }
...
>>> max(candidates.iteritems(), key=lambda (key, inputs): is_hot.eval(**inputs))
('c1', {'age': 18, 'height': 178})
```
- Downloads (All Versions):
- 59 downloads in the last day
- 419 downloads in the last week
- 1555 downloads in the last month
- Author: Michał Jaworski
- Categories
- Package Index Owner: swistakm
- DOAP record: frules-0.1.0.xml
|
https://pypi.python.org/pypi/frules/0.1.0
|
CC-MAIN-2015-32
|
refinedweb
| 713
| 57.77
|
Simple analysis in profiling tools of "new XML()" being passed a ~500k XML document show a few low-hanging fruit that drastically help reduce CPU time and memory usage. I'll attach patches for the simplest fixes I can see which actually make a big difference in real-world usage.
Created attachment 563931 [details] [diff] [review]
Hoist property check out of tight loop / prevent unnecessary Namespace object creation
This patch addresses two simple issues:
1) XMLToXMLString() is a recursive function that calls GetBooleanXMLSetting as its first order of business to check if pretty-printing is enabled. When toString()-ing a large XML object, this winds up getting called N times where N is the number of E4X objects. This simply hoists the property check up a level so that it is not checked on every recursive call.
2) When GetNamespace() is initially called with a NULL set of ancestor namespaces, the function uses an empty namespace array. This has a side-effect of *never* producing a match for the default namespace assigned to every object (""). Thus, every call to GetNamespace() allocates a *new* Namespace object even though they all have the default namespace string (""). This simply creates a new Namespace object with "" values and inserts it into the empty array object in this case. The end result is that only one Namespace object is created for the entire traversal of the XML document. Previously N Namespace objects would be created for N E4X objects.
Created attachment 563932 [details] [diff] [review]
AppendAttributeValue escapes values without creating transient StringBuffer and JSFlatString
This patch prevents the creation of a transient StringBuffer and a JSFlatString created from it on every AppendAttributeValue when performing toString(). This prevents triple-copying all attribute text as it is written out and memory pressure from doing so.
Created attachment 563933 [details] [diff] [review]
AppendAttributeValue escapes values without creating transient StringBuffer and JSFlatString
Re-upped last patch, missed two NULL->false return value changes.
Here are performance stats to give you an idea of how big a difference just these changes make. I applied these patches and also added this printf as the last thing in ToXMLString() prior to returning: printf("compartment %u\n", cx->compartment->gcBytes); My test loads a ~500k XML document into an object and then calls toString() on it 10 times. When running this test, I am also timing the wall & CPU time.
Without patches:
compartment 6365184
compartment 5775360
compartment 8212480
compartment 4509696
compartment 6946816
compartment 9379840
compartment 5611520
compartment 8044544
compartment 4329472
compartment 6766592
wall 9416.54ms user 9210.00ms
With patches:
compartment 4112384
compartment 4001792
compartment 4186112
compartment 4370432
compartment 3948544
compartment 4132864
compartment 4317184
compartment 4501504
compartment 4079616
compartment 4263936
wall 4689.59ms user 4440.00ms
Comment on attachment 563933 [details] [diff] [review]
AppendAttributeValue escapes values without creating transient StringBuffer and JSFlatString
Looks great, ask #jsapi to land for you -- traveling and super-busy, sorry I can't do it.
/be
Created attachment 569259 [details] [diff] [review]
AppendAttributeValue escapes values without creating transient StringBuffer and JSFlatString.
Re-up patch as export with header for checkin.
Created attachment 569260 [details] [diff] [review]
Hoist property check out of tight loop / prevent unnecessary Namespace object creation.
Re-up patch as export with header for checkin.
Andrew, in the future could you please include the bug number in the checkin comments?
Pushed:
Created attachment 569910 [details]
Optimize EscapeAttributeValueBuffer...
Your patch is a great optimization!
Not sure how much time EscapeAttributeValueBuffer takes, but is it worth to optimize EscapeAttributeValueBuffer, so that chunks of non-escaped chars are append in one go? Appending an array of chars results in a memcpy, but doing one by one with length checks in each s much more costly.
My optimizations were guided by Quantify. After applying the patch it didn't show up as a hotspot. I suppose things can always be made faster but I was just focusing on small wins which were grossly affecting the run time.
|
https://bugzilla.mozilla.org/show_bug.cgi?id=691001
|
CC-MAIN-2016-40
|
refinedweb
| 657
| 52.6
|
Calculate Area Circle Using Java Example
All programming language is designed to perform a mathematical operation. Java language is bundled with Math library for calculation in a simple and complex problem.
This tutorial will teach you on how your Java program Calculate the Area of a Circle based on user input value. This tutorial will use a Scanner function and an Integer and Double data type for variables. Please follow all the steps to complete this tutorial.
Calculate the Area of a Circle in Java Steps
- Create a new class and name it what you want.
2. Above your Main Class, Insert an import code to access the Scanner library.
[java]import java.util.Scanner;[/java]
3. Declare a scanner and two variables inside your main method.
[java]Scanner input = new Scanner(System.in);
int radius;
double area;[/java]
4. After step 3, insert the code below for user input.
[java]System.out.print(“Enter value of the circle radius: “);
radius = input.nextInt();[/java]
5. After step 4, insert the code below for the algorithm and output.
[java]area = Math.PI * radius * radius;
System.out.println(“Area of a circle is: ” + area);[/java]
6. Run your program and the output should look like the image below.
7. Complete Source Code.
[java]import java.util.Scanner;
public class CalculateCircleArea {
public static void main(String args[]){
Scanner input = new Scanner(System.in);
int radius;
double area;
System.out.print(“Enter value of the circle radius: “);
radius = input.nextInt();
area = Math.PI * radius * radius;
System.out.println(“Area of a circle is: ” + area);
}
}[/java]
About The Calculate the Area of a Circle In Java
If you have any comment or suggestions about on How to Calculate the Area of a Circle in Java, feel free to leave your comment below, use the contact page of this website or use my contact information.
|
https://itsourcecode.com/free-projects/java-projects/calculate-area-circle-using-java/
|
CC-MAIN-2021-49
|
refinedweb
| 309
| 50.84
|
import "go.mozilla.org/gopgagent"
Package gpgagent interacts with the local GPG Agent.
var ( ErrNoAgent = errors.New("GPG_AGENT_INFO not set in environment") ErrNoData = errors.New("GPG_ERR_NO_DATA cache miss") ErrCancel = errors.New("gpgagent: Cancel") )
Conn is a connection to the GPG agent.
NewConn connects to the GPG Agent as described in the GPG_AGENT_INFO environment variable.
func (c *Conn) GetPassphrase(pr *PassphraseRequest) (passphrase string, outerr error)
type PassphraseRequest struct { CacheKey, Error, Prompt, Desc string // If the option --no-ask is used and the passphrase is not in // the cache the user will not be asked to enter a passphrase // but the error code GPG_ERR_NO_DATA is returned. (ErrNoData) NoAsk bool }
PassphraseRequest is a request to get a passphrase from the GPG Agent.
Package gopgagent imports 11 packages (graph) and is imported by 1 packages. Updated 2017-10-20. Refresh now. Tools for package owners. This is an inactive package (no imports and no commits in at least two years).
|
https://godoc.org/go.mozilla.org/gopgagent
|
CC-MAIN-2020-40
|
refinedweb
| 156
| 58.28
|
index - Java Beginners
in hand.
Write a Java GUI application called Index.java that inputs several... the number of occurrences of the character in the text.
Write a Java GUI... should be counted together. Store the totals for each letter in an array, and print
Array - Java Beginners
class ArrayExamples {
public static void main(String[] args) throws IOException...};
Arrays.sort(array);
int index = Arrays.binarySearch(array, 2);
System.out.println("Array list found in index " + index);
array in java - Java Interview Questions
array in java array is a object in java. is it true, if true then what is its class name?
or-
array object is of which class? Hi.../beginners/arrayexamples/index.shtml
Thanks
array - Java Beginners
array WAP to perform a merge sort operation. Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
array - Java Beginners
Friend,
Try the following code:
import java.util.*;
public class Search... ArrayList();
ArrayList list2 = new ArrayList();
for(int element: arr){
int index = list1.indexOf(element);
if (index != -1 ){
int newCount = list2.get(index
index
Fortran Tutorials
Java Tutorials
Java Applet Tutorials
Java Swing and AWT Tutorials
JavaBeans Tutorials
array manipulation - Java Beginners
example at: manipulation We'll say that a value is "everywhere" in an array if for every pair of adjacent elements in the array, at least one of the pair
Index Out of Bound Exception
tries to access an element beyond the capacity of index of the array...;
int index = 8;
value = array[ index ];
...
Index Out of Bound Exception
array sort - Java Beginners
array sort hi all,
can anybody tell me how to sort an array...:
public class SortArrayWithoutUsingMethod{
public static void sortArray(int array[], int len){
for (int i = 1; i < len; i++){
int j = i
array, index, string
array, index, string how can i make dictionary using array...please help
array in javascript how to initialize array in javascript and can we increase the size of array later on?
is Array class in javascript ?
also...://
Hope
Java arraylist index() Function
Java arrayList has index for each added element. This index starts from 0.
arrayList values can be retrieved by the get(index) method.
Example of Java Arraylist Index() Function
import
Java Array - Java Beginners
Java Array Q4-Write a program to exchange the nondiognal elements of a two dimensional array A of size NxN without using any other array ie.
each... the following code. This will solve your problem.
class SwapDiagonalArray... java.util.*;
public class ArrayElements {
public static void main(String[] args) {
int array[]={1,2,1,1,3,4,4,3,6,8,0,6,0,3};
int num;
int count;
for(int i = 0
Java array - Java Beginners
Java array Q 1- write a program that counts the frequency... a program tofind sum of all non dioganal elements of a two dimensional NxN array... array A of size NxN without using any other array ie.
each a[i][j]witha[j][i
Java Array - Java Beginners
Java Array Q1-Write a program to exchange non diagonal elements of two dimensinal NXN
Array without using temporary array Hi Friend,
Please try the following code to solve your problem. Here is the code:
class pleaseeeeeeeeeeeeeeee...
1. Create a new class (in the employees package) called Dependent... dependents to Employee that is an array of Dependent objects, and instantiate a five
Array
Array What if i will not declare the limit index of an array, how will I declare an array and store values with it using loop?
Hi Friend,
Try the following code:
import java.util.*;
class ArrayExample2
MultiDimensional Array - Java Beginners
Table of 1 to 10 by Using Multidimensional Array in java Hi Friend,
Try the following code:
public class MultiplicationTable{
public static void main(String[] args) {
int[][] array = new int[11][11];
for (int i
array 1 - Java Beginners
array 1 WAP to input values in 2 arrays and merge them to array M. Hi Friend,
Try the following code:
import java.util.*;
class...;
for (int[] array : arr) {
arrSize += array.length;
}
int
java class - Java Beginners
java class Define a class product with the following data members... in the sorted array without altering the order of the records.
3.To search for a product(using binary search) in the array of products.
Write a menu driven
ARRAY SIZE. - Java Beginners
ARRAY SIZE. Thanks, well that i know that we can use ArrayList... the elements in array A. Then doubled array A by multiplying it by 2 and then storing it in array B. But gives me error.
import java.io.*;
import
array
is an example that store some integers into an array and find the occurrence of each number from the array.
import java.util.*;
public class SearchNumberOccurrence...array Hi
i have array like {1,2,3,4,,5,5,6,6} then how can i
JavaScript array index of
JavaScript array index of
In this Tutorial we want to describe that makes you to easy to understand
JavaScript array index of. We are using JavaScript... by
an array object and store in a variable index.
Finally the document. write
String Array - Java Beginners
for me to manipulate a String Array.
For Example, I had a String Array ("3d4..., as to by which method can I separate the Integers from this Array of String...,
Code to solve the problem :
class StringArray
{
public static void main
ARRAY SIZE!!! - Java Beginners
array, that time u choose either ArrayList or Vector Class (From Collection Class...ARRAY SIZE!!! Hi,
My Question is to:
"Read integers from the keyboard until zero is read, storing them in input order in an array A. Then copy...) {
}
Hi friend,
class ArrayMerge
{
public static
example | Java
Programming | Java Beginners Examples
| Applet Tutorials...
applications, mobile applications, batch processing applications. Java is used... |
Linux Tutorial |
Java Script Tutorial
| PHP Tutorial |
Java
java - Java Beginners
://
Here you... index values. If an array has n components, then you can say n is the length...java I want to about array of objects with some examples.
How
java class string - Java Beginners
java class string Write a program that reads three strings...://
Thanks... :
import java.io.*;
public class ReadString {
public static void main
java - Java Magazine
index.
Example:
class ArrayExample {
public static void main(String...;
System.out.println("Element at index 0: " + array[0]);
System.out.println("Element at index 1: " + array[1]);
System.out.println("Element at index 2
Write a long value at given index into long buffer.
Write a long value at given index into long buffer.
In this tutorial, we will see how to write a long value at given index
into long buffer.
LongBuffer API:
The java.nio.LongBuffer class extends java.nio.Buffer class
programs - Java Beginners
information. Array Programs How to create an array program in Java? Hi public class OneDArray { public static void main (String[]args){ int
java String array - Java Beginners
java String array I want to print values stored in array of string ("1","2","3","4","5" ...) in the form
1 2 3 4
5 6 7 8
9 10 11 12
how can it be done ??
Thanks Hi,
public class ArrayTest
creating index for xml files - XML
with Java and I want to know which libarary or class package or severices I have...creating index for xml files I would like to create an index file... after another and then retrieve each tag and create index to that file. In some array question. - Java Beginners
java array question. I need help with this:
Create a program... in an array. Have the program then print the numbers in rows of 10 and calculate...:
public class RandomNumberArray {
private static void showNumbers
Pass the array please.. - Java Beginners
Pass the array please.. hi!
i'm having problem... them in an array. When finished receiving the numbers, the program should pass the array to a method called averageNumbers. This method should average the numbers
ShortBuffer in java, Write a short value into short buffer at given index.
;given index.
ShortBuffer API:
The java.nio.ShortBuffer class extends java.nio.Buffer class. It
provides the following methods:
Return...Write a short value into short buffer at given index.
In this tutorial
OOP with Java-Array - Java Beginners
OOP with Java-Array Write a program to assign passengers seats in an airplane. Assume a small airplane with seat numberings as follows:
1 A B C...*;
public class AirlineReservation {
public static void main(String args
Two- Dimensional Array - Java Beginners
Two- Dimensional Array I am new in java programming. I am creating a two-dimensional array. This is my code
**
class BinaryNumbers
{
public static void main(String[] args)
{
//create a two-dimensional array
int ROWS = 21
Algorithm_3 - Java Beginners
the following links:... is traversed from 0 to the length-1 index of the array and compared first two values
How to get specific index value from int buffer.
;intBuf.get(index));
}
}
Output
C:>java...How to get specific index value from int buffer.
In this tutorial, we will discuss how to get specific index value from
int buffer.
IntBuffer
index of javaprogram
index of javaprogram what is the step of learning java. i am not asking syllabus am i am asking the step of program to teach a pesonal student.
To learn java, please visit the following link:
Java Tutorial
Array list java code - Java Beginners
Array list java code Create an employee class with an employee id's,name & address. Store some objects of this class in an arraylist. When an employee id is given, display his name & address? Please provide the DETAIL java code
two dimansional array - Java Beginners
is the code to modify
// The "TwoDimensional" class.
import java.awt.*;
import hsa.Console;
public class TwoDimensional
{
static Console c... class
Hi Friend,
Try the following code:
import java.awt.
Array in Java
of an array starts with zero.
The elements are accessed by their numerical index.
Different types of array used in Java are One-dimensional, Two-dimensional and multi... of an Array
Initialization of an Array
Arrays in Java for different data
arrays part 2 - Java Beginners
index if the integer appears in the array
o Returns -1 otherwise.
? A static... java.util.*;
public class ArrayExamples{
public static int[] readIntegers...arrays part 2 Question 2: Useful Array Algorithms and Operations (5
JavaScript Array Class
JavaScript Array Class
In this section, you will study how to use Array class in JavaScript.
The Array class is one of the predefined classes available in JavaScript
Java Array declaration
Java Array declaration
In this section you will learn how to declare array in java. As we know an
array is collection of single type or similar data type... and index means that
this variable holds an array. The index will hold
merge sorting in arrays - Java Beginners
or characters to an array and apply merge sorting on this array Hi Friend,
Please visit the following link:
Thanks
creating class and methods - Java Beginners
( ) method that creates array of 4 objects of Computer
class and that takes input...creating class and methods Create a class Computer that stores... of the Computers.
This class contains following methods,
- Constructor method
Array - Java Beginners
Array how to declare array of hindi characters in java
program1 - Java Beginners
;
}
}
}
}
}
----------------------------------
Visit for more information.... sorting.......including programs..... Hi Aman,
public class... array[] = {12,9,4,99,120,1,3,10};
System.out.println("RoseIndia\n\n
Array in java
Array in java
In following example we will discuss about Array in Java. Array... in memory at fixed
size. We can use multiple type array. It can be used in Java, C... many index Arrays.
It is not possible to change the size of an Array during
Array in Java - Java Beginners
Array in Java Please help me with the following question. Thank you.
Write a program that reads numbers from the keyboard into an array of type int[]. You may assume that there will be 50 or fewer entries in the array. Your
Find position of an array element
Find position of an array element
In this section, you will learn how to find the index of an array element. For this, we have allowed the user to enter the array elements of their choice and also the number whose index value
JavaScript array index of
JavaScript array index of
... to easy to understand
JavaScript array index of. We are using JavaScript... by
an array object and store in a variable index.
Finally the document. write print
insertionSort - Java Beginners
));
}
}
For more information on Java Array visit to :
Thanks... of a program that sorts array of integers in ascending order
(small to large
array - Java Beginners
array
Drop Index
Drop Index
Drop Index is used to remove one or more indexes from the current database.
Understand with Example
The Tutorial illustrate an example from Drop Index
complete this code (insertion sort) - Java Beginners
://
Thanks... takes array of numbers and an index
* then it creates new array containing all...
* at the specified index.
* The method returns the new array as a result.
*/
static
Array sorting - Java Beginners
Array sorting Hello All.
I need to sort one array based on the arrangement of another array.
I fetch two arrays from somewhere... need to sort the "name" array alphabetically. I can do that easily using
array password - Java Beginners
array password i had create a GUI of encryption program that used the array password. my question is can we do the password change program? i mean we change the older password with the new password
initialise array by reading from file - Java Beginners
initialise array by reading from file Hello, I wnat to know how i would initialise an array by reading a text file, which contains a simple pattern... the problem :
import java.io.*;
class FileRead
{
public static void main
JavaScript array slice
;
JavaScript array class's slice() method returns the
selected items or elements from the array according to the provided starting and
ending index... the starting index position. It does not modify the length of the
array. Here
java - Java Beginners
information.
Thanks.... I hope that, its helpful code for you.
public class lengthDemo{
public
Java Array Values to Global Varibles - Java Beginners
Java Array Values to Global Varibles I am working on a program... to pass on to the rate and periods variables in the class declarations. Any help......
ThankYou...
Code is :
import java.io.*;
public class GlobalArray
Simple
java array - Java Beginners
java array 1.) Consider the method headings:
void funcOne(int[] alpha, int size)
int funcSum(int x,int y)
void funcTwo(int[] alpha, int[] beta...];
int num;
Write Java statements that do the following:
a. Call array
java array write a java method that takes an array of float values... are distinct)
Hi Friend,
Try the following code:
import java.util.*;
class...)){
System.out.println("There are duplicate elements.");
Float array
Two dimensional array in java
Two dimensional array in java.
In this section you will learn about two-dimensional array in java with an
example. As we know that array is a collection... dimensional array is defined as an
"array of array". In java the element
Write a byte into byte buffer at given index.
; index.
ByteBuffer API:
The java.nio.ByteBuffer class extends java.nio.Buffer class. It
provides the following methods:
Return type
Method... Write a byte into byte buffer at given index.
In this tutorial, we
java Class - Java Beginners
java Class Can anyone please explain what this code means?
import java.util.TreeMap;
import java.util.Iterator;
//Please tell me what this class declaration means????????????
public class ST, Val> implements Iterable
String Array - Java Beginners
again,,, and I'll come back to you , if I had other problem regarding JAVA
java - Java Beginners
.
http...://
Difference between Array
abstact class - Java Beginners
abstact class write a java program to implement an animal abstact class
Write a int value into int buffer at given index.
;index.
IntBuffer API:
The java.nio.IntBuffer class extends java.nio.Buffer class. It
provides the following methods:
Return type...Write a int value into int buffer at given index.
In this tutorial, we
Write a float value into float buffer at given index.
;given index.
FloatBuffer API:
The java.nio.FloatBuffer class extends...;}
}
Output
C:\>java PutIndexValue
Store value at index : 3...Write a float value into float buffer at given index.
In this tutorial
Array - Java Beginners
array - Java Beginners
with out class - Java Beginners
with out class can we write a program with out a class in core java?if yes,give example
Array declaration in Java
Array in Java can be used to store similar kind of data or element... an array,
we must know it's index value.
We can give any name...];
The index of the first element is always 0.
Array can be classified into two
|
http://www.roseindia.net/tutorialhelp/comment/70792
|
CC-MAIN-2014-52
|
refinedweb
| 2,809
| 58.28
|
JW is a badass mofo! Especially his FLIES mod.
Downloading
Nice script. Excellent watching some armour pinning us down getting wiped out
Had prob on dedi server though whilst using =BTC= revive, didnt work after player was revived, next time player had to avoid getting killed lol.. Anyways just thought I'd mention that I think it may a conflict with the revive script but not sure. Apart from that a really nice script to use in game and very simple to use and effective
Total comments : 2, displayed on page: 2
execVM "JWC_CASFS\initCAS.sqf";
#include "JWC_CASFS\casDefine.hpp"
#include "JWC_CASFS\casMenu.hpp"
if !(vehicleVarName player in JWC_CASarray) exitWith {};!
|
http://www.armaholic.com/page.php?id=23139
|
CC-MAIN-2017-26
|
refinedweb
| 110
| 59.9
|
Investors eyeing a purchase of Krispy Kreme Doughnuts Inc (Symbol: KKD) stock, but tentative about paying the going market price of $20.04/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular, is the January 2016 put at the $18 strike, which has a bid at the time of this writing of $1.45. Collecting that bid as the premium represents a 8.1% return against the $18 commitment, or a 10.2% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Selling a put does not give an investor access to KKD's upside potential the way owning shares would, because the put seller only ends up owning shares in the scenario where the contract is exercised. And the person on the other side of the contract would only benefit from exercising at the $18 strike if doing so produced a better outcome than selling at the going market price. ( Do options carry counterparty risk? This and six other common options myths debunked ). So unless Krispy Kreme Doughnuts Inc sees its shares decline 10.3% and the contract is exercised (resulting in a cost basis of $16.55 per share before broker commissions, subtracting the $1.45 from $18), the only upside to the put seller is from collecting that premium for the 10.2% annualized rate of return.
Below is a chart showing the trailing twelve month trading history for Krispy Kreme Doughnuts Inc, and highlighting in green where the $18 strike is located relative to that history:
The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the January 2016 put at the $18 strike for the 10.2% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Krispy Kreme Doughnuts Inc (considering the last 252 trading day closing values as well as today's price of $20.04) to be 34%. For other put options contract ideas at the various different available expirations, visit the KKD Stock Options page of StockOptionsChannel.com.
In mid-afternoon trading on Thursday, the put volume among S&P 500 components was 629,251 contracts, with call volume at 798,021, for a put:call ratio of 0.
|
https://www.nasdaq.com/articles/commit-buy-krispy-kreme-doughnuts-18-earn-102-annualized-using-options-2015-04-02
|
CC-MAIN-2020-50
|
refinedweb
| 394
| 61.87
|
CodeRush 13.1 includes a number of new features to make working with XAML easier.
The Declaration navigation provider is available in XAML code now. You can use it to navigate to:
With CodeRush 13.1 installed, Visual Studio’s XAML Intellisense is smarter and more capable. CodeRush suggestions are integrated with the Intellisense window, and include:
The following XAML specific code cleanup rules are new for 13.1: Remove All Comments – removes all XAML comments. Remove Default Values – removes control attributes initialized to default values.
This CodeProvider declares a new XAML namespace reference for the active qualifier.
If the type resolves to multiple locations, a submenu will appear allowing you to select the namespace to declare.
This CodeProvider declares multiple namespace references for every qualified control that can resolve to a single declaration. This refactoring can be useful after pasting XAML fragments from another source.
CodeRush 13.1 includes seven new Grid CodeProviders, which makes it much easier to work with controls inside XAML Grids.
These CodeProviders insert the specified number of columns or rows at the specified location, shifting control position as needed. This CodeProvider can save a huge amount of time if you need to add a column or row to an existing grid. In the example below, we effortlessly add two rows to an already complex grid, shifting 50 control positions down automatically.
This provider allows you to visually position a control inside the parent grid without reaching for the mouse or risking unintended changes (such as span and margins being set by the Visual Studio designer due to less-than-precise mouse operations).
These CodeProviders remove the specified number of columns or rows at the specified location, shifting control position and span as needed. Controls contained entirely inside the deleted range will be removed unless the “(keep controls)” variation of this provider is applied.
All grid manipulation operations are intelligently performed so undo is simple:
Note: the first 13.1 release omitted the delete column/row providers. The subsequent minor update (and daily builds) include this feature.
Now setting common numeric properties is fast and easy. Here are your shortcuts:
Shortcut
Property
h
Height
m
Margin
p
Padding
w
Width
Just follow these with a number (1-4 digits) and press the template expansion key. For example, the h149 template, when expanded inside a control’s tag, produces the following:
Height="149"
Height="149"
CodeRush 13.1 includes a new set of dynamic code templates for creating common XAML layouts. New shortcuts for common controls:
Control
b
Button
bd
Border
cb
ComboBox
l
Label
lb
ListBox
rb
RadioButton
sl
Slider
sp
StackPanel
tb
TextBlock
tbx
TextBox
tc
TabControl
ti
TabItem
vb
ViewBox
The shortcuts above work anywhere a control is valid inside XAML. These templates will expand with a unique name. For example, pressing “b” followed by the Space or Tab key will produce the following:
I should emphasize the name is unique, which is useful when you’re quickly prototyping or presenting to other developers.
If you want to omit the name from the expansion, just follow the template with a comma (the comma is used throughout the CodeRush template library to produce shorter versions of many templates). So if you want a nameless border, just expand the “bd,” template.
If you’re expanding one of these controls inside a grid, you can optionally specify the location using one of these modifiers:
r{RowNumber}c{ColumnNumber} c{ColumnNumber}r{RowNumber} r{RowNumber} c{ColumnNumber}
r{RowNumber}c{ColumnNumber}
c{ColumnNumber}r{RowNumber}
r{RowNumber}
c{ColumnNumber}
So to place a TextBlock inside row 1, column 2 of the parent grid, you can expand this template:
tbr1c2
tbr1c2
This template will do the same thing:
tbc2r1
tbc2r1
To omit the name, just follow the template with the comma.
Modifying templates with grid position like we’ve seen above is optional. And as you might expect, with CodeRush, there is more than one way to get the job done…
The row and column modifiers seen above also work as stand-alone templates inside the main/start tag of a control. That means you can use any of these templates to set a control’s position within a grid:
r{RowNumber}c{ColumnNumber} c{ColumnNumber}r{RowNumber} r{RowNumber} c{ColumnNumber}
r{RowNumber}
Also, don’t forget the Grid | Position Control provider shown above if you prefer a more visual approach.
Need a grid that’s 3x3? Use the g3x3 template.
Need a grid that’s 2x5? You guessed it – use the g2x5 template. Creating grids is easy. Just use this format:
g{ColumnCount}x{RowCount}
g{ColumnCount}x{RowCount}
This animation shows how quickly you can set up a grid with the templates:
Note: the initial 13.1 release of this template limits row and column counts to 9x9. Subsequent releases will increase this to 20x20.
What’s that you say? You’re more of a visual person? Typing g2x5 is too much work for you? It gets easier. Here’s your new template:
g
That’s right kids. You need a Grid of any size? Just hit the letter g followed by the template expansion key (typically Space or Tab, depending on your settings). Here’s what you’ll get:
Exciting, isn’t it? On Monday I’ll show the cool new features we’ve added to the Debug Visualizer.
Nice. Great job. Loving all the new XAML features.
Great, I can use all the help I can get from CR when editing XAML. I like the little pop-up interface for choosing where to insert the row or column. I have used another tool in the past, but it popped a modal dialog, so this is better.
Don't know if this is intentional, but I seem to have the "double Nav-link" problem (for lack of a better name). After adding a row, for example, I type something for the height and press Enter. It accepts the value and moves the cursor to the end of the line. If I then move the cursor back to the height value I typed in, I see there is a still a nav-link, and I have to press Enter again to clear it.
The control expansion templates work, not when I add the row and columns. The numeric properties and gnxn don't work for me, either. g works, in that it creates a grid, but only expands to a <Grid>. Any ideas?
Hi James,
Apologies for the delay in responding. The double nav link issue will be corrected to match your expected behavior in a future release. Also, the 13.1.5 install did not include the latest XAML templates. The next daily build and future releases will have them. If you still have problems with the templates after installing a daily build (or if you want to get the latest XAML templates without installing a daily build), contact support@devexpress.
|
https://community.devexpress.com/blogs/markmiller/archive/2013/06/07/new-xaml-features-in-coderush-13-1-for-visual-studio.aspx
|
CC-MAIN-2017-26
|
refinedweb
| 1,157
| 63.7
|
Detect board type or availble services?
- misterlisty last edited by misterlisty
Is it possible to detect what radios are avaible on the board? I have some code that i wish to auto-enable/disable code based on whether the board has sigfox,lora,lte- me? Is their a inbuilt variable that will indicate this"?
File "boot.py", line 15, in <module>
ImportError: cannot import name LoRa from a sipyboard, how do safely handle this?
- robert-hh Global Moderator last edited by
@misterlisty Micropython uses the prefix u for its modules to indicate, that it may be a subset of what you have in CPython. For compatibility, the names without u are also provided. So, the genric name in MicroiPyhton is uos.uname, while os.uname is also possible. Likewise, you can write
import uos
or
import os
There are some deviations from that simple rule. Like in CPython, you can use _io, whereas in Micropython it's called uio (not u_io), and _io is not known. Also, why some modules have a non-u alias and others not is unclear. You could change that easily in mpconfigport.h of the respective port.
- misterlisty last edited by
@robert-hh Why do you use uos.uname() when i have to use os.uname(), even the doco refers to a presffix of u for some reason?
- robert-hh Global Moderator last edited by robert-hh
@misterlisty Python is an interpreting language, and Import statements get effective only when the statement is executed, unlike #includes of C. Simple test on a WiPy:
import uos if uos.uname().sysname == "LoPy": from network import LoRa else: print ("not LoPy")
- misterlisty last edited by
thanks but can i create conditional imports?
if bLora:
from network import LoRa #shuold this work?
- robert-hh Global Moderator last edited by
@misterlisty You can use the output of uos.uname(), which returns a tuple with machine details. On a LoPy, uos.uname().sysname has the value "LoPy".
|
https://forum.pycom.io/topic/2252/detect-board-type-or-availble-services/
|
CC-MAIN-2019-35
|
refinedweb
| 326
| 68.06
|
Attack of the PHP clones: Drupal, HHVM, and Vagrant
For those wanting to give it a spin, Metal Toad has added HHVM support to our Vagrant box: github.com/metaltoad/trevor.!
For those wanting to give it a spin, Metal Toad has added HHVM support to our Vagrant box: github.com/metaltoad/trevor.?
The following is a rapid installation of PHP 5.5 on OS X 10.8. This compiles 5.5 from source, including two required libraries and finding the appropriate configure command. If you are comfortable at the command line, and especially if you are comfortable compiling your own binaries, then this should take no more than 30 minutes, with the majority being the actual PHP compilation.
Let's jump right in. Here's an overview of the steps:
Let's take a minute to step back and think about why we use namespaces, and how to use them to improve code quality. I suspect there's a lingering hesitance to embrace their usefulness..?
So what do you need to know?
|
https://www.metaltoad.com/taxonomy/term/114
|
CC-MAIN-2020-16
|
refinedweb
| 173
| 65.22
|
At last some good news. Streaming subset of XInclude I was talking about gets blessing from the W3C XML Core WG. Here is what Jonathan Marsh (MSFT, editor of XInclude) writes:
It appears to be impossible to improve streamability without removing
functionality from XInclude. The WG decided instead to bless a kind of
"streamable subset" by adding text along these lines:
_______
The abscense parse="text", or it may be unable to
access another part of the document using parse="xml" and an xpointer
because of streamability concerns. An implementation may choose to treat
any or all absences of a value for the href attribute as resource
errors. Implementors should document the conditions under which such
resource errors occur.
_______
New version of XInclude spec is going to be published soon. As they are slightly changing syntax again (removing accept-charset attribute), I think it will be Working Draft again.
Well, I know I stink on graphics. Yesterday I tried to develop a logo for the XInclude.NET project and here is what I ended up. The idea was about Lego and intergration or parts into a round thing, whatever.
I'd like to hear what do you guys think about this logo?
I'm personally not really satisfied with it and I doubt I can make it better, so let's have a logo contest. You send me your logo variants (find my email in top right corner of this page), I put them to some page and after some time we vote for a winner logo.
Prize? Well, XInclude.NET project doesn't have sponsors, so we can't afford anything more valuable than "The logo design by" line in every bit of XInclude.NET documentation and of course a pile of eternal gratitude.
When your hard disk dies Monday morning, that's nice week start. Low type tasks on recovering your data, sources, reinstalling and configuring all the stuff you cannot work without... Refreshing.
Basically I've recovered already. Surprisingly I cannot now install Office 2003, it says "You've got McAffee VirusScan Enterprise installed, Office 2003 Pro cannot be installed on the same machine with that crap." Hmmmm... Anybody seen that? I failed to google any workarounds.
Dummy entry to provide single place for nxslt.exe utility comments.
.
BizTalk Server 2004 will launch on March 2, 2004.
At last!
BizTalk Server 2004 will launch on March 2, 2004.
And to get us to speed up 8 BizTalk 2004 MSDN webcasts are arranged between March 2 and March 5!
Here is the first developer treat: As part of the launch there will be an MSDN BizTalk Server Developer Blitz with no less than eight web casts packed with information from 3/2 to 3/5. These sessions are developer orientated, full of demos and guarranteed to get you up to speed. Get your own mini-Teched on BizTalk Server for the attractive price of $0 and delivered to you in the comfort of your office/home on the same week we launch the product. Don't forget to register now - these sessions will likely full up fast.
Worth to get registered now.
I.
It's definitely love-to-steaming-strikes-back day today. Here is another sample of how streaming XML processing approach fails.
The only XInlcude feature still not implemented in XInlcude.NET project is intra-document references. And basically I have no idea how to implement it in .NET pull environment (as well as Elliotte Rusty Harold has no idea how to implement it in his SAX-based implementation). What's the problem?
Meanwhile I managed to create simple dummy online demo of ForwardXPathNavigator (XPathNavigator implementation over XmlReader) I was talking about. Here it is.
Here is Daniel clarifies things about XSE:.
Looks like Microsoft's patenting its XML investments. Recently we had a hubbub about Office 2003 schemas patenting, then XML scripting. Daniel like many others feel alarm, you too?
Well, I'm not. Patenting software ideas is stupid thing, but that's a matter of unperfect reality we live in. Everything is patented nowadays, right up to the wheel. So if Office XML is gonna be patented I prefer it's being patented by Microsoft. After all they are not interested to close it (aka make it die), instead they made Office schemas Royalty-Free. And one more reason - I'm sure all we don't want to find ourself one day rewriting all Office-based solutions just because of another Eolas scrooge case or even to pay for out-of-blue-license to some other litigious bastards.
That's all sounds reasonable if that's really defensive patenting though, otherwise - be prepared..
Ok, Dare great deal clarified things in his "Combining XPath-based Filtering with Pull-based XML Parsing" post:
Actually Oleg is closer and yet farther from the truth than he realizes. Although
I wrote about a hypothetical ForwardOnlyXPathNavigator in my article entitled Can
One Size Fit All? for XML Journal my planned article which should show up
when the MSDN XML Developer Center launches in a month or so won't be using it. Instead
it will be based on an XPathReader that is very similar to the one used in BizTalk
2004, in fact it was written by the same guy. The XPathReader works similarly
to Daniel Cazzulino's XseReader but uses the XPath subset described in Arpan Desai's Introduction
to Sequential XPath paper instead of adding proprietary extensions to XPath
as Daniel's does..
Daniel writes about performant (and inevitably streaming) XML processing, introducing XSEReader (aka Xml Streaming Events Reader). While he didn't publish the implementation itself yet, but only teasing with samples of its usage, I think I get the idea. Basically I know what he's talking about. I've been playing with such beasts, making all kinds of mistakes and finally I came up with a solution, which I think is good, but I didn't publish it yet. Why? Because I'm tired to publish spoilers :) It's based on "ForwardOnlyXPathNavigator" aka XPathNavigator over XmlReader, Dare is going to write about in MSDN XML Dev Center and I wait till that's published.
It's been Microsoft DevDays 2004 in Israel today. Well, DevDay actually. Here are the impressions I got there:
This interesting trick has been discussed in microsoft.public.dotnet.xml newsgroup recently. When one has a no-namespaced XML document, such as
<?xml version="1.0"?>
<foo>
<bar>Blah</bar>
</foo>
<?xml version="1.0"?>
<foo xmlns="">
<bar>Blah</bar>
</foo>
I.
Dare.
Did you know XslTransform class allows custom XmlResolver to return not only Stream (it's only what default XmlResolver implementation - XmlUrlResolver class supports), but also XPathNavigator! Sounds like undeservedly undocumented feature. What it gives us? Really efficient advanced XML resolving scenarios such as just mentioned recently on asp.net XML forum - getting access to XML fragments from within XSLT. Or looking up for cached in-memory XML documents. Or constructing XML documents on the fly for XSLT, e.g. via accessing SQL Server database from within XSLT stylesheet and processing the result. Well, part of it could be done also with XSLT parameters and extension functions, but XmlResolver is more powerful, flexible and elegant approach.
Here is a sample XmlFragmentResolver, which allows XSLT to get access to external XML fragments (XML fragment aka external general parsed entity is well-formed XML with more than one root elements):
public class XmlFragmentResolver : XmlUrlResolver
{
override public object GetEntity(Uri absoluteUri, string role,
Type ofObjectToReturn)
{
using (FileStream fs = File.OpenRead(absoluteUri.AbsolutePath))
{
XmlTextReader r = new XmlTextReader(fs,
XmlNodeType.Element, null);
XPathDocument doc = new XPathDocument(r);
return doc.CreateNavigator();
}
}
}
xslt.Transform(doc, null, Console.Out, new XmlFragmentResolver());
<xsl:apply-templates
Note, that instead you can load XML fragment and pass it as a parameter, but then you should know statically in advance all XML fragments/documents XSLT would ever require. XmlResolver approach allows XSLT to take over and access external documents or fragments really dynamically, e.g. when a file name cannot be known prior to the transformation.
One of consequences of the revolutionary XML support in Microsoft Office 2003 is a possibility to
unlock information in the Microsoft Office System using XML. Most likely that was deliberate decision to open Office doors for XML technology and I'm sure that's winning strategy.
Talking about transforming WordprocessingML (WordML) to HTML, what's the state of the art nowadays? There are two related activities I'm aware of, both Microsoft rooted. First, it's "WordML to HTML XSL Transformation" XSLT stylesheet available for download at Microsoft Download Center. It's huge while well documented while unsupported beta XSLT stylesheet, which transforms Word 2003 Beta 2 XML documents to HTML. Its final release, which will also support images is expected, but who knows when?Second, Don Box is experimenting with Wordml2XHTML+CSS transformation, mostly for the sake of his blogging workflow. He said his stylesheet is better (less global variables etc.). Apparently Don didn't finish it yet, so the stylesheet isn't available.
So one stylesheet is only for Word 2003 Beta 2 documents, second isn't ready yet, sounds bad, huh? Here is my temporary solution - original "WordML Beta 2 to HTML XSL Transformation" stylesheet fixed by me to support Word 2003 RTM XML documents. As usually with Microsoft stuff, "beta" most likely is 99% RTM version. So I fixed Beta 2 stylesheet a bit and it just works. In fact that's only namespaces that I fixed yet. I'm currently testing the stylesheet with big real documents, so chances are I'll need to modify it further.
Download version 1.0 of the stylesheet here - Word2HTML-1.0.zip. Credits due to Microsoft and personally to whoever developed the stylesheet. Any bug reports or comments are appreciated. Just post comment to this text.
Another idea is to implement support for images. Basically the idea is to decode images and save them as external files in XSLT external function and I don't see how to make it in portable way, so most likely I'll end up soon with two stylesheet versions - for MSXML and .NET. Stay tuned..
Have you noted this thread in microsoft.public.dotnet.xml newsgroup? A guy was trying to get list of unique values from XML document of 46000 records. Using Muenchian grouping method. For MSXML4 it took 20 seconds, while in .NET 1.0 and 1.1 it effectively hung.
Well, as all we know Muenchian method works deadly slowly in .NET unfortunately. MSXML4 optimizes generate-id($node1) = generate-id($node2) expression by making direct comparison of nodes instead of generating and comparing ids. .NET implementation isn't so sophisticated. Emerging .NET 1.1 sp1 is going to make it faster, but what's today's solution?
Enter EXSLT.NET's set:distinct() extension function. Using it the result was:
695 unique keys generated from about 46000 records in less
than 2 seconds.
set:distinct(atl_loads/atl_load/client_key)>
atl_loads/atl_load/client_key[generate-id(.) =
generate-id(key('client_key_lkp',client_key)[1])]
Special kudos to Dimitre Novatchev for optimizing EXSLT.NET set functions.
This page is an archive of entries from February 2004 listed from newest to oldest.
January 2004 is the previous archive.
March 2004 is the next archive.
Find recent content on the main index or look in the archives to find all content.
|
http://www.tkachenko.com/blog/archives/2004/02/
|
CC-MAIN-2014-15
|
refinedweb
| 1,901
| 57.37
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.