Daily Data Science Puzzle
import numpy as np

# popular instagram accounts
# (millions followers)
inst = [232,  # "@instagram"
        133,  # "@selenagomez"
        59,   # "@victoriassecret"
        120,  # "@cristiano"
        111,  # "@beyonce"
        76]   # "@nike"

inst = np.array(inst)
superstars = inst > 100
print(superstars[0])
print(superstars[2])
What is the output of this puzzle?
Numpy is a popular Python library for data science focusing on linear algebra.
The following handy numpy feature will prove useful throughout your career. You can use comparison operators directly on numpy arrays. The result is an equally-sized numpy array with boolean values. Each boolean indicates whether the comparison evaluates to True for the respective value in the original array.
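As a small illustration of this behavior (separate from the puzzle below):

```python
import numpy as np

a = np.array([1, 5, 10])
mask = a > 4  # element-wise comparison against the scalar 4

print(mask)        # a boolean array with one entry per element of a
print(mask.shape)  # same shape as a
```

Each element of the original array is compared independently, so the result can be used directly for filtering or counting.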
The puzzle creates a list of integers. Each integer represents the number of followers of popular Instagram accounts (in millions). First, we convert this list to a numpy array. Then, we determine for each account whether it has more than 100 million followers.
We print the first and the third boolean value of the resulting numpy array. The result is True for @instagram with 232 million followers and False for @victoriassecret with 59 million followers.
Are you a master coder?
Test your skills now!
Related Video
Solution
True
False
Python Button - PLEASE HELP
I'm trying to make a button in Python. I've messed around with tkinter, but I can't get it to work. Can someone send me the code for a button?
Here's code like you asked for.... click the text instead of a button. When clicked, you'll see the click detected printed on the console
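The snippet referenced here isn't preserved in this copy of the thread. As a rough stand-in, here is a plain-tkinter sketch of the same clickable-text idea (the original reply used PySimpleGUI; the widget text and names here are just illustrative):

```python
import tkinter as tk

def on_click(event):
    """Fires when the label text is clicked; prints to the console."""
    print("click detected")
    return "click detected"

def make_window():
    root = tk.Tk()
    # A Label isn't a Button, but binding <Button-1> makes the text clickable.
    label = tk.Label(root, text="Click me", fg="blue", cursor="hand2")
    label.pack(padx=20, pady=20)
    label.bind("<Button-1>", on_click)
    return root

# make_window().mainloop()  # uncomment to actually show the window
```

Binding `<Button-1>` to a Label makes ordinary text act like a button without drawing one.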
You can also run this code in a web page instead of tkinter by changing the import statement at the top to:
import PySimpleGUIWeb as sg
Here's a repl of the code with this change made:
I'm confused... but I might know the solution
You don't have to press the button with the mouse, you can press a specific key, such as "Enter"
@CodeCenter what do you mean by that, do you have any examples or clips that can help me understand what you need?
@CodeCenter Maybe make a blank input? Or like maybe if the input = a then it "pushes" a button?
@TheDrone7 I saw a project on replit a while back, but I can't find it. I want to have text that when clicked, will execute a command. It works like a button, but it is text, not a box
@TheDrone7 Hey, by the way, would you like to help me out with making a website. I'm the official user for the Code Center dicord community, and I want to create a website. I have little html experience, and no CSS experience. If you could help me set something very simple up, that would be very helpful. I can provide all graphics and information. Also, go to my profile and we can talk in discord.
@CodeCenter Hey, I have good experience in HTML. I don't know too much CSS, but that won't be a problem; I will be willing to build your website free of charge. I have a few questions I need answered first tho.
Do you currently have a domain?
How much Experience do you have with HTML?
What kind of website are you looking to build?
@TheDrone7 The version of PySimpleGUI that works with repl.it that is not tkinter based is called PySimpleGUIWeb. It utilizes Remi as the underlying framework
check this out
@a5rocks I'm looking for a text button
@CodeCenter Check out curses. You might be able to figure out how to get the mouse thing working.
Something else is that I remember @mat1 has a library that allows for mouse clicks in terminal. EDIT: Here : (has no docs, just read the source on the homepage) | https://replit.com/talk/ask/Python-Button-PLEASE-HELP/11086 | CC-MAIN-2022-21 | refinedweb | 433 | 81.63 |
I recently deleted all my PCs from the DB and rediscovered them. (We are swapping out 200 PCs and Spiceworks was getting confused since I didn't manually delete the old ones.) But anyway, now that they are rediscovered, SW is not picking up my Symantec Endpoint Protection. It was picking it up before. We also recently changed from Norton Corporate and it was picking up who had that installed and updated. All other software seems to be populating. I do have it selected to scan for AV in the settings, slow scan, incremental off, deltas only off, and I have stopped/restarted, also separated scan IPs into subnets and tried only scanning small groups at a time. Any ideas? I happen to need a report of antivirus installed to compare with what our endpoint server is seeing... Ugh. Thanks, E.
This topic was created during version 5.0.
The latest version is 7.5.00101.
24 Replies
Dec 22, 2010 at 8:27 UTC
have you checked this?
Resolving Spiceworks Unknowns Script to make changes to the PCs with ease:
Dec 22, 2010 at 8:27 UTC
All of the steps of troubleshooting Windows unknowns. It can be found here
Dec 22, 2010 at 8:35 UTC
These are not unknowns, everything else is fine, except AV status.
I appreciate your post, but I can't go mucking around with WMI, firewall, DCOM, UAC on 250 production computers based on a whim.
Dec 22, 2010 at 9:03 UTC
As far as I understand, SW will first check Security Center through a WMI connection for the installed AV and its up to date status. The scan will do a secondary check of programs in the registry.
What is Security Center reporting for one of your clients? Is the AntiVirusProducts namespace correctly reporting your AV info?
Dec 22, 2010 at 10:01 UTC
I just checked several PC info in SW, but it is saying no software (not just AV, but all software) found. I have scanned about 5 times in the last few days. Some software seems to be showing up in my "software" inventory. Not sure what is going on yet. Nothing has changed on the PCs.
Dec 22, 2010 at 10:18 UTC
Which version of SW are you running?
What happens if you run a manual rescan on one of the PCs that is not reporting software items, does it find the software then?
Dec 22, 2010 at 10:45 UTC
It seems to be working when i go to that PC in spiceworks, then click on software, it says no software found, then i click rescan from there on the pc software page, then check and software is populated, including antivirus.
I changed collection of AV settings to false, then restarted, then true, then restarted and then tried individual scan again (it didnt work previously, but seems to be now). I will run a scan on one subnet and check a few of those pc's to see if it is updated and report back.
Thanks-
Dec 22, 2010 at 12:24 UTC
Network scan just finished on my first subnet. I checked several PCs after that, and it is still saying "no software found" and gives a button to rescan. If i click that rescan button, it will populate, but I cannot do that on each individual computer.
I have rechecked my scan settings and dont see anything obvious there as to why it is not picking up software on the normal scan.
Dec 22, 2010 at 2:45 UTC
check if the PC is in more than one scan range
use a separate scan range for them with only Windows authentication
i hope this helps
Dec 22, 2010 at 3:08 UTC
hsc, I'm not following what you are saying.
I have my scan ranges separated into 9 different groups by subnet.
For instance 192.168.1.0-254, 192.168.2.0-25, 192.168.3.0-254.
For the past 8mths, it has worked fine and collected the data properly. Now it is not picking up the software. The PC is only in one scan range.
If I go to PC details in spiceworks there is no software there, if i run the scan from the software (rescan that device only) it picks everything up. When i run a complete scan in that PC's subnet range, it does not pick up the software. If I add a new scan entry with that PC name, and run a network scan with only that PC selected and no other subnets, then it does not get the software info. If I go back to that PC in spiceworks and click on the software tab, it still says no software found, it will populate the info just fine. So, it appears to be a global scan issue that is not happening when i do an individual scan. Does it use a different engine or setting or something when i do an individual scan?
Dec 22, 2010 at 4:22 UTC
As a matter of fact, its not pulling most information from the PC, such as storage, alerts, users, hotfixes, services, memory, etc... It is only gathering basic info about the pc name, manufacturer, type, group, last reboot, os, IP, model, last login. Pretty much everything on the top section after showing complete profile.
If i scan the pc individually while viewing the pc info, it then populates it all.
Am I missing a network scan setting? There's not really any options to change that would cause this, that i can see anyway.
Dec 23, 2010 at 1:13 UTC
is the scan really using only the Windows account for that scan entry?
i think it runs only one scan per PC IP, and if that scan uses SNMP or HTTP, the next scan entry for this IP is excluded
you know what i mean?
no overlapped scan entries?
Dec 23, 2010 at 1:57 UTC
Check in Settings/Network Scan if there are scan exclusions, further if Default Schedule is set up for your needs and if Scheduled Scan is enabled.
Dec 23, 2010 at 8:46 UTC
windows admin account, yes it has local admin access, it is also the same account that is used when i do an individual scan when it works.
Yes, it is set to "scanner sends all data instead of deltas =true
Also incremental scanning is set to disabled, scan speed = slow.
Thanks.
Dec 23, 2010 at 10:57 UTC
There's been a lot of discussion in this thread, so I'm just trying to better understand your original problem description. Looking back at your original post, what changes were making Spiceworks confused and led you to delete out all your existing machines? Do they each have the same names and/or IP address as before? It almost seems from what you've said that there was enough of a change from old machine to new machine to make SW think it was a new machine. Is this what you were seeing?
Also, with anything you've manually rescanned since this change, are they now picking up the software inventory correctly?
Dec 23, 2010 at 12:54 UTC
Ya, it is a bit long and confusing. I discovered that it wasn't only AV, but all software and info that is not being collected.
The change we are making is replacing all PCs with a new image. For instance, I have PC1 that gets turned off and brought back in. The user is issued a newly imaged PC that is now named PC2. The IP address is the same as before, but the name, SN, software, etc. were changed. I noticed that Spiceworks was very confused about this, so I deleted all PCs and then told Spiceworks to go out and rediscover them. Now, when a PC is swapped out, I go into SW and manually delete it, and then it auto discovers the new one... Some of the machines are the same PC and same name, but usually not. That's why I figured I'd just delete them all from the database. That worked great; all new ones were discovered, along with the old ones that had not been swapped out yet.
The problem is, it is not picking up anything other than basic pc info such as name, ip, sn, bios, memory. It does not pick up software, disk space, memory slots, and stuff like that. If I manually scan a PC (while looking at that one PC in spiceworks and clicking "rescan this pc only" everything works great. When I use the scan entries in the network scan settings page, it does not collect the data. Even if i only add one specific PC to the scan range, then run the scan, it does not work. If i go to that same PC in spiceworks and rescan it, it works. Doesnt make sense.
If I go to each PC in spiceworks and say rescan, it works.
Dec 23, 2010 at 2:53 UTC
Thanks for all of your help and time to reply to this. I'm just going to reinstall it and rediscover over the holidays. Its faster and easier than trying to dink around and try to figure out why.
Dec 23, 2010 at 3:56 UTC
Uninstall/Reinstall of Spiceworks didn't make a difference either. Same symptoms.
I set up one subnet with about 20 pc's in it, and it collects everything except for the software installed. When i go to one of the PCs info in SW, then click the software tab, it says "No Software Found" Spiceworks has not found any software on this device yet. RESCAN NOW. So i press that button, and voila- it populates all of the software on the PC.
This is just plain dumb. How do I get ahold of a spiceworks person for help?
Dec 24, 2010 at 7:44 UTC
Which version of SW are you running? I asked that earlier to see if you were on an earlier version of 5.0, in case it was the same version I had a few oddities with software data collection. In my case an manual rescan corrected the problem and those few computers were fine with the scheduled scan after that.
Dec 27, 2010 at 2:33 UTC
It doesn't happen often, but sometimes it does. Try the following:
- remove (all) your scan range(s)
- remove all your "Scan Accounts" (there is one you can't delete)
- restart SW
- create all needed scan accounts (Adding Devices and Changing Login Accounts)
- create your scan range(s) and assign the newly created scan accounts (don't use the scan account you couldn't delete)
- wait for the next full network scan to finish
Dec 27, 2010 at 9:58 UTC
I havent been able to check this, it has been scanning all weekend. If it still has not picked anything up, I will try deleting the ranges again and accounts. But I did end up uninstalling and reinstalling the latest version 5.0.2 or something. And when i checked after scanning one range, it still did not pick up the info, but it did when i went to individual pc's page to scan manually. I will check on it tomorrow and reply back with status. I also need to look up which logs may be helpful, I did not see anything useful in any of the ones linked to from the network settings page.
Dec 28, 2010 at 9:19 UTC
Still not collecting unless i scan individual pc's. I removed all scan entries and scan accounts and readded them and scanned one subnet, no dice.
Going to try to use my NT login for authentication instead of the local pc admin account like i have been all along.
This discussion has been inactive for over a year.
You may get a better answer to your question by starting a new discussion. | https://community.spiceworks.com/topic/122704-not-picking-up-av-installs-updates | CC-MAIN-2017-34 | refinedweb | 2,002 | 78.59 |
Opened 12 years ago
Closed 11 years ago
#6075 closed (fixed)
max_num, etc. for inline models in newforms-admin
Description
The max_num feature is missing in the newforms-admin branch. When you add max_num to your Child_Inline, it doesn't do anything.
from django.contrib import admin

class Child_Inline(admin.TabularInline):
    model = Child
    extra = 3
    max_num = 5
I looked at the code in django/contrib/admin/options.py and it seems that there is no max_num feature there.
I added a patch with the missing functionality. I'm not sure if it's the best way to do it, especially the line with apply in it.
Attachments (2)
Change History (7)
Changed 12 years ago by
comment:1 Changed 12 years ago by
comment:2 Changed 11 years ago by
Changed 11 years ago by
Updated patch for django version 7363 (should work for versions from 7270)
comment:3 Changed 11 years ago by
comment:4 Changed 11 years ago by
Seems like whatever is done to add this feature to the admin should probably be abstracted to the InlineFieldset portion of newforms, rather than living in contrib.admin, as it would be just as useful there.
It doesn't look like any of the (max_num_in_admin, min_num_in_admin, num_extra_on_change, num_in_admin) described in the old admin doc (here:) have been implemented in newforms-admin. Rather than adding them one at a time it would make more sense to have one ticket cover them all. Marking this blocking on merge since it's a loss of capability from old admin. When implemented doc will be needed to describe any changes in the names of these things, and tests would be good too. | https://code.djangoproject.com/ticket/6075 | CC-MAIN-2019-26 | refinedweb | 279 | 69.62 |
Embedding Caddy Web Server in Go
Web servers help us to manage and scale our web apps in production. They have been the mysterious part of the stack to some web developers. It’s because configuring a web server is fairly time consuming for first timers as they need to know how reverse-proxies work and also have to deal with quirky config files.
But thanks to Caddy. It removes complexity surrounding the webservers. Developers who have used Caddy know the drill. Download a binary file, write few lines of config, start the binary by pointing it to the config file. Now you will have a secure web server running as caddy takes care of SSL by itself.
We know that CaddyServer is simple. But do you know that it's extensible too? Yes, you can import caddy as a library and embed the web server inside your Go code.
import "github.com/mholt/caddy"
When the web server is embedded into the code, we get more control over it. Adding a battle tested web server into your code allows you to use your code as reverse-proxy with graceful restarts or even for load balancing.
Today you will learn how to use caddy server as a library in Go code. We will build a simple web app that runs on port 8000 and route all the requests hitting port 80 to it using caddy server as a library.
A basic Caddyfile looks like this when you reverse-proxy a web service running on port 8000 to port 80:
:80
proxy / localhost:8000
Now let’s start building our components one by one.
Simple HTTP server
We will write a dead simple HTTP server using the mux router that exposes an endpoint /ping.
The code is fairly simple here: we use gorilla/mux as the router and send a plain text response to the /ping endpoint, starting the HTTP server on port 8000.
Visiting the URL localhost:8000/ping in the browser after running the above code should show that response.
Now we have got our HTTP server up and running. Let us first proxy it through port 80 using the standalone caddy server manually. The Caddyfile will look like this:
:80
proxy / localhost:8000
Start the Caddy server by pointing it to the Caddyfile.
./caddy --conf Caddyfile
If both the HTTP server and this caddy server are running, opening localhost:80/ping in the browser will route requests to localhost:8000/ping and give you the text response.
Embedding a basic caddy server
We will replace the standalone caddy server in the above step with an embedded server. In order to do that, first we need to install caddy as a library by doing go get github.com/mholt/caddy.
Here’s the code to embed a basic version of caddy server.
Inside main() we first set the AppName and AppVersion for our caddy instance.
Then we simply load a default config with server type http by doing caddy.LoadCaddyfile("http") and start the caddy server. This does not load any config file from disk; instead it simply starts a server on port 2015.
⚠️ We are setting up the server type as http, so we have to make sure the server type package _ "github.com/mholt/caddy/caddyhttp" is imported, else the code will panic.
Running the above code spits this output.
Activating privacy features... done.
2018/12/08 14:20:21
Caddy says the server has been started on port 2015. If you go to localhost:2015 you should get a 404 error from the caddy server, as there is no upstream connected to that port.
Now we have confirmed that the caddy server is running as a library. It's time to make use of it by reverse-proxying an upstream server, the simple HTTP server we initially built.
Reading caddy config file
We can make the previous snippet read the config from the Caddyfile by adding a couple of functions to it: a loadConfig() function to read the contents of the caddy config file, and init() to tell caddy which function to use in order to load the config file.
The loadConfig() function assembles the CaddyfileInput struct by accepting the contents of the Caddyfile, its Filepath and the serverType. The ServerTypeName and Contents are the important fields of the struct, whereas Filepath is just for logging or reference purposes.
Inside init() you tell the caddy server to use the function loadConfig() to prepare the CaddyfileInput. When you run this snippet, it proxies port 80 to port 8000. But before doing that we need to start our upstream HTTP server by running the snippet simpleHTTPServer.go.
Now all your requests to localhost:80 are proxied to the upstream server running on localhost:8000. Hitting the /ping endpoint returns the response served from the upstream server.
Playing around with the code
The snippets have been wrapped into their own packages in this repository. Clone the repository and cd to the embed_caddy folder. The code structure there should look like this:
embed_caddy
|
|_ main.go
|
|_ glide.lock
|
|_ glide.yaml
|
|_ simpleserver/
| |
| |_ httpserver.go
|
|_ webserver/
|
|_ server.go
I have used Glide as a package manager here. So, install glide by doing brew install glide. If you're not on macOS, do go get github.com/Masterminds/glide.
Install the dependencies by running glide install from inside the embed_caddy folder. Now you are ready to build and run the code.
Does your application need to know details of the user's browser, OS and device type? We can use PHP's misc. functions to achieve that, provided you have an updated copy of browscap.ini.
There are a few settings and plugins that can enhance the development experience with Yii in PHPStorm or IntelliJ IDEA. This article explains how to get the most out of your IDE.
It is convenient to use the same identification attribute, say info, in all of the active records of your application. It should be a virtual read-only attribute defined by a getter method, its label being the model.
I wanted to customize the CJuiAutoComplete, so that it displays a thumb image before the label like the one shown in the following image:
this is my way for embed js code block in view file:
Bootstrap tabs gets unselected/inactive when user navigates to other page and comes back. How to make bootstrap tabs remain active/selected after navigating to different web pages.
I'm using PayPal's script from
It can happen that you work in development environment and you make changes to database tables structures, adding tables, or changing fields.
We are running one frontend running NGINX and several app servers running Apache2. There are several issues we have come across but right now I'll be documenting one of them. I'll be completing this article when I get more time.
You have a 'Category' model with Id, Name and Visibility (boolean, where 0 = Public, and 1 = Private).
namespace app\components;
common\components\LanguageSelector.php
<?php namespace common\components;
In this article I will show you how to slightly increase application security, by exploiting the fact that Yii implements the Front Controller Pattern. | https://www.yiiframework.com/wiki?sort=views | CC-MAIN-2018-26 | refinedweb | 292 | 53.92 |
Subject: Re: [boost] C++03 / C++11 compatibility question for compiled libraries
From: Peter Dimov (lists_at_[hidden])
Date: 2018-02-08 17:21:24
Edward Diener wrote:
> As discussed in the doc to the CXX Dual library you do not have to choose
> to allow all possible variations, depending on how many different CXXD
> mods you decide to use. You could decide that if C++11 on up is being used
> your name is some 'xxx_std' while if C++11 on up is not being used your
> name is 'xxx', and therefore you have two naming variants.
You could do that, and it simplifies things considerably (although it
doesn't allow you to link 03 with 11.)
But if you do that, what's the point of having separate CXXD macros per
component then? You only need one, and the whole CXXD library collapses to a
single `namespace cxxd = std|boost` directive.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2018/02/241112.php | CC-MAIN-2021-25 | refinedweb | 174 | 70.84 |
OLPC:Style guide
From OLPC
This is a work in progress -- please add your comments and suggestions to this page.
Welcome to the OLPC Wiki style guide. This is here to keep people from creating pages with strange wikisemantics and titles and categorizations that don't naturally help other wiki users find them. Or lead to edit wars that continue across centuries.
[edit] Languages
Proposed: Pages in different languages should be on their own wikis -- one wiki per language. Languages that do not yet have a few introductory pages in their language (including the Main Page, OLPC:About and this style guide page) may have individual pages translated here on this multilingual site.
- Counterargument: I disagree if you mean completely different wiki's. Multiple languages can co-exist here at OLPC with proper translation and auto translation to fill in the gaps.Seth 21:55, 18 August 2008 (UTC)
[edit] Campaigns and slogans
- One Laptop per Child : lowercase 'p'.
- Give One, Get One / Give 1, Get 1 / give 1, get 1 : the latter predominated in 2007, moving to the former in 2008.
- give a laptop. change the world. and longer version give a laptop. get a laptop. change the world. for Simply Give and G1G1.
- Change the World : the Give 100 and Give 1000 programs in 2008. Two caps, lowercase 'the'
[edit] Project pages
[edit] Projects
see Activities and Activities/tmp
This is a description of sections, format, and style that are helpful in creating a project page that others can use and contribute to.
Collections and library/reading activities
- ...
Software and interactive activities
- ...
Data and other basic collections
- Sound samples, sets of clipart
- Software libraries, to be used by other software collections
- Icon collections to be used to customize one's interface
[edit] Organizations
Pages about organizations introduce the group, and should describe its engagement in education, connected digital networks, and access to knowledge, in all relevant languages.
- ...
Community groups and local chapters (see also #Regions below)
- ...
Volunteers and community members
- ...
[edit] Regions
Articles about regional groups should describe the region and the groups and organizations in them, and the extent of deployments, educational efforts, and user groups in that region. They should link to specific pages for each regional chapter or user group. The general page about OLPC efforts in South Carolina, for instance, should be South Carolina, not OLPC South Carolina (which may exist if an organized group using that name exists; in the case of Illinois for instance that group would be ILXO rather than OLPC Illinois).
[edit] Article naming
[edit] Starting a new article
The article you're about to write may already exist among the 20,000+ pages here: use the search box and Google search, and browse Special:Categories to find it.
- Always check for alternative capitalizations — if you're starting Content Projects, check for Content projects first.
[edit] Titling a new article
Use a title that others will find and use.
- Avoid capitalizing page titles unless a necessary proper noun. Use Content projects, not Content Projects.
- Similarly, avoid running words together in CamelCase unless you are referring to a proper noun such as MediaWiki. This makes it easier to link words used normally in a sentence.
- The name of an article should [be able to] appear in the first sentence describing its subject. Avoid article names that add extra features to the title, such as "Feature Browsing toolkit" -- call this Browsing toolkit and find another way to indicate it is a feature.
- See also lead sentence style... you should mention the article name as early in the first sentence as possible, and mark it in bold. (Trick: If you make a wiki link to the current article name, it appears in bold — in this article [[OLPC:Style guide]] appears as OLPC:Style guide.)
[edit] Redirects
On a similar note to Article naming, it's a good idea to make redirects to alternative capitalizations and common typos of page names. For instance, if you're starting Content projects, make redirects to Content Projects, Contentprojects, etc. (But there's no need for content projects; due to MediaWiki's automatic capitalization of the first letter, this goes to the same place as Content projects.)
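A redirect page contains a single line of wikitext; for example, the page Content Projects would contain only:

```
#REDIRECT [[Content projects]]
```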
Similarly, if you find yourself looking for a wikipage and assuming it's under another name which turns out to be a nonexistent page, redirect that page to the correct one when you find it. For instance, if you're looking for what eventually turns out to be Content projects and find, while looking along the way, that Projects for content is empty, redirect that page to Content projects.
[edit] Subpages
In general, subpages are to be avoided -- better to use a descriptive name which can be read out without a 'slash' somewhere in it. However, there are exceptions:
- Archives of a page/talk page
- Dated versions of the same page ... though these should usually be without subpages as well, unless they are dated versions of large subsites, which only link to one another. For instance, if we were to make a "2006" version of Localization/, that might go under Localization/2006/... since there are many other pages that go along with it.
- Subpages of user pages for half-finished notes, scratchwork, testing, and semi-private projects - although if it's at the point where other people can begin to understand and contribute to it, move it out of your user space into the main wiki area.
Subpages are often used when a set of heirarchical information is moved to a wiki. In these cases, the articles about that information should be named according to the nouns they describe; and the titles of the pages (see #Titles) should be mentioned as early in the first sentence of the page as possible. If you can't include the title of an article as written in a sentence, you may want to rename it.
Exceptions to this are usually made for essays and other notes made within user: and project: namespaces.
[edit] Preparing for the future
If you are starting a specific type of page and know that you will need to make general versions later, that's fine -- use the most generic name that is not already taken. likewise, if you are starting the "August 2007" version of a page, leave it at the general name until you have more than one instance, at which point you can start to move to archival page titles. This just helps ensure that any specific page-name is part of an ecology that includes more generic page names and overviews of the abstract topic at hand.
[edit] Links, classification, and structure
[edit] Links to other articles
Links can be listed in full, in which case you should replace any underscores in the link with a space. For instance, Style guide should not have an underscore — even though Style_guide links to the same page, the underscore is ugly.
MediaWiki links are case sensitive. However, MediaWiki will find a linked article regardless of the capitalization of its first letter. This feature lets you include page links in sentence flow, e.g. Join friends in testing to try out recent builds.
[edit] Headers
- do NOT use H1 headers within a page (i.e., don't do =Header with a single equals on each side=). Start your hierarchy with ==H2 level header== within the body of the wiki article. H1 headers should be reserved for the page's title (like this page's OLPC:Style guide above); any further H1's should indicate the inclusion of an entire other page. Imagine what happens when you transclude one page at the end of another: this should make sense within a Table of Contents if you add an H1 header before it.
- although this is a valid point, other wikis have solved it by adjusting header levels when transcluding. E.g. if a page is including at an H4 level, then any H1 heading in that page is mapped to H4, and so on.
- As with article titles, only capitalize the first letter of the first word unless using proper nouns, for example ==Upcoming community events, not ==Upcoming Community Events.
Categorization
Every page should belong to a category (multiple categorization is possible) so when creating or editing pages, try to find a good one.
You should never create a new category until you have located at least two pages to put in that category. Edit the new category's page to explain briefly its intent and any templates that categorize in it. When you create a new category, add it to existing categories so the category itself is discoverable.
Category names are usually plural — things in the category are activities, countries, messaging ideas, etc. And category titles should follow #Titling a new article conventions, thus "Category:Spanish deployments", not "Category:Spanish Deployment".
Writing
Write for translation
See also Guidelines for writing for eventual translation for a more detailed description of guidelines.
Keep sentences simple:
- Avoid idioms and extended metaphors. Where metaphors are particularly useful, try to use one that is universal and has no double-meanings.
- Avoid subclauses or long sentences.
- Avoid long strings of adjectives or nouns.
- Convert sentences in unusual tenses into ones in simple tenses with extra clarifying notes.
Write for posterity: avoid "currently", and date "will"
Wiki pages live forever. Every time you write, for example:
- The content translation program, for which there is currently no organization...
- These software packages currently depend on Orbit...
- TamTam will feature 2 different tools...
you mislead future readers, and you make it nearly impossible for editors to tell if a page is out of date.
- Unless the page's title is tied to a particular date, like "Meeting #3 notes" or "Curriculum Jam Fall 2007", don't use "currently" or "will" or any other phrase tied to a point in time, unless you date your statement with As of August 2008, ...
Keep words clear and unique
- Avoid words with ambiguous meanings.
- Long words with clear roots can be better than short words that are obscure or have many connotations
Specific words
- "USB flash drive"
- "USB key" is confused with cryptographic concepts such as developer key, "stick" and "thumb" are non-standard.
- first "Browse Activity", then just "Browse"
- Activities aren't commands and are more than programs. So the first time a page or chapter mentions an activity, say "the Read Activity". Thereafter you can simply say "Read".
Obsolete and deprecated information
This wiki is stuffed with information from 2007 before there was final hardware, an official release, and any consensus on what/when/where/how OLPC would do things. This mass of outdated information makes the wiki harder to maintain, search, and understand.
If you think a page might be obsolete, mark it so at the top with something like
If you're sure a page is obsolete,
{{obsolete|link=[[better page]]}}
- consider removing most of its [[Category:Foos]] annotations so obsolete pages don't show up in categories.
- click the obsolete page's [What links here] in the navigation and replace links to it with better page
Once nothing links to the page, mark it with the {{delete}} template
{{delete|2007 info no longer applies, nothing important links here, replaced by [[better page]]}}
Pre 8.2 information
As of December 2008, Release 8.2.0 is the latest stable release. However, many G1G1 2007 recipients have not upgraded, and some G1G1 2008 recipients are receiving laptops with Release 8.1.0 installed. We encourage users to upgrade to the latest release; {{Consider upgrading}} is a template that makes this recommendation.
So pages should start with the 8.2.0 information, then mention earlier behavior:
- In Release 8.2.0, xyz works like this
- In releases prior to 8.2.0, xyz worked like this
{{Consider upgrading}}
You should eliminate info that only applies to releases prior to [[Release notes/7.1.0|Release 7.1.0]] ("ship.2", build 653).
Using advanced features
Templates
Templates are special pages intended to be invoked from other pages, producing some result based on parameters provided or extracted from the including page. A template that just transcludes static text is not really a template and should be treated as a normal page. There's the Template:Sandbox to try new ideas on templates.
Always add a <noinclude> purpose... Usage ...</noinclude> block explaining the purpose of the template.
Templates should be tagged with the Category:Template or one of its subcategories within the <noinclude> block
Page transclusion
Composing an article from other pages is a very powerful idea, but one that can quickly get out of hand as interdependencies develop (not to mention the complication for future editors of actually finding the right page to edit). Still, it's a powerful (and sometimes complicated) way to reuse content in several places.
If a page is intended for transclusion in other pages, you should probably put any categorization or semantic annotation in it within a <noinclude>...</noinclude> section; otherwise every page that transcludes it will get categorized and annotated.
Semantic annotations
This wiki has the Semantic MediaWiki and Semantic Forms extensions. These let you annotate information in wiki pages so you can browse it and other pages can query it. See Semantic MediaWiki for the pages using it, and cautions.
Visual design of pages
Some of the principles in the HIG apply across the board. For instance, walter is unhappy with the insertion of navigational elements (or is it just visual attractions?) in the middle of a page; see template talk:support-nav for a recent discussion.
Article and visual flow
Use of color
External links
- From useit.com, ca. 2002 | http://wiki.laptop.org/go/Style_guide | crawl-002 | refinedweb | 2,288 | 53.1 |
CodePlex Project Hosting for Open Source Software
I have a theme.chirp.less file that I use as a template for more specific themes. The theme file is never used by itself, and contains references to variables that don't exist in itself. The variables are defined in specific theme files which
import the main theme file. The result is that I can add a new theme to the site by simply copying/pasting a theme specific file and changing the variable definitions.
The only problem with this approach is that chirpy tries to compile the main theme setting/template file and complains that variables are undefined (which, in that file, they are).
Is there a way to tell that one file not to generate its own CSS file, but have it still be imported by the others?
Introduction to Angr
Denis Nuțiu
I always wanted to play around with a binary analysis framework, but most of the time I was turned off by how difficult it was to install and use. Just recently I thought I'd give angr a try, and now I want to share my experience with you! I will present two scripts that solve two challenges; if you wish to dig deeper and learn angr, you should visit its official documentation.
angr is a python framework for analyzing binaries. It combines both static and dynamic symbolic ("concolic") analysis, making it applicable to a variety of tasks.
For me the easiest way to install angr and get it working on the first try was to download Kali Linux, install it in VirtualBox (make sure you have at least 12 GB of space for the disk) and execute:
pip install angr.
From here you can setup your Python dev environment in Kali as you please.
For the first challenge we have the following source code:
//written by bla
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int count = atoi(argv[1]);
    int buf[10];

    if(count >= 10 )
        return 1;

    //printf("%lx\n", (size_t)(count * sizeof(int)));
    memcpy(buf, argv[2], count * sizeof(int));

    if(count == 0x574f4c46) {
        printf("WIN!\n");
        //execl("/bin/sh", "sh" ,NULL);
    }
    else
        printf("Not today son\n");

    return 0;
}
Challenge source: level-7
The goal is to find two arguments to give to the program in order to overflow buf into count and display WIN. We can attempt to solve this with trial and error, debugging, or some computation, or we can make angr solve it for us with the following Python script.
import angr
import claripy


def resolve_win(state):
    # if the bytes of "WIN" are found in stdout it returns true
    return b"WIN" in state.posix.dumps(1)


if __name__ == '__main__':
    print("starting.")
    # Declare project, load the binary
    proj = angr.Project('./lab-13/0-tutorial/level07')

    # Create two symbolic bitvectors for the program arguments
    arg1 = claripy.BVS('sym_arg', 8 * 11)  # maximum 11 * 8 bits
    arg2 = claripy.BVS('sym_arg', 8 * 44)  # maximum 44 * 8 bits

    # We construct an entry_state passing the two arguments
    st = proj.factory.entry_state(args=['./level07', arg1, arg2])

    # The st.libc.max_strtol_len tweak tells the atoi/strtol symbolic representation to
    # resolve strings that are of at most 11 bytes length (the default is 10)
    st.libc.max_strtol_len = 11

    # Now we will create what in angr terms is called a simulation manager.
    pg = proj.factory.simgr(st)

    # This can be read as: explore looking for the path p for which the current state
    # p.state contains the string "WIN" in its standard output (p.state.posix.dumps(1),
    # where 1 is the file descriptor for stdout).
    pg.explore(find=resolve_win)

    print("solution found")
    s = pg.found[0]
    print(s.posix.dumps(1))  # dump stdout

    # Print and eval the first argument
    print("Arg1: ", s.solver.eval(arg1, cast_to=bytes))
    # Print and eval the second argument
    print("Arg2: ", s.solver.eval(arg2, cast_to=bytes))
Running the script will give us the solution for this binary; if the binary changed slightly (the count), we could still run the script and get a solution.
The next challenge is easier, the binary is called multiple-styles and it can be downloaded from here:
By looking at its disassembly output:
We can see that the program does the following things:
- Calls read, which reads the 'password' from stdin into a buffer.
- Loads the string "myvnvsuowsxs}ynk" into a buffer.
- Loops through the buffer byte by byte, adds 10 (00400a27 add dword [rbp-0x54 {var_5c_2} {var_5c_1}], 0xa) to it and compares it with the previously loaded string.
- If they match it will jump to 0x00400a6c and print "you got it!"
At this point we can google for online caesar cipher, paste the string that got loaded and decipher it with an offset of -10, but we're going to let angr
decipher the password for us.
import angr
import claripy

if __name__ == '__main__':
    print("starting")
    proj = angr.Project("./multiple-styles", auto_load_libs=False)

    # Create a 160-bit (20-byte) symbolic bitvector named "password"
    password = claripy.BVS('password', 20*8)

    # We construct a blank_state with the address of main and we pass password to stdin
    st = proj.factory.blank_state(addr=0x004009ae, stdin=password)

    # We create a simulation manager
    pg = proj.factory.simulation_manager(st)

    # We tell angr to look for 0x00400a6c, which is the starting address of the green block
    # that prints "you got it!", while telling it to avoid the address 0x00400a40
    pg.explore(find=(0x00400a6c), avoid=(0x00400a40))

    print("solution found")
    # We grab the solution.
    s = pg.found[0]
    # We can print the contents of stdin - 0:
    print("Flag: ", s.posix.dumps(0))
    # We can also get the password from our symbolic bitvector
    print("Pass: ", s.solver.eval(password, cast_to=bytes))
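As a sanity check, the caesar shift can also be undone without angr. This sketch assumes only the -10 offset and the target string read from the disassembly above; the recovered string is my own computation, not a value taken from the article:

```python
# String the binary compares against after adding 10 to each input byte.
target = "myvnvsuowsxs}ynk"

# Undo the +10 shift to recover what stdin must contain.
password = "".join(chr(ord(c) - 10) for c in target)
print(password)  # → coldlikeminisoda
```

Feeding that string to the binary on stdin should take the 0x00400a6c branch.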
While writing the scripts I've used angr version 8.19.7.25. Please consult angr's official documentation if you wish to learn more!
Thank you for reading! :D
I want to loop many mxds within a folder to retrieve layer,connection,etc. information using python.
I just need the syntax for looping through the mxds.
If you are interested in the innards of the mxd, many recommend ... X-ray
I have the following, since I am not sure whether the os.walk limitation applies to me, as I want to get feature class information from within mxds.
import arcpy, os
workspace = ' xxxxxx '
for root, dirs, files in os.walk(workspace):
    for f in files:
        if f.endswith(".mxd"):
            mxd = root + '\\' + f
            print f
I get a list of mxds which is the first successful step but I need the feature classes.
You'll need to get the DataFrames in your mxd
arcpy.mapping.ListDataFrames("name of mxd")
then you'll need to get the layers in each data frame
arcpy.mapping.ListLayers("name of mxd", "", "name of data frame")
Can I input the workspace for listdataframes & listlayers since I want info on 19 mxds, not just one ?
Rebecca Strauch, GISP produced this Python addin for data inventory and “broken-link” repair.
which does project inventory; if it doesn't serve your purposes, perhaps she might have some commentary on what you are trying to accomplish
Devin, it definitely is worth trying my data inventory addin, as Dan mentioned. Since there are so many different types of data and data connections that can be within any mxd, there are many tests that need to be filtered thru to find the right type. The addin does that fairly well, but still may not catch ALL types yet. But it does spit out an excel and .csv output so you can see what you have. It will do recursive mxds in a folder, and runs fairly fast, so the inventory part you should try.
If it doesn't get what you need, an .addin is just a zip file....change the extension and unzip. You'll see how I looped thru everything, and you can grab snippets from that and modify if needed.
This looks great, I just browsed it quickly. Good advice use on a non-working testing folder/files. I will test this out. Thank you.
Keep in mind, if I remember correctly, the inventory lists all whether broken link or not.
The broken link list will list only those, and may differ depending on relative relationship, and user permissions, if you copy the data.....which I still recommend for testing. I'm using it to find where users have local data for mxd's that are shared and in a network drive. helps me find the local data that needs to be centrally located and fixed. That might not be your need....but just a tidbit of possible usage.
Interesting, I used Select folder to walk thru & List all types of feature classes. It writes to excel with just title headings.
Are you saying all three output files end up blank? I just downloaded and ran it (second button...first only shows databases, not mxd data) against a folder that I created in the last few days (i.e., not the computer or folder I created and tested the addin on originally). I had two mxds in it, and all three files were populated. For example, the .txt file showed...
Maybe the excel file is a different version? Maybe try double clicking the .csv and have it open in your versions of excel?
You may want to change the .addin file to .zip, expand it and try running thru the toolbox instead as an addin with the buttons. That way you can see a bit more of what it is trying to do. Not sure why it is coming out blank, unless it is one of the types I dont have come red yet. I think I have a lot of notes in the .py script itself on what does and doesn't work.
I went ahead and did as you suggested. I opened and parsed the syntax to attempt to get only what I need. No luck in successfully creating a script for what I want. I apologize for taking up your time. You have been more than helpful. I don't know why I can't seem to get this right to loop all mxds.
Hi Devin,
I have created a Python script to give info on all mxds in a specified folder. The output is a separate text file for each mxd. I based it off of this page: Python script to export file paths of all features in mxd into text document. I found that not all available document/layer/table... info was being reported on, so I used this page to get as much info as I could (although I am having trouble getting data driven page info). There is still info I would like to get out (like feature symbology), but this is the best I could do/find.
The following code prints out the data I want for a specified mxd. I want to iterate every mxd in a specified folderpath and return the layers. Any idea how to do this ?
import arcpy, os
folderPath = (xxxxxx)
mxd = arcpy.mapping.MapDocument(xxxx)
layers = arcpy.mapping.ListLayers(mxd)
for layer in layers:
    if layer.supports("dataSource"):
        print layer.dataSource
del mxd
Hi Devin,
I don't know if this is the best way to do this (probably not), and it doesn't do subfolders (Gerard's code seems to do this) but this is what I do (it works ):
##Set the mxd folder path.
MxdFolderPath = r"C:\GIS"

##Loop through each file in the folder.
for FileName in os.listdir(MxdFolderPath):
    FileFullPath = os.path.join(MxdFolderPath, FileName)
    ##If the file exists then...
    if os.path.isfile(FileFullPath):
        ##Initialize the file extensions to look for list.
        FileExtensionsToReportList = [".mxd", ".Mxd", ".MXD"]
        ##For each file extension to report on...
        for FileExtensionToReport in FileExtensionsToReportList:
            ##If the file ends with the file extension to report on then...
            if FileName.endswith(FileExtensionToReport):
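As an aside, the extension list in the snippet above can be collapsed into a single case-insensitive test. A self-contained sketch (the folder and file names here are invented for illustration):

```python
import os
import tempfile

# Build a throwaway folder holding mixed-case .mxd files plus a decoy.
folder = tempfile.mkdtemp()
for name in ("parcels.mxd", "roads.MXD", "notes.txt"):
    open(os.path.join(folder, name), "w").close()

# One lowercased comparison covers .mxd, .Mxd, .MXD and any other casing.
mxds = [n for n in os.listdir(folder) if n.lower().endswith(".mxd")]
print(sorted(mxds))  # → ['parcels.mxd', 'roads.MXD']
```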
Sorry, was in a rush to leave at the end of the day and didn't properly read your post.
First you need to get a listing of all the data frames and go through each of them:
DataFrameList = arcpy.mapping.ListDataFrames(MxdFile)
for DataFrame in DataFrameList:
There is a bunch of info you can get on the data frame. Then in each data frame loop get a listing of the layers in each data frame and go through each of them:
LayerList = arcpy.mapping.ListLayers(MxdFile, "", DataFrame)
for Layer in LayerList:
You can also get a listing of tables:
TableList = arcpy.mapping.ListTableViews(MxdFile, "", DataFrame)
for Table in TableList:
I have attached the script I created.
Just finished a project to extract as much relevant layer information as possible from MXDs in a folder with subfolders.
The choice to go only one level deep is intentional. This makes backup folders 'invisible'.
Hope this helps, suggestions for improvements are welcome.
David,
Try the following:
import arcpy, os

def printlayers(mxdfile):
    mxd = arcpy.mapping.MapDocument(mxdfile)
    layers = arcpy.mapping.ListLayers(mxd)
    for layer in layers:
        if layer.supports("dataSource"):
            print layer.dataSource
    del mxd

folderPath = (xxxxxx)
for root, dirs, files in os.walk(folderPath):
    for file in files:
        if file.endswith('.mxd'):
            printlayers(os.path.join(root, file))
For using os.walk study: OS.walk in Python
part of the fun in coding is figuring out how to use something for your own project.
Gerard Havik
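The os.walk pattern referenced above can be exercised without arcpy at all. A self-contained sketch with an invented folder tree:

```python
import os
import tempfile

# Invented tree: a root folder and one subfolder, each holding an .mxd.
root = tempfile.mkdtemp()
sub = os.path.join(root, "backup")
os.mkdir(sub)
open(os.path.join(root, "main.mxd"), "w").close()
open(os.path.join(sub, "old.mxd"), "w").close()

# os.walk yields (dirpath, dirnames, filenames) for the root and every subfolder.
found = []
for dirpath, dirnames, filenames in os.walk(root):
    for f in filenames:
        if f.lower().endswith(".mxd"):
            found.append(os.path.join(dirpath, f))

print(len(found))  # → 2
```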
Hi Gerard,
I am not having trouble going through each mxd in a folder. I am having trouble reporting the colour, linestyle type, thickness, point symbol, etc... that a symbol is using. There doesn't seem to be any way of doing that. I want to report all the information so that someone could, if needed, recreate the mxd from scratch and get the same result - or if I want to know how the mxd was at certain times and report on it and compare with other times in a table format, without having to back up the mxd and opening it up to see all the settings, symbology, etc... .
Hi Devin
I wrote the following python script to copy the datasets (feature classes & rasters) for each mxd within a folder into a new File Geodatabase. You could use the following as a starting point and, instead of copying the data out, write the location of the layers (feature classes and rasters) into a summary table.
'''
Created on Jan 20, 2016

Copy all feature classes and rasters from each mxd
in a folder into a new File Geodatabase.

@author: PeterW
'''
import os
import time
import arcpy

# set arguments
folder = arcpy.GetParameterAsText(0)
out_gdb = arcpy.GetParameterAsText(1)
# folder = arcpy.GetParameterAsText(0)
# out_gdb = arcpy.GetParameterAsText(1)

# Processing time
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int(sec_elapsed % (60 * 60) / 60)
    s = sec_elapsed % 60
    return "{}h:{:>02}m:{:>05.2f}s".format(h, m, s)

start_time1 = time.time()

# copy layers function
def copy_features():
    try:
        if arcpy.Exists(os.path.join(out_gdb, layer.datasetName)):
            arcpy.AddMessage("Feature class already exists, it will be skipped")
        else:
            arcpy.FeatureClassToGeodatabase_conversion(lyr_source, out_gdb)
    except:
        arcpy.AddMessage("Error copying: " + layer.name)
        arcpy.AddError(arcpy.GetMessages())

def copy_rasters():
    try:
        if arcpy.Exists(os.path.join(out_gdb, layer.datasetName)):
            arcpy.AddMessage("Raster already exists, it will be skipped")
        else:
            out_raster = os.path.join(out_gdb, layer.datasetName)
            arcpy.CopyRaster_management(lyr_source, out_raster)
    except:
        arcpy.AddMessage("Error copying: " + layer.name)
        arcpy.AddError(arcpy.GetMessages())

# Loop through each data frame, layer and copy to new file geodatabase
for mxd_file in os.listdir(folder):
    if mxd_file.lower().endswith(".mxd"):
        mxd = arcpy.mapping.MapDocument(os.path.join(folder, mxd_file))
        for df in arcpy.mapping.ListDataFrames(mxd):
            for layer in arcpy.mapping.ListLayers(mxd, "", df):
                if layer.isFeatureLayer:
                    lyr_source = layer.dataSource
                    lyr_name = layer.name.encode("utf8", "replace")
                    arcpy.AddMessage("Copying: {}".format(lyr_name))
                    copy_features()
                if layer.isRasterLayer:
                    lyr_source = layer.dataSource
                    lyr_name = layer.name.encode("utf8", "replace")
                    arcpy.AddMessage("Copying: {}".format(lyr_name))
                    copy_rasters()
        del mxd

# Determine the time taken to copy features
end_time1 = time.time()
print ("It took {} to copy all layers to file geodatabase".format(hms_string(end_time1 - start_time1)))
Let me know if you need help amending the following to meet you needs.
This is what worked for me. I used only what I needed and added openpyxl so I can write to excel. Yet I am still working on it successfully writing to excel
#Import Modules
import arcpy, os
from openpyxl import Workbook

#Set folder space
folder = xxxxx

#Set variables
# create excel worksheets
wb = Workbook()
ws1 = wb.create_sheet("yyy")

#Loop through each mxd and write out each layer source
for root, dirs, files in os.walk(folder):
    for f in files:
        if f.endswith(".mxd"):
            fullpath = os.path.join(root, f)
            mxd = arcpy.mapping.MapDocument(fullpath)
            for lyr in arcpy.mapping.ListLayers(mxd):
                if lyr.supports("dataSource"):
                    lyr_source = lyr.dataSource
                    print fullpath + lyr_source

wb.save('aaaaaaa.xlsx')
Thank you for your help. My next step is the openpyxl writing to excel.
I have tried writing to csv also, but it seems that openpyxl is more versatile, e.g. it writes to a native xlsx.
I will see what works out for me and let you know.
I am going with csv instead, so I can share the script without worrying whether a person may or may not have openpyxl. Csv should suffice, but I am having a little trouble.
I have the following and can see the csv file in windows explorer processing/looping the files, but when I open the csv it has one file just repeated in several rows. You know what may be the cause?
with open('CSVLISTLAYERS.csv', 'wb') as outputcsv:
    writer = csv.writer(outputcsv, dialect='excel')
    for filename in (fullpath + lyr_source):
        writer.writerow([fullpath + lyr_source])
I found a script.
import arcpy, os

#Read input parameters from GP dialog
folderPath = arcpy.GetParameterAsText(0)
if folderPath == "":
    folderPath = r"D:\TESTFOLDER"

#Loop through each MXD file
for filename in os.listdir(folderPath):
    fullpath = os.path.join(folderPath, filename)
    if os.path.isfile(fullpath):
        if filename.lower().endswith(".mxd"):
            #open report file for MXD
            outFilename = fullpath[:fullpath.rfind(".")] + ".csv"
            mes = '\nMXD: %s' % (fullpath)
            print mes
            arcpy.AddMessage(mes)
            rapportfile = open(outFilename, "w")
            header = 'MXD;WORKSPACE;FEATURECLASS'
            rapportfile.write(header + '\n')
            mxd = arcpy.mapping.MapDocument(fullpath)
            for df in arcpy.mapping.ListDataFrames(mxd):
                layerList = arcpy.mapping.ListLayers(mxd, "", df)
                mes = 'MXD %s contains %s layers' % (filename, len(layerList))
                arcpy.AddMessage(mes)
                print mes
                for lyr in layerList:
                    if lyr.supports("dataSource"):
                        workspace = lyr.workspacePath
                        fc = lyr.datasetName
                        print 'WorkspacePath: %s' % workspace
                        print 'FeatureClass: %s' % fc
                        reg = '%s;%s;%s' % (filename, workspace, fc)
                        #arcpy.AddMessage(reg)
                        rapportfile.write(reg + '\n')
            mes = 'Report file: %s\n' % (outFilename)
            print mes
            arcpy.AddMessage(mes)
            rapportfile.close()
            del mxd
I am still working on trying to get the csv to write. You know about readlines and writelines? I just learned of these, yet I haven't noticed anyone mentioning this option, ever.
the csv module is well documented 13.1. csv — CSV File Reading and Writing — Python 2.7.11 documentation and there are thousands of examples online
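For what it's worth, the repeated rows in the snippet above come from looping over a string: the line 'for filename in (fullpath + lyr_source)' iterates one character at a time, so the same row is written once per character. A minimal in-memory sketch with one writerow call per record (the paths are invented):

```python
import csv
import io

# Stand-in for the (mxd, workspace, feature class) records gathered per layer.
rows = [
    ("projectA.mxd", r"C:\data\gdb1.gdb", "roads"),
    ("projectB.mxd", r"C:\data\gdb2.gdb", "parcels"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["MXD", "WORKSPACE", "FEATURECLASS"])
for row in rows:
    writer.writerow(row)  # one call per record, not per character

print(buf.getvalue().splitlines()[0])  # → MXD,WORKSPACE,FEATURECLASS
```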
This is part of my Python addin for data inventory and “broken-link” repair toolbox, but I'll include the first script here. It inventories and lists the fgdb to output csv, and it also converts it to an .xls (not .xlsx). Maybe you can pick out what you need from this.
EDIT: BTW, replace all the myMsgs with print or arcpy.AddMessage.
''' ---------------------------------------------------------------------------:\__Data1\_TalkeetnaBU" #()
Thanks Dan. Couldn't remember if it was AddMessage or Addmessage, etc.
The two custom mods should be commented out too, but that may break things if trying to run this as a standalone. It's meant to be in the toolbox, but still should have the required csv pieces.
Also, written last year when my print formats and other tricks I've learned since are not used. One of those..."when I have time, I'll make it nicer" things. but works for me, and works in the toolbox, so....
I went ahead and arrived at this, using built-in python. This performs perfectly as a test. However, I have an issue when setting a returned list as a reader and then writing.
The error is AttributeError: 'unicode' object has no attribute 'readlines'. I think it has an issue with passing the list in its current encoded state?
#Open Excel Documents
readexcel = open('xxx.csv', 'r')
writeexcel = open('yyy.csv', 'w')

#Set Reader & Writer
reader = readexcel.readlines()
writer = writeexcel.writelines(reader)

#Loop Read Excel Document to print to Write Excel Document
for line in (readexcel):
    line = readexcel.next()
    print (line)

#Close Excel Files
readexcel.close()
writeexcel.close()
Devin,
As soon as I found something that worked I stopped searching.
For reading and writing I use mostly simple 'object = open(file, mode)', and for reading I loop over the file one line at a time: 'for line in object:'. Very compact and simple code. Writing is 'object.write(reg)' where 'reg' is a string.
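The pattern described above, opening a file and looping 'for line in object:', can be sketched end to end (the file names and contents here are invented):

```python
import os
import tempfile

# Invented input file with three lines.
folder = tempfile.mkdtemp()
src = os.path.join(folder, "in.csv")
dst = os.path.join(folder, "out.csv")
with open(src, "w") as f:
    f.write("a;1\nb;2\nc;3\n")

# Read one line at a time and write each line out as a string.
copied = 0
with open(src) as infile, open(dst, "w") as outfile:
    for line in infile:      # the 'for line in object:' loop
        outfile.write(line)  # write() takes a string, like object.write(reg)
        copied += 1

print(copied)  # → 3
```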
Readline() is more complicated I think. See
The csv module is useful for reading parts of complicated csv-files; it produces a list of dictionaries. It is possible that it makes more elegant code for writing csv-files than what I used so far. If there is time I'll try; never too old to learn.
PS, my code uses the semicolon as delimiter, I'm from Holland. You want to change it to a comma I think.
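The list-of-dictionaries reading mentioned above is csv.DictReader, which also accepts the semicolon delimiter. A self-contained sketch with invented data:

```python
import csv
import io

# One semicolon-delimited record under a header row.
data = "MXD;WORKSPACE;FEATURECLASS\nprojectA.mxd;C:/data/gdb1.gdb;roads\n"
reader = csv.DictReader(io.StringIO(data), delimiter=";")
records = list(reader)  # each row becomes a dict keyed by the header

print(records[0]["FEATURECLASS"])  # → roads
```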
Kind regards,
G.J. (Gerard) Havik
Data specialist
Search through geonet threads for Walk function in python. You can also search outside geonet for python and walk. | https://community.esri.com/thread/176767-how-do-i-loop-a-folder-of-mxds | CC-MAIN-2019-13 | refinedweb | 2,577 | 68.47 |
This chapter is going to introduce structures in C. Structure is one of the important concepts of C programming and is known as a user-defined data type. You must be wondering: when we have so many data types, why do we need structures? You will remember that an array is a collection of alike data items. Practically, we need to deal with miscellaneous collections of data items, and that is possible because of structures.
16.1 Introduction to Structures
Logically related data items can be stored under one name. The data items can be alike or different. Members can be accessed through variables, and each variable corresponds to an item in the structure. Each item is known as a member or field of the structure and has a particular data type. The name of the structure is also known as the tag name. The syntax of a structure is as follows:
struct name_of_tag
{
data_type_1 member_1;
data_type_2 member_2;
…. ….
};
Figure: Format of structure
Explanation: Here,
struct is the keyword and it is an indicator for compiler that structure is being defined.
name_of_tag is usually an identifier which is the name of structure.
member_1, member_2 are called members or fields of structures.
data_type_1, data_type_2 are representing data types of member_1, member_2 respectively
Let us consider an example of defining structure,
struct employee
{
char name[20];
int employee_id;
float salary;
};
Explanation: Here,
struct is the keyword
employee is the name of structure which is an identifier
name, employee_id, salary are the members of the structure. They are not variables; hence they themselves do not consume space in computer memory.
Since we know the way of defining a structure, the next step is to declare structure variables. Like other data types, we declare structure variables. Here, the struct keyword is followed by the name of the identifier, which is the tag name. This is followed by a list of variables separated by commas and terminated by a semicolon. The tag name or structure definition is not associated with memory, but as soon as variables are associated with the structure definition, the compiler allocates memory. Variables are to be declared as shown below:
struct name_of_tag variable_1, variable_2;
Explanation: Here,
struct is the keyword
name_of_tag is the name of structure
variable_1, variable_2 are the structure variables.
Let us consider the above sample structure definition and declare variables according to that definition.
struct employee it, operations, training;
Explanation: Here,
struct is the keyword
employee is the tag name
struct employee, when combined together, represents a data type. This is a derived data type, which is derived from basic data types.
it, operations, training are variables of type struct employee, and memory is going to be associated with it, operations and training respectively.
A member of a structure can be accessed and treated as a separate individual. For accessing a member we specify the name of the variable, followed by a period (.), which is again followed by the name of the member. The syntax is as follows:
variable_1.member_1;
variable_1.member_2;
Explanation: Here,
variable_1 is representing the structure variable
member_1, member_2 are the respective members of the structure
Let us consider the above sample structure and find out the way to access it.
struct employee
{
char name[20];
int employee_id;
float salary;
};
struct employee it = {“Debasif”, 546502, 30000};
For accessing the members we will have to use the following specifications. We can access Debasif by specifying it.name. Similarly, we can access 546502 by specifying it.employee_id and 30000 can be accessed by specifying it.salary respectively.
The size of structure is the sum of sizes of each member of structure. For example,
struct employee
{
char name[20];
int employee_id;
double salary;
} it;
The size of each member of structure is as follows:
it.name is an array of characters which has 20 bytes.
it.employee_id is of type integer which is of 4 bytes
it.salary is of type double which is of 8 bytes
So the structure is of size 20+4+8=32 bytes
Now, there are cases where the size of a structure is not equal to the sum of the sizes of its individual members, and the reason is slack bytes. Usually, computers allocate memory for members sequentially on the basis of their sizes. Sometimes members are allocated at special boundaries, known as word boundaries, where additional bytes are padded at the end of each member whose size is smaller than the largest data type, so that the address of each member begins at a word boundary. These bytes do not contain any information and waste memory. The additional bytes which are added to maintain the boundary are called slack bytes. Data present at a word boundary can be accessed quickly; hence slack bytes themselves are not useful, but their presence can increase the rate of access.
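As an aside for readers who also use Python, the struct module can make slack bytes visible: it reports sizes with ('@') and without ('=') native alignment padding. The exact numbers below assume a typical platform where int is 4-byte aligned and double is 8-byte aligned:

```python
import struct

# A char followed by an int: with no alignment ('=') the size is 1 + 4 = 5,
# but native alignment ('@') pads the char so the int starts on a 4-byte boundary.
packed = struct.calcsize("=ci")
padded = struct.calcsize("@ci")
print(packed, padded)  # → 5 8

# The employee structure: 20-char array, int, double. Every member already
# falls on its own boundary, so no slack bytes are needed and the size is 32.
employee = struct.calcsize("@20sid")
print(employee)  # → 32
```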
Let us consider an example to illustrate the working principle of structure.
/* Program to illustrate structure */
# include <stdio.h>
# include <conio.h>
# include <stdlib.h>
# include <string.h>

void main()
{
    struct employee
    {
        char name[20];
        int employee_id;
        float salary;
    };
    struct employee emp;
    emp.employee_id = 546502;
    emp.salary = 30000;
    strcpy(emp.name, "Debasif");
    printf("Employee information\n");
    printf("%d %s %7.2f\n", emp.employee_id, emp.name, emp.salary);
    getch();
}
A snapshot of the program in the editor looks as follows:
Fig - C Program illustrate structure
After compiling, the program looks similar to the following snapshot:
Figure - Output of C program illustrate structure
With this, we conclude this chapter. Next chapter will introduce unions and enumerated data types. Thank you. | https://wideskills.com/c-tutorial/structures | CC-MAIN-2021-21 | refinedweb | 932 | 55.54 |
15 July 2010 07:32 [Source: ICIS news]
By Judith Wang
SHANGHAI (ICIS news)--China’s economy has started to cool in the second quarter, with the deceleration of growth expected to continue into the second half of the year, likely affecting its demand and production of petrochemicals, analysts said on Thursday.
June-quarter GDP grew 10.3% year on year, according to the National Statistics Board. This represented a 1.6 percentage point decline from an 11.9% growth recorded in the first quarter.
Analysts said they expect the pace of growth to further slow as the year progresses, with third quarter likely logging a 9.8% growth and fourth-quarter expansion seen at 9.4%.
This was largely due to the high base in 2009.
Now, faced with possible economic overheating that could fuel inflation, the government has moved to tighten policy.
“This is a fatal blow on the petrochemical sector, which is mostly used in housing construction,” said Liu Yanzhao, an analyst from Shanghai-based Western Securities.
Polyvinyl chloride (PVC), for example, has its main application in the construction sector.
“In the second half, the government will continue to curb the property market to prevent housing bubbles and contain inflation after housing prices in some cities surged sharply last year,” Liu said.
Stringent lending requirements were now in place for homebuyers, including a higher downpayment for purchasing second homes.
“People are waiting for property prices to fall. They are [on a] wait-and-see [mode] now, and the real estate developers dare not build more houses. Demand for petrochemicals will naturally fall,” Liu added.
The government’s energy saving and emissions reduction scheme also had the effect of slowing down investment, industrial production and overall domestic consumption, analysts said.
“This is the last year of the 11th five-year-plan, and the government faced huge pressure to finish the target. So I assume the government will take harsh measures to complete the task, like shutting down high energy-consuming companies, such as steel mills and chemical plants,” Liu said.
“Petrochemical production and investment for petrochemical projects will both be affected,” Liu said.
Investment in fixed assets during the period jumped 25.0% year on year to CNY11,419bn, while retail sales of consumer goods grew a strong 18.2% to CNY7,266.9bn, based on NBS data.
Consumer prices grew an average 2.6% over the January-June period, while wholesale prices jumped 6.0%, according to official statistics. Accelerating inflation since April prompted the government to step up its efforts to curb lending.
New lending in the first half of the year was down a hefty 37% to CNY4,630bn, based on official data.
($1 = CNY6 | http://www.icis.com/Articles/2010/07/15/9376692/china-economy-slows-in-q2-petchems-output-demand-to-slacken.html | CC-MAIN-2015-06 | refinedweb | 446 | 56.86 |
Red Hat Bugzilla – Bug 525968
missing dep ?
Last modified: 2009-09-28 03:03:13 EDT
[mclasen@planemask simple-greeter]$ sealert
Traceback (most recent call last):
File "/usr/bin/sealert", line 37, in <module>
import slip.dbus.service
File "/usr/lib/python2.6/site-packages/slip/dbus/__init__.py", line 1, in <module>
import bus
ImportError: No module named bus
It appears you have to add a bunch of "from dbus" to your import statements now.
Aint api stability great ? :-(
This looks more like a bug in python-slip: bug #525860.
*** This bug has been marked as a duplicate of bug 525860 *** | https://bugzilla.redhat.com/show_bug.cgi?id=525968 | CC-MAIN-2017-13 | refinedweb | 102 | 76.52 |
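For reference, the import pattern at issue can be demonstrated with a tiny made-up package (the real fix belongs in python-slip, per the duplicate bug; the package and module names below are illustrative only):

```python
import os
import sys
import tempfile

# Build a stand-in package whose __init__.py uses an explicit relative
# import ("from . import bus") instead of the ambiguous "import bus".
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.mkdir(pkg)
with open(os.path.join(pkg, "bus.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from . import bus\n")

sys.path.insert(0, root)
import mypkg
print(mypkg.bus.VALUE)  # 42
```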
One shade of authorship attribution
By Guillaume Filion, filed under planktonrules, Python, machine learning, R, IMDB, series: IMDB reviews, automatic authorship attribution.
• 23 March 2013 •
This article is neither interesting nor well written.
Everybody in academia has a story about reviewer 3. If the words above sound familiar, you will definitely know what I mean, but for the others I should give some context. No decent scientific editor will agree to publish an article without taking advice from experts.
This process, called peer review, is usually anonymous and opaque. According to an urban legend, reviewer 1 is very positive, reviewer 2 couldn't care less, and reviewer 3 is a pain in the ass. Believe it or not, the quote above is real, and it is all the review consists of. Needless to say, it was from reviewer 3.
For a long time, I wondered whether there is a way to trace the identity of an author through the text of a review. What methods do stylometry experts use to identify passages from the Q source in the Bible, or to know whether William Shakespeare had a ghostwriter?
The 4-gram method
Surprisingly, the best stylistic fingerprints have little to do with literary style. For instance, lexical richness and complexity of the language are very difficult to exploit efficiently. The unconscious foibles, the recurrent mistakes and the misuse of punctuation betray their author much better, because they tend to be writer-invariant.
A very simple method to extract this information is to count all the 4-grams (the sequences of 4 characters) of a text. For instance the 4-grams of "to be or not to be" are 'to_b', 'o_be', '_be_', etc., and the 4-grams 'to_b', 'o_be' occur twice. The idea of this decomposition is that the most frequent words of a text will produce the most frequent 4-grams, and the most frequent mistakes will belong to several 4-grams.
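The decomposition is easy to reproduce; here is a minimal sketch (with real spaces instead of the underscores used above):

```python
from collections import Counter

def four_grams(text):
    # Every overlapping sequence of 4 characters, spaces included.
    return [text[i:i + 4] for i in range(len(text) - 3)]

counts = Counter(four_grams("to be or not to be"))
print(counts["to b"], counts["o be"])  # both occur twice
```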
In order to catch features such as punctuation errors, mis-capitalization, space omission or space doubling, it is important not to process the text in any way before collecting the 4-grams, which makes it much easier than standard Natural Language Processing (see The elements of style). Somewhat ironically, the stop words such as 'and', 'the' etc., usually filtered out for carrying no semantic content turn out to be the most informative. Every author uses the most common English words with a slightly different frequency, which then constitutes his/her fingerprint.
How good is this?
Remember planktonrules from The geometry of style? I scraped off the 4-grams from his reviews and collected the 1,000 most frequent as a feature set (planktonrules uses many double and triple spaces, and uses 'film' much more often than 'movie' contrary to most of the authors). I then used R to train a Support Vector Machine with a random selection of 10,000 reviews among a set of 50,000 and tested the model on the 40,000 remaining reviews (click on the Penrose triangle below to see the R and Python code). The accuracy of such a brutal approach is surprisingly high. The error rate is around 2%, with a false negative rate of 6.6% and a false positive rate of 0.3%.
The first script collects the 1000 most frequent 4-grams from a collection of IMDB reviews and saves them as a pickle file. You can create an input for that script from the post The elements of style. Unfold and copy the Python script folded in the Penrose triangle and run it on the dummy input that you can download from there. The dummy input consists of 10 reviews, but none of them is written by planktonrules.
# -*- coding:utf-8 -*-
import json
import sys
import pickle
from collections import defaultdict

# I have the IMDB reviews as JSON documents.
with open(sys.argv[1]) as f:
    all = json.load(f)

counter = defaultdict(int)
for doc in all:
    for i in xrange(len(doc['body'])-3):
        counter[doc['body'][i:(i+4)]] += 1

# Pickle the 1000 most used 4-grams.
features = sorted(counter, key=counter.get, reverse=True)[:1000]
pickle.dump(features, open('features.pic', 'w'))
The second script was run on two input files, one containing a random sample of IMDB reviews, the second one containing all the reviews written by planktonrules up until 2011. The output was redirected to a file called scores.txt.
# -*- coding:utf-8 -*-
import re
import sys
import json
import pickle
from collections import defaultdict

planktonrules = 'ur2467618'
features = pickle.load(open('features.pic'))
featureset = set(features)

# The script takes two file names as arguments.
with open(sys.argv[1]) as f:
    all = json.load(f)
with open(sys.argv[2]) as f:
    plank = json.load(f)

# Print the header.
sys.stdout.write('planktonrules\t' + \
    '\t'.join([re.sub('\W', '_', f) for f in features]) + '\n')

# Each line corresponds to a review.
for doc in all + plank:
    auth = 1 if doc['authid'] == planktonrules else 0
    counter = defaultdict(int)
    for i in xrange(len(doc['body'])-3):
        key = doc['body'][i:(i+4)]
        if key in featureset: counter[key] += 1
    sys.stdout.write(str(auth) + '\t' + \
        '\t'.join([str(counter[a]) for a in features]) + '\n')
The R session to train and test a SVM is tiny, if you use the package e1071. Fitting the SVM takes a few minutes, and getting the predictions too.
library(e1071)
scores <- read.delim('scores.txt')
train <- sample(nrow(scores), 10000)
trainset <- scores[train,]
testset <- scores[-train,]
model <- svm(as.factor(planktonrules) ~ ., data=trainset)
predictions <- predict(model, newdata=testset)
mean(predictions == testset[,1])
I tried several other classifiers (logistic regression, LDA, QDA and CART), but SVM always gave the best results. LDA gave a reasonable fit but the false negative rate never dropped below 15%, no matter how many 4-grams I would include. I also tried other feature sets, such as the most common 4-grams of the corpus (not necessarily used by planktonrules), and the ones for which the frequency is the most different between planktonrules and the rest of the writers but the results were not as good.
Does that mean that I can catch the author of the brilliant quote that introduced this post? Not very likely of course, because the text is very short. And also because I do not have a reference set for peer reviews. The example given above has it easy, in the sense that it is a binary classifier. Building such classifiers for a large set of authors must be substantially more difficult. But we can bet that Google has already done it. With access to your mail and everything you write in Google Docs/Google Drive, they probably have stylistic fingerprints for a large portion of the Internet community. I guess I should work on a stylistic fingerprint eraser then...
Hello everyone, I am a Java Initiate. Today I will take you through some Java technology basics!
1. Calendar literally means calendar. In Java, the Calendar class provides methods to set and read the year, month, day, hour, minute, second, and so on. You cannot create a Calendar with the new keyword, because Calendar is an abstract class: you have to call the static getInstance() method to obtain a Calendar object, and then call its other methods.
2. The methods of the Calendar class are shown in the following figure:
The picture above is quoted from 《Novice tutorial》
3. Using the Calendar class to get the computer's current year, month, day, day of week, hour, minute, and second:
import java.util.Calendar;

public class p1 {
    public static void main(String[] args) {
        Calendar c = Calendar.getInstance();        // obtain a Calendar object
        int year = c.get(Calendar.YEAR);            // get the current year
        int month = c.get(Calendar.MONTH) + 1;      // months start from 0, so add 1
        int day = c.get(Calendar.DATE);             // get the current day
        int hour = c.get(Calendar.HOUR);            // get the hour
        int minute = c.get(Calendar.MINUTE);        // get the minute
        int second = c.get(Calendar.SECOND);        // get the second
        int week = c.get(Calendar.DAY_OF_WEEK) - 1; // day of week (counted from Sunday)

        // Set a date; the time-of-day fields keep their current values
        Calendar c1 = Calendar.getInstance();
        c1.set(2020, 5, 20);

        System.out.println("The current date is: " + year + " year " + month + " month " + day + " day" + "\t week " + week);
        System.out.println("The current time is: " + hour + " h " + minute + " m " + second + " s");
        System.out.println("The set date is: " + c1.getTime());
    }
}
The result of the operation is:
From the above code, you can see that we add one when reading the month, because Calendar months start from 0. The day of the week is similar: Sunday is counted as the first day.
1. The DateFormat class formats dates into strings. The Date class represents a date and time; when printed, the date and time are output in English format by default. To convert them into another (for example Chinese) format, you need the DateFormat class. DateFormat is also an abstract class and cannot be instantiated; you obtain a DateFormat instance through its static factory methods.
2. The common methods of the DateFormat class are:
The picture is quoted from 《C Chinese language network Java course》
3. DateFormat defines constants that are passed as parameters to these methods: FULL represents the complete format, LONG a long format, MEDIUM the normal format, and SHORT a short format. An example:
import java.text.DateFormat;
import java.util.Date;

public class p2 {
    public static void main(String[] args) {
        Date d = new Date();
        // Define four formats
        DateFormat f, l, m, s;
        f = DateFormat.getDateInstance(DateFormat.FULL);    // FULL format
        l = DateFormat.getDateInstance(DateFormat.LONG);    // LONG format
        m = DateFormat.getDateInstance(DateFormat.MEDIUM);  // MEDIUM format
        s = DateFormat.getDateInstance(DateFormat.SHORT);   // SHORT format
        // Format the date
        System.out.println("Full format: " + f.format(d));
        System.out.println("Long format: " + l.format(d));
        System.out.println("Medium format: " + m.format(d));
        System.out.println("Short format: " + s.format(d));
    }
}
The output is:
1. The approach is: a year that is divisible by 4 but not by 100, or that is divisible by 400, is a leap year; all other years are not leap years.
2. Code:
import java.util.Scanner;

public class p3 {
    public static void main(String[] args) {
        System.out.print("Please enter the year: ");
        Scanner scan = new Scanner(System.in);
        int year = scan.nextInt();
        if (year % 4 == 0 && year % 100 != 0 || year % 400 == 0) {
            System.out.println("The year " + year + " you entered is a leap year");
        } else {
            System.out.println("The year " + year + " you entered is not a leap year");
        }
    }
}
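The condition can also be pulled into a small helper method, which makes it easy to check a few known years (the class and method names below are just for illustration):

```java
public class LeapYear {

    // A year is a leap year if it is divisible by 4 but not by 100,
    // or if it is divisible by 400.
    public static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isLeapYear(2000)); // true: divisible by 400
        System.out.println(isLeapYear(1900)); // false: divisible by 100 but not 400
        System.out.println(isLeapYear(2021)); // false: not divisible by 4
    }
}
```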
This article mainly introduced the Calendar class, the DateFormat class, and how to determine leap years.
The Calendar class is used to display and set the date and time. The DateFormat class is mainly used to convert a date into string form, for example from the default English format into a localized one.
The example of judging whether the year entered by the user is a leap year should help you understand.
I am a Java Initiate, and I hope you can learn something from this article! You are welcome to follow me on WeChat; if you have any problems, we can help solve them at any time. It's good to make a friend ~
This article is from the WeChat official account Java Advanced Learning Communication (java_xianghong), author: Java Initiate.
The source and reprint information are detailed in the text. In case of infringement, please contact [email protected] to have it deleted.
Original publication time: 2021-04-04
This article participates in the Tencent Cloud media sharing plan; you are welcome to join and share.
I had a hard time getting the Django debug toolbar to work this morning. Complicating factor: the site runs inside a vagrant virtualbox and my browser is simply running inside OSX.
(I use vagrant and virtualbox, see why I use vagrant and my vagrant setup on OSX)
Ok, I thought I had installed everything correctly. What can be the problem? I checked and double checked. The best way to verify your settings is to use django's own "diffsettings" management command, that way you're sure you're looking at the right settings:
$ bin/django diffsettings ... DEBUG = True INSTALLED_APPS = ['debug_toolbar', 'lizard5_site', ... ] INTERNAL_IPS = ('33.33.33.20', '127.0.0.1', '0.0.0.0') ... MIDDLEWARE_CLASSES = ('debug_toolbar.middleware.DebugToolbarMiddleware', ...) ...
Debug mode is on, it is in the INSTALLED_APPS and I've correctly enabled the middleware. Oh, and I've adjusted the INTERNAL_IPS setting.
The 0.0.0.0 shouldn't be needed, but at that time I was just trying things out to no avail.
... Time to call in the artillery. We have the source code, so I looked up the debug toolbar version I was using and put an import pdb;pdb.set_trace() into the def show_toolbar(request) method inside debug_toolbar/middleware.py.
The first lines of that short function are:
if request.META.get('REMOTE_ADDR', None) not in settings.INTERNAL_IPS: return False
In went the pdb and I reloaded the homepage:
> /.../django_debug_toolbar-1.0.1-py2.7.egg/debug_toolbar/middleware.py(26)show_toolbar() -> if request.META.get('REMOTE_ADDR', None) not in settings.INTERNAL_IPS: (Pdb) request.META['REMOTE_ADDR'] '33.33.33.1'
Sure enough, the error was in my INTERNAL_IPS setting after all. REMOTE_ADDR on the request turned out to be 33.33.33.1! Even though I talk to it from OSX with 33.33.33.20. So there must be some internal virtualbox/vagrant trickery that does some mapping here.
So: if you use the django debug toolbar inside vagrant: make sure you've got the correct IP in the INTERNAL_IPS setting!
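In my case the fix was simply to also list the address that virtualbox reports (a sketch; adjust the addresses to whatever your own vagrant setup uses):

```python
# settings.py (or a local settings override)
INTERNAL_IPS = (
    '33.33.33.1',   # what the vagrant/virtualbox box sees as REMOTE_ADDR
    '33.33.33.20',  # the address I browse from on OSX
    '127.0.0.1',
)
```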
Quite some colleagues use my name to raise an error. It is a trick I taught them :-)
There are a large number of debugging tips and tricks: import pdb; print statements; a full-blown IDE with an interactive debugger; logging; etc. A deliberately undefined name gets you a traceback pointing right at the spot!
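The name trick, sketched (any undefined name works; a colleague's name is just more fun):

```python
def process(items):
    for item in items:
        if item < 0:
            reinout  # undefined name: raises NameError right at the spot you care about

try:
    process([1, -2, 3])
except NameError as error:
    print(error)  # name 'reinout' is not defined
```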
("TRS"? "Time registration system"! An internal company website for managing our projects and for booking hours. I'll use this blog series to simply tell you how I build it. With topics such as "bundling javascript", "python 3" and "letting test coverage slip".)
Ok, a time registration system. A bunch of Person objects in a database; projects you can book hours on; probably some assignment of persons to projects; a bit of reporting. Why build it yourself?!? There are 2538 existing ones! On the other hand, it is relatively simple, so it can't hurt too much to build it yourself.
In the end, the assumption was that there are enough special cases to make it worthwhile.
In the end, I was the one making the new system almost single-handedly. Fun! I had a personal reason to particulary enjoy building it: it was a relatively straightforward Django app. Lots of Python coding, quite some thinkwork to set up the data model, designing the user interface. All things I like.
Now, why was this particularly enjoyable? Well, as I was quite stressed at the time. I recognized it and took a little bit of time off, spending some afternoons cycling. And, to quote from a previous blog post:... "therapeutic programming"? :-)
Last).... I think the lesson for me is that giving early feedback is important. And the company earned some bonus points for actually being happy with me for telling about my problems and coming up myself with a simple way to fix/reduce it.
The thinking!
I normally use Firefox as my webbrowser, but sometimes it is handy to have a second browser. Especially when I'm doing permission work in a Django website: certain users should see more than others. Having two browsers open at the same time, both with different logged-in (or anonymous) users is handy then.
So I've got chrome. Which is set to auto-update itself with newer versions.
As I was running out of disk space, I investigated my program folder a bit for stuff I could throw out. Suddenly I spotted that chrome took up more than 16GB of space. What? I right-clicked the icon and selected "show package contents" and started screaming:
Turns out the bloody idiotic program kept every version of itself for the last 1.5 years... 16GB! And I cannot find a setting anywhere that prohibits this behaviour. In the end I just deleted the olderer versions, which worked fine.
So...
Update: yes, someone knows it. Chris Adams has identified the problem in the comments below. The DivX plugin has mucked about with permissions, preventing the chrome updater from working correctly...
Last week the company I work at (Nelen & Schuurmans) organized a one-day course for all the software developers. Nice initiative! Tom Stevens of Namahn.com lead the day.
By law of nature, I make notes during such a day. I didn't, however, make a full summary; so I can only give an impression with a couple of loose remarks and quotes and ideas.
Anyway, the day was bits of theory interspersed with group exercises, culminating in a design that we tested with paper prototypes. To start off I've got a video of those tests. (If you don't understand Dutch: scroll through to the 1 minute mark. That way you at least get a nice impression on how a paper prototype can work.)
A core idea is mental modeling. What is the mental model of the user? What mental model should he have in order to use your app or website? What mental model of reality does the user already have? Which terms or concepts should you use in your app or site?
Interviews are important to get this clear. And if you interview or if you observe someone: listen. Listen a lot. At least initially. You are the student, the user is the teacher. Take time to gain trust. And when you do ask questions, try to get to the bottom. The user might say something, but after asking "why" five times, digging ever deeper, you might get a different answer than originally. Alternatively, use the why/where/what/when/who kind of questions.
Why are mental models important? Well, you make them anyway when interacting. Enter a building and you see the door knob: you automatically make a mental model of that door knob and the way you expect it to behave when you interact with it. Should I push or pull or whatever?
So when your site or application has a good and consistent mental model behind it, it will provide you and your user with something to hold on to: it helps you explain what happens and it helps you understand what happened and what is about to happen when you interact with it.
A very good requirement you can force upon yourself when you make a mental model: you should be able to explain it to your grandma within two minutes.
Tell stories. Human beings like and relate to stories. So write down textual usage scenarios (or visual: simple comic strip). Use specific thought-up "personas" like "Harry from accounting who positively hates computers". And try to get together a representative number of usage scenarios.
Those scenarios, when written down, help get discussion underway. If you only tell what you're going to do, it is easy to nod your head and to agree. If you see something written down that's obviously wrong or obviously missing key elements... Discussion! This way you gain clarity quickly. And... you gain clarity before spending four weeks programming something!
After gaining clarity in this way, the next step is classification. Basically: find the right words. Nouns and verbs. The terminology you use for your application. Spend time getting this right.
Btw, make a difference between information structure and application structure. The information structure means the content. "Article", "blog post", "map layer". So: the core data structure. The application structure, on the other hand, is much more about about the form/layout.
(Note: he mentioned the elements of user experience by Garrett (link to PDF with the main graph). The way I understand it now as I can actually read the graph is that the information structure/application structure difference is more of a "duality" that needs to be resolved in the resulting visual design.)
Conceptual design is the next phase: design the main structure of your website. Items like:
So basically: converting the output of the classification/terminology phase to a navigation. Tip: make wireframes or mock-ups of the various screens and draw the navigation (the flow) between those wireframes. He called it a wireflow.
Apparently designing interfaces by Jenifer Tidwell is "the bible" for stuff like this.
When designing, use four Gestalt principles: proximity, similarity, continuity and closure. (The page linked has five items, btw).
And keep in mind that too many choices mean that a task takes too much time (Hick's law), so don't put in too many choices.
Nice day! I hope it helps us (and me) to realize and remember that some advance planning and designing helps save time and money and effort.
Note: a colleague also blogged about this day.
FastCompany's Jeff Chu recently visited Nelen & Schuurmans (where I work) for an article about Dutch water management. The article is now online: a nice one.
As scientists predict a wetter, stormier future for much of the planet, the Dutch have become a nationwide consulting company, fanning across the world to talk about water.
I'm reasonably well-known as a Python developer (and zope/plone/django). So what am I doing here in a water company? Nelen & Schuurmans is one of those water consultancy companies mentioned in the article. The reason for my presence is also right there in the article.
IT and consultancy go hand-in-hand for us. Which gives nice results as the company is really smart: almost everyone has a university degree (with a couple of PhDs thrown in). Trying novel approaches. Combining ideas. Combining fields of work. Combining fields of experience.
Personally, I've got this combination right here inside myself. I'm a Python developer. And by education I'm a civil engineer. Two side notes:
So by education and experience (and importantly: interest!) I fit right in. There's a lot of freedom that I also enjoy a lot. And we're profitable, which is quite a feat in the current marketplace (and quite good for not worrying). A funny quote in the article points at one of the reasons for the profitability: we don't spend a lot of money on unnecessarily decorative offices.
In the Nelen & Schuurmans lab, a Spartan collection of desks located in a house in the medieval heart of Utrecht, ...
We are however dead smack right in the center of Utrecht, which is a lovely place to have an office!
Note: as I'm spending quite some time muttering right now (over a technical decision), truthfulness forces me to say that not everything is roses and sunshine. But that's something for a separate entry (probably on how I try to deal with it, I do have to re-gain my positive energy).
Anyway, I've got nice work. So let me finish with a video I made for demo purposes. I'd also like to point at another quick video, a more 3d-like visualization. Nice!
Pillow is a better-packaged version of PIL, the Python Imaging Library. There's one problem you can run into, though, and that's with importing. PIL allowed both import Image and from PIL import Image. Pillow sanely only supports the second version.
A colleague had a problem earlier today with TileStache that used the old import Image version. And that fails with Pillow. The solution I suggested him was to add a little hack at the top of a file that's loaded early in the process (in this case a Django settings file was the best spot):
import sys from PIL import Image sys.modules['Image'] = Image
This makes sure you get PIL.Image when you do import Image afterwards. Problem solved.
Note: if both of the imports work for you without this hack, it can be a good idea to check if you need to clean something up. Import both versions and check their __file__ attribute:
>>> from PIL import Image >>> Image.__file__ '/usr/lib/python2.7/dist-packages/PIL/Image.pyc' >>> import Image >>> Image.__file__ '/usr/lib/python2.7/dist-packages/PIL/Image.pyc'
In my case, I only have PIL, apparently. The colleague had a different result for the first import: that one was from Pillow, the second from PIL. Something to keep in mind if you have weird PIL-related results.
I noticed a question on stackoverflow about Fabric + buildout as opposed to Fabric + pip + virtualenv. Good question! Why? Because Fabric changes the regular trade-off between buildout and pip:
checkout of the buildout and restarting nginx and so: everything outside of buildout's control.
(Btw, I've answered the question, too). | http://reinout.vanrees.org/weblog/pythonfeed.xml | CC-MAIN-2014-15 | refinedweb | 2,258 | 67.15 |
Details
Description
Use case:
- When crawling a directory (imagine someone who doesn't know the dmoz database can be downloaded and tries to crawl it with Droids), we collect a lot of links that will be handled later. Assume the requirement is to fetch the dmoz directory plus one link outside dmoz.org. In the original mechanism, new links just keep being added to the TaskQueue. Ideally, there should be a mechanism that gives a higher priority to the non-dmoz.org links, so that when non-dmoz links are added, they are processed first and removed from the TaskQueue asap.
With the patch in DROIDS-47, a constructor is added to the SimpleTaskQueue to support a custom Queue. This issue suggests changing the SimpleTaskQueue to use a PriorityBlockingQueue by default, and adding a getWeight method to the Task interface.
I'm also thinking about a more complex TaskQueue, to be discussed on the mailing list later.
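A sketch of the kind of ordering being proposed, using a plain PriorityBlockingQueue with a weight-based comparator (the Task class below is a made-up stand-in for the Droids Task interface, not the actual API):

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class WeightedQueueDemo {

    // Made-up stand-in for a crawl task that carries a weight.
    static class Task {
        final String uri;
        final int weight;

        Task(String uri, int weight) {
            this.uri = uri;
            this.weight = weight;
        }
    }

    // Higher weight means the task should be processed sooner,
    // so order the queue by descending weight.
    static PriorityBlockingQueue<Task> newQueue() {
        return new PriorityBlockingQueue<>(
                11, Comparator.comparingInt((Task t) -> t.weight).reversed());
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<Task> queue = newQueue();
        queue.add(new Task("http://www.dmoz.org/a", 1));
        queue.add(new Task("http://example.com/outside", 5));
        queue.add(new Task("http://www.dmoz.org/b", 1));
        // The non-dmoz link (weight 5) comes out first.
        System.out.println(queue.poll().uri);
    }
}
```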
the previous attachment missed some files
the previous patch reversed the order. the higher the weight, the sooner a task should be processed
any comment to this feature?
could a weight field be added to Task? or could Task be enhanced to support a map of custom data? without adding weight to the Task interface, this feature cannot be implemented.
for the Queue, there could be diff options:
1. include in SimpleTaskQueue as provided in this patch, or
2. make a separated TaskQueue implementation, e.g. PrioritizedTaskQueue, or
3. do not include in the distribution (maybe provide in any example)
re. between 1 and 2, the so-called prioritization is not too complex, so I think it is ok to include SimpleTaskQueue rather than separate to another queue, if it is to be included in the dist at all.
merged code to allow applying to the current snapshot. no functional change.
I am fine with the feature but I have problems with the diff. It adds formating changes to the code which makes it hard to identify the real changes.
let me submit another patch. i have a habit to use the formatter of my IDE but I haven't set it to use the coding style of this project, so. ... :-P
p.s. for this issue, it could be handled just by adding a weight integer field. but i feel it is most flexible if the LinkTask could whole any arbitrary data. And the simplest way is to make it extends Map.
public class LinkTask extends HashMap<String, Serializable> {
    // other interface methods are skipped
    protected final String id;  // whatever data type for ID
    protected final URI uri;    // refer to DROIDS-52, this may cause problems for URI
    // all the other data are optional
}
use cases:
- say, in submitting a link, we want to associate information about cookie/http header, so the fetcher could use the cookie info when fetching
- any optional fields like weight could be used
- any component, such as filter or parser or whatever, could mark arbitrary tag for a link. say, a parser/factory, may read a "parser"/"contentType" value to decide how the data could be parsed. (so the parser doesn't depends on HttpEntity in interface) or the outlink could be attached directly to a LinkTask.
i throw the initial idea here to see if anyone has comment. more details on the implementation could be provided.())
so, weighted becomes optional. if user want to support weight, then, they implement Weighted and let the user decide how to weight.
p.s. I'm designing a filter framework that work at a broader sense than URL filter. The Weighted interface is actually designed to cater the ordering of Filter as well.
the patches changes quite a number of files, but it's all about
remarks: LinkTask consumes 72 bytes per instance in a sample test. If the servers do not handle links fast enough, LinkTask will be kept adding to the memory. Just a quick calculation (maybe wrong), 1.5G memory could hold 20M LinkTask. It is preferable to minimize the field in a LinkTask, and use the shortest field. (int instead of long)
How it works: | https://issues.apache.org/jira/browse/DROIDS-48?focusedCommentId=12700591&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-48 | refinedweb | 694 | 64 |
I have a python script which opens a file and reads it:
f= open("/file.dat", "r").read()
file.dat is a multiple lines file with quotes, spaces, new lines and special characters such as
#,&,"
I would like to echo
f into a new file named
t.dat. I have tried:> t.dat".format(f) os.system(cd)
which prints to the screen the file content until the “& config” which is in it and the error: sh: 32: Config: not found
Tried the following as well with similar results:> t.dat".format(f)
cd= "$echo {} >> t.dat".format(f)
What is the correct way to perform this? Thank you!
Answer
Use
shlex.quote()
import shlex cd = r"printf '%sn' {} >> t.dat".format(shlex.quote(f.read())) | https://www.tutorialguruji.com/python/echo-multiple-lines-from-another-file-with-multiple-quotes-special-characters-into-a-new-file/ | CC-MAIN-2021-43 | refinedweb | 126 | 75.4 |
Hi, all...I successfully created a sample app using ext-gen and Ext JS 6.6, following the instructions on the Sencha site. It builds and runs just fine using the "npm start" command.
However, I am having a monstrously difficult time applying my custom theme (which extends Triton) to this app. The build process fails, with errors like "cannot satisfy requirements for "my-custom-theme", etc.
If I had to guess, the problem lies in the app.json file, where you actually define the theme. Ext JS seems to expect that the theme you choose for a given build (like desktop) will be one of the stock themes that ship with Ext JS, which are located in the namespaced @sencha NPM repo. I have no idea, nor can I find documentation, how to get Ext JS to recognize a theme located in an NPM package outside the default repo.
Has anyone successfully done this?
Thanks! | https://www.sencha.com/forum/showthread.php?470538-Using-a-custom-theme-with-the-new-6-6-Node-based-tooling&s=4af6213b26285821cf17721303163940&p=1321567 | CC-MAIN-2018-47 | refinedweb | 156 | 73.17 |
Following up on last week's article on
std::vector, today we will be focusing on
std::array. Where
std::vector represented dynamically sized arrays,
std::array is a container that represents arrays of fixed size.
std::array lends itself nicely to use in embedded systems, as memory can be statically allocated at compile-time.
std::array Overview
In order to utilize
std::array, you will need to include the
array header:
#include <array>
std::array is a header-only implementation, which means that once you have a C++ runtime set up for your target system you will get this feature for free.
std::array provides many benefits over built-in arrays, such as preventing automatic decay into a pointer, maintaining the array size, providing bounds checking, and allowing the use of C++ container operations.
As mentioned above,
std::array is a templated class that represents fixed-size arrays. The size of a
std::array is known at compile time, and the underlying buffer is stored within the
std::array object itself. These two facts are very useful for embedded systems programmers, as they can control buffer sizes and storage location during compilation time. Avoiding the need for dynamic memory allocation saves computational cycles and reduces memory fragmentation. If you declare the object on the stack, the array itself will be on the stack. If you put it in a global scope, it will be placed into global static storage.
Unlike
std::vector, which requires 24 bytes of overhead, no overhead is needed for a
std::array.
Creating a
std::array
The
std::array container is prototyped on two elements: the type that you want to be stored and the size of the array.
std::array containers of different sizes are viewed as different types by the compiler.
//Declares an array of 10 ints. Size is always required std::array<int, 10> a1; //Declare and initialize with an initializer list std::array<int, 5> a2 = {-1, 1, 3, 2, 0};
You can make new arrays via copy:
//Making a new array via copy auto a3 = a2; //This works too: auto a4(a2);
And you can also copy arrays of the same size by using the
= operator:
//Assign a2 to a3's values: a2 = a3; // But you can only use the '=' operator on arrays of equivalent size. //Error: //a1 = a2; //<[...],10> vs <[...],5>! invalid
At least you don't have to worry about remembering
memcpy argument order or writing past the end of your buffer!
Accessing Data
While the size of a
std::array is fixed at compile time, the contents of a
std::array can be modified during runtime. The familiar
[] operator can be used to access specific elements:
//Assigning values works as expected a3[0] = -2;
However, as with
std::vector, the
[] operator does not use bounds checking. If you want to access an element with bound checks enabled, use the
at() function.
std::cout << "a2.at(4): " << a2.at(4) << std::endl; // Bounds checking can generate exceptions. Try: //auto b = a2.at(10);
You can also access the
front() and
back() member functions to get the members at the beginning & end of the array.
data()
Like
std::vector,
std::array doesn't implicitly decay into a raw pointer. If you want to use the underlying
std::array pointer, you must use the
data() member function.
For example, let's assume you are using an API with a C-style buffer interface:
void carr_func(int * arr, size_t size) { std::cout << "carr_func - arr: " << arr << std::endl; }
If you tried to pass a
std::array for the first argument, you would generate a compiler error.
../../array.cpp:44:2: error: no matching function for call to 'carr_func' carr_func(a2); ^~~~~~~~~ ../../array.cpp:4:6: note: candidate function not viable: no known conversion from 'std::array<int, 5>' to 'int *' for 1st argument void carr_func(int * arr)
Instead you need to use the
data() member:
//Error: //carr_func(a2, a2.size()); //OK: carr_func(a2.data(), a2.size());
size() and
max_size()
You can access the size of a
std::array using the
size() member function.
max_size() is also valid for
std::array. However, since the size of a
std::array is constant,
max_size will always be equal to
size.
empty()
std::array has a specific use for the
empty() member function which differs from other containers: it only returns
True if the array size is 0:
std::array<int, 0> a_empty;
The underlying
empty() operation checks if the container has no elements (
begin() ==
end()). Since
std::array is statically sized, this condition is only hit when you have a zero-length array.
Container Operations
Since
std::array is a container class and provides the basic container interfaces. Since a
std::array cannot grow or shrink, all functionality related to resizing or remembering a current position has been removed (e.g.
push_back).
However, you can still use a
std::array with functions that are written to operate on container classes, such as
std::sort.
std::sort(a1.begin(), a1.end());
Also worth noting - unlike built-in arrays (which decay into a pointer), a
std::array container can be passed by value into a function.
Putting it All Together
Example code for
std::array can be found in the
embedded-resources Github repository. | https://embeddedartistry.com/blog/2017/6/28/an-introduction-to-stdarray | CC-MAIN-2017-43 | refinedweb | 876 | 51.48 |
XQuery/URL Rewriting Basics
Contents
- 1 Motivation
- 2 Method
- 3 Customizing URLs
- 4 Further considerations
- 5 Acknowledgments
Motivation[edit]
You want to take simple, short, intuitive and well designed incoming URLs and map them to the appropriate structures in your database. You want to achieve the ideal of 'cool URLs' and make your XQuery apps portable within your database and to other databases.
Method[edit]
A typical URL in eXist has a format similar to the following:
You want users to access this page through a cooler, less platform-dependent URL such as:
In order to go transform your URLs into the latter cool form, you need to understand the fundamentals of URLs in eXist.
Parts of a URL[edit]
Fundamentally, eXist's URLs consist of 3 parts:
- The Hostname and Port: In the example above the hostname is and the port is 8080
- The Web Application Context: In the example above the context is /exist
- The Path: In the example above the path is /rest/db/app/search.xq?q=apple
Customizing an eXist URL can mean targetting 1 or more of the 3 parts.
Rewriting Primer[edit]
Some methods below make use of eXist's URL-rewriting facility, that conceptually will let your application follow a MVC (model-view-controller) design. eXist 1.5 comes preconfigured with a working setup that embodies these principles:
- The collection that lives below /db/myapp/, which is exposed through the REST servlet via /exist/rest/db/myapp/, can at the same time be reached through URL-rewriting in the location /exist/apps/myapp/.
- Placing a controller.xql inside of /db/myapp/ will determine how the data, a.k.a. model inside of this collection gets presented in the space created by URL-rewriting - so to say: it controls the view at the model.
Please read farther below on how to configure URL-rewriting in version 1.4.1 of eXist to get the same setup.
Customizing URLs[edit]
Changing the Port[edit]
The port for eXist's default web server (Jetty) is 8080, and it is set in $EXIST_HOME/tools/jetty/etc/jetty.xml line 51. You can modify this file, or you can set the port on startup by setting the -Djetty.port=80 flag upon startup.
Note that how you change the port is different based on how you start eXist. If you start eXist from the bin/startup using a UNIX or DOS shell you must change the startup.sh or startup.bat file. If you start eXist automatically using the UNIT tools/wrapper/exist.sh tools or the Windows Services you need to change the jetty.xml file.
Restart eXist. Now, with this change made, your URL will now look like:
instead of:
On Unix (including Mac OS X) and Linux, you will need to run eXist as root in order to bind to port 80. Otherwise the server won't start.
Changing the Web Application Context[edit]
To trim your server's web application context from /exist to /, go to line 134 of the same $EXIST_HOME/tools/jetty/etc/jetty.xml file and change the following:
From:
<Arg>/exist</Arg>
To:
<Arg>/</Arg>
Restart eXist. Now, with this change made, your URL will now look like:
instead of:
Customizing the Remainder of the URL[edit]
In customizing the remainder of the URL, eXist's URL Rewriting feature becomes both powerful and challenging. (See eXist Documentation on URL rewriting for complete documentation on this aspect of URLs in eXist.)
The heart of eXist's URL Rewriting is a file that controls the URLs for its portion of your site; this file is called controller.xql, and you place it at the root of your web application directory. It controls all of the URLs in its directory and in child directories (although child directories can contain their own controller.xql files - more on this later). If your web application is stored on the filesystem, you would likely place 'controller.xql' in the /webapp directory. If your web application is stored in the eXist database, you might put it in the /db collection. In our running example app, where would you store your controller.xql?
Current form:
Goal URL:
A natural location for the controller.xql would be the /db/app directory, because the search.xq file (and presumably the other .xq files) are stored in this directory or beneath.
Given this location for our app's root controller.xql, we need to tell eXist to look for the root controller.xql in the '/db/app' directory. We do this by editing the controller-config.xml file in the /webapp/WEB-INF folder. Comment out lines 27-28, and add the following:
<root pattern="/*" path="xmldb:exist:///db/app"/>
Then restart eXist. This new root pattern will forward all URL requests (/*) to the /db/app directory. Now, with this change made, your URL will now look like:
instead of:
The final step to customizing the URL is to create a controller.xql file that will take a request for /search?q=apple and pass this request to the search.xq file along with the q parameter.
A basic controller.xql file that will accomplish this goal is as follows:
xquery version "1.0"; (:~ Default controller XQuery. Forwards '/search' to search.xq in the same directory and passes all other requests through. :) (: Root path: forward to search.xq in the same collection (or directory) as the controller.xql :) if (starts-with($exist:path, '/search')) then let $query := request:get-parameter("q", ()) return <dispatch xmlns=""> <forward url="search.xq"/> <set-attribute </dispatch> (: Let everything else pass through :) else <ignore xmlns=""> <cache-control </ignore>
Note that the $exist:path variable is a variable that eXist makes available to controller.xql files. The value of $exist:path is always equal to the portion of the requested URL that comes after the controller's root directory. A request to '/search' will cause $exist:path to be '/search'.
Save this query as controller.xql and place it in your /db/app directory. Congratulations! Our URL is now in the very cool form we had envisioned:
instead of:
This $exist:path variable is one of 5 such variables available to controller.xql files. (See the full URL Rewriting documentation for more information on each.) These variables give you very fine control over the URLs requested as well as eXist's own internal paths to your app's resources.
Since you may wish to re-route a URL request based on the URL parameters (e.g. q=apple), you may wish to retrieve the URL parameter using the request:get-parameter() function, and then to explicitly pass this parameter to the target query using the <add-parameter> element, as in the example controller.xql file.
Thus, in customizing the "path" section of the URL, we have actually paid attention to 3 items:
- The root pattern and path to its root controller directory (recall the <root> element inside the controller-config.xml file)
- The remainder of the path after the controller directory
- The URL parameters included as part of the URL
This simple example only touches the surface of what you can do with URL Rewriting. Using URL Rewriting not only gives your apps 'cool URLs', but it also allows your apps to be much more portable, both on your server and in getting your apps onto other servers.
Further considerations[edit]
Defining multiple 'roots'[edit]
If you want your main app to live in /db/app but you still want to access apps such as the admin app ('/webapp/admin') stored on the filesystem, add a <root> element to controller-config.xml declaring the root pattern you want to associate with the filesystem's /webapp directory. Replace your current root elements with the following:
<root pattern="/fs" path="/"/> <root pattern="/*" path="xmldb:exist:///db/app"/>
This will pass all URL requests beginning with /fs to the filesystem's webapp directory. All other URLs will still go to the /db/app directory.
Using multiple controller.xql files[edit]
While you can get along fine with only one controller.xql (or even none!), eXist allows controller.xql files to be placed at any level of a root controller hierarchy, as defined in the controller-config.xml's <root> element(s). This allows the controller.xql files to be highly specific to the concerns of a given directory. eXist searches for the deepest controller.xql file that matches the deepest level of the URL request, working up toward the root controller.xql.
The importance of order in the controller.xql logic[edit]
Make sure that you arrange your conditional expressions in the proper order so that the rules are evaluated in that order, and no rules are inadverently evaluated first. In other words, if another rule matches URLs beginning with '/sea', the URL rewriter would always pass '/search' URLs to that rule instead of your '/search' rule.
Variable Standards[edit]
The code inside of controller.xql gets passed some variables in addition to the usual ones. Below controller.xql does not do any forwarding, but instead prints their values, and the path to the document requested, if there is one there…
xquery version "1.0"; declare namespace exist=""; import module namespace text=""; declare variable $exist:root external; declare variable $exist:prefix external; declare variable $exist:controller external; declare variable $exist:path external; declare variable $exist:resource external; let $document := concat($exist:root, (: $exist:prefix, :) $exist:controller, $exist:path) return <dummy> <exist-root>{$exist:root}</exist-root> <exist-prefix>{$exist:prefix}</exist-prefix> <exist-controller>{$exist:controller}</exist-controller> <exist-path>{$exist:path}</exist-path> <exist-resource>{$exist:resource}</exist-resource> <document>{$document}</document> </dummy>
Acknowledgments[edit]
Joe Wicentowski contributed the core of this article to the eXist-open mailing list on Mon, 19 Oct 2009. It was subsequently edited by Dan McCreary and Joe Wicentowski into its present form. | https://en.wikibooks.org/wiki/XQuery/URL_Rewriting_Basics | CC-MAIN-2015-32 | refinedweb | 1,648 | 55.64 |
This file exposes an interface to building/using memory SSA to walk memory instructions using a use/def graph. More...
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/GraphTraits.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/ilist.h"
#include "llvm/ADT/ilist_node.h"
#include "llvm/ADT/iterator.h"
#include "llvm/ADT/iterator_range.h"
#include "llvm/ADT/simple_ilist.h"
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/Analysis/PHITransAddr.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/DerivedUser.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Use.h"
#include "llvm/IR/User.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/Pass.h"
#include "llvm/Support/Casting.h"
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <iterator>
#include <memory>
#include <utility>
Go to the source code of this file.
This file exposes an interface to building/using memory SSA to walk memory instructions using a use/def graph.
Memory SSA class builds an SSA form that links together memory access instructions such as loads, stores, atomics, and calls. Additionally, it does a trivial form of "heap versioning" Every time the memory state changes in the program, we generate a new heap version. It generates MemoryDef/Uses/Phis that are overlayed on top of the existing instructions.
As a trivial example, define i32 () #0 { entry: call = call noalias i8* (i64 4) #2 %0 = bitcast i8* call to i32* call1 = call noalias i8* (i64 4) #2 %1 = bitcast i8* call1 to i32* store i32 5, i32* %0, align 4 store i32 7, i32* %1, align 4 %2 = load i32* %0, align 4 %3 = load i32* %1, align 4 add = add nsw i32 %2, %3 ret i32 add }
Will become define i32 () #0 { entry: ; 1 = MemoryDef(0) call = call noalias i8* (i64 4) #3 %2 = bitcast i8* call to i32* ; 2 = MemoryDef(1) call1 = call noalias i8* (i64 4) #3 %4 = bitcast i8* call1 to i32* ; 3 = MemoryDef(2) store i32 5, i32* %2, align 4 ; 4 = MemoryDef(3) store i32 7, i32* %4, align 4 ; MemoryUse(3) %7 = load i32* %2, align 4 ; MemoryUse(4) %8 = load i32* %4, align 4 add = add nsw i32 %7, %8 ret i32 add }
Given this form, all the stores that could ever effect the load at %8 can be gotten by using the MemoryUse associated with it, and walking from use to def until you hit the top of the function.
Each def also has a list of users associated with it, so you can walk from both def to users, and users to defs. Note that we disambiguate MemoryUses,).
MemoryDefs are not disambiguated because it would require multiple reaching definitions, which would require multiple phis, and multiple memoryaccesses per instruction.
Definition in file MemorySSA.h. | http://llvm.org/doxygen/MemorySSA_8h.html | CC-MAIN-2018-17 | refinedweb | 476 | 50.84 |
Don't mind the mess!
We're currently in the process of migrating the Panda3D Manual to a new service. This is a temporary layout in the meantime.
Actor Basics
The python class
Actor is designed to hold an animatable model and
a set of animations. Since the Actor class inherits from the NodePath class,
all NodePath functions are applicable to actors.
Note, however, that Actor is a Python class that extends the C++ NodePath class. For the most part, you don't have to think about this: Actor inherits sensibly from NodePath and generally does what you expect. There are a few subtle oddities, though. When you attach an Actor into a scene graph, the low-level C++ Panda constructs only records the NodePath part of the Actor in the scene graph, which is fine as long as you also keep a pointer to the Actor instance in your Python objects. If you let the Actor destruct, however, its visible geometry will remain, but it will cease animating (because it is no longer an Actor). Also, even if you keep the Actor object around, if you retrieve a new pointer to the Actor from the scene graph (for instance, as returned by the collision system), you will get back just an ordinary NodePath, not an Actor.
The Actor interface provides a high-level interface on the low-level Panda constructs. In Panda, the low-level node that performs the animation is called Character. You can see the Character node in the scene graph when you call
actor.ls().
Do not confuse the Actor class with the ActorNode class, which is used for physics. They are completely unrelated classes with similar names.
Using Actors
The Actor class must be imported before any loading or manipulation of actors.
from direct.actor.Actor import Actor
Once the model is loaded, the actor object must be constructed, and the model and animations must be loaded:
Loading each animation requires a tuple: the name one is giving the animation and the path to the animation. This entire process can be shortened to a single command:
nodePath = Actor('Model Path', { 'Animation Name 1':'Animation Path 1', 'Animation Name 2':'Animation Path 2', })
Note that it is also possible to store the animations and model in the same file. In that case, just create the Actor with just the model as parameter.
Although this is a rarely-used technique, it is possible to assemble a character model out of several separate pieces (separate models). This is further explained in the section Multi-Part Actors.
Panda3D supports both skeletal animation and morph animations.
It is also possible to load animations asynchronously, if your build of Panda has Threading enabled (which is the case in version 1.6.1 and above).
Panda Filename Syntax
The filenames used in the Actor constructor must follow Panda's filename conventions. See Loading Models for more information. Loading actors and animations utilizes the panda model path, the same as for static models.Previous Top Next | https://www.panda3d.org/manual/?title=Loading_Actors_and_Animations | CC-MAIN-2019-18 | refinedweb | 503 | 54.52 |
operator
trueif the first expression evaluates to less than the second one does, and
falseotherwise.
expression1 < expression2
You can determine whether one expression is less than another with the
< operator. You can use it with any data type expression. String expressions are compared according to the byte values of their characters. process local stream string-1 initial {"Catch-up"} local stream string-2 initial {"Catch-22"} do when string-1 < string-2 output string-1 || " < " || string-2 || "%n" else when string-1 > string-2 output string-1 || " > " || string-2 || "%n" else when string-1 = string-2 output "String compare has an error." || "%n" done ; Output: "Catch-up > Catch-22"
The following tests use "
<" to compare strings. The first and third tests evaluate as
true; the second evaluates as
false.
do when "a" < ul "b" ... done do when "a" < "B" ... done do when "a" < "b" ... done
You can also compare two variables of different data types, for example, an integer and a BCD. If their numeric values are the same, one will not compare greater than the other.
import "ombcd.xmd" unprefixed process local integer one-integer initial {33} local bcd one-bcd initial {33} do when (one-bcd < one-integer) output "Error. BCD of " || "d" % one-bcd || " shown as less than integer of " || "d" % one-integer || ".%n" else output "Correct. BCD of " || "d" % one-bcd || " shown as not less than integer of " || "d" % one-integer || ".%n" done ; Output: "Correct. BCD of 33 shown as not less than integer of 33."
This operator has two deprecated synonyms:
is less-than and
isnt greater-equal. | http://developers.omnimark.com/docs/html/keyword/192.html | CC-MAIN-2020-10 | refinedweb | 263 | 75.81 |
We are about to switch to a new forum software. Until then we have removed the registration on this forum.
Dear all.
I am trying to resize my cam size to 640x480 with Macbook Air internal Cam. My cam size and and OpenCV size are 320x240 now, and when I try to change to 640x480 I am keep having trouble..
here's my cam list..
When I am using 'Capture.list()[3]' and size 320x240 it works fine.. but when I change to 'Capture.list()[0]' and size 640x480 it is still working but really slow.. I just guess because of Rectangle but still I don't know why..
here's my code..
import processing.video.*; import gab.opencv.*; import java.awt.Rectangle;
OpenCV openCV; Capture cam; Rectangle[] faces;
void setup() { size(640, 480);
cam = new Capture(this, Capture.list()[0]); cam.start();
openCV = new OpenCV(this, 640, 480); openCV.loadCascade(OpenCV.CASCADE_FRONTALFACE);
noFill(); stroke(0,255,0); }
void draw() { if (cam.available()) { cam.read(); image(cam,0,0);
openCV.loadImage(cam); faces = openCV.detect(); for (int i=0; i<faces.length; i++) { rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); }
} }
Thank you so much. Stella. | https://forum.processing.org/two/discussion/13766/capture-with-macbook-air-internal-cam | CC-MAIN-2019-47 | refinedweb | 201 | 79.97 |
Are you part of any group chats? Do you need to coordinate with friends & family? Do you sometimes forget things? We do! Bots allow for new interactions through a message-centric interface instead of the user interfaces we are used to. As such, they are a perfect answer to our questions. They can be part of group chats to coordinate with friends & family and keep us accountable for these small tasks in life.
Together we will build a shared shopping list as a chatbot using the Microsoft Bot Framework. The chatbot uses the LUIS API for natural language processing to find out the chat participant's intent (add a shopping list item, mark an item as done, remove an item from the shopping list, ...) and entities (5 kg bananas, three apples, 2 liters of orange juice, ...).
If any information is missing, the bot starts a dialogue with the chat participants to retrieve the remaining information needed. Afterwards, the Azure Function for the respective intent is called, which performs the operation (add, update, delete) on a Cosmos DB.
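The flow described above can be sketched as a plain-Python intent dispatcher. In the workshop each branch would be an Azure Function backed by Cosmos DB; the intent names, the in-memory list, and the `handle` helper here are illustrative stand-ins, not the workshop's actual code:

```python
shopping_list = []  # stands in for the Cosmos DB container


def handle(intent, item):
    # dispatch the recognized LUIS intent to the matching operation
    if intent == "AddItem":
        shopping_list.append({"item": item, "done": False})
        return f"Added {item}."
    if intent == "MarkDone":
        for entry in shopping_list:
            if entry["item"] == item:
                entry["done"] = True
                return f"Marked {item} as done."
        return f"{item} is not on the list."
    if intent == "RemoveItem":
        shopping_list[:] = [e for e in shopping_list if e["item"] != item]
        return f"Removed {item}."
    # unknown intent: the real bot would start a clarifying dialogue here
    return "Sorry, I didn't get that."


print(handle("AddItem", "2 liters of orange juice"))  # -> Added 2 liters of orange juice.
```

In the full architecture, LUIS supplies the `intent` and `item` arguments from the user's free-text message, and each branch becomes its own serverless function.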
Who is it aimed at?
Developers with no prior knowledge about chat bots, that want to learn more about building bots and have a basic understanding of any programming language.
Why should I attend?
This workshop will guide you through building a real world bot with zero knowledge about any of the technologies we are using. Because this is a full stack scenario you will also learn more about language understanding with LUIS, non relational databases with Cosmos DB and serverless computing through Azure Functions.
What should I do now?
If this workshop interests you, do not forget to add it to your calendar.

No prior experience with bots is needed; however, a basic understanding of any programming language will be helpful.
This event is presented by Sandro Speth & Malte Reimann, who are our Gold Microsoft Student Ambassadors.
#chatbot #microsoft
This Edureka video on "Chatbots using TensorFlow" gives you an idea of what chatbots are and how they came into existence. It provides a brief introduction to all the layers involved in creating a chatbot using TensorFlow and machine learning.
#chatbot #tensorflow #machinelearning #ai
In this video, we will build a chatbot in Python using the Flask framework. Rather than just building and running it in your IDLE, we give it a chat interface built with HTML, CSS, and JavaScript.
Download Boilerplate code here:
This video is all about how python can be used in various applications. The key aspects you will learn in this video,
00:09 Introduction
00:46 What is an API [Application Programming Interface]
02:19 How Flask can be used in Python Programming
02:51 Difference between Flask and Django
03:27 Important API Operations [CRUD Operations]
05:55 How to Build an API for Chatbot using Python Software
08:44 How to Import Flask
09:53 How to Create a Python File by making a directory with dos commands
19:36 Creating Chatbot with flask
20:28 Creating another directory to store your HTML files
25:44 Installing ChatterBot Package version 1.0.1
26:43 Creating custom YML files
27:58 Corpus explained
29:16 Creating our own Corpus
33:50 Training our Data with the help of Chatterbot
37:22 Creating Conversations on YML formats in Chatterbot corpus
43:46 Creating another application for fetching data from URL
46:58 Creating HTML page for the user interface of Chatbot with some Java Script
01:08:15 Executing and Training our Python File
01:08:56 Chatbot created using Flask Framework
01:09:18 Testing our Chatbot
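The core of the API built in the video is an endpoint that accepts the user's message and returns the bot's reply. The video wires this up with Flask and ChatterBot; the sketch below shows the same request/response shape using only the Python standard library, with `get_bot_response` and its canned replies as placeholders for the trained bot:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


def get_bot_response(message):
    # placeholder rule-based reply; the video plugs ChatterBot in here instead
    replies = {"hi": "Hello!", "bye": "Goodbye!"}
    return replies.get(message.strip().lower(), "Sorry, I don't understand.")


class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # expects requests like /get?msg=hi, mirroring the tutorial's endpoint
        query = parse_qs(urlparse(self.path).query)
        msg = query.get("msg", [""])[0]
        body = json.dumps({"reply": get_bot_response(msg)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


# To serve the bot: HTTPServer(("", 5000), ChatHandler).serve_forever()
print(get_bot_response("hi"))  # -> Hello!
```

The HTML/CSS/JS front end from the video simply issues GET requests against this endpoint and renders the JSON reply in the chat window.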
#chatbot #Python #HTML #CSS #JavaScript
In this video, we show how you can get Vertex Pipelines notifications from Chatbot.
#chatbot
The history of chatbots dates back to 1966, when a computer program called ELIZA was invented by Weizenbaum. It imitated the language of a psychotherapist in only 200 lines of code. You can still converse with it here: Eliza.
Along similar lines, let's create a very basic chatbot utilising Python's NLTK library. It's a very simple bot with hardly any cognitive skills, but still a good way to get into NLP and get to know about chatbots.
The idea of this project was not to create some SOTA chatbot with exceptional cognitive skills, but just to utilise and test my Python skills. This was one of my very first projects, created when I had just stepped into the world of NLP, and I thought of creating a simple chatbot just to make use of my newly acquired knowledge.
NLTK (Natural Language Toolkit)
Natural Language Processing with Python provides a practical introduction to programming for language processing.
For platform-specific instructions, read here
pip install nltk
After NLTK has been downloaded, install required packages
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download('popular', quiet=True)  # download popular packages
nltk.download('punkt')
nltk.download('wordnet')
You can run the chatbot.ipynb which also includes step by step instructions.
python chatbot.py
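For a sense of what such a basic bot does internally, here is a toy keyword-matching responder (an illustrative sketch of the ELIZA-style idea, not the repository's actual code):

```python
import random

# Known greeting keywords and the canned replies to choose from.
GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "hey")
GREETING_RESPONSES = ["hi", "hey", "hi there", "hello"]

def greeting(sentence):
    """Return a canned greeting if the sentence contains a greeting word."""
    for word in sentence.lower().split():
        if word.strip("!,.?") in GREETING_INPUTS:
            return random.choice(GREETING_RESPONSES)
    return None
```

A real bot layers tokenization, lemmatization, and similarity scoring on top of this, but the request-response loop is the same.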
Download Details:
Author: parulnith
Official Website:
#python #chatbot #ai
This tutorial will cover the basics of Rasa, an open source library for building chatbots, including how words are translated into machine learning features and how the next conversation turn is picked. Then we'll quickly build a simple bot together and answer audience questions.
Dr. in the background, his name is Benson.
#python #chatbot #ai
#chatbot #python #machinelearning
Use AI to create a Discord chat bot that talks like your favorite characters. Learn how to code it in Python and JavaScript.
⭐️ Course Contents ⭐️
⌨️ (00:00) Intro
⌨️ (01:38) Gather data
⌨️ (12:27) Train the model
⌨️ (24:27) Deploy the model
⌨️ (29:42) Build the Discord bot in Python
⌨️ (41:17) Build the Discord bot in JavaScript
⌨️ (51:35) Keep the bots online
💻 Lynn's GitHub resource for this video:
#python #javascript #chatbot #discord
This Edureka video on "Azure ChatBot Service" will help you understand the nitty-gritty of ChatBot and how to create them using Microsoft Azure.
#azure #chatbot #microsoft
In this article, I present the challenges to keep your chatbot accessible to all your users.
According to the United Nations, around 15 per cent of the world's population, or an estimated 1 billion people, live with disabilities. They are the world's largest minority.
The French RGAA is mainly a translation and an adaptation of the WCAG 2.0 guideline.
"Art. Source: Law n° 2005–102 of February 11, 2005 for equal rights and opportunities, participation and citizenship of disabled persons .
Although not all websites are state-run, it is generally advisable to follow these guidelines.
In this article, we’ll build a Python Flask app that uses Pinecone — a managed similarity search service — to do just that.
Using a semantic search engine, you can extend the reach of your chatbot with minimal effort.
Not too long ago, chatbots and assistants didn’t work well. But due to leaps in the performance of NLP systems made after the introduction of transformers in 2017, combined with the open source nature of many of these models, the landscape is quickly changing. Yet, for all the recent advances, there is still significant room for improvement.
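The retrieval idea behind such a semantic search engine can be illustrated with a toy cosine-similarity ranking (the corpus and vectors below are made up for illustration; a real system would use transformer embeddings and a managed index such as Pinecone):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings for two stored answers, and one for the user's query.
corpus = {"refund policy": [0.9, 0.1], "shipping time": [0.2, 0.8]}
query = [0.85, 0.2]

# Pick the stored answer whose embedding is closest to the query.
best = max(corpus, key=lambda k: cosine(query, corpus[k]))
print(best)  # -> refund policy
```

The chatbot falls back to this nearest-neighbour answer when no scripted intent matches, which is how the search engine "extends its reach".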
There is more demand for chatbots in the workforce than ever before, and if you want to get ahead of the game and hire someone who can handle this new and exciting field, you'll need to know how to find the best one. Just read this article with our list of top chatbot developers for hire from around the world.
It's not a new idea at all. The bots behind AI have been making a lot of waves lately. Companies frequently use chatbots to engage with their customers. While this can be a great way to scale up your business, chatbot development can also be confusing and temperamental. However, for those who want to get into the chatbot industry, there are several things you need to keep in mind. Constantly ask for feedback.
Without constant feedback, it will be difficult to ascertain which direction you should move in. Before diving headfirst into a chatbot, ask yourself whether you're willing to invest the time and money in this project. Is it something that can be done for free, or do you have the means? What kind of time frame are you looking at? Are you looking for a long-term solution or something that can be developed overnight?
Don't mistake chatbots for chat rooms. A lot of people think the latter is something they can build without any prior knowledge of the technology.
If you're looking for an expert in this area, whether it's an in-house chatbot or something more complex like a brand ambassador robot, these are some of your best options out there.
List of the Top Chatbot Development Companies:
1. Auxano Global Services provides highly scalable, concurrent-load-handling chatbot solutions which cater to the needs of startups as well as large enterprises. We support customers through the complete life cycle of a project, from conversation design to after-live supervised and unsupervised learning of the bot. Our highly refined, AI-powered chatbots will warmly greet your website visitors, automatically route chats to the concerned departments, help users get through troubleshooting processes, and much more.
2. Consagous Technologies
Founded in the year 2008, Consagous Technologies is a leading Web & Mobile App Development Company in the USA and India, with its headquarters in Indore. The Company has a great reputation in providing impeccable IT solutions to businesses of every scale across the globe. Consagous is an established company ranked as one of the fastest-growing private companies in the US by Clutch, App Futura, and GoodFirms.
3. Panaceatek
Pan.
#chatbots #chatbot #chatbotdevelopment #chatbotdevelopers #web #webdevelopment #webdevelopers #webdev #developers #hirechatbotdevelopers #hiretopchatbotdevelopers #chatbotdevelopersforhire
Last year, Facebook AI Research open sourced BlenderBot 1.0, the largest open domain chatbot ever built. The first version of BlenderBot was one of the first chatbots to combine empathy, personality and knowledge in a single system.
More specifically, BlenderBot 2.0 incorporates two main innovations relative to the previous version:
Improved long term memory capabilities.
Ability to search the internet for real time knowledge.
Natural Language Processing, or NLP for short, is a part of artificial intelligence that deals with the communication between machines and humans using natural language or ordinary language.
With this technology, you can, for example, build advanced and sophisticated chatbots. Using NLP.js, you can bootstrap your own private chatbot and cloud AI service. | https://morioh.com/topic/chatbot | CC-MAIN-2021-39 | refinedweb | 1,825 | 51.99 |
I have the following code, which works: I can enter the hour, minute and second, and it will wait until that time to run the code.
What I need is to be able to run this permanently and for it to run at 7am every morning (except saturday and sunday).
How can I alter this code to achieve this? I also need to have it run in its own thread, because other objects run their own code within the same package.
import java.util.Timer;
import java.util.TimerTask;
import java.util.Calendar;
import java.util.TimeZone;

public class MyTimer {

    Timer timer;
    Calendar startTime;

    public MyTimer(int seconds) {
        startTime = Calendar.getInstance(TimeZone.getDefault());
        startTime.set(Calendar.HOUR_OF_DAY, 20);
        startTime.set(Calendar.MINUTE, 56);
        startTime.set(Calendar.SECOND, 0);
        timer = new Timer();
        timer.schedule(new RunTask(), startTime.getTime(), 1000 * 60 * 60 * 24);
    }

    class RunTask extends TimerTask {
        public void run() {
            System.out.println("It's Time");
            timer.cancel(); // Terminate the timer thread
        }
    }

    public static void main(String args[]) {
        new MyTimer(5);
        System.out.println("Waiting");
    }
}
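For reference, the weekday-skipping computation at the heart of the question can be sketched like this (in Python for brevity; the same arithmetic maps onto a Calendar loop in Java, and the 7am target is the poster's stated goal, not code from the thread):

```python
import datetime

def next_weekday_run(now):
    """Return the next 7:00 AM that falls on a Monday-Friday."""
    run = now.replace(hour=7, minute=0, second=0, microsecond=0)
    if run <= now:
        run += datetime.timedelta(days=1)  # today's 7am has already passed
    while run.weekday() >= 5:              # 5 = Saturday, 6 = Sunday
        run += datetime.timedelta(days=1)
    return run

print(next_weekday_run(datetime.datetime(2013, 11, 15, 8, 0)))  # -> 2013-11-18 07:00:00
```

Scheduling the task for that instant and rescheduling after each run avoids the fixed 24-hour period firing on weekends.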
Red Hat Bugzilla – Bug 74346
tgetent leaks memory on every call.
Last modified: 2015-01-07 19:00:32 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 4.0)
Description of problem:
Repeated calls to the tgetent routine from within a single process seem to leak
memory. Demonstration program 'ft.C' to be compiled with
g++ -o ft ft.C -ltermcap
(NOTE: assumes vt100 is defined in your system's termcap database).
#include <stdlib.h>
#include <termcap.h>
#include <assert.h>
int main(int argc, char **argv)
{
    for (;;) {
        int ret = tgetent(NULL, "vt100");
    }
}
The program that was initially affected by this problem is a port of some very
old, strange software, and it does weird terminal handling. This forces me to
call tgetent repeatedly. As a workaround, I'm caching the output of
tgetstr/tgetnum/etc for the capabilities I'm interested in.
Version-Release number of selected component (if applicable):
libtermcap-2.0.8-28
libtermcap-devel-2.0.8-28
How reproducible:
Always
Steps to Reproduce:
1. run 'top'
2. On a separate console, run the 'ft' program (see description)
3. Watch top
Actual Results: The 'ft' process will steadily increase in size until it eats
all memory in the system, and then crash.
Expected Results: The 'ft' process should NOT increase in size. The only thing
it should do is eat up CPU.
Additional info:
Possibly relevant RPM versions:
libtermcap-2.0.8-28
libtermcap-devel-2.0.8-28
gcc-c++-2.96-98
gcc-2.96-98
glibc-devel-2.2.4-13
glibc-2.2.4-13
Created attachment 91752 [details]
patch for memory leak
This patch will fix the memory leak on tgetent.
This problem also caused bash to die when you would set the TERM variable to an
invalid type, then valid, then invalid again.
This was because a security patch to termcap.c made it allocate a new buffer on
every call, regardless of whether a buffer was passed in.
This patch fixes that by remembering correctly when libtermcap allocates memory
and only freeing it in those cases (eliminates the bash crash),
and always freeing the buffer when the reference is updated (eliminating the
memory leak).
The previous patch fixes only part of the memory leak it seems..
bash now works correctly, however the ft program from Michael's report still
eats memory.
It seems to be the linked list handling in tgetent..
Created attachment 91764 [details]
fixes all tgetent leaks
This includes the above patch, and also fixes the leak in the "ft" program
Michael describes.
tgetent is now leak-free!
(used valgrind to find this one -- cool tool!)
Created attachment 91765 [details]
again
again -- messed up my filenames, and last one didn't take. sorry.
Created attachment 122706 [details]
libtermcap-2.0.8-44.src.rpm
Thank you for your precise example!
I cannot use your patch today (the issue
above was partially resolved), but
there is still the 'bug' (as you describe)!
Thanks !
---------------------------------------------
--- termcap-2.0.8/termcap.c.rasold 2006-01-02 17:10:29.000000000 +0100
+++ termcap-2.0.8/termcap.c 2006-01-02 17:10:52.000000000 +0100
@@ -421,6 +421,7 @@
sp = get_one_entry(fp, term_list[index]);
if (sp == NULL) break;
build_list(&l, sp, term_list);
+ free (sp);
}
fclose(fp);
--------------------------------------------- | https://bugzilla.redhat.com/show_bug.cgi?id=74346 | CC-MAIN-2017-04 | refinedweb | 553 | 68.26 |
All great programmers learn the same way. They poke the box. They code something and see what the computer does. They change it and see what the computer does. They repeat the process again and again until they figure out how the box works.
– Seth Godin, Poke The Box
A long time ago, back when DOS ruled the world, back before the World Wide Web, back when I was teaching myself BASIC… we typed code out by hand.
There really weren’t a whole lot of good alternatives. If you were lucky, your book came with a floppy disk in the back sleeve that had all the examples on it.
But for the most part, if you wanted to learn programming, it was a lot of trial and error, and a lot of “copying and pasting” code from books (with your hands… using a keyboard).
Why Typing Is Awesome
It’s easy to discount that story as an example of terrible hardship that nobody has to endure anymore. But there’s an amount of… badassity to it.
But more than badassity, typing code out by hand helps you learn. And learning is the name of the game in software.
Typing helps you learn the syntax. It helps you learn the keywords. It makes you think, and as you’re writing out the 10th
import foo from 'foo', the little details become apparent.
“Oh, those separators in the
for loop are semicolons, not commas.”
“Oh,
import {foo} from 'foo' isn’t the same as
import foo from 'foo'.”
Typing makes you curious about the words you are forced to write out. “What do all those things in
public static void main(String[] args) mean, anyway?”
It also helps you learn the various error messages. Inevitably, you’ll type something wrong or leave out something you thought wasn’t important or that your eye didn’t notice (damn semicolons).
When you’re typing in a program by hand, you can try to run it at various points along the way, to see what works. Maybe more importantly, you can see where it breaks. “Poking the box.”
How To Start Typing in a World With Ctrl-C
At this point, let’s suppose you are convinced that typing shit by hand is the best way to learn. How would one go about mastering this skill?
Well, it’s quite simple. Every time you would copy and paste some example code, type it out by hand instead.
- When copying from a StackOverflow answer: type it out instead
- When copying example code out of an ebook: type it out instead
- When following a tutorial on a blog: type it out instead
- When following any tutorial that says “the sample code is available in the file below”: ignore that pre-packaged bundle of non-learning and type it out instead
By all means, use the example code to check your work; use it if you get stuck. But don’t let the example code be a crutch that prevents you from learning to walk on your own.
But What About…?
But wait! These days we have fancy IDEs, package managers, and millions of libraries at our fingertips. Shouldn’t we use those to make programming more efficient?
Yes, we should.
I’m not advocating for typing out every line of code that you use, or even that you read and understand every bit of library code you import. And I’m definitely not against automating repetitive typing.
Typing by hand is important for learning.
Once you understand the code… once you’ve mastered the syntax and the special symbols… once you’re saying, “Ok I get it now, typing this out is boring…” That’s a great time to start being more efficient about it.
Automate for speed, not for lack of understanding. [Tweet this]
Interested in React?
If by chance you’re wanting to learn React, I have a book for that where typing shit by hand features prominently (though there is a version that comes with code samples). You can learn about that here.
I also publish a weekly(ish) newsletter with useful articles about React, JavaScript, and other fun stuff like that. It’s free, and you can sign up here.
For a step-by-step approach to learning React, | https://daveceddia.com/the-lost-art-of-typing-shit-by-hand/ | CC-MAIN-2017-51 | refinedweb | 717 | 81.63 |
Chinese-RFID-Access-Control-Library 0.0.5
A library for interfacing with one of the most common RFID Access Control System sold in China.
========================
This library allows python to control one of the most common RFID Access Control Systems sold in China. Now you can integrate an access control system with your software to do things like remove an user when they have failed to pay their bill.
The goal of this project is to provide the ability to automate an inexpensive, out-of-the-box RFID Access Control solution. This is especially made for businesses that rely on access control + monthly billing (hackerspaces, makerspaces, and gyms).
Main Features
-----
- Programmatically add and remove users to/from the access control system
- Programmatically trigger the relay to open the door
- Convert the 10-digit format RFID numbers to comma format or vice versa
Hardware Requirement
-----
This library currently only works with a single type of controller which goes by a wide variety of model numbers. The controller can be found by searching for "TCP access control" on Ebay, Aliexpress, and Amazon. It costs around $30-85 (depending on the number of doors). You can know which one to buy by looking for one that looks like this:
![alt tag]
One of the awesome things this controller has is a web interface. You can also add users, remove users, view logs, and change settings manually through that interface. Pictures of the interface are available here:
RFID Card Number Explanation
-----
![alt tag]
There are two numbers on the card. The access controller only uses the right number which I'm calling comma-format.
My usage example below shows an example of a function which converts the number on the left (which I'm calling 10-digit format) to the number on the right (comma format).
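As a rough illustration, such a conversion splits the card value into a 16-bit low part and the remaining high bits. The sketch below follows the common Wiegand-style convention (high bits printed before the comma, low 16 bits after); it is an assumption about how the library's helper works, not its verbatim code:

```python
def ten_digit_to_comma_format(badge):
    """Split a 10-digit card number into (facility, card) comma format.

    The facility code is the value above the low 16 bits; the card
    number is the low 16 bits of the same value.
    """
    return badge >> 16, badge & 0xFFFF

print(ten_digit_to_comma_format(11111111))  # -> (169, 35527)
```

Going the other way is just `(facility << 16) | card`.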
Usage
-----
Install:
pip install Chinese-RFID-Access-Control-Library
Add a user (using a client constructed as in the example below):

client.add_user(badge)

Remove a user:

client.remove_user(badge)
Open door #1:
from rfid import RFIDClient
ip_address = '192.168.1.20'
controller_serial = 123106461
client = RFIDClient(ip_address, controller_serial)
client.open_door(1)
TODO
-----
- Add an optional name parameter to add_user. The access controller also stores the user's name.
- The controller also stores the user's 2-factor pin for when the keypad is enabled. Need to add an optional parameter to add_user for a pin.
- Add a get_users method to RFIDClient that outputs a list of all the users currently in the controller.
- Add a get_logs method to RFIDClient which outputs the card swipe logs.
Special Thanks
-----
- Thanks to Brooks Scharff for figuring out the cool stuff that this access controller could do and keeping me interested in the project.
- Thanks to Dallas Makerspace for letting me implement and test it at their facility.
- Thanks to Mike Metzger for his work on starting to reverse engineer Dallas Makerspace's first access control system and documenting it to show me how to do it.
- Author: Paul Brown
- Download URL:
- Keywords: rfid,access control
- License:
The MIT License (MIT) Copyright (c) 2014 Paul: pawl
- DOAP record: Chinese-RFID-Access-Control-Library-0.0.5.xml | https://pypi.python.org/pypi/Chinese-RFID-Access-Control-Library/0.0.5 | CC-MAIN-2015-32 | refinedweb | 531 | 62.27 |
Compiling Scipy and Matplotlib using pip on Lion
So I upgraded to Lion. Predictably, some things went wrong. This time, the main thing that bit me was that for some reason,
pip stopped working. After a bit of messing around with
brew,
pip and
easy_install, I found out it was almost entirely my own fault. I messed up my
PATH.
In the meantime, I had uninstalled all of
brew's Python, so I had to reinstall. For me, that entails Python, Numpy, Scipy and Matplotlib. Only this time, Scipy would not build. Some obscure error in some
veclib_cabi_c.c would report errors. A quick round of googling reveals:
In order to get Scipy to compile, you need to insert
#include <complex.h> in
./scipy/lib/blas/fblaswrap_veclib_c.c.src ./scipy/linalg/src/fblaswrap_veclib_c.c ./scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c
That done, Scipy compiles perfectly fine.
But, that is not enough yet. As this blogpost outlines, Matplotlib is not currently compatible with
libpng 1.5, which ships with Lion. Fortunately, this is already fixed in the most recent source on the Matplotlib repo, so you just have to checkout that:
pip install -e git+
By doing that, Matplotlib should install just fine.
Seriously though, these PyPi repos are in a very sorry state. Every time I install one of these packages, I have to jump through hoops and spend hours debugging packages that really should work right out of the box. After all,
brew,
rvm and
gem can do it just fine. Why is
pip such a horrible mess? | http://bastibe.de/2011-08-01-compiling-scipy-and-matplotlib-using-pip-on-lion.html | CC-MAIN-2017-17 | refinedweb | 262 | 76.11 |
HOUSTON (ICIS)--Here is Friday's midday markets summary:
CRUDE: May WTI: $101.32/bbl, up $1.03; May Brent: $106.83/bbl, up 68 cents
NYMEX WTI crude futures rose on pre-weekend buying in response to upbeat US economic data showing strong growth in non-farm payrolls, suggesting that the economy continues to gain traction. The dollar fell against a basket of currencies. WTI topped out at $101.63/bbl before retreating.
RBOB: May $2.9342/gal, up 2.24 cents
Reformulated blendstock for oxygen blending (RBOB) gasoline futures continued to make gains during morning trading after the US Department of Labor said that the economy added 192,000 jobs in March. Gains in crude futures also boosted RBOB.
NATURAL GAS: May $4.442/MMBtu, down 2.8 cents
The May contract edged downwards for the first time in three sessions, as the latest near-term weather forecasts indicating milder conditions over high-demand regions from next week onwards blunted bullish sentiment stemming from Thursday's weekly gas storage report by the US Energy Information Administration (EIA).
ETHANE: steady at 28.75 cents/gal
Ethane spot prices were steady in early trading, even as buying interest was strengthening, sources said.
AROMATICS: benzene flat at $4.70-4.73/gal
Activity was thin in the US benzene market early in the day. As a result, spot prices were flat from the previous afternoon.
OLEFINS: ethylene bid flat at 51 cents/lb, RGP done lower at 59.5 cents/lb
US April ethylene bid levels were flat at 51 cents/lb, compared with the close of the previous day, against no fresh offers. US April refinery-grade propylene (RGP) traded at 59.5 cents/lb, down from the previous front-month trade at 61.5 cents/lb for March material done during the previous week.
For more pricing intelligence please visit | http://www.icis.com/resources/news/2014/04/04/9769894/noon-snapshot-americas-markets-summary/ | CC-MAIN-2016-30 | refinedweb | 311 | 69.28 |
DEFICIT FINANCING: Rajaji in Swarajya, 1960
PROF. B. R. Shenoy is bringing out for lay readers a
booklet on inflation in India, in which he deals with
the causes of the evil and the remedy. I have had the
privilege of reading the manuscript and this is what I
have gathered from what the Professor sets out with
clarity and with figures. I have no doubt the booklet,
when published, will help people to understand the
gravity of the situation. In all low income countries,
expansion of money put in circulation results quickly
in price rises. Inflation is the word used when we
look at the cause and discuss the situation in terms
of money. Price rise is the phrase used when we speak
from the point of view of commodities. If the
expansion of money, whatever be the motive or reason
for such expansion, outpaces the physical volume of
output of commodities, we have a state of inflation
and prices rise as a result.
The Ministry of Commerce publishes the average of
wholesale prices. From the hand-outs of the Reserve
Bank of India we can obtain information about money
supply. There has been a continual rise in the general
index of prices. We see also that money supply has
considerably expanded, faster than the output of
national products.
With 1938-39 as base, the general index of prices in
August 1960 was 478, a rise of nearly five times. The
present changeover of the base from 1938-39 to 1952-53
obscures the enormous magnitude of the price rise.
Government collects funds from the people by taxation,
loan issues, small savings and profits of public
sector undertakings.
From these funds disbursements are made for
administrative expenditure, repayment of past loans,
and Plan investment outlay. When these and other items
of disbursements exceed the total receipts, what is
called budget deficit arises. These deficits are
covered by notes printed at the Government Security
Press at Nasik. This is called deficit financing.
This expansion of money is followed by what is called
secondary expansion through credits given by
commercial banks. For every Rs.100 crores of
additional Nasik money, there is usually another
Rs.100 crores of credit creation.
Inflation that now prevails in India began in 1955-56.
Budget deficits rose from 97 crores in 54-55 to 225
crores in 55-56. In 57-58 the Plan outlay was so great
that, with additional defence expenditure, the budget
deficit that year reached a peak of 495 crores. These
yearly deficits have a cumulative action.
The rise in prices due to inflation reduces the value
of money and life becomes unhappy for people living on
wages and fixed incomes. Their real income is reduced,
and some of them would have to draw on past savings
for current expenditure.
The rise in price corrodes all savings. This leads in
the case of the better placed classes to the transfer
of their savings to urban property, to gold and to
concealed exports of capital. Speculative transactions
acquire additional attraction. Hoarding of goods is
encouraged, eating into savings. For a time production
may be deceptively stimulated on account of higher
prices, but soon it gets retarded on account of
increased costs. Foreign purchasers of our goods will
move to other markets. Imported goods rise in price
giving windfall profits to importers and smugglers.
As a result of inflation, income shifts from the
masses to upper income groups. The middle classes are
most hit. The strike of the Union Government employees
was a symptom of this suffering. Industrialists and
their labour force, who are able to extract a share in
the receipts, do not suffer much but the condition of
the vastly larger number of farm hands is worsened.
Inflation must be followed by price controls and
import restrictions. These produce a great deal of
economic and social disorder and injustice. The
controls over steel, coal, cement, sugar, rubber,
fertilizers and food-grains have cast a gloom over the
life of the people.
Far from equalizing incomes, the policy of controls
makes the rich richer. The stagnant per capita
consumption of cloth and of food-grains is the best
evidence of the condition of the people, and this has
resulted from the misguided policies of the present
administration. In the case of all imported goods
including gold, there is a great gap between landed
cost and market price, ranging from 30 per cent to 500
per cent, depending on the nature of the commodity.
The difference between the landed cost of gold and the
market price is seventy rupees per tola. The import
markets are illegal and the gap between cost and
market price is officially ignored but this does not
nullify the reality. The benefit of all the gaps in
cost of imports and market price goes to importers and
smugglers. Excluding government imports where the
difference may be treated as a concealed tax,
according to a reliable estimate, the ill-gotten gains
on imports during two years would be of the order of
Rs.1,000 crores. This amount has several co-sharers -
corrupt officials who handle the issue of licences,
the recipients of the licences, including both those
who just sell them in the black market and real
importers. The accounts of cost are falsified by
inter-sales and the like, so as to bring the declared
cost to near the market price, and so as also to
replenish the importers for their payments for the
purchase of the licences and for corrupt transactions
with officials and go-betweens. All these incomes are
tax-free, being illicit in nature. It is these
earnings that enable some people to give large
political donations to the ruling party and also to
other groups for purchasing peace. The beneficiaries
of the illegal gain, on account of import controls,
are the upper income groups and the money is obtained
from those who consume the imported commodities or
articles into the production of which such imported
materials go. The total anti-social money that goes
thus from consumers' pockets to upper income groups
has been estimated as being of the order of Rs.300
crores per year. Inflation, import restrictions and
other controls have affected the moral standards of
the nation, and have led to the emergence of a new
undesirable profession engaged in touting for
obtaining licences, permits and contracts, in illicit
trafficking in import licences, and in smuggling gold,
diamonds, watches, cigarettes, fountain pens, razor
blades, photographic accessories, etc. The talent for
enterprise tends to gravitate around officialdom and
to practices to become rich quickly without spending
energy.
In the absence of inflation and controls, the talent
and resources would be actively engaged in adding to
national wealth under the free play of competition,
the normal road to progress. Inflation and controls
discourage efficiency and progress and honesty.
Easy money being available to some under controls and
inflation, they favour continued 'planning' which to
them means continued inflation and controls which
provide them opportunities to amass money. Political
parties in power also favour controls, as these give
an opportunity for the exercise of power and for
acquiring personal and political gains. Conscience
pricks are quelled by the thought that it is all done
in the national interests and the gains are only an
incidental by-product.
Never were the interests of the anti-social elements
so well looked after as under the present
administration. These controls must go or the
Government should change, if the country is to be
extricated from the morass it has got stuck in. It is
not true, as is argued sometimes, that rising prices
and controls and import restrictions and exchange
controls are inherent in a developing economy. The
experience of other countries-Canada, Belgium, West
Germany, Mexico, Japan, Italy and France-have
demonstrated the untruth of this plea.
It is not true, as is sometimes stated, that prices
all over the world have risen. West German national
income rose in real terms at 13 per cent per year in
each year of the period 1951 to 1958. But prices rose
there by less than one per cent per year, 5 per cent
only in all seven years. And West Germany was in the
forefront to remove restrictions on imports and on
payments abroad. In ever so many countries price
stability and surplus in balance of payments, and
abolition of restrictions on imports and payments,
have gone together with rapid economic growth.
Since 1955, Indian price-rise stands out almost alone.
Prices in May 1960 in India were 33 per cent higher
than in 1954. In France and Italy prices declined
during that period. In Germany, Belgium and Japan and
other countries the annual price rise was 1 per cent
or at most 2 per cent.
There is a notion that curtailing bank credit will
reduce inflation. Bank credit is so closely related to
deficit financing that keeping the latter going and
reducing inflation by control over credit is a
futility. It only adds to the confusion. To restrict
credit against food-grains and certain other
commodities would raise the cost of banking services
generally, and in particular the cost of credit to the
trade in those commodities, which are essential for
the economic life of the community. Naturally, such
policies encourage advances against assets outside the
banned list and drive the business of credit from
scheduled banks to others which are not under control.
The policy of credit controls has demonstrably failed.
Tampering with the credit- machinery will not achieve
anything as long as deficit financing is continuing.
The fact is that the attempt to 'invest' non-available
resources - which is what deficit financing amounts
to—is a wrong and futile policy. No plan can be larger
than the resources available for investment, be it
internal or that obtained from generous outsiders.
Even as water is no substitute for milk, inflation is
no real resource. The fallacy produces high prices and
distress. A plan based on inflation will retard
progress instead of accelerating it.
The Third Plan is tremendously inflationary. The overt
deficit financing of this Plan is Rs.550 crores. This
is misleading. Without totalitarian and physical
suppression of consumption, in order to mop up
people's money by reducing consumption, the amount of
supposed availability of savings estimated at Rs.7,200
crores is an over-estimate. The over-estimate is at
least of the order of Rs.1,300 crores.
Thus what the Plan requires by way of foreign aid,
(over and above the amount required for repayments
due) is not Rs.2,790 crores but Rs.5,350 crores. The
deficit financing therefore will not be only Rs.550
crores as planned but six times that figure. If the
foreign aid does not arrive according to the time
table, whatever the causes may be, the gap will be
much greater. And there are good reasons for
apprehending this.
We know that deficit financing to the extent of Rs.367
crores during the five years ending 59-60 led to a
price rise of 32 per cent. The deficit financing
inherent in the Third Plan will certainly involve
'runaway inflation', like the one that swept over
Germany after the first World War. During the five
months ending August 1960, prices have been rising at
a rate computed at 14.2 per cent per annum. This is an
indication to take note of Deficit financing has
already gone too far. Foreign aid and drafts on
currency reserves, cannot go on indefinitely. Holding
the price line, which is continually promised, would
be just King Canute's command to the waves of the sea.
It is pathetically argued that inflation will be
stopped by increase in production. Inflation retards
production. It drives up costs and the commodities
manufactured must be sold at higher prices. Prices and
costs rise simultaneously with inflation, and will
continue to rise with continuing inflation. Inflation
is the disease and the prices only indicate the
temperature. There is no good attempting to reduce the
symptom while keeping the disease going. Fair price
shops of any kind or number cannot achieve control of
prices. Even if buffer stocks released for sale
depress food prices artificially, this will shift
agriculture to other than food-crops, and render the
food position worse. Any commodity distribution at
arbitrary prices will fail, because the stocks will be
bought up as soon as they are put on the market, and
go to feed the black market. The cost of any remedy
put in action by way of subsidies will ultimately fall
on the shoulders of the tax-payers. The net result, so
far as the price level is concerned will be nil. The
price problem resulting from inflation cannot be
corrected by a change in the machinery of
distribution. The diagnosis must be kept in mind when
treatments are attempted The money let loose being the
cause, remedies other than reducing the money flow
will not avail.
The favourite notion that prices result from traders'
conspiracies is stupid. Such conspiracies are
impossible. The prevailing price rise is not the
outcome of either monopolies or impossible
conspiracies but of deficit financing. Prices have
risen despite bumper crops and heavy annual import of
food-grains of three million tons for four years.
To stop prices from rising, we must restore the
balance between the flow of production and the flow of
money. Inflation and excessive State interference are
the two evils of the Indian economy of today. If and
only when these two evils are removed, can we expect
to be saved from rising prices. If not, it is a case
of the ground being prepared for communists to take
totalitarian charge.
September 24, 1960 Swarajya
__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around | http://athiyaman.blogspot.com/2007_08_01_archive.html | CC-MAIN-2018-17 | refinedweb | 2,309 | 62.68 |
Distributed Hash Tables, Part I
In the world of decentralization, distributed hash tables (DHTs) recently have had a revolutionary effect. The chaotic, ad hoc topologies of the first-generation peer-to-peer architectures have been superseded by a set of topologies with emergent order, provable properties and excellent performance. Knowledge of DHT algorithms is going to be a key ingredient in future developments of distributed applications.
A number of research DHTs have been developed by universities, picked up by the Open Source community and implemented. A few proprietary implementations exist as well, but currently none are available as SDKs; rather, they are embedded in commercially available products. Each DHT scheme generally is pitched as being an entity unto itself, different from all other schemes. In actuality, the various available schemes all fit into a multidimensional matrix. Take one, make a few tweaks and you end up with one of the other ones. Existing research DHTs, such as Chord, Kademlia and Pastry, therefore are starting points for the development of your own custom schemes. Each has properties that can be combined in a multitude of ways. In order to fully express the spectrum of options, let's start with a basic design and then add complexity in order to gain useful properties.
Basically, a DHT performs the functions of a hash table. You can store a key and value pair, and you can look up a value if you have the key. Values are not necessarily persisted on disk, although you certainly could base a DHT on top of a persistent hash table, such as Berkeley DB; and in fact, this has been done. The interesting thing about DHTs is that storage and lookups are distributed among multiple machines. Unlike existing master/slave database replication architectures, all nodes are peers that can join and leave the network freely. Despite the apparent chaos of periodic random changes to the membership of the network, DHTs make provable guarantees about performance.
To begin our exploration of DHT designs, we start with a circular, double-linked list. Each node in the list is a machine on the network. Each node keeps a reference to the next and previous nodes in the list, the addresses of other machines. We must define an ordering so we can determine what the “next” node is for each node in the list. The method used by the Chord DHT to determine the next node is as follows: assign a unique random ID of k bits to each node. Arrange the nodes in a ring so the IDs are in increasing order clockwise around the ring. For each node, the next node is the one that is the smallest distance clockwise away. For most nodes, this is the node whose ID is closest to but still greater than the current node's ID. The one exception is the node with the greatest ID, whose successor is the node with the smallest ID. This distance metric is defined more concretely in the distance method (Listing 1).
Listing 1. ringDistance.py
# This is a clockwise ring distance function. # It depends on a globally defined k, the key size. # The largest possible node id is 2**k. def distance(a, b): if a==b: return 0 elif a<b: return b-a; else: return (2**k)+(b-a);
Each node is itself a standard hash table. All you need to do to store or retrieve a value from the hash table is find the appropriate node in the network, then do a normal hash table store or lookup there. A simple way to determine which node is appropriate for a particular key (the one Chord uses) is the same as the method for determining the successor of a particular node ID. First, take the key and hash it to generate a key of exactly k bits. Treat this number as a node ID, and determine which node is its successor by starting at any point in the ring and working clockwise until a node is found whose ID is closest to but still greater than the key. The node you find is the node responsible for storage and lookup for that particular key (Listing 2). Using a hash to generate the key is beneficial because hashes generally are distributed evenly, and different keys are distributed evenly across all of the nodes in the network.
Listing 2. findNode.py
# From the start node, find the node responsible # for the target key def findNode(start, key): current=start while distance(current.id, key) > \ distance(current.next.id, key): current=current.next return current # Find the responsible node and get the value for # the key def lookup(start, key): node=findNode(start, key) return node.data[key] # Find the responsible node and store the value # with the key def store(start, key, value): node=findNode(start, key) node.data[key]=value
This DHT design is simple but entirely sufficient to serve the purpose of a distributed hash table. Given a static network of nodes with perfect uptime, you can start with any node and key and find the node responsible for that key. An important thing to keep in mind is that although the example code so far looks like a fairly normal, doubly linked list, this is only a simulation of a DHT. In a real DHT, each node would be on a different machine, and all calls to them would need to be communicated over some kind of socket protocol.
In order to make the current design more useful, it would be nice to account for nodes joining and leaving the network, either intentionally or in the case of failure. To enable this feature, we must establish a join/leave protocol for our network. The first step in the Chord join protocol is to look up the successor of the new node's ID using the normal lookup protocol. The new node should be inserted between the found successor node and that node's predecessor. The new node is responsible for some portion of the keys for which the predecessor node was responsible. In order to ensure that all lookups work without fail, the appropriate portion of keys should be copied to the new node before the predecessor node changes its next node pointer to point to the new node.
Leaves are very simple; the leaving node copies all of its stored information to its predecessor. The predecessor then changes its next node pointer to point to the leaving node's successor. The join and leave code is similar to the code for inserting and removing elements from a normal linked list, with the added requirement of migrating data between the joining/leaving nodes and their neighbors. In a normal linked list, you remove a node to delete the data it's holding. In a DHT, the insertion and removal of nodes is independent of the insertion and removal of data. It might be useful to think of DHT nodes as similar to the periodically readjusting buckets used in persistent hash table implementations, such as Berkeley DB.
Allowing the network to have dynamic members while ensuring that storage and lookups still function properly certainly is an improvement to our design. However, the performance is terrible—O(n) with an expected performance of n/2. Each node traversed requires communication with a different machine on the network, which might require the establishment of a TCP/IP connection, depending on the chosen transport. Therefore, n/2 traversed nodes can be quite slow.
In order to achieve better performance, the Chord design adds a layer to access O(log n) performance. Instead of storing a pointer to the next node, each node stores a “finger table” containing the addresses of k nodes. The distance between the current node's ID and the IDs of the nodes in the finger table increases exponentially. Each traversed node on the path to a particular key is closer logarithmically than the last, with O(log n) nodes being traversed overall.
In order for logarithmic lookups to work, the finger table needs to be kept up to date. An out-of-date finger table doesn't break lookups as long as each node has an up-to-date next pointer, but lookups are logarithmic only if the finger table is correct. Updating the finger table requires that a node address is found for each of the k slots in the table. For any slot x, where x is 1 to k, finger[x] is determined by taking the current node's ID and looking up the node responsible for the key (id+2(x-1)) mod (2k) (Listing 3). When doing lookups, you now have k nodes to choose from at each hop, instead of only one at each. For each node you visit from the starting node, you follow the entry in the finger table that has the shortest distance to the key (Listing | http://www.linuxjournal.com/article/6797?page=0,0 | CC-MAIN-2014-52 | refinedweb | 1,492 | 59.74 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
You can subscribe to this list here.
Showing
6
results of 6
On 29/09/03 15:44 -0700, Rich Morin wrote:
>
>
Running
h2xs -PAn foobar
will generate one.
You could probably comment it out too.
Cheers, Brian
> --
> - Canta Forda Computer Laboratory
> - The FreeBSD Browser, Meta Project, etc.
> - Prime Time Freeware's DOSSIER series
>
>
> -------------------------------------------------------
> This sf.net email is sponsored by:ThinkGeek
> Welcome to geek heaven.
>
> _______________________________________________
> Yaml-core mailing list
> Yaml-core@...
>
--
email: rdm@...; phone: +1 650-873-7841 - Canta Forda Computer Laboratory - The FreeBSD Browser, Meta Project, etc. - Prime Time Freeware's DOSSIER series
At 3:16 AM -0700 9/28/03, Brian Ingerson wrote:
>I can't believe how simple it is.
It is, indeed, amazingly concise. A splendid example of multi-lingual
code re-use, open source collaboration, etc. Kudos to all concerned.
Now for a couple of small requests:
* Could someone write up a skeletal man page that details the current
state of the Perl interface to Syck? That is, something which (in
addition to the Syck docs) lays out what we can currently do (and
how to do it)?
* If someone on the list knows how to build XS files under Mac OS X,
could they do so and post a link to the result? If not, I'll ask
for help on the macosx_perl (macosx@...) list.
* This might make an interesting article for MacTech, if anyone wants
to write it up (or answer my questions while I do so :-).
-r
--
email: rdm@...; phone: +1 650-873-7841 - Canta Forda Computer Laboratory - The FreeBSD Browser, Meta Project, etc. - Prime Time Freeware's DOSSIER series
---- 8< test.pl --------------------------------------------
use YAML::Parser::Syck;
use Data::Dumper;
local $/;
print Data::Dumper::Dumper YAML::Parser::Syck::Parse(<DATA>);
__DATA__
---
foo: &1
bar: xxx
baz: 111
bar: *1
---- 8< Syck.pm --------------------------------------------
package YAML::Parser::Syck;
use strict;
use vars qw($VERSION @ISA);
use warnings;
require DynaLoader;
$VERSION = '0.01';
@ISA = qw(DynaLoader);
YAML::Parser::Syck->bootstrap($VERSION);
1;
---- 8< Syck.xs --------------------------------------------
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include "ppport.h"
#include <syck.h>
SYMID perl_syck_handler(SyckParser *p, SyckNode *n) {
SYMID obj;
SV *sv, *scalar, *entry, *key, *value;
AV *seq;
HV *map;
long i;
switch (n->kind) {
case syck_str_kind:
sv = newSVpv(n->data.str->ptr, n->data.str->len);
break;
case syck_seq_kind:
seq = newAV();
for (i = 0; i < n->data.list->idx; i++) {
obj = syck_seq_read(n, i);
syck_lookup_sym(p, obj, (char**)&entry);
av_push(seq, entry);
}
sv = newRV_inc((SV*)seq);
break;
case syck_map_kind:
map = newHV();
for (i = 0; i < n->data.pairs->idx; i++) {
obj = syck_map_read( n, map_key, i);
syck_lookup_sym(p, obj, (char**)&key);
obj = syck_map_read(n, map_value, i);
syck_lookup_sym(p, obj, (char**)&value);
hv_store_ent(map, key, value, 0);
}
sv = newRV_inc((SV*)map);
break;
}
obj = syck_add_sym(p, (char *)sv);
return obj;
}
static SV * Parse(char *s) {
SV *obj;
SYMID v;
SyckParser *parser = syck_new_parser();
syck_parser_str_auto(parser, s, NULL);
syck_parser_handler(parser, perl_syck_handler);
syck_parser_error_handler(parser, NULL);
syck_parser_implicit_typing(parser, 1);
syck_parser_taguri_expansion(parser, 0);
v = syck_parse(parser);
syck_lookup_sym(parser, v, (char **)&obj);
syck_free_parser(parser);
return obj;
}
MODULE = YAML::Parser::Syck PACKAGE = YAML::Parser::Syck
PROTOTYPES: DISABLE
SV *
Parse (s)
char * s
---- 8< Makefile.PL --------------------------------------------
use ExtUtils::MakeMaker;
WriteMakefile(
NAME => 'YAML::Parser::Syck',
VERSION_FROM => 'Syck.pm',
LIBS => ['-lsyck'],
DEFINE => '',
INC => '-I.',
);
---- 8< -------------------------------------------------------
On 27/09/03 10:06 -0600, why the lucky stiff wrote:
> Barrie Stott (G.B.Stott@...) wrote:
> >
> > I downloaded syck-0.35 onto my Debian woody linux 2.2.20 system and
> > did as instructed for ruby (I have 1.6.7). When everything was
> > compiled and installed WITHOUT ERROR, I used irb as suggested in the
> > file syck-0.35/RELEASE to get something going and got the error (all
> > on one line):
> >
> > ruby: relocation error:
> > /usr/local/lib/site_ruby/1.6/i386-linux/syck.so:
> > undefined symbol: LONG2NUM
> >
>
> Check out from CVS. This is fixed there. As well as a number of other
> crucial parsing bugs. You can find CVS repository instructions at the
> Syck project page [].
>
> >
> > 2. Do I need racc or yaml in addition to syck? I've tried with and
> > without both and get the same error. Even so. it would cut down the
> > space a bit for searching for the error if I can remove them from
> > consideration.
> >
>
> To compile from CVS, you will need autoconf, automake, bison and re2c.
> re2c is available from the Syck page.
Hey. I'm working on a Perl binding (with Clark) to Syck.
The re2c distribution from Why's site was a total bastard, and wouldn't
compile. After fscking with it for 20 minutes I decided to try:
apt-get install re2c
Sure enough! After that libsyck built fine.
Cheers, Brian
Brilliant news! This sucks that I'm leaving town right now. I want to
jump into this right away and yet my plane beckons in just 1 hour.
On this day, I feel the love of a mother. More probably. When I get a
chance, I'll get you cvs access and we'll check this in pronto.
_why
Brian Ingerson (ingy@...) wrote:
>
> | http://sourceforge.net/p/yaml/mailman/yaml-core/?viewmonth=200309&viewday=29 | CC-MAIN-2015-18 | refinedweb | 856 | 67.55 |
I'm creating a web scraper and I'm running into an issue where the date the website gives me is of the form "Monday, January 1, 1991"
What's the best way to format this into a "MM-DD-YYYY" format? Should I split on the comma, pull out the month and convert it to a number, and then put the numbers together? Or is there some quicker way to do this?
Use the
datetime module, using
strptime to parse to a
datetime object, then
strftime to format as you need it:
from datetime import datetime date = datetime.strptime("Monday, January 1, 1991", "%A, %B %d, %Y") print(date.strftime("%m-%d-%Y"))
which outputs:
01-01-1991
For the record, any time you're considering rolling your own parser, the answer is almost always "Don't". Rolling your own parser is error-prone; if at all possible, look for an existing parser. | https://codedump.io/share/lMtkIQOlMBAG/1/python-format-date-of-the-form-quotmonday-january-1-1991quot | CC-MAIN-2017-17 | refinedweb | 154 | 68.5 |
Hello, my project is calling for multiple temperature readings. I have two DS18B20 temp sensors with the grove connectors going into a lotus v1 seeeduino. I am able to get one working, but it looks like all the examples online assume you are able to link up the two sensors data cable. with the grove setup i am plugging one into D2 and the other into D6. This code is just a variation of an example I found. Does anyone have any suggestions?
The error i am getting is
error: 'TempF1' was not declared in this scope Serial.print(TempF1); ^~~~~~
This is the code i am using.
#include <OneWire.h> #include <DallasTemperature.h> // Data wire is plugged into port 2 on the Arduino #define ONE_WIRE_BUS_1 2 #define ONE_WIRE_BUS_2 6 // Setup a oneWire instance to communicate with any OneWire devices OneWire oneWire_1(ONE_WIRE_BUS_1); OneWire oneWire_2(ONE_WIRE_BUS_2); // Pass our oneWire reference to Dallas Temperature. DallasTemperature sensor1(&oneWire_1); DallasTemperature sensor2(&oneWire_2); void setup(void) { // start serial port Serial.begin(9600); // Start up the library sensor1.begin(); sensor2.begin(); // locate devices on the bus } void loop(void) { sensor1.requestTemperatures(); float tempF1=sensor1.getTempFByIndex(0); sensor2.requestTemperatures(); float tempF2=sensor2.getTempFByIndex(0); Serial.print("sensor1: "); Serial.print(TempF1); Serial.print(" ") Serial.print("sensor2: "); Serial.print(TempF2); } | https://forum.seeedstudio.com/t/multiple-onewire-temp-sensors-on-lotus-v1-seeeduino/259964 | CC-MAIN-2021-49 | refinedweb | 211 | 50.84 |
Revision history for Perl extension Parallel::ForkManager. 1.19 2016-06-28 [ DOCUMENTATION ] - Typo fixes. (GH#10) - Add short discussion on security about the information passing via files between master/children processes. - Document the problem between PerlIO::fzip and fork(). (GH#11) [ ENHANCEMENTS ] - New way to spawn workers via 'start_child'. [ STATISTICS ] - code churn: 4 files changed, 114 insertions(+), 5 deletions(-) 1.18 2016-03-29 [ BUG FIXES ] - Storage file between child and parent could have the wrong name, because $$ was used instead of parent_pid. (GH#9, reported by Lucien Coffe) [ STATISTICS ] - code churn: 4 files changed, 37 insertions(+), 4 deletions(-) 1.17 2015-11-28 - Up Test::More's dependency version to v0.94 (because of 'subtest'). (GH#8, mauke) [ STATISTICS ] - code churn: 3 files changed, 88 insertions(+), 70 deletions(-) 1.16 2015-10-08 - wait_one_child wasn't waiting at all. (RT#107634, Slaven Rezic, Yanick) [ STATISTICS ] - code churn: 10 files changed, 517 insertions(+), 461 deletions(-) 1.15 2015-07-08 - test's watchdog actually exit if it's being hit. (RT#105747, Zefram) - condition to catch children reaped by external forces improved. (RT#105748, Zefram + Yanick) 1.14 2015-05-17 - Add 'reap_finished_children', 'is_child' and 'is_parent'. (GH#6, Nine bit) 1.13 2015-05-11 - Use 'select' instead of sleep in _waitpid_blocking. (GH#5) 1.12 2015-02-23 - Allow to use true blocking calls. (RT#102305) 1.11 2015-01-30 - Promote to non-dev release. 1.10_2 2015-01-25 - Put the problematic test as a TODO. 1.10_1 2015-01-22 - Increase timeouts in test to address FreeBSD failures. 1.09 2015-01-08 - Test was failing on Windows platforms. (Yanick Champoux) 1.08 2015-01-07 - New helper functions 'max_procs', 'running_procs' and 'wait_for_available_procs'. GH#4 (Yanick Champoux) - Play nicer with calls to 'waitpid' done outside of P::FM. 
GH#3 (Yanick Champoux) 1.07 2014-11-10 - Increase minimal Test::Simple requirement RT #92801 - Implement better style and practices in the examples in the POD. (Shlomi Fish) 1.06 2013-12-24 - Remove temporary directory only if it was an automatically generated one. Now fixed. (Shoichi Kaji) RT #89590 (johantheolive) 1.05 2013-09-18 - Remove temporary directory only if it was an automatically generated one. (reported by Manuel Jeckelmann) 1.04 2013-09-03 - Require File::Path 2.0 to support Perl 5.8 (Ian Burrell) - fix some typos #88358 (David Steinbrunner) - documentation fixes #84337 (Damyan Ivanov) 1.03 2013-03-06 - Use second parameter from new() that was unused in the last few released. (Michael Gang) 1.02 2012-12-24 - Fix test for Windows. 1.01 2012-12-23 - Disable utf8 test on Windows where it is a perl bug. - Change version number scheme to two parts. 1.0.0 2012-12-23 - Fixing RT 68298 - Insecure /tmp file handling using File::Temp::tempdir by John Lightsey (LIGHTSEY) - Adding another callback example and several tests Gabor Szabo (SZABGAB) 0.7 2001-04-04 - callback code tested, exit status return (Chuck, dLux) - added parallel_get.pl, a parallel webget example (dLux) - added callbacks.pl, a callback example (Chuck, dLux) - documentation updtes (Chuck, dLux) 0.6 2000-11-30 - documentation tweak fixes by Noah Robin - warning elimination fixes 0.5 2000-10-18 - original version; created by h2xs 1.19 0.7.9 2010-11-01 - Exclude the example scripts from getting installed. () 0.7.8 2010-08-25 - Make $VERSION compatible with the most perl versions possible () 0.7.7 2010-09-28 - Small distribution fixes 0.7.6 2010-08-15 - Added datastructure retrieval (Ken Clarke) - Using CORE::exit instead of exit () 0.7.5 2002-12-25 - Documentation fixes - Fix bug if you specify max_procs = 0 0.7.4 2002-07-04 - on_wait callback now runs from the wait_all_children method - run_on_wait can run a task periodically, not only once. 
0.7.3 2001-08-24 - minor bugfix on calling the "on_finish" callback 0.7.2 2001-05-14 - win32 port - fix for the broken wait_one_child 0.7.1 2001-04-26 - various semantical and grammar fixes in the documentation - on_finish now get the exit signal also - on_start now get the process-identification also - described limitations in the doc | https://metacpan.org/changes/distribution/Parallel-ForkManager | CC-MAIN-2016-44 | refinedweb | 696 | 61.63 |
Aerospike Server Community Edition 3.6.0 was released on August 15, 2015 and is available for download here.
This release features a plethora of highlights, fixes and tools.
Highlights:
- (KVS) [AER-2903] Batch-read improvement.
- Batch-read requests are now proxied during cluster changes.
- Handle mixed namespace/bin selections in one batch call.
- Performance improvement.
- (KVS) [AER-2986] Scan improvement.
- Concurrent scan jobs are interlaced, allowing progress on all scans.
- Add ability to promote a scan to high-priority so it can be processed immediately.
- Major performance improvement.
- (KVS) [PROD-265] Support for Double data-type.
- Roll-back not supported once Double data inserted.
- Roll-back requires cold restart if no Double data inserted.
Fixes:
- (KVS) Refactoring of write code path for operations and error handling.
- (KVS) AER-3720 - Github #71 Create info threads as detached.
- (KVS) AER-3832 - fail transaction that tries to store key with single-bin data-in-memory configuration.
- (KVS) AER-4026 - fix incorrect processing of interface name resulting in bogus nodeid generated.
- (KVS) AER-3968 - fix negative heartbeat accounting.
- (KVS) AER-3946 - fix crash when non-virtual external address does not match interface ip.
- (KVS) AER-3993, AER-3897 - fix crash caused by invalid message operations.
- (KVS) AER-3810 - Allow reads and writes under cluster integrity situations.
- (KVS) AER-4237 - Fix for err_sync_copy_null_node.
- (KVS) Return void-time to client on write and UDF transactions.
- (KVS) Refactoring of write code path for operations and error handling.
- (KVS) Fix ‘operate’ transactions so read-operations are executed in-sequence.
- (SINDEX) AER-4151 - Fix leak of file descriptors when query run against a non-existent set.
- (SINDEX) AER-3924,AER-3806 - stat & configuration improvement - adding query-thread, query-batch-size, query-priority, query-threshold, query-untracked-time, to be settable from configuration file.
- (SINDEX) AER-3725,AER-4062 - Use query transaction structure from a pool.
- (UDF) AER-3785 - fix incorrect record access via UDF when record is already expired but not removed yet by background thread.
- (UDF) AER-3042, AER-4182 - increase the bin limit for UDFs to 512. Fail out if a UDF attempts to access a record with more than 512 bins.
- (UDF) Fix crash caused by lua error() calls with non-string argument type.
- (BUILD) AER-3051,AER-3052,AER-3752, AER-4162 - fix build on amazon OS & debian8 build. Create “aerospike” user on rpm distro’s.
Tools:
- (ASADMIN) Introduce asadmin 0.0.1, alternative to asmonitor.
- (AQL) AER-3483 - Add meta data print to json format.
- (AQL) Add option of REPLICA_ANY.
- (AQL) AER-3556 - Fix SET RECORD_TTL override namespace default ttl.
- (AQL) AER-3807 - Handle quote around bin name on an equality query.
- (AQL) AER-3734 Github#73 Fix double quote escape: Changed json output to use libjansson so all other escapes should be correct too.
- (AQL) AER-3794 - Fix setting of LuaPath.
- (backup) AER-3830 - honor “-s” option for keys.. | https://discuss.aerospike.com/t/aerospike-server-community-edition-ce-3-6-0-august-15-2015/1733 | CC-MAIN-2018-30 | refinedweb | 477 | 61.73 |
I have a large-ish project that I'm working on. It involves lots of modules with lots of classes inside them, and everything is coordinated by the main thread. Now, I have a variable in the main thread which I need to be accessible from pretty much everywhere (by which I mean by the other modules). I have tried putting "from main import Tree" (Tree being the variable I need) at the top of a module with the rest of its import statements, but for some reason that causes the main thread to not recognize any of the module's classes.
I don't know if what I'm doing is even remotely right, I figured it might work after reading a bit about python namespaces, but I can't think of anything else. I'm working with python 2.6. All help is appreciated. | https://www.daniweb.com/programming/software-development/threads/205705/need-help-with-scoping | CC-MAIN-2018-30 | refinedweb | 146 | 76.86 |
SwiftSSL alternatives and similar libraries
Based on the "Cryptography" category.
Alternatively, view SwiftSSL alternatives based on common mentions on social networks and blogs.
CryptoSwift9.8 6.8 L4 SwiftSSL VS CryptoSwiftCrypto related functions and helpers for Swift implemented in Swift programming language.
RNCryptor9.2 0.0 SwiftSSL VS RNCryptorCCCryptor (Apple's AES encryption) wrappers for iOS and Mac.
SwiftShield8.1 3.6 SwiftSSL VS SwiftShieldA tool to protect iOS apps against reverse engineering attacks.
Themis7.6 8.1 L3 SwiftSSL VS ThemisCrypto library for storage and messaging
Swift-Sodium5.9 6.3 L5 SwiftSSL VS Swift-SodiumSwift interface to the Sodium library for common crypto operations for iOS and OS X.
IDZSwiftCommonCrypto5.5 0.0 L4 SwiftSSL VS IDZSwiftCommonCryptoA wrapper for Apple's Common Crypto library written in Swift.
BlueCryptor4.4 2.1 L2 SwiftSSL VS BlueCryptorPure Swift cross-platform crypto library using CommonCrypto/libcrypto
Siphash4.2 1.1 L3 SwiftSSL VS SiphashSimple and secure hashing in Swift with the SipHash algorithm.
JOSESwift3.8 3.5 SwiftSSL VS JOSESwiftA framework for the JOSE standards JWS, JWE, and JWK.
BlueRSA3.6 2.7 SwiftSSL VS BlueRSAIBM's Cross Platform RSA Crypto library.
AES256CBC3.4 5.0 L4 SwiftSSL VS AES256CBCMost convenient AES256-CBC encryption for Swift 2
OpenSSL2.3 0.0 L4 SwiftSSL VS OpenSSLOpenSSL helpers for Swift 2.2 on Linux.
SCrypto1.4 0.0 L5 SwiftSSL VS SCryptoElegant Swift interface to access the CommonCrypto routines.
SweetHMAC1.3 0.0 L5 SwiftSSL VS SweetHMACA tiny and easy to use Swift class to encrypt strings using HMAC algorithms.
Keys1.2 0.0 L4 SwiftSSLSSL or a related project?
README
SwiftSSL
SwiftSSL is an Elegant crypto toolkit in Swift, based on CommonCrypto.
How To Get Started
Add SwiftSSL as submodule
git submodule add SwiftSSL
Add SwiftSSL.xcodeproj as subproject
Configure your project
- Add relative path for file module.map in your project Build Settings / Swift Compiler - Search Paths / Import Paths
Done!
Sample Code
SwiftSSL try to do things in swift way, so it doesn't just transfor code from openssl or other lib.
If you wanna digest String or NSData, you can do it just like this:
import SwiftSSL let plainText: String = "This is plain text." var digestString = plainText.digest(SwiftSSL.DigestAlgorithm.MD5)
Just that simple!
Wanna sign a message?
import SwiftSSL let message: String = "This is your message." let passphrase: String = "Your passphrase" var signature = message.sign(SwiftSSL.HMACAlgorithm.SHA512, key: passphrase)
Have fun!
About
SwiftSSL is still on the way. It has provided:
- Crypto
- Digest
- MD2
- MD4
- MD5
- SHA1
- SHA224
- SHA256
- SHA384
- SHA512
- HMAC
- MD5
- SHA1
- SHA224
- SHA256
- SHA384
- SHA512
References
License
Cypraea is released under the MIT license. See LICENSE for details.
*Note that all licence references and agreements mentioned in the SwiftSSL README section above are relevant to that project's source code only. | https://swift.libhunt.com/swiftssl-alternatives | CC-MAIN-2021-17 | refinedweb | 466 | 67.55 |
Features/GTK3/Porting/Implode
This page is being performed while I'm porting Implode.
Contents
Porting Gtk.DrawingArea
There are some things related with gtk.DrawingArea that we have to change when we are porting an activity to Gtk3. The names of the signals change and the way that they work as well.
Get allocation size
self.allocation property is no longer available and we should use self.get_allocation_width to get the allocation size:
self.allocation.width self.allocation.height
should be replaced by:
self.get_allocated_width() self.get_allocated_height()
Signals
expose-event
This signal was override by draw and it have to be connected with the method that was connected with the expose-event before. The method itself does not change but the arguments that it receives do. This is the new definition of the function in my case:
def _draw_event_cb(self, widget, cr):
size-allocate
This signal was used to resize the gtk.DrawingArea every time the window grows and at the startup of the activity. This is useful to re-draw the widget for different resolutions (desktops and XOs for example).
I used the same function connected to this signal but I change the signal connected by configure-event. Here is the definition of the callback:
def _configure_event_cb(self, widget, event):
I just used the size-allocate signal to save the the dimensions of the widget (width and height), so I can use them later on the draw signal.
def _size_allocate_cb(self, widget, rect): self.width = rect.width self.height = rect.height
Focus
Implode defines a new widget called GridWidget and it should be focusable because we want to move a cursor with the key arrows on it. So, this widget was using:
self.set_flags(Gtk.CAN_FOCUS)
but that method (set_flags) is no longer available and we have to replace it by:
self.set_can_focus(True)
Another thing related with the focus is to know who has the actual focus. In gtk2 it was done by
.focus_child
and in Gtk3 it should be replaced by:
.get_focus_child()
Handling .svg with rsvg
rsvg is a library to manage .svg files. The only thing that I found that should be updated is the import and the loading of a rsvg from data.
Replace the usual import:
import rsgv
by the Gtk3 one:
from gi.repository import Rsvg
This way to load a rsvg from data should be replaced:
rsvg.Handle(data=data)
by this new way
Rsvg.Handle.new_from_data(data)
Invalidate rectangle
Gdk.Window.invalidate_rect takes a Gdk.Rectangle instead a tuple in Gtk3.
rect = Gdk.Rectangle() rect.x, rect.y, rect.width, rect.height = (0, 0, width, height)
self.get_window().invalidate_rect(rect, True)
Useful Links
-
-
-
-
- | https://wiki.sugarlabs.org/index.php?title=Features/GTK3/Porting/Implode&oldid=80050 | CC-MAIN-2019-35 | refinedweb | 443 | 59.4 |
See docs in FrameTransformerInterface.
This class is an implementation for standalone (non ROS) applications.
Definition at line 88 of file FrameTransformer.h.
#include <mrpt/poses/FrameTransformer.h>
Definition at line 91 of file FrameTransformer.h.
This will be mapped to mrpt::math::TPose2D (DIM=2) or mrpt::math::TPose3D (DIM=3)
Definition at line 56 of file FrameTransformer.h.
This will be mapped to CPose2D (DIM=2) or CPose3D (DIM=3)
Definition at line 53 of file FrameTransformer.h.
Definition at line 146 of file FrameTransformer 58 110 of file FrameTransformer.h.
Publish a time-stampped transform between two frames.
Definition at line 44 of file FrameTransformer.cpp.
Referenced by run_tf_test1().
Definition at line 147 of file FrameTransformer.h. | https://docs.mrpt.org/reference/devel/classmrpt_1_1poses_1_1_frame_transformer.html | CC-MAIN-2020-24 | refinedweb | 119 | 53.78 |
Just replied yet again to someone whose customer thinks they're adding security by blocking outbound network traffic to cloud services using IP-based allow-lists. They don't.
Service Bus and many other cloud services are multitenant systems that are shared across a range of customers. The IP addresses we assign come from a pool and that pool shifts as we optimize traffic from and to datacenters. We may also move clusters between datacenters within one region for disaster recovery, should that be necessary. The reason why we cannot give every feature slice an IP address is also that the world has none left. We’re out of IPv4 address space, which means we must pool workloads.
The last points are important ones and also shows how antiquated the IP-address lockdown model is relative to current practices for datacenter operations. Because of the IPv4 shortage, pools get acquired and traded and change. Because of automated and semi-automated disaster recovery mechanisms, we can provide service continuity even if clusters or datacenter segments or even datacenters fail, but a client system that’s locked to a single IP address will not be able to benefit from that. As the cloud system packs up and moves to a different place, the client stands in the dark due to its firewall rules. The same applies to rolling updates, which we perform using DNS switches.
The state of the art of no-downtime datacenter operations is that workloads are agile and will move as required. The place where you have stability is DNS.
Outbound Internet IP lockdowns add nothing in terms of security because workloads increasingly move into multitenant systems or systems that are dynamically managed as I’ve illustrated above. As there is no warning, the rule may be correct right now and pointing to a foreign system the next moment. The firewall will not be able to tell. The only proper way to ensure security is by making the remote system prove that it is the system you want to talk to and that happens at the transport security layer. If the system can present the expected certificate during the handshake, the traffic is legitimate. The IP address per-se proves nothing. Also, IP addresses can be spoofed and malicious routers can redirect the traffic. The firewall won’t be able to tell.
With most cloud-based services, traffic runs via TLS. You can verify the thumbprint of the certificate against the cert you can either set yourself, or obtain from the vendor out-of-band, or acquire by hitting a documented endpoint (in Windows Azure Service Bus, it's the root of each namespace). With our messaging system in ServiceBus, you are furthermore encouraged to use any kind of cryptographic mechanism to protect payloads (message bodies). We do not evaluate those for any purpose. We evaluate headers and message properties for routing. Neither of those are logged beyond having them in the system for temporary storage in the broker.
The server having access to Service Bus should have outbound Internet access based on the server’s identity or the running process’s identity. This can be achieved using IPSec between the edge and the internal system. Constraining it to the Microsoft DC ranges it possible, but those ranges shift and expand without warning.
The bottom line here is that there is no way to make outbound IP address constraints work with cloud systems or high availability systems in general. | https://blogs.msdn.microsoft.com/clemensv/2013/08/28/blocking-outbound-ip-addresses-again-no/ | CC-MAIN-2017-30 | refinedweb | 580 | 61.56 |
Change show settings
The SETUP > Show > Show settings panel allows you to
import settings from other Shows
add/edit the path to linked projects
change the Compositing color space
change default video settings
Linked projects
Displays the projects linked to this show file and its availability (Status) on this machine.
Only Stand-alone and Server role are allowed to change linked projects
Browse to the project location
Click on the Launch icon next to the "Edit project path" in the linked project row
Add project link (Stand-alone and Server role)
Click on "Add project link"
Browse to the
.uprojectfile of the project which should be linked and click "Select"
Remove project link
Click on the Trash can icon in the linked project row
Edit project path (Stand-alone and Server role)
Click on "Edit project path"
Browse to the new location of the
.uprojectand click "Select"
Server machine: Editing the path of the linked project will affect all Client machines.
Edit local project path (Client role)
Client machines can override the project path locally in case their project path is different
Click on "Edit local project path"
Browse to the location of the
.uprojectand click "Select"
The project with a local override is indicated with
- local override
Remove local override
To remove the local override and go back to using the server machine’s project path:
Click on "Remove local override"
The linked project
needs to be kept in sync manually
needs to be present on all machines on the same path as the server machine, unless the local project path has been edited.
Import from other show
Allows you to import configuration and calibration settings (Base settings, Camera systems, Media inputs
Experimental features
File input (selected by default)
GPU output
Video IO service debug tools
These features are experimental. We do not recommend using them in production.
Next step
Continue to Configure camera tracking | https://help.pixotope.com/phc/2.2/Change-show-settings.3291545862.html | CC-MAIN-2022-21 | refinedweb | 316 | 53.28 |
Strings are words that are made up of characters, hence they are known as sequence of characters. In C++ we have two ways to create and use strings: 1) By creating char arrays and treat them as string 2) By creating
string object
Lets discuss these two ways of creating string first and then we will see which method is better and why.
1) Array of Characters – Also known as C Strings
Example 1:
A simple example where we have initialized the char array during declaration.
#include <iostream> using namespace std; int main(){ char book[50] = "A Song of Ice and Fire"; cout<<book; return 0; }
Output:
A Song of Ice and Fire
Example 2: Getting user input as string
This can be considered as inefficient method of reading user input, why? Because when we read the user input string using
cin then only the first word of the string is stored in char array and rest get ignored. The cin function considers the space in the string as delimiter and ignore the part after it.
#include <iostream> using namespace std; int main(){ char book[50]; cout<<"Enter your favorite book name:"; //reading user input cin>>book; cout<<"You entered: "<<book; return 0; }
Output:
Enter your favorite book name:The Murder of Roger Ackroyd You entered: The
You can see that only the “The” got captured in the book and remaining part after space got ignored. How to deal with this then? Well, for this we can use
cin.get function, which reads the complete line entered by user.
Example 3: Correct way of capturing user input string using cin.get
#include <iostream> using namespace std; int main(){ char book[50]; cout<<"Enter your favorite book name:"; //reading user input cin.get(book, 50); cout<<"You entered: "<<book; return 0; }
Output:
Enter your favorite book name:The Murder of Roger Ackroyd You entered: The Murder of Roger Ackroyd
Drawback of this method
1) Size of the char array is fixed, which means the size of the string created through it is fixed in size, more memory cannot be allocated to it during runtime. For example, lets say you have created an array of character with the size 10 and user enters the string of size 15 then the last five characters would be truncated from the string.
On the other hand if you create a larger array to accommodate user input then the memory is wasted if the user input is small and array is much larger then what is needed.
2) In this method, you can only use the in-built functions created for array which don’t help much in string manipulation.
What is the solution of these problems?
We can create string using string object. Lets see how we can do it.
String object in C++
Till now we have seen how to handle strings in C++ using char arrays. Lets see another and better way of handling strings in C++ – string objects.
#include<iostream> using namespace std; int main(){ // This is how we create string object string str; cout<<"Enter a String:"; /* This is used to get the user input * and store it into str */ getline(cin,str); cout<<"You entered: "; cout<<str<<endl; /* This function adds a character at * the end of the string */ str.push_back('A'); cout<<"The string after push_back: "<<str<<endl; /* This function deletes a character from * the end of the string */ str.pop_back(); cout << "The string after pop_back: "<<str<<endl; return 0; }
Output:
Enter a String:XYZ You entered: XYZ The string after push_back: XYZA The string after pop_back: XYZ
The advantage of using this method is that you need not to declare the size of the string, the size is determined at run time, so this is better memory management method. The memory is allocated dynamically at runtime so no memory is wasted. | https://beginnersbook.com/2017/08/strings-in-c/ | CC-MAIN-2019-13 | refinedweb | 643 | 63.53 |
how to learn simple way
Post your Comment
Java util package Examples
package
package Give two advantages of creating packages in java program... can allow types in one package to have unrestricted access to one another, still restricting the access for the types outside the package
Lang and Util Base Libraries
Lang and Util Base Libraries
The Base libraries provides us the fundamental features and functionality of
the Java platform.
Lang and Util Packages
Lang and Util package provides the fundamental classes and Object of
primitive type
Java package
Java package Which package is always imported by default
package in java
package in java when i run a package it give a error exception in thread "main"
java.lang.NoClassDefFoundError what i do
Java package
Java package What restrictions are placed on the location of a package statement within a source code file
Package
Package Create a Package named com.example.code. Write few classes, interface and abstract classes. Now create a class names PackageDemo that uses the classes of this package from other package
package
package hello,
What is a package?
hello,
To group set of classes into a single unit is known as packaging. Packages provides wide namespace ability
package creation
package creation program to create package having four different class in java
What is a package?
related to package in java. is a package? hi,
What is a package?
thanks
Hi,
The Package is a mechanism for organizing the group of related files
problem with package
problem with package Dear sir,
i have created one java file with package com.net; and i compiled the program.it showing the .class file... message as can not access package class methods
userdefined package
userdefined package package javap;
class HelloWorld
{
public... declare a package then in command prompt i set the classpath.After that i compiled the class and run the package then i got the error
C:\Users
Java util date
Java util date
The class Date in "java.util" package represents... to
string and string to date.
Read more at:
http:/
add new package java
add new package java How to add new package in Java
core java collection package - Java Interview Questions
core java collection package why collection package doesnot handle.......
Thanks..., Java includes wrapper classes which convert the primitive data types into real Java
Package in Java - Java Beginners
Package in Java Hi,
What is a package? Tell me how to create package?
Thanks
Hello,
packeage is nothing but you can say that folder of related java files.
ex,
if you write a jaava class like
package - Java Beginners
Creating a package in Java Create a package called My Package. In this, create a class call Marks that specifies the name of student, the marks in three subjects and the total of these marks. Displays the details of three
The package keyword
;
The package in java programming language
is a keyword that is used to define a package that includes the java classes. Keywords
are basically reserved words....
In java programming language while
defining a package then the package statement
Which package is imported by default?
package is imported by default? The java.lang package is imported by default even without a package declaration. In Java class has been imported in the following...Which package is imported by default? hi,
Which package
Java Package
Java Package
In a computer terminology, package is a collection of
the related files in the same... to use them in an easier way.
In the same manner, Java package is a
group
missing package-info.java file
missing package-info.java file How to find and add the packages in Java if it's missing from the list
javaSanthosh kumar July 26, 2012 at 8:36 PM
how to learn simple way
Post your Comment | http://www.roseindia.net/discussion/18256-Java-Util-Package---Utility-Package-of-Java.html | CC-MAIN-2015-48 | refinedweb | 634 | 54.52 |
I'm replying to this old thread, as I don't see any new GridView constructor with header appearance in GXT 3.1.1. I suppose to see a new constructor like this in GridView:
GridView(GridAppearance...
I'm replying to this old thread, as I don't see any new GridView constructor with header appearance in GXT 3.1.1. I suppose to see a new constructor like this in GridView:
GridView(GridAppearance...
Same question here. We need to make a chart with dotted line series.We currently use GXT 3.0.4 and in the upgrading process to 3.1.0.
Thanks for your detailed information.
Adding the following two lines to MyGridAppearance.MyGridStyle:
@Override
String cellInner();
This fixes the problem.
-Sharon
Here are our own GridAppearance and css used by Page1.java:
MyGridAppearance.java:
import com.google.gwt.core.client.GWT;
import com.google.gwt.resources.client.CssResource.Shared;
import...
Hi, Colin,
Here are the scenario to reproduce the issue:
Create a container with two buttons (button 1 and button 2) and a sub - panel. Each button is linked to a individual page in sub-panel.
...
I have reproduced this issue. I'm cleaning up the project and will send you the files soon.
Thanks.
I changed the package from:
com.sencha.gxt.cell.core.client.form.DateCell
to:
com.google.gwt.cell.client.DateCell
This fixed the problem.
However, in our production environment (GXT...
Hi, Colin,
Thanks for your reply.
I copied your LiveGridVisibleRowCount class to my project and ran the example. There are 11 visible rows in the grid, but the tool bar shows '1 - 14 of 44',...
Any updates on this?
Can we have an estimate when this will be fixed?
Thanks!
We have a panel with a livegrid and other components. However, the total number of visible rows displayed in livegrid is different from the number displayed in the toolbar. For example, only 9 rows...
I need to have vertical scroll bar for the viewport, but not horizontal scroll bar.
Inside viewport, I have a panel and need to have horizontal scroll bar but not vertical scroll bar.
I tried:... | https://www.sencha.com/forum/search.php?s=34599956d4ff806b25cbb4a918f5cdda&searchid=19531757 | CC-MAIN-2017-34 | refinedweb | 357 | 61.53 |
Smatch Static Analysis Tool Overview, by Dan Carpenter
By Jamesmorris-Oracle on Dec 10, 2015
This is a posting by Oracle Linux Kernel Engineer, Dan Carpenter. In this, he provides an overview of Smatch, the C static analysis tool which he developed, and which he uses to test the mainline Linux kernel code for security bugs.
My job at Oracle is focused on finding security bugs in the Linux kernel. My favorite type of bug is off by one bugs where the code says:
if (x > ARRAY_SIZE(table)) return -EINVAL;
The problem is that > should be changed to >= it so it says:
if (x >= ARRAY_SIZE(table)) return -EINVAL;
These are a one-line fix and an easy way for me to boost my patch count. I have made over a hundred of these changes. In fact, I probably have some kind of record for fixing the most off by one bugs! My record breaking secret is that I use an open source, static analysis tool which I wrote called Smatch.
Maybe the easiest way to understand how Smatch works is to download it and play with it a bit. Here are the instructions:
You will first need to install some dependencies such as the sqlite development packages for C, BASH, Perl and Python. Also I would recommend installing the libXML, gtk+2.0 and LLVM development packages as well but those are not required. Then run the following commands:
git clone git://repo.or.cz/smatch.git cd smatch make
Now let's create a small test.c file:
#include "check_debug.h" int var; void function(void) { if (var < 0 || var >= 10) return; __smatch_implied(var); }
The "check_debug.h" file can be included into any .c file. It is used to display internal Smatch information which helps with debugging. If you run `./smatch test.c`, then that prints the value of the "var" variable.
test.c:9 function() implied: var = '0-9'
Smatch also tries to track some relationships between variables so let's change our test.c file to look like this:
#include "check_debug.h" int a, valid; void function(void) { valid = 0; if (a >= 0 && a < 10) valid = 1; if (a == -1) __smatch_implied(valid); }
With this code, since -1 is outside the 0-9 range, that means "valid" is zero.
test.c:12 function() implied: valid = '0'
We could move the limit check into a separate function if we wanted:
#include "check_debug.h" int is_valid_month(int month) { if (month < 1 || month > 12) return 0; return 1; } int var; void function(void) { if (is_valid_month(var)) __smatch_implied(var); }
It prints that valid values are 1-12 as expected.
Basically we're tracking the values of all the variables. The math behind this is called flow analysis and the core part of Smatch is a flow analysis engine. The flow analysis engine lets you track more abstract things as well such as if a pointer has been freed or if it has been dereferenced. It easy to hook into the Smatch flow analysis engine and add more checks.
Since 2009, there have been around 3000 kernel bugs patched because of Smatch warnings. Most are minor bugs such as there might be an off by one bug so the computer will crash when someone installs 256 graphics cards. In that situation the programmer made a real mistake and we will fix it, but it has no real world impact. Other times even minor mistakes like returning a wrong error code can be serious, for example in 6d97e55f7172 ('vhost: fix return code for log_access_ok()') we were supposed to return zero on failure but instead we returned -EINVAL. Since -EINVAL is non-zero, that meant access was granted when it was supposed be denied.
The main complaint about every static analysis tool is that the rate of false positives is too high. The problem in the Linux kernel is that the developers fix all the real bugs and so only 100% false positives remain. It's better to focus on new warnings from newly added code because those are often real bugs. I try to discourage people from changing the kernel code just to silence false positives. Changing the code can be a good thing if it makes the code easier to understand but I always tell people that Smatch is still improving so, hopefully, there will be a way to silence the false positive by changing Smatch instead.
I always run Smatch on my patches before sending them to the kernel maintainers and it saves me from embarrassing mistakes. The command to do that is:
~/path/to/smatch/smatch_scripts/kchecker --spammy drivers/modified_file.c
Earlier I showed that Smatch can do cross function analysis. It does analyze short functions inline, as you have seen, but to get the full benefit, you have to build the cross function database. It takes around three hours. The command to is:
~/path/to/smatch/smatch_scripts/build_kernel_data.sh
Running that command creates a smatch_db.sqlite file. Then re-run the kchecker script and it will use the new cross function database. Or if you want to run Smatch over the whole kernel the command is:
~/path/to/smatch/smatch_scripts/test_kernel.sh
If you have any issues or suggestions feel free to email the list at smatch@vger.kernel.org.
-- Dan | https://blogs.oracle.com/linuxkernel/ | CC-MAIN-2016-22 | refinedweb | 881 | 71.04 |
The #100DaysOfMLCode is a challenge in which you spend at least 1 hour a day learning about Machine Learning and you publish what you have been learning to keep yourself accountable during that time.
During my first week of #100DaysOfMLCode I’ve been working on two different courses in no particular order. Here is the list of courses:
- Machine Learning Crash Course
- Computer Vision Udacity Nanodegree Free Preview here
This is what I learned about Machine Learning during my first 2 days:
Key Machine Learning Terminology
- Feature: features are the input variables we feed into a network, it can be as simple as a single number or more complex as an image (which in reality is a vector of numbers, where each pixel is a feature)
- Label: is the thing we are predicting, it is normally referred as y
- Prediction: or predicted value if the value we predict with a previously trained model for a given output and it is referred as y’
Regression vs. classification:
- A regression model predicts continuous values.
- A classification model predicts discrete values.
Linear Regression
Is a method for finding the straight line or hyper plane that best fits a set of points.
Line formula:
y = wx + b
Where:
w = Weights
x = Input features
b = Bias
Some convenient loss functions for linear regression are:
- L2 Loss also called squared error and it is equal to (observation — prediction) 2
- Mean Square Error: is the average squared loss per example over the whole dataset. To calculate MSE, sum up all the squared losses for individual examples and take the average (divide by the number of examples):
When training a model we want to minimize the loss as much as possible to make the model more accurate without over fitting.
This is what I learned about Pandas
Pandas is a great python API for column-oriented data analysis.
To import pandas use the following line:
import pandas as pd
There are 2 primary data structures used in Pandas:
Series: which represents a single column
DataFrame: which is similar to a relational data table, it is composed by one or more series.
To create a serie:
city_names = pd.Series(['Barcelona', 'Madrid', 'Valencia'])
population = pd.Series([1609000, 3166000, 790201])
To create a dataframe with the previous series use the following:
spain_cities_df = pd.DataFrame({ 'City name': city_names, 'Population': population })
A dataframe is created by passing a dictionary mapping with a string as the column name as a serie as the content.
Most commonly you will not write the content of a dataframe but read it from a file such as a comma separated values file (csv for short).
spain_cities_df = pd.read_csv('path/to/file.csv', sep=',')
You can get interesting statistics with the df.describe() function, you will get the count, mean, std, min, 25%, 50%, 75% and max for each column.
spain_cities_df.describe()
Another useful function is df.head() this will display the top 5 columns so you can have an idea of what the dataframe contains
spain_cities_df.head()
Similarly you can use df.tail() and it will return the last 5 rows of data in the dataframe. Both functions will accept an integer as input for the number of rows to return, by default it is 5 but you can use any number you want, for example
spain_cities_df.tail(20)
Will return the las 20 rows of the dataframe
A powerful feature is graphing. DataFrame.hist lets you quickly study the distribution of values in a column:
spain_cities_df.hist('Population')
To access data just use a column name as the key of the dataframe:
spain_cities_df.hist['City name']
Will return the whole serie, with the 3 items inside
To access just one item in that column you can do this
spain_cities_df.hist['City name'][0]
That will return “Barcelona” as a string
It is also possible to return only a slice of the dataframe (by slicing as you would do with any array in python)
spain_cities_df[0:2]
Will return a Dataframe with the first 2 columns of the sliced dataframe
Pandas will also allow manipulating data in series so for example you could do this:
spain_cities_df['Population']/1000
And all values in that column will be divided by 1000
To add new series (or columns) to a Dataframe it is as simple as to define it
spain_cities_df['New column'] = pd.Series([1, 2, 3])
Every value in a Dataframe will have an auto generated integer index, the index once created will never change, even if the data is reordered the index will move with the row.
Dataframe.reindex will reorder rows (it accepts a list of indexes as the new order)
spain_cities_df.reindex([2, 0, 1])
Will sort the cities as Valencia, Barcelona, Madrid
Pandas is huge and these are just the basics of course, but knowing just that it is already possible to do a lot of data analysis!
This is whas I could learn during my first 2 days of 100 days of ML Code!
During the first week I was also learning about Convolutional Neural Networks and Computer Vision, but that I will be posting in the next couple of days!
I post my daily updates on my Twitter account @georgestudenko and you can also see my daily progress on my Github repositoy
Source: Deep Learning on Medium | http://mc.ai/this-is-what-i-have-learned-during-my-first-2-days-of-100-days-of-machine-learning-code/ | CC-MAIN-2019-09 | refinedweb | 883 | 53.95 |
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
In The Drunkard's Walk, Leonard Mlodinow presents "The Girl Named Florida Problem":
"In a family with two children, what are the chances, if [at least] one of the children is a girl named Florida, that both children are girls?"
I added "at least" to Mlodinow's statement of the problem to avoid a subtle ambiguity (which I'll explain at the end).
To avoid some real-world complications, let's assume that this question takes place in an imaginary city called Statesville where:
Every family has two children.
50% of children are male and 50% are female.
All children are named after U.S. states, and all state names are chosen with equal probability.
Genders and names within each family are chosen independently.
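These assumptions define a generative model, and a quick sketch makes them concrete. The random_family helper below is my own illustration of sampling one family under the assumptions; it is not used in the analysis that follows, which enumerates families instead of sampling them:

```python
import random

random.seed(1)  # for repeatability

# Illustrative helper: draw one Statesville family under the assumptions
# above, with each child's gender and name chosen independently and uniformly.
def random_family(state_names):
    return [(random.choice('BG'), random.choice(state_names))
            for _ in range(2)]

print(random_family(['Alabama', 'Florida', 'Texas']))
```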
To answer Mlodinow's question, I'll create a DataFrame with one row for each family in Statesville and a column for the gender and name of each child.
Here's a list of genders and a dictionary of state names:
gender = ['B', 'G']
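The from_product call below draws names from a collection called us_states. Here is a minimal version, assuming a plain list of the 50 state names (the keys of a dictionary would work the same way, since from_product only needs an iterable):

```python
# All 50 U.S. state names, one possible form of the us_states collection.
us_states = [
    'Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California',
    'Colorado', 'Connecticut', 'Delaware', 'Florida', 'Georgia',
    'Hawaii', 'Idaho', 'Illinois', 'Indiana', 'Iowa',
    'Kansas', 'Kentucky', 'Louisiana', 'Maine', 'Maryland',
    'Massachusetts', 'Michigan', 'Minnesota', 'Mississippi', 'Missouri',
    'Montana', 'Nebraska', 'Nevada', 'New Hampshire', 'New Jersey',
    'New Mexico', 'New York', 'North Carolina', 'North Dakota', 'Ohio',
    'Oklahoma', 'Oregon', 'Pennsylvania', 'Rhode Island', 'South Carolina',
    'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Vermont',
    'Virginia', 'Washington', 'West Virginia', 'Wisconsin', 'Wyoming',
]
```

With this in place, each child has 2 genders times 50 names, so 100 possible combinations.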
To enumerate all possible combinations of genders and names, I'll use from_product, which makes a Pandas MultiIndex.
names = ['gender1', 'name1', 'gender2', 'name2']
index = pd.MultiIndex.from_product([gender, us_states]*2, names=names)
Now I'll create a DataFrame with that index:
df = pd.DataFrame(index=index)
df.head()
It will be easier to work with if I reindex it so the levels in the MultiIndex become columns.
df = df.reset_index()
df.head()
This DataFrame contains one row for each family in Statesville; for example, the first row represents a family with two boys, both named Alabama.
As it turns out, there are 10,000 families in Statesville:
len(df)
10000
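That count is no accident: each child has 2 possible genders and 50 possible names, and the index is the full product over two children.

```python
# (2 genders x 50 names) choices per child, for two independent children
print((2 * 50) ** 2)  # -> 10000
```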
girl1 = (df['gender1']=='G')
The following function takes a Boolean Series and computes the fraction of True values, which is the probability that the condition is true.
def prob(A):
    """Computes the probability of a proposition, A.

    A: Boolean series

    returns: probability
    """
    assert isinstance(A, pd.Series)
    assert A.dtype == 'bool'
    return A.mean()
Not surprisingly, the probability is 50% that the first child is a girl.
prob(girl1)
0.5
And so is the probability that the second child is a girl.
girl2 = (df['gender2']=='G')
prob(girl2)
0.5
Mlodinow's question is a conditional probability: given that one of the children is a girl named Florida, what is the probability that both children are girls?
To compute conditional probabilities, I'll use this function, which takes two Boolean Series, A and B, and computes the conditional probability $P(A~\mathrm{given}~B)$.
def conditional(A, B):
    """Conditional probability of A given B.

    A: Boolean series
    B: Boolean series

    returns: probability
    """
    return prob(A[B])
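Why does prob(A[B]) compute a conditional probability? Indexing A with B keeps only the rows where B is true, so the mean of what remains equals P(A and B) divided by P(B), the usual definition. Here is a tiny self-contained check (the Series are made up for illustration):

```python
import pandas as pd

# Toy Boolean Series covering all four True/False combinations once.
A = pd.Series([True, True, False, False])
B = pd.Series([True, False, True, False])

# Filtering then averaging matches the textbook ratio P(A and B) / P(B).
assert A[B].mean() == (A & B).mean() / B.mean()
```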
For example, here's the probability that the second child is a girl, given that the first child is a girl.
conditional(girl2, girl1)
0.5
The result is 50%, which is the same as the unconditioned probability that the second child is a girl:
prob(girl2)
0.5
So that confirms that the genders of the two children are independent, which is one of my assumptions.
Now, Mlodinow's question asks about the probability that both children are girls, so let's compute that.
gg = (girl1 & girl2) prob(gg)
0.25
In 25% of families, both children are girls. And that should be no surprise: because they are independent, the probability of the conjunction is the product of the probabilities:
prob(girl1) * prob(girl2)
0.25
While we're at it, we can also compute the conditional probability of two girls, given that the first child is a girl.
conditional(gg, girl1)
0.5
That's what we should expect. If we know the first child is a girl, and the probability is 50% that the second child is a girl, the probability of two girls is 50%.
Before I answer Mlodinow's question, I'll warm up with a simpler version: given that at least one of the children is a girl, what is the probability that both are?
To compute the probability of "at least one girl" I will use the
| operator, which computes the logical
OR of the two Series:
at_least_one_girl = (girl1 | girl2) prob(at_least_one_girl)
0.75
75% of the families in Statesville have at least one girl.
Now we can compute the conditional probability of two girls, given that the family has at least one girl.
conditional(gg, at_least_one_girl)
0.3333333333333333
Of the families that have at least one girl,
1/3 have two girls.
If you have not thought about questions like this before, that result might surprise you. The following figure might help:
In the top left, the gray square represents a family with two boys; in the lower right, the dark blue square represents a family with two girls.
The other two quadrants represent families with one girl, but note that there are two ways that can happen: the first child can be a girl or the second child can be a girl.
There are an equal number of families in each quadrant.
If we select families with at least one girl, we eliminate the gray square in the upper left. Of the remaining three squares, one of them has two girls.
So if we know a family has at least one girl, the probability they have two girls is 33%.
So far, we have computed two conditional probabilities:
Given that the first child is a girl, the probability is 50% that both children are girls.
Given that at least one child is a girl, the probability is 33% that both children are girls.
Now we're ready to answer Mlodinow's question:
If your intuition is telling you that the name of the child can't possibly matter, brace yourself.
Here's the probability that the first child is a girl named Florida.
gf1 = girl1 & (df['name1']=='Florida') prob(gf1)
0.01
And the probability that the second child is a girl named Florida.
gf2 = girl2 & (df['name2']=='Florida') prob(gf2)
0.01
To compute the probability that at least one of the children is a girl named Florida, we can use the
| operator again.
at_least_one_girl_named_florida = (gf1 | gf2) prob(at_least_one_girl_named_florida)
0.0199
We can double-check it by using the disjunction rule:
prob(gf1) + prob(gf2) - prob(gf1 & gf2)
0.0199
So, the percentage of families with at least one girl named Florida is a little less than 2%.
Now, finally, here is the answer to Mlodinow's question:
conditional(gg, at_least_one_girl_named_florida)
0.49748743718592964
That's right, the answer is about 49.7%. To summarize:
Given that the first child is a girl, the probability is 50% that both children are girls.
Given that at least one child is a girl, the probability is 33% that both children are girls.
Given that at least one child is a girl named Florida, the probability is 49.7% that both children are girls.
If your brain just exploded, I'm sorry.
Here's my best attempt to put your brain back together.
For each child, there are three possibilities: boy (B), girl not named Florida (G), and girl named Florida (GF), with these probabilities:
$P(B) = 1/2 $
$P(G) = 1/2 - x $
$P(GF) = x $
where $x$ is the percentage of people who are girls named Florida.
In families with two children, here are the possible combinations and their probabilities:
$P(B, B) = (1/2)(1/2)$
$P(B, G) = (1/2)(1/2-x)$
$P(B, GF) = (1/2)(x)$
$P(G, B) = (1/2-x)(1/2)$
$P(G, G) = (1/2-x)(1/2-x)$
$P(G, GF) = (1/2-x)(x)$
$P(GF, B) = (x)(1/2)$
$P(GF, G) = (x)(1/2-x)$
$P(GF, GF) = (x)(x)$
If we select only the families that have at least one girl named Florida, here are their probabilities:
$P(B, GF) = (1/2)(x)$
$P(G, GF) = (1/2-x)(x)$
$P(GF, B) = (x)(1/2)$
$P(GF, G) = (x)(1/2-x)$
$P(GF, GF) = (x)(x)$
Of those, if we select the families with two girls, here are their probabilities:
$P(G, GF) = (1/2-x)(x)$
$P(GF, G) = (x)(1/2-x)$
$P(GF, GF) = (x)(x)$
To get the conditional probability of two girls, given at least one girl named Florida, we can add up the last 3 probabilities and divide by the sum of the previous 5 probabilities.
With a little algebra, we get:
$P(\mathrm{two~girls} ~|~ \mathrm{at~least~one~girl~named~Florida}) = (1 - x) / (2 - x)$
As $x$ approaches $0$ the answer approaches $1/2$.
As $x$ approaches $1/2$, the answer approaches $1/3$.
Here's what all of that looks like graphically:
Here
B a boy,
Gx is a girl with some property
X, and
G is a girl who doesn't have that property. If we select all families with at least one
Gx, we get the five blue squares (light and dark). Of those, the families with two girls are the three dark blue squares.
If property
X is common, the ratio of dark blue to all blue approaches
1/3. If
X is rare, the same ratio approaches
1/2.
In the "Girl Named Florida" problem,
x is 1/100, and we can compute the result:
x = 1/100 (1-x) / (2-x)
0.49748743718592964
Which is what we got by counting all of the families in Statesville.
I wrote about this problem in my blog in 2011. As you can see in the comments, my explanation was not met with universal acclaim.
One of the issues that came up is the challenge of stating the question unambiguously. In this article, I rephrased Mlodinow's statement to clarify it.
But since we have come all this way, let me also answer a different version of the problem.
Suppose you choose a house in Statesville at random and ring the doorbell. A girl (who lives there) opens the door and you learn that her name is Florida. What is the probability that the other child in this house is a girl?
In this version of the problem, the selection process is different. Instead of selecting houses with at least one girl named Florida, you selected a house, then selected a child, and learned that her name is Florida.
Since the selection of the child was arbitrary, we can say without loss of generality that the child you met is the first child in the table.
In that case, the conditional probability of two girls is:
conditional(gg, gf1)
0.5
Which is the same as the conditional probability, given that the first child is a girl:
conditional(gg, girl1)
0.5
So in this version of the problem, the girl's name is irrelevant. | https://nbviewer.org/github/AllenDowney/BiteSizeBayes/blob/master/florida.ipynb | CC-MAIN-2022-40 | refinedweb | 1,820 | 62.78 |
Here we present techniques for programmatic and declarative data binding and display with Windows Presentation Foundation.
Josh Smith
MSDN Magazine July 2008
Read more!
Our security experts present 10 vulnerable pieces of code. Your mission is to find the holes (a.k.a. bad security practices) in the code.
Michael Howard and Bryan Sullivan
MSDN Magazine November 2008
If you want to develop high-performance and high-quality commercial applications, you’ll still look to C++ and native code. Direct2D will help you deliver the graphics power you need.
Kenny Kerr
MSDN Magazine June 2009
Windows Imaging Component (WIC) is an extensible framework for encoding, decoding, and manipulating images. See how to use WIC to encode and decode different image formats.
MSDN Magazine April 2008
The System.Windows.Shapes namespace is Charles Petzold's namespace of choice for rendering two-dimensional vector graphics in WPF. Here he explains why.
Charles Petzold
MSDN Magazine March 2008
Dino Esposito compares the use of AJAX patterns and DOM manipulations to the use of the ASP.NET partial rendering engine.
Dino Esposito
MSDN Magazine August 2008
AJAX Extenders extend the behavior and features of ordinary Web controls so you can reduce postbacks and control input even better than with AJAX alone.
MSDN Magazine January 2008
This month Dino Esposito shows you how to get Windows-style modal dialog boxes for your Web applications thanks to the Ajax Control Toolkit and some clever coding.
MSDN Magazine Launch 2008
This month, use nested ListView controls to create hierarchical views of data and extend the eventing model of the ListView by deriving a custom ListView class.
This month Dino tackles the problem of large download size for Silverlight applications, explaining when to use streaming, when to divide the download, and other techniques for better performance over the wire.
MSDN Magazine January 2009
Ray Djajadinata
MSDN Magazine May 2007
A Sidebar gadget is a powerful little too that's surprisingly easy to create. Get in on the fun with Donavon West.
Donavon West
MSDN Magazine August 2007
This article may contain URLs that were
valid when originally published, but now link to sites or pages that no longer exist.
To maintain the flow of the article, we've left these URLs in the text, but disabled
the links.
Dino Esposito
Code for this article: Windows2000UI.exe
(303KB)
Figure 2 shows
an infotip extension for BMP files. To build this extension you need to create a
COM inproc server that implements IQueryInfo and IPersistFile. IQueryInfo is required
to provide the runtime text to the shell. IPersistFile is used by Explorer to let
the extension know about the specific file currently under the mouse pointer. I've
defined a couple of minimal ATL base classes (IQueryInfoImpl.h and IPersistFileImpl.h)
derived from those interfaces and use them to build more specialized classes (see
Figure 3). You can also see the declaration
for the coclass that embodies this shell extension.
In the m_szFile member variable, the Load method
of IPersistFile stores the name of the file that the extension is working on. This
method gets silently invoked by Explorer during the initialization process of the
extension. IQueryInfo includes only two functions, one of which (GetInfoFlags) is
not yet supported and must simply return E_NOTIMPL. Actually, once you get the minimal
implementation that IQueryInfoImpl.h and IPersistFileImpl.h provide, writing an
infotip extension is as easy as building a new ATL inproc object and filling out
the body of the sole IQueryInfo::GetInfoTip method.
HRESULT CBmpTip::GetInfoTip(DWORD dwFlags, LPWSTR* ppwszTip)
CComBSTR bstrInfo; GetBitmapInfo((CComBSTR *)&bstrInfo); *ppwszTip =
(WCHAR*) m_pAlloc->Alloc( (bstrInfo.Length() +1) * sizeof(WCHAR)); if (*ppwszTip)
wcscpy(*ppwszTip, (WCHAR*)(BSTR)bstrInfo);
HKCR \.bmp \shellex \{00021500-0000-0000-C000-000000000046}
HKLM \SOFTWARE \Microsoft \Windows \CurrentVersion \Shell Extensions \Approved
Visual Studio® 6.0 comes with a nice tool called
Dependency Walker that's capable of providing this information and much more. It
gives a complete snapshot of the executable header. However, the Dependency Walker
is a separate executable that you need to run from the context menu.
If you just need to know the main DLLs that
a certain executable depends upon, then you should find my extension really handy.
To determine the list of the statically linked libraries, my code makes use of the
ImageHLP API that Matt Pietrek repeatedly featured in his Under the Hood columns
in MSJ. In particular, I'll utilize BindImageEx to request the virtual address
of each function that is imported. To get this, BindImageEx binds to the executable
image and invokes a callback function for each imported library and function. All
you need to do at this point is write a callback that simply concatenates the module
names into a string.
The dependency list is useful, but it's probably
not the information you want displayed each time your mouse hesitates over a certain
file item with a DLL or EXE extension. Wouldn't it be great if you could store a
boolean flag somewhere to enable and disable this feature? You could use either
the registry or an INI file to store this information, but one problem remains:
how do you toggle that flag interactively? Using the Registry Editor or Notepad
is fine, but a bit impractical. If the shell extension was part of an application,
then the preferences dialog of the application would also be an excellent solution.
By selecting the More option (see
Figure 5), you can display the whole list of available columns. Three
columns are particularly exciting to me: Created, Author, and Module Version. Created
shows the original creation date of the file or folder. Author returns the name
of the person who signed the document according to the content of the SummaryInformation
header for compound files. Especially within a folder full of Office documents,
identifying at a glance those written by someone in particular is really useful.
Moreover, once you display a new column you can also sort the folder content by
that column, resulting in an even more useful feature. Note that the Author information
is available only for documents stored as compound files that embed a SummaryInformation
header. Aside from some of the Microsoft Office document formats (Word, Microsoft
Excel, or PowerPoint®), not many documents export a SummaryInformation block. FlashPix
images are an interesting exception.
Figure 6 The Module Version Column
Finally, the long-awaited Module Version column
contains the version number of an executable. Figure 6
shows this column enabled within the system32 folder. Through the column chooser
dialog, you can set a default width for the new column and select its position.
Reordering columns is a feature that applies on a per-folder basis, but you can
always make all the folders look the same by adjusting the features on a certain
folder and then clicking the Like Current Folder button in the View tab of the Folder
Option dialog box. There's also a folder setting called "Remember each folder's
settings" that allows you to control whether global folder options apply to each
single folder.
What I've said so far applies only to file folders;
namely, to folders that have a corresponding file system directory. Other types
of folders (such as namespace extensions) define the columns themselves when not
providing a completely different, non-column-based view. There are a few exceptions
to this, including My Documents and Favorites. They are namespace extensions, but
since their content mirrors the content of regular file folders, they provide a
standard tabular view and respect the current folder settings you have selected.
Developers who write column-based namespace
extensions should provide a way to allow column customization via a context menu
that appears by right-clicking on the column's caption.
HRESULT GetColumnInfo(DWORD dwIndex, SHCOLUMNINFO *psci);
psci->scid.fmtid = FMTID_SummaryInformation; psci->scid.pid = 2;
psci->scid.fmtid = *_Module.pguidVer; psci->scid.pid = 1;
dwColIndex = 0; while (true) { SHCOLUMNINFO shci; hr = pColumnProvider->GetColumnInfo(dwColIndex++,
&shci); if (FAILED(hr)) break; // other shell-specific code }
if (dwIndex >= 1) return S_FALSE;
switch(dwIndex) { case 0: InitColumn_Dimensions(pshci); case 1: InitColumn_Title(pshci);
// other code }
The sample application in
Figure 8 displays the Dimensions column, which contains the width and
the height of a BMP file in addition to the bits per pixel determining the image's
color depth.
HKCR { NoRemove Folder { NoRemove Shellex { NoRemove ColumnHandlers { ForceRemove
<CLSID> = s 'description' } } } }
ShellExecute(NULL, "find", NULL, NULL, NULL, 0);
HKLM \SOFTWARE \Microsoft \Windows \CurrentVersion \Explorer \FindExtensions
Figure 9 shows
a Find Process search handler in action, with its full list of processes running
at a certain moment in time. Windows 2000 also supports the ToolHelp API to get
system information about the running processes and modules. ToolHelp is supported
under Windows 9x, but not under Windows NT 4.0. (Under Windows NT 4.0 you
should use an alternative API called PSAPI.) The source code for the search handler
shown in Figure 9. It can be found in this month's
archive. It includes a DLL that blurs the distinction between the platforms, detects
the underlying operating system, and utilizes the proper API. Hence, the handler
works on any Win32® platform.
By writing a disk cleanup extension, you can
add a new entry to the dialog shown in Figure 10 just
to manage a specific and well-known set of files that your own application may have
created during its activity. Disk Cleanup has a modular structure and is composed
of some system-provided handlers, plus you can write and register your own. Each
extension implements a few COM interfaces to facilitate the communication with the
Disk Cleanup manager. Writing a cleanup extension is just a matter of creating a
COM object that exposes the IEmptyVolumeCache2 interface. There are slight differences
between a cleanup extension for Windows 98 and one for Windows 2000. Those for Windows
98 must provide IEmptyVolumeCache, while those for Windows 2000 must also provide
IEmptyVolumeCache2. IEmptyVolumeCache2 is a superset of IEmptyVolumeCache and just
adds the InitializeEx method.
Figure 11
shows the source code for a very basic cleanup extension that is able to free up
to 1MB of space. The standard implementation provides you with message boxes that
help you understand how and in what order the various methods are invoked. Note
that there's an error in the current MSDN documentation about the prototype of the
Deactivate method. As you can probably figure out from the emptyvc.h header file,
the correct prototype is:
STDMETHOD(Deactivate)(DWORD *pdwFlags)
The customization wizard is the only interactive
tool that the shell provides to enhance the folder's user interface. There's no
user interface support to help you assign a custom icon and a folder description.
To make up for this, let's write a folder's property page handler¯a shell extension
that inserts an additional page into the folder's Properties dialog box. This new
page will have a tab called Customize and will look like the one in
Figure 13. Figure 13 Property Sheet Handler
You can enter a description text and choose
an icon to replace the standard folder bitmap. A property sheet shell extension
requires you to arrange a dialog template and all the code necessary to put it to
work. Plus, you must implement the IShellExtInit and the IShellPropSheetExt interfaces.
Actually, this can be resolved by writing two functions: Initialize and AddPages.
Initialize tells you about the folder whose
properties are going to be shown. AddPages lets you add a new property page through
the PROPSHEETPAGE structure. Since you're working with one of the Windows 95 common
controls, remember to add a call to InitCommonControls before working with the property
page data structures. The source code for the shell extension looks in the current
directory for a desktop.ini file and extracts its content using an old, faithful
API such as GetPrivateProfileString. Typical content looks like this:
[.ShellClassInfo] Infotip=Contains all the articles I've written for MSJ.
IconFile=D:\My Pictures\ICON\Special\msj.ico IconIndex=0
HKCR \Folder \Shellex \PropertySheetHandlers \{CLSID}
HKCR { NoRemove Folder { NoRemove Shellex { NoRemove PropertySheetHandlers
{ ForceRemove {1F8F343A-1DE0-4B26-97C9-18A39FFC9880} } } } }
HKLM \Software \Microsoft \Windows \CurrentVersion \Explorer \DriveIcons
... \DriveIcons \D \DefaultIcon
mylib.dll,-204
Employing colorful icons can help identify folders
quickly, especially on very structured and large drives. On the other hand, too
many custom icons can significantly increase the user's confusion and make it even
harder to identify the correct folder. So use this Customize page selectively.
There is another functionality I'd like to see
available in the folder's context menu: creating a subfolder and opening the command
prompt from the folder itself, as shown in Figure 15.
By clicking on the New Folder menu item, you'll be presented with a dialog box to
accept the name of the child folder. To create a new folder, you can take advantage
of a little-known API called MakeSureDirectoryPathExists. It takes a path name and
does what its name implies: it makes sure that all the needed directories exist
and creates those that are missing. In this way, you could enter a string like
one\two\three
Dino Esposito is a senior consultant based in Rome. He authored Visual
C++ Windows Shell Programming (WROX, 1999) and cofounded. Reach Dino at
desposito@vb2themax.com.
From the March 2000 issue of MSDN Magazine. | http://msdn.microsoft.com/en-us/magazine/cc748674.aspx | crawl-002 | refinedweb | 2,217 | 54.22 |
This article discusses Grails support for the complementary technologies JSON and Ajax. After playing supporting roles in previous Mastering Grails articles, this time they take center stage. You will make an Ajax request using the included Prototype library as well as the Grails
<formRemote> tag. You'll also see examples of both serving up local JSON and dynamically pulling in remote JSON from across the Web.
To see all this in action, you'll put together a trip-planning page in which the user can type in a source and destination airport. After the airports are displayed on a Google Map, a link lets them search for hotels near the destination airport. Figure 1 shows this page in use:
Figure 1. Trip-planning page
You can achieve all of this functionality in about 150 lines of code spread across a single GSP file and three controllers.
A brief history of Ajax and JSON
When the Web first rose to popularity in the mid-1990s, browsers allowed only coarse-grained HTTP requests. Clicking on a hyperlink or a form-submit button caused the entire page to be erased and replaced by the new results. This was fine for page-centric navigation, but individual components on the page couldn't update themselves independently.
Microsoft® introduced the
XMLHTTP object with the release of Internet Explorer 5.0 in 1999. This new object gave developers the ability to make "micro" HTTP requests, leaving the surrounding HTML page in place. Although this feature wasn't based on a World Wide Web Consortium (W3C) standard, the Mozilla team recognized its potential and added an
XMLHttpRequest (XHR) object to the 2002 release of Mozilla 1.0. It has since become a de facto standard, present in every major Web browser.
In 2005, Google Maps was released to the general public. Its extensive use of asynchronous HTTP requests put it in stark contrast with the other Web mapping sites of the day. Instead of clicking and waiting for the entire page to reload as you pan a Google Map, you seamlessly scroll around the map with your mouse. Jesse James Garrett used the catchy mnemonic Ajax in a blog post describing the collection of technologies used in Google Maps, and the name has stuck ever since (see Resources).
In recent years, Ajax has come to be more of a loose umbrella term for "Web 2.0" applications than a specific list of technologies. The requests are usually asynchronous and made with JavaScript, but the response isn't always XML. The problem with XML in browser-based application development is the lack of a native, easy-to-use JavaScript parser. It's certainly possible to parse XML using the JavaScript DOM API, but it isn't easy for the novice. Consequently, Ajax Web services frequently return results in plain text, HTML snippets, or JSON.
In July 2006, Douglas Crockford submitted RFC 4627 to the Internet Engineering Task Force (IETF) describing JSON. By the end of that year, major service providers such as Yahoo! and Google were offering JSON output as an alternative to XML (see Resources). (You'll take advantage of Yahoo!'s JSON Web services later in this article.)
The benefits of JSON
JSON offers two major advantages over XML when it comes to Web development. First of all,
it is less verbose. A JSON object is simply a series of comma-separated
name:value pairs wrapped in curly braces. In contrast, XML uses duplicate start and end tags to wrap data values. This yields twice the metadata overhead than the corresponding JSON, inspiring Crockford to call JSON "the fat-free alternative to XML" (see Resources). When you are dealing with the "thin pipe" of Web development, every reduction in bytes pays real performance dividends.
Listing 1 shows how JSON and XML organize the same information:
Listing 1. Comparing JSON and XML
{"city":"Denver", "state":"CO", "country":"US"} <result> <city>Denver</city> <state>CO</state> <country>US</country> </result>
JSON objects should look familiar to Groovy programmers: if you replace the curly braces with square brackets, you'd be defining a
HashMap in Groovy. And speaking of square brackets, an array of JSON objects is defined in exactly the same way as an array of Groovy objects. A JSON array is simply a comma-separated series wrapped in square brackets, as shown in Listing 2:
Listing 2. A list of JSON objects
[{"city":"Denver", "state":"CO", "country":"US"}, {"city":"Chicago", "state":"IL", "country":"US"}]
JSON's second benefit becomes evident when you parse it and work with it. Loading JSON into memory is one
eval() call away. Once it is loaded, you can directly access any field by name, as shown in Listing 3:
Listing 3. Loading JSON and calling fields
var json = '{"city":"Denver", "state":"CO", "country":"US"}'
var result = eval( '(' + json + ')' )
alert(result.city)
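The same one-call pattern works for the array in Listing 2. Here's a sketch (with illustrative variable names) that loads the array and walks it with ordinary property access:

```javascript
// Load the JSON array from Listing 2 and read fields directly by name.
// Note that eval() executes whatever it is given, so only use it on
// JSON you trust -- here, responses from your own server.
var json = '[{"city":"Denver","state":"CO","country":"US"},' +
           ' {"city":"Chicago","state":"IL","country":"US"}]'
var cities = eval('(' + json + ')')
var labels = []
for (var i = 0; i < cities.length; i++) {
  labels.push(cities[i].city + ", " + cities[i].state)
}
// labels now holds ["Denver, CO", "Chicago, IL"]
```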
Groovy's
XmlSlurper gives you the same direct access to XML elements. (You worked with
XmlSlurper in "Grails services and Google Maps.") If modern Web browsers supported client-side Groovy, I'd be far less interested in JSON. Sadly, Groovy is strictly a server-side solution. JavaScript is the only game in town when it comes to client-side development. So I prefer working with XML in Groovy on the server side and JSON in JavaScript on the client side. In both cases, I can get my hands on the data with a minimal effort.
Now that you've gotten a glimpse of JSON, it's time to have your Grails application produce some JSON of your own.
Rendering JSON in a Grails controller
You first returned JSON from a Grails controller in "Many-to-many relationships with a dollop of Ajax." The closure in Listing 4 is similar to the one that you created back then. The difference is that this one is accessed via a friendly Uniform Resource Identifier (URI), as discussed in "RESTful Grails." It also uses the Elvis operator you first saw in "Testing your Grails application."
Add a closure called
iata to the
grails-app/controllers/AirportMappingController.groovy class you created in "Grails and legacy databases," remembering to import the
grails.converters package at the top of the file, as shown in Listing 4:
Listing 4. Converting Groovy objects to JSON
import grails.converters.*

class AirportMappingController {
  def iata = {
    def iata = params.id?.toUpperCase() ?: "NO IATA"
    def airport = AirportMapping.findByIata(iata)
    if(!airport){
      airport = new AirportMapping(iata:iata, name:"Not found")
    }
    render airport as JSON
  }
}
Try it out by requesting /airportMapping/iata/den in your browser. You should see the JSON results shown in Listing 5:
Listing 5. A valid AirportMapping object in JSON

{"id":328,
 "class":"AirportMapping",
 "iata":"DEN",
 "lat":"39.858409881591797",
 "lng":"-104.666999816894531",
 "name":"Denver International",
 "state":"CO"}
You can also try a bogus code such as /airportMapping/iata/foo to make sure that "Not found" is returned. Listing 6 shows the resulting invalid JSON object:
Listing 6. An invalid AirportMapping object in JSON

{"id":null,
 "class":"AirportMapping",
 "iata":"FOO",
 "lat":null,
 "lng":null,
 "name":"Not found",
 "state":null}
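Because the controller signals failure through the name field rather than an HTTP error code, client-side JavaScript can branch on that field once the JSON is loaded. A hypothetical helper (not part of the final page) might look like this:

```javascript
// Branch on the "Not found" sentinel that the controller places in
// the name field of an invalid AirportMapping
function describeAirport(jsonText) {
  var airport = eval('(' + jsonText + ')')
  if (airport.name == "Not found") {
    return airport.iata + ": no such airport"
  }
  return airport.iata + " -- " + airport.name
}
```

The addAirport() function later in this article applies the same test to decide whether to drop a marker on the map.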
Of course, this "gut check" is no replacement for a real set of tests.
Testing the controller
Create AirportMappingControllerTests.groovy in test/integration. Add the two tests in Listing 7:
Listing 7. Testing a Grails controller
class AirportMappingControllerTests extends GroovyTestCase{
  void testWithBadIata(){
    def controller = new AirportMappingController()
    controller.metaClass.getParams = {-> return ["id":"foo"] }
    controller.iata()
    def response = controller.response.contentAsString
    assertTrue response.contains("\"name\":\"Not found\"")
    println "Response for airport/iata/foo: ${response}"
  }

  void testWithGoodIata(){
    def controller = new AirportMappingController()
    controller.metaClass.getParams = {-> return ["id":"den"] }
    controller.iata()
    def response = controller.response.contentAsString
    assertTrue response.contains("Denver")
    println "Response for airport/iata/den: ${response}"
  }
}
Type
$grails test-app to run the tests. You should see the successes in the JUnit HTML reports, as in Figure 2. (For a refresher on testing Grails applications, see "Testing your Grails application.")
Figure 2. Passing tests in JUnit
Here's what's going on in
testWithBadIata() in Listing 7. The first line (obviously) creates an instance of the
AirportMappingController. You do this so that later you can call
controller.iata() and write an assertion against the resulting JSON. To make the call fail (in this case) or succeed (in the case of
testWithGoodIata()), you need to seed the
params hashmap with an
id entry. Normally the query string gets parsed and stored in
params. In this case, however, there's no HTTP request to get parsed. Instead, I use Groovy metaprogramming to override the
getParams method directly, forcing the expected values to be present in the
HashMap that's returned. (For more on metaprogramming in Groovy, see Resources.)
Now that the JSON producer is working and tested, it's time to focus on consuming the JSON from a Web page.
Setting up the initial Google Map
I want the planning page to be available at. This means adding a
plan closure to grails-app/controllers/TripController.groovy, as shown in Listing 8:
Listing 8. Setting up the controller
class TripController {
  def scaffold = Trip
  def plan = {}
}
Because
plan() doesn't end with a
render() or a
redirect(), convention over configuration dictates that grails-app/views/trip/plan.gsp will be displayed. Create the file using the HTML code in Listing 9. (To review the basics behind this Google Map, see "Grails services and Google Maps.")
Listing 9. Setting up the initial Google Map
<html>
  <head>
    <title>Plan</title>
    <script src="" type="text/javascript"></script>
    <script type="text/javascript">
      var map
      var usCenterPoint = new GLatLng(39.833333, -98.583333)
      var usZoom = 4
      function load() {
        if (GBrowserIsCompatible()) {
          // create the map and center it on the continental U.S.
          map = new GMap2(document.getElementById("map"))
          map.setCenter(usCenterPoint, usZoom)
        }
      }
    </script>
  </head>
  <body onload="load()">
    <div id="search" style="width:25%; float:left">
      <h1>Where to?</h1>
    </div>
    <div id="map" style="width:75%; height:100%; float:right"></div>
  </body>
</html>
If all goes well, visiting /trip/plan in your browser should give you something that looks like Figure 3:
Figure 3. A plain Google Map
Now that the basic map is in place, you should add a couple of fields for the source and destination airports.
Adding the form fields
In "Many-to-many relationships with a dollop of Ajax," you used Prototype's
Ajax.Request object. You'll use it again later in this article when you get some JSON from a remote source. In the meantime, you'll take advantage of the
<g:formRemote> tag. Add the HTML in Listing 10 to grails-app/views/trip/plan.gsp:
Listing 10. Using <g:formRemote>
<div id="search" style="width:25%; float:left"> <h1>Where to?</h1> <g:formRemote From:<br/> <input type="text" name="id" size="3"/> <input type="submit" value="Search" /> </g:formRemote> <div id="airport_0"></div> <g:formRemote To: <br/> <input type="text" name="id" size="3"/> <input type="submit" value="Search" /> </g:formRemote> <div id="airport_1"></div> </div>
Click the Refresh button in your Web browser to see the new changes, shown in Figure 4:
Figure 4. Adding the form fields
Using a normal
<g:form> would cause the entire page to refresh when the user submits the form. By choosing
<g:formRemote>, you make an
Ajax.Request perform the form submission asynchronously behind the scenes. The input text field is named
id, ensuring that
params.id will be populated in the controller. The
url attribute on
<g:formRemote> clearly shows you that
AirportMappingController.iata() will be called when the user clicks the submit button.
You couldn't take advantage of
<g:formRemote> in "Many-to-many relationships with a dollop of Ajax" because you can't nest one HTML form inside another HTML form. Here, though, you can create two separate forms and not worry about having to write the Prototype code yourself. The results of the asynchronous JSON request will be passed to the
addAirport() JavaScript function.
Your next task is to create
addAirport().
Adding the JavaScript to handle the JSON
The
addAirport() function that you are about to create does two simple things: it loads the JSON object into memory and then uses the fields for various purposes. In this case, you use the latitude and longitude values to create a
GMarker and add it to the map.
For
<g:formRemote> to work, be sure that you include the Prototype library at the top of the head section, as shown in Listing 11:
Listing 11. Including Prototype in a GSP
<g:javascript
Next, add the JavaScript in Listing 12 after the
init() function:
Listing 12. Implementing
addAirport and
drawLine
<script type="text/javascript">
var airportMarkers = []
var line

function addAirport(response, position) {
  var airport = eval('(' + response.responseText + ')')
  var label = airport.iata + " -- " + airport.name
  var marker = new GMarker(new GLatLng(airport.lat, airport.lng), {title:label})
  marker.bindInfoWindowHtml(label)
  if(airportMarkers[position] != null){
    map.removeOverlay(airportMarkers[position])
  }
  if(airport.name != "Not found"){
    airportMarkers[position] = marker
    map.addOverlay(marker)
  }
  document.getElementById("airport_" + position).innerHTML = airport.name
  drawLine()
}

function drawLine(){
  if(line != null){
    map.removeOverlay(line)
  }
  if(airportMarkers.length == 2){
    line = new GPolyline([airportMarkers[0].getLatLng(), airportMarkers[1].getLatLng()])
    map.addOverlay(line)
  }
}
</script>
The first thing the code in Listing 12 does is declare a couple of new variables: one to
hold the line and an array to hold the two airport markers. After you
eval() the incoming JSON, you call the fields such as
airport.iata,
airport.name,
airport.lat, and
airport.lng directly. (For a reminder of what the JSON object looks like, see Listing 5.)
Once you have a handle to the
airport object, you create a new
GMarker. This is the familiar "red push pin" you are used to seeing on Google Maps. The
title attribute tells the API what to display as a tooltip when the user's mouse hovers over the marker. The
bindInfoWindowHtml() method tells the API what to display when the user clicks on the marker. Once the marker is added to the map as an overlay, the
drawLine() function is called.
As the name suggests, it draws a line between the two airport markers if they both exist.
For more information on the Google Maps API objects such as
GMarker,
GLatLng, and
GPolyline, see the online documentation (see Resources).
Entering a couple of airports should make the page look like Figure 5:
Figure 5. Displaying two airports with a line drawn between them
Don't forget to refresh the Web browser each time you make changes to the GSP file.
Now that you have an example of using JSON returned locally from your Grails application, it's time to expand your horizons a bit. In the next section, you'll dynamically get JSON from a remote Web service. Of course, once you have it, you work with it the same way you just did in this example: you load it into memory and directly access the various attributes.
Remote or local JSON?
Your next task is to display the 10 closest hotels to the destination airport. This will almost certainly require you to get the data remotely.
There is no standard answer to the question of whether you should host the data locally or fetch it remotely per request. In the case of the airport dataset, I feel reasonably confident hosting it locally. The data is freely available and easy enough to ingest. (The United States has only 901 airports, and the number of major airports is fairly static; the list probably won't be out of date any time soon.)
If the airport dataset were more volatile, too large to reasonably store locally, or simply not available as a single download, I'd be far more inclined to request it remotely. The geonames.org geocoding service you used in "Grails services and Google Maps" offers JSON output as well as XML (see Resources). Type in your Web browser. You should see the JSON results shown in Listing 13:
Listing 13. JSON results from GeoNames
{"totalResultsCount": 1,
 "geonames": [
   {"alternateNames": [
      {"name": "DEN", "lang": "iata"},
      {"name": "KDEN", "lang": "icao"}],
    "adminCode2": "031",
    "countryName": "United States",
    "adminCode1": "CO",
    "fclName": "spot, building, farm",
    "elevation": 1655,
    "countryCode": "US",
    "lng": -104.6674674,
    "adminName2": "Denver County",
    "adminName3": "",
    "fcodeName": "airport",
    "adminName4": "",
    "timezone": {
      "dstOffset": -6,
      "gmtOffset": -7,
      "timeZoneId": "America/Denver"},
    "fcl": "S",
    "name": "Denver International Airport",
    "fcode": "AIRP",
    "geonameId": 5419401,
    "lat": 39.8583188,
    "population": 0,
    "adminName1": "Colorado"}]
}
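On the client you would eval() this response just as in Listing 12. Outside the browser, the same field access can be sketched in Python with json.loads; this hedged example uses an abbreviated copy of the payload above, with most fields trimmed for brevity:

```python
import json

# Abbreviated copy of the GeoNames response shown in Listing 13
payload = '''{"totalResultsCount": 1, "geonames": [
  {"name": "Denver International Airport",
   "lat": 39.8583188, "lng": -104.6674674,
   "elevation": 1655,
   "timezone": {"timeZoneId": "America/Denver", "gmtOffset": -7}}]}'''

result = json.loads(payload)      # safer than eval() for untrusted JSON
airport = result["geonames"][0]   # first (and only) match

print(airport["name"], airport["lat"], airport["lng"])
print(airport["timezone"]["timeZoneId"])
```

The nested timezone object is reached the same way the JavaScript code reaches airport.lat: by walking the parsed structure directly.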
As you can see, the GeoNames service offers more information about the airport than the dataset from the USGS that you imported in "Grails and legacy databases." If new user stories emerge, such as needing to know the airport's time zone or the elevation in meters, GeoNames offers a compelling alternative. It also includes international airports like London Heathrow (LHR) and Frankfurt (FRA). I'll leave it as an extra-credit exercise for you to convert
AirportMapping.iata() to use GeoNames under the covers.
In the meantime, your only real option for displaying a list of hotels in close proximity to the destination airport is to take advantage of a remote Web service. There are thousands of hotels in an ever-changing list, so letting someone else take responsibility for managing this list is pretty compelling.
Yahoo! offers a local search service that allows you to search for businesses near a street address, ZIP code, or even a latitude/longitude point (see Resources). If you registered for a developer key in "RESTful Grails," you can reuse it here. Not surprisingly, the format of the generic search URI you used then and the local search you are about to use now are very similar. Last time, you allowed the Web service to return XML by default. By adding one more
name=value pair (
output=json), you can get JSON instead.
Type this in your browser (without the line breaks) to see a JSON list of hotels near Denver International Airport: YahooDemo&query=hotel&latitude=39.858409881591797&longitude= -104.666999816894531&sort=distance
Listing 14 shows the (truncated) JSON results:
Listing 14. JSON results from Yahoo!
{"ResultSet": {
  "totalResultsAvailable": "803",
  "totalResultsReturned": "10",
  "firstResultPosition": "1",
  "ResultSetMapUrl": "http:\/\/maps.yahoo.com\/broadband\/?tt=hotel&tp=1",
  "Result": [
    {"id": "42712564",
     "Title": "Springhill Suites-Denver Arprt",
     "Address": "18350 E 68th Ave",
     "City": "Denver",
     "State": "CO",
     "Phone": "(303) 371-9400",
     "Latitude": "39.82076",
     "Longitude": "-104.673719",
     "Distance": "2.63",
     [SNIP]
Now that you have a viable list of hotels, you need to create a controller method as you did for
AirportMapping.iata().
Creating the controller method to make the remote JSON request
You should already have a
HotelController in place from a previous article. Add the
near closure in Listing 15 to it. (You saw similar code in "Grails services and Google Maps.")
Listing 15. The
HotelController
class HotelController {
  def scaffold = Hotel

  def near = {
    def addr = "?"
    def qs = []
    qs << "appid=YahooDemo"
    qs << "query=hotel"
    qs << "sort=distance"
    qs << "output=json"
    qs << "latitude=${params.lat}"
    qs << "longitude=${params.lng}"
    def url = new URL(addr + qs.join("&"))
    render(contentType:"application/json", text:"${url.text}")
  }
}
All of the query-string parameters are hardcoded except the last two:
latitude and
longitude. The next-to-last line instantiates a new
java.net.URL. The last line calls the service (
url.text) and renders the results. Because you aren't using the JSON converter, you must explicitly set the MIME-type to
application/json.
render returns
text/plain unless you tell it otherwise.
Type this (without the line break) in your browser: 39.858409881591797&lng=-104.666999816894531
Compare the results with the direct call that you made earlier to — they should be identical.
Having a controller method make the remote JSON request offers two benefits: it provides a workaround for the same-source Ajax limitation (see the Why can't I call remote Web services directly from the browser? sidebar), but — more important — it provides some encapsulation. The controller effectively becomes something akin to a Data Access Object (DAO).
You would no more want to have raw SQL in your view than you would a hardcoded URL to a remote Web service. By making a call to a local controller, you protect your downstream clients from implementation changes. A table- or field-name change would break an embedded SQL statement, and a URL change would break an embedded Ajax call. By calling
AirportMapping.iata(), you are free to change the datasource from a local table to the remote GeoNames service while preserving the client-side interface. Long term, you might even decide to cache the calls to the remote service in a local database for performance reasons, building up your local cache one request at a time.
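That request-at-a-time caching idea can be sketched in a few lines. Everything here is hypothetical: the dict stands in for a local database table, and fetch_remote stands in for the GeoNames or Yahoo! call:

```python
cache = {}        # stands in for a local table keyed by IATA code
remote_calls = 0  # counter, just to observe the caching behavior

def fetch_remote(iata):
    """Stand-in for the remote GeoNames/Yahoo! lookup."""
    global remote_calls
    remote_calls += 1
    return {"iata": iata, "name": "Denver International Airport"}

def lookup(iata):
    if iata not in cache:           # cache miss: one remote request
        cache[iata] = fetch_remote(iata)
    return cache[iata]              # every later call is served locally

lookup("DEN")
lookup("DEN")   # second call never touches the remote service
```

Because clients only ever call lookup(), switching between the remote service and the local store is invisible to them, which is exactly the encapsulation argument made above.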
Now that the service is working in isolation, you can call it from the Web page.
Adding the
ShowHotels link
There is no sense in displaying the Show Nearby Hotels hyperlink until the user provides the destination airport. Similarly, there is no sense in making the remote request until you are sure that the user really wants to see a list of hotels. So to start, add the
showHotelsLink() function to the script block in plan.gsp. Also, add a call to
showHotelsLink() to the last line of
addAirport(), as shown in Listing 16:
Listing 16. Implementing
showHotelsLink()
function addAirport(response, position) {
  ...
  drawLine()
  showHotelsLink()
}

function showHotelsLink(){
  if(airportMarkers[1] != null){
    var hotels_link = document.getElementById("hotels_link")
    hotels_link.innerHTML = "<a href='#' onClick='loadHotels()'>Show Nearby Hotels...</a>"
  }
}
Grails provides a
<g:remoteLink> tag that creates asynchronous hyperlinks (much as
<g:formRemote> provides asynchronous form submissions), but life-cycle issues make them unusable here. The
g: tags are rendered on the server. Because this link is being added dynamically on the client side, you need to rely on a pure JavaScript solution.
You probably noticed the call to
document.getElementById("hotels_link"). Add a new
<div> to the bottom of the
search <div>, as shown in Listing 17:
Listing 17. Adding the
hotels_link <div>
<div id="search" style="width:25%; float:left"> <h1>Where to?</h1> <g:formRemote</div> </div>
Refresh your browser and confirm that the hyperlink appears after you provide a destination airport, as in Figure 6:
Figure 6. Displaying the Show Nearby Hotels hyperlink
Now you need to create the
loadHotels() function.
Making the
Ajax.Request call
Add a new function to the script block in plan.gsp, as shown in Listing 18:
Listing 18. Implementing
loadHotels()
function loadHotels(){
  var url = "${createLink(controller:'hotel', action:'near')}"
  url += "?lat=" + airportMarkers[1].getLatLng().lat()
  url += "&lng=" + airportMarkers[1].getLatLng().lng()
  new Ajax.Request(url, {
    onSuccess: function(req) { showHotels(req) },
    onFailure: function(req) { displayError(req) }
  })
}
It is safe to use the Grails
createLink method here, because the base part of the URL to
Hotel.near() won't change when the page is rendered server-side. You append the dynamic parts of the URL using client-side JavaScript and then make the Ajax request using the by-now familiar Prototype call.
Handling errors
I ignored error handling in the
<g:formRemote> call for brevity's sake. Now that you are making a call to a remote service (albeit via a local controller proxy), it might be more prudent to provide some feedback instead of just failing silently. Add the
displayError() function to the script block in plan.gsp, as shown in Listing 19:
Listing 19. Implementing
displayError()
function displayError(response){
  var html = "response.status=" + response.status + "<br />"
  html += "response.responseText=" + response.responseText + "<br />"
  var hotels = document.getElementById("hotels")
  hotels.innerHTML = html
}
This admittedly isn't doing much more than displaying the error to the user in the
hotels <div> below the Show Nearby Hotels link where the results will normally appear. You're encapsulating the remote call in a server-side controller, so you might choose to do some more extensive error correction there.
Add a
hotels <div> below the
hotels_link <div> you added earlier, as shown in Listing 20:
Listing 20. Adding the
hotels <div>
<div id="search" style="width:25%; float:left"> <h1>Where to?</h1> <g:formRemote</div> <div id="hotels"></div> </div>
You have just one more thing to do: add a function to load the successful JSON request and populate the
hotels <div>.
Handling success
This last function, shown in Listing 21, takes the JSON response from the local Yahoo! service, builds up an HTML list, and writes it to the
hotels <div>:
Listing 21. Implementing
showHotels()
function showHotels(response){
  var results = eval('(' + response.responseText + ')')
  var resultCount = 1 * results.ResultSet.totalResultsReturned
  var html = "<ul>"
  for(var i=0; i < resultCount; i++){
    html += "<li>" + results.ResultSet.Result[i].Title + "<br />"
    html += "Distance: " + results.ResultSet.Result[i].Distance + "<br />"
    html += "<hr />"
    html += "</li>"
  }
  html += "</ul>"
  var hotels = document.getElementById("hotels")
  hotels.innerHTML = html
}
Refresh your browser one last time and type in a couple of airports. Your screen should look like Figure 1.
I'll end the example here with the hope that you'll continue to play with it on your own. You might decide to plot the hotels on the map using another array of
GMarkers. You could add in additional fields from the Yahoo! results such as phone number and street address. The possibilities are endless.
Conclusion
Not bad for roughly 150 lines of code, is it? In this article, you saw how JSON can be a viable alternative to XML when making Ajax requests. You saw how easy it is to return JSON locally from a Grails application, and that it's not much more difficult to return JSON from a remote Web service. You can use Grails tags like
<g:formRemote> and
<g:remoteLink> when the HTML is being rendered server-side, but knowing how to use the underlying
Ajax.Request call provided by Prototype is critical for truly dynamic Web 2.0 applications.
Next time you'll see the native Java Management Extensions (JMX) capabilities of Grails in action. Until then, have fun mastering Grails.
Resources
Learn
- Mastering Grails: Read more in this series to gain a further understanding of Grails and all you can do with it.
- Grails: Visit the Grails Web site.
- Grails Framework Reference Documentation: The Grails bible.
- "Ajax: A New Approach to Web Applications" (Jesse James Garrett, Adaptive Path, February 2005): Garrett, who coined the term "Ajax," describes its basic architectural and user experience properties.
- Using JSON (JavaScript Object Notation) with Yahoo! Web Services and Using JSON with Google Data APIs: Yahoo! and Google both offer Web services output in JSON format.
- JSON: Visit the JSON site and check out "JSON: The Fat-Free Alternative to XML."
- Metaprogramming in Groovy: Learn more about Groovy's
ExpandoMetaClass.
- Google Maps API Reference: Find out more about Google Maps API objects such as
GMarker,
GLatLng, and
GPolyline.
- GeoNames Search Webservice: The GeoNames search Web service taps into a database of more than 8 million geographical names.
- Local Search Web Services: Yahoo!'s local search service lets you search for businesses near a street address, ZIP code, or latitude/longitude point.
getopt, optarg, opterr, optind, optopt - command option parsing
#include <unistd.h>
int getopt(int argc, char * const argv[], const char *optstring);
extern char *optarg;
extern int opterr, optind, optopt;
The getopt() function is a command-line parser that shall follow Utility Syntax Guidelines 3, 4, 5, 6, 7, 9, and 10 in XBD. If the application sets optind to zero before calling getopt(), the behavior is unspecified.
If the application has not set the variable opterr to 0, the first character of optstring is not a <colon>, and a write error occurs while getopt() is printing a diagnostic message to stderr, then the error indicator for stderr shall be set; but getopt() shall still succeed and the value of errno after getopt() is unspecified.
Parsing Command Line Options
The following code fragment shows how you might process the arguments for a utility that can take the mutually-exclusive options a and b and the options f and o, both of which require arguments:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[ ])
{
    int c;
    int bflg = 0, aflg = 0, errflg = 0;
    char *ifile;
    char *ofile;
    . . .
    while ((c = getopt(argc, argv, ":abf:o:")) != -1) {
        switch(c) {
        case 'a':
            if (bflg)
                errflg++;
            else
                aflg++;
            break;
        case 'b':
            if (aflg)
                errflg++;
            else
                bflg++;
Selecting Options from the Command Line
The following example selects the type of database routines the user wants to use based on the Options argument.

#include <unistd.h>
#include <string.h>
...
const char *Options = "hdbtl";
...
int dbtype,
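Python's standard-library getopt module was modeled on this C interface, so the first example's option string carries over almost directly; note that Python signals errors by raising GetoptError rather than through the leading-<colon> convention. A minimal sketch:

```python
import getopt

# Same options as the C example: -a, -b, and -f/-o which take arguments
argv = ["-a", "-f", "input.txt", "-o", "output.txt", "extra"]
opts, args = getopt.getopt(argv, "abf:o:")

print(opts)   # [('-a', ''), ('-f', 'input.txt'), ('-o', 'output.txt')]
print(args)   # ['extra']
```

As in C, parsing stops at the first non-option argument, which is returned in args; options without arguments appear with an empty value string.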
While ferror(stderr) may be used to detect failures to write a diagnostic to stderr when getopt() returns '?', the value of errno is unspecified in such a condition. Applications desiring more control over handling write failures should set opterr to 0 and independently perform output to stderr, rather than relying on getopt() to do the output.
The optopt variable represents historical practice and allows the application to obtain the identity of the invalid option.
The description has been written to make it clear that getopt(), like the getopts utility, deals with option-arguments whether separated from the option by <blank> characters | http://pubs.opengroup.org/onlinepubs/9699919799/functions/optarg.html | CC-MAIN-2016-40 | refinedweb | 354 | 59.84 |
On Fri, Jun 05, 2009 at 10:34:00AM +0100, Arnd Bergmann wrote:
> On Thursday 04 June 2009, Sam Ravnborg wrote:
> > Any specific reason why mips does not use include/asm-generic/ioctl.h?
> > Had mips done so this would not have been an issue.
>
> The original include/asm-generic/ioctl.h did not allow overriding
> the values of _IOC_{SIZEBITS,DIRBITS,NONE,READ,WRITE}, so it
> was initially not possible to use it.
>
> Nowadays, you can simply use the same approach as powerpc:
>
> #ifndef _ASM_MIPS_IOCTL_H
> #define _ASM_MIPS_IOCTL_H
>
> #define _IOC_SIZEBITS 13
> #define _IOC_DIRBITS 3
>
> #define _IOC_NONE 1U
> #define _IOC_READ 2U
> #define _IOC_WRITE 4U
>
> /*
> * The following are included for compatibility
> */
> #define _IOC_VOID 0x20000000
> #define _IOC_OUT 0x40000000
> #define _IOC_IN 0x80000000
> #define _IOC_INOUT (IOC_IN|IOC_OUT)
>
> #include <asm-generic/ioctl.h>
>
> #endif /* _ASM_MIPS_IOCTL_H */
>
> This would indeed be a cleaner fix.
In fact that's almost identical to what I already have. But I don't even
recall what _IOC_VOID, _IOC_OUT, _IOC_IN and _IOC_INOUT were meant to be
compatible with. They were added in 2.1.14 so presumably they've become
irrelevant, so I've dropped them. I bet nobody will notice.
Ralf | http://www.linux-mips.org/archives/linux-mips/2009-06/msg00194.html | CC-MAIN-2014-41 | refinedweb | 189 | 64.81 |
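For reference, the bit layout these overrides adjust can be modeled numerically. The sketch below is a simplified picture of how _IOC() packs a request number (generic layout: 8 nr bits, 8 type bits, 14 size bits, direction on top; the MIPS header above narrows the size field to 13 bits and uses 3 direction bits):

```python
def ioc(direction, type_char, nr, size, sizebits=14):
    """Pack an ioctl request number the way _IOC() does.

    Generic layout: 8 nr bits, 8 type bits, `sizebits` size bits,
    direction bits on top. MIPS overrides _IOC_SIZEBITS to 13, so its
    direction field starts at bit 29 instead of bit 30.
    """
    NRBITS, TYPEBITS = 8, 8
    TYPESHIFT = NRBITS                  # 8
    SIZESHIFT = TYPESHIFT + TYPEBITS    # 16
    DIRSHIFT = SIZESHIFT + sizebits     # 30 generic, 29 on MIPS
    return (direction << DIRSHIFT) | (ord(type_char) << TYPESHIFT) \
         | (nr << 0) | (size << SIZESHIFT)

# TCGETS on x86 is _IO('T', 0x01), i.e. 0x5401
print(hex(ioc(0, 'T', 0x01, 0)))   # 0x5401
```

This is only a model of the bit packing, not a drop-in replacement for the kernel macros; the per-architecture direction constants (NONE/READ/WRITE values) still differ as the quoted header shows.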
NAME
VOP_ADVLOCK - advisory record locking
SYNOPSIS
#include <sys/param.h>
#include <sys/vnode.h>
#include <sys/fcntl.h>
#include <sys/lockf.h>

int
VOP_ADVLOCK(struct vnode *vp, caddr_t id, int op, struct flock *fl,
    int flags);
DESCRIPTION
The arguments are:

vp     The vnode being manipulated.
id     The id token which is changing the lock.
op     The operation to perform (see fcntl(2)).
fl     Description of the lock.
flags  One or more of the following:
       F_RDLCK  Shared or read lock.
       F_UNLCK  Unlock.
       F_WRLCK  Exclusive or write lock.
       F_WAIT   Wait until lock is granted.
       F_FLOCK  Use flock(2) semantics for lock.
       F_POSIX  Use POSIX semantics for lock.

This entry point manipulates advisory record locks on the file. Most file systems delegate the work for this call to lf_advlock().
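These are the same advisory-lock semantics a userspace program reaches through fcntl(2). Purely for illustration, a hedged Python sketch of taking and releasing an exclusive lock on a scratch file (Unix only):

```python
import fcntl
import tempfile

with tempfile.NamedTemporaryFile() as f:
    fcntl.lockf(f, fcntl.LOCK_EX)    # F_WRLCK: exclusive advisory lock
    f.write(b"protected update")     # ... critical section ...
    f.flush()
    fcntl.lockf(f, fcntl.LOCK_UN)    # F_UNLCK: release the lock
    released = True
```

Being advisory, the lock only coordinates processes that also ask for it; nothing stops an uncooperative process from writing to the file anyway.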
RETURN VALUES
Zero is returned on success, otherwise an error is returned.
SEE ALSO
fcntl(2), flock(2), vnode(9)
AUTHORS
This manual page was written by Doug Rabson. | http://manpages.ubuntu.com/manpages/lucid/en/man9/VOP_ADVLOCK.9freebsd.html | CC-MAIN-2013-20 | refinedweb | 155 | 63.05 |
It's pretty simple to delete a queue in SQS. Try something like this:
import boto3
# Create SQS client
sqs = boto3.client('sqs')
# Delete SQS queue
sqs.delete_queue(QueueUrl='SQS_QUEUE_URL')
Already have an account? Sign in. | https://www.edureka.co/community/55197/how-do-i-delete-a-queue-in-aws-sqs | CC-MAIN-2020-05 | refinedweb | 139 | 87.82 |
Thread: Null Exception Error
Null Exception Error
I got a Null Exception Error in my program and I have no ideas how to fix it. I posted my files in the post. The program is a text based go fish game.
It is in the GoFish Class in line 88
Code:
if((P.cardsInHand[P.getnumCardsInHand()].getRank()) == rank){
    return P;
}else{
    System.out.println("Requested Players turn");
    return (requestedPlayer);
}
If I *really* have to gander it without looking at the code, it's the result of the
P.cardsInHand[P.getNumCardsInHand()] call.
I'd presume that the cardsInHand is a pre-sized array that is greater than the number of cards currently within it. When you attempt to fetch a card from an "invalid" location (as defined by running location and not by pre-defined size location which would throw an OutOfBoundsException), it will return null which cannot be dereferenced for a method or member. If the names are logical, and you want the last card in the array based on the size, you can use
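The off-by-one is easy to see with a toy model; here None plays the role of Java's null in a pre-sized hand (hypothetical values, not the poster's code):

```python
# A pre-sized "hand" like Java's `new Card[13]`: unused slots hold None
hand = [None] * 13
hand[0] = "7H"    # one card dealt so far
num_cards = 1

# hand[num_cards] is the first EMPTY slot; dereferencing its contents
# is exactly the NullPointerException in Java
assert hand[num_cards] is None

# the last valid card sits one index earlier, at num_cards - 1
assert hand[num_cards - 1] == "7H"
```

With one card in hand, the count is 1 but the last occupied index is 0, which is why subtracting one fixes the lookup.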
P.getnumCardsInHand() - 1 instead.
With a simple extension method to ControlCollection to flatten the control tree you can use LINQ to query the control tree:
public static class PageExtensions
{
public static IEnumerable<Control> All(this ControlCollection controls)
{
foreach (Control control in controls)
{
foreach (Control grandChild in control.Controls.All())
yield return grandChild;
yield return control;
}
}
}
Now I can do things like this:
// get the first empty textbox
TextBox firstEmpty = accountDetails.Controls
.All()
.OfType<TextBox>()
.Where(tb => tb.Text.Trim().Length == 0)
.FirstOrDefault();
// and focus it
if (firstEmpty != null)
firstEmpty.Focus();
Pretty cool! I can do all sorts of querying of the control tree now. LINQ you are my hero.
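The same trick ports to any language with generators. For comparison, a rough Python equivalent of the All() extension, using hypothetical node objects with a children list:

```python
def all_controls(controls):
    """Depth-first walk matching the C# All(): descendants first, then the control."""
    for control in controls:
        yield from all_controls(control.children)
        yield control

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

# A tiny control tree: a panel holding two textboxes, plus a button
tree = [Node("panel", [Node("tb1"), Node("tb2")]), Node("button")]
print([n.name for n in all_controls(tree)])   # ['tb1', 'tb2', 'panel', 'button']
```

Filtering the flattened sequence then works just like the OfType/Where chain, e.g. with a list comprehension over the generator.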
I think your Zelda fetish has gotten a bit out of control. :D
You might need to reformat your code or use a smaller font for the post as it's being cut off on the right side. I think people can still understand the idea with what's shown, but it would be nice to see all the code.
Hmmmm here at home with Firefox, it seems to have a nice scrollable DIV, guess it was IE6 at work.
How can we activate this extension method? I think there already exist an All() method in Page class and I cannot make use of your method.
If you put the PageExtensions class inside a namespace you'll need to make sure you have a using for that namespace. This adds the All method to the Controls collection and not the Page class so you should be ok.
Really handy. Will perf come to bite me in the butt though?
Still cannot get it to work getting this error:
CS1501: No overload for method 'All' takes '0' arguments
Using this code:
Page.Controls.All().OfType<WebControl>().ToList().ForEach(c => c.Enabled = (CardID != 0));
Just make sure that whatever namespace PageExtensions is in is in scope. You may need to add a using to your code behind.
Man, I've just thrown away my code... poor. | http://weblogs.asp.net/dfindley/archive/2007/06/29/linq-the-uber-findcontrol.aspx | crawl-002 | refinedweb | 378 | 70.43 |
Consistently, one of the more popular stocks people enter into
their
stock options watchlist
at Stock Options Channel is American Express Co. (Symbol: AXP). So
this week we highlight one interesting put contract, and one
interesting call contract, from the October expiration for AXP. The
put contract our YieldBoost algorithm identified as particularly
interesting, is at the $80 strike, which has a bid at the time of
this writing of $2.61. Collecting that bid as the premium
represents a 3.3% return against the $80 commitment, or a 5%
annualized rate of return (at Stock Options Channel we call this
the
YieldBoost
).
Turning to the other side of the option chain, we highlight one
call contract of particular interest for the October expiration,
for shareholders of American Express Co. (Symbol: AXP) looking to
boost their income beyond the stock's 1% annualized dividend yield.
Selling the covered call at the $97.50 strike and collecting the
premium based on the $2.61 bid, annualizes to an additional 4.5%
rate of return against the current stock price (this is what we at
Stock Options Channel refer to as the
YieldBoost
), for a total of 5.5% annualized rate in the scenario where the
stock is not called away. Any upside above $97.50 would be lost if
the stock rises there and is called away, but AXP shares would have
to climb 8.7% from current levels for that to occur, meaning that
in the scenario where the stock is called, the shareholder has
earned a 11.6% return from this trading level, in addition to any
dividends collected before the stock was called.
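The percentages above are straightforward arithmetic. Here is a sketch of the calculation with the article's figures; the implied share price is a back-of-the-envelope assumption derived from the quoted 8.7% climb, not a quoted price:

```python
premium = 2.61
put_strike = 80.0

# Put side: premium collected against the cash committed at the strike
put_return = premium / put_strike * 100
print(round(put_return, 1))          # 3.3 (%)

# Call side: the stock must climb 8.7% to reach the $97.50 strike,
# which implies a share price of roughly 97.50 / 1.087 (assumed)
call_strike = 97.50
share_price = call_strike / 1.087    # about 89.70

# If called away: strike minus cost basis, plus the premium kept
called_away_return = (call_strike - share_price + premium) / share_price * 100
print(round(called_away_return, 1))  # 11.6 (%)
```

Annualizing either figure additionally depends on the days remaining to the October expiration, which the article does not state, so that step is left out here.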
The first part of this series can be found here.
The first part of this series looked at how to implement basic rate limiting in a Rails application. However, as pointed out in the improvements section, the implementation was not complete - it did not provide clients enough information about the rate limiting that is in place and how long they should wait before making further requests once they hit the limit.
In order to tell the client about the rate limit parameters, the mechanism needs to be able to set headers on the response. While a
before_filter is useful to limit the requests, it can not change the response from a valid request. One could use an
after_filter to achieve this, but a Rack middleware 1 is a more suitable solution given that middlewares can act up on a request as well as the response generated by the application for that request.
We will need to comment out the
before_filter that was introduced in Part 1. Then we will define a blank middleware and wire it up. The convention is to define middlwares in
app/middleware.
# app/middleware/rate_limit.rb
class RateLimit
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  end
end
This middleware is wired up as follows:
# config/application.rb
class Application < Rails::Application
  ...
  config.middleware.use "RateLimit"
end
Basic Rate Limiting
Let’s re-implement what we implemented in Part 1 using the middleware.
def call(env)
  client_ip = env["REMOTE_ADDR"]
  key = "count:#{client_ip}"
  count = REDIS.get(key)
  unless count
    REDIS.set(key, 0)
    REDIS.expire(key, THROTTLE_TIME_WINDOW)
  end
  if count.to_i >= THROTTLE_MAX_REQUESTS
    [429, {}, [message]]
  else
    REDIS.incr(key)
    @app.call(env)
  end
end

private

def message
  { :message => "You have fired too many requests. Please wait for some time." }.to_json
end
Rate limit status
There are various header conventions for providing a client it’s rate limit status. For this example, we will use the convention that GitHub 2 and Twitter 3 use. The following headers represent the rate limit status:
X-RateLimit-Limit- The maximum number of requests that the client is permitted to make in the time window.
X-RateLimit-Remaining- The number of requests remaining in the current rate limit window.
X-RateLimit-Reset- The time at which the current rate limit window resets in UTC epoch seconds 4.
The middleware will set these headers for all requests with the following change:
def call(env)
  client_ip = env["REMOTE_ADDR"]
  key = "count:#{client_ip}"
  count = REDIS.get(key)
  unless count
    REDIS.set(key, 0)
    REDIS.expire(key, THROTTLE_TIME_WINDOW)
  end
  if count.to_i >= THROTTLE_MAX_REQUESTS
    [429, rate_limit_headers(count, key), [message]]
  else
    REDIS.incr(key)
    status, headers, body = @app.call(env)
    [status, headers.merge(rate_limit_headers(count, key)), body]
  end
end

private

def rate_limit_headers(count, key)
  ttl = REDIS.ttl(key)
  time_till_reset = (Time.now.to_i + ttl.to_i).to_s
  {
    "X-Rate-Limit-Limit" => "60",
    "X-Rate-Limit-Remaining" => (60 - count.to_i).to_s,
    "X-Rate-Limit-Reset" => time_till_reset
  }
end
This computes the time remaining till the limit is reset and the number of requests remaining and sets the appropriate headers.
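The window/counter bookkeeping the middleware does in Redis can be modeled in-process. A hedged Python sketch of the same fixed-window scheme and headers (names and defaults are illustrative):

```python
import time

class FixedWindowLimiter:
    """In-memory analogue of the Redis fixed-window counter."""

    def __init__(self, limit=60, window=3600):
        self.limit, self.window = limit, window
        self.clients = {}                     # client -> (count, window_start)

    def hit(self, client, now=None):
        now = time.time() if now is None else now
        count, start = self.clients.get(client, (0, now))
        if now - start >= self.window:        # window expired: reset the counter
            count, start = 0, now
        allowed = count < self.limit
        if allowed:
            count += 1                        # the REDIS.incr equivalent
        self.clients[client] = (count, start)
        headers = {
            "X-Rate-Limit-Limit": str(self.limit),
            "X-Rate-Limit-Remaining": str(self.limit - count),
            "X-Rate-Limit-Reset": str(int(start + self.window)),
        }
        return allowed, headers

limiter = FixedWindowLimiter(limit=2, window=60)
print([limiter.hit("1.2.3.4")[0] for _ in range(3)])   # [True, True, False]
```

Unlike the Redis version, this state lives in a single process, so it would not survive restarts or coordinate multiple app servers; that is exactly the gap Redis fills.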
Let’s test this.
bash$ for i in {1..100}
do
curl -i  >> /tmp/headers.log
done
bash$ less /tmp/headers.log | grep X-Rate-Limit
X-Rate-Limit-Limit: 60
X-Rate-Limit-Remaining: 59
X-Rate-Limit-Reset: 1381717125
X-Rate-Limit-Limit: 60
X-Rate-Limit-Remaining: 58
X-Rate-Limit-Reset: 1381717125
...
X-Rate-Limit-Limit: 60
X-Rate-Limit-Remaining: 1
X-Rate-Limit-Reset: 1381717124
X-Rate-Limit-Limit: 60
X-Rate-Limit-Remaining: 0
X-Rate-Limit-Reset: 1381717124
X-Rate-Limit-Limit: 60
X-Rate-Limit-Remaining: 0
X-Rate-Limit-Reset: 1381717124
The code for this implementation is on my GitHub profile.
- RailsCast #151 - Rack Middleware. [return]
- GitHub API V3 - Rate limiting. [return]
- Twitter - REST API Rate Limiting in v1.1. [return]
- Wikipedia - Unix time - Encoding time as a number. | https://sdqali.in/blog/2013/10/13/implementing-rate-limiting-in-rails-part-2/?utm_source=site&utm_medium=related | CC-MAIN-2019-18 | refinedweb | 602 | 50.73 |
In this part we describe how JPA entities and persistence work with Hibernate (hbm2ddl.auto), and how to automate table creation using JPA and Hibernate in Java EE 7. This tutorial also demonstrates how to configure the persistence unit so that the datasource created on the server can be used directly in the application.
Tools Used: Eclipse JavaEE - Juno, Wildfly 8.1.0
This is the third part of the Java EE 7 application series. In this part we are going to create a JPA entity and use the datasource we created in part one; if you haven't read that tutorial, you can read it here (MySql Datasource creation in Wildfly 8). Following is the complete index of the series.
JPA entities and persistence with Hibernate(hbm2ddl.auto)
Assuming that your project setup is complete, we proceed with the JPA entity. In this tutorial we will create an entity called Student and use Hibernate's schema auto-generation property to persist the entity in the database.
To create a new entity, right-click the project module which holds your JPA module. In our case the JPA structure lives in the EJB project, so right-click the EJB project and choose New > JPA Entity. This opens the JPA entity wizard, where you provide the package name and class name for the entity. Once the fields are filled, press Next. We are using com.em.jpa.entity as the package and Student as the class name.
In the second screen you are presented with options to create the fields for the entity; it is up to you whether you create the fields here or code them later manually. We are creating four fields here.
int id
String firstName
String lastName
String standerd
Click Add, enter the data type in the first field and the property name in the second field, then click OK. Repeat the process for each property you want to create for your entity. After creating these fields, mark "id" as the key; this annotates the id field with @Id, making it the primary key for the table in the database.
Now click Finish to complete the wizard and generate the code. Once you are through, the class is presented with the fields you created and pre-generated getter and setter methods for them. The class also implements the Serializable interface with an auto-generated serial version id. Initially the generated fields look like the following code snippet.
@Id
private int id;
private String firstName;
private String lastName;
private String standerd;
private static final long serialVersionUID = 1L;
Now we need some changes in this code. First of all, we want the primary key to be generated automatically whenever we create a new entry. For this we use the @GeneratedValue annotation with strategy AUTO, which tells JPA that we want the id to be generated automatically; you can leave it to the system to decide how the key is generated, or specify the generation strategy yourself. GenerationType is an enumeration with four values: AUTO, IDENTITY, SEQUENCE, and TABLE. Choose whichever suits you; we proceed with AUTO.
We also want to make sure our columns are not null, so we use the @NotNull annotation, a validation constraint that enforces the NOT NULL property on the table column. We also want to limit the length of the fields, so we use the @Length annotation with the max property. This ensures that when the table is generated, the columns have a defined maximum length. After this basic tuning, here is how our code looks:
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
private int id;

@NotNull
@Length(max=20)
private String firstName;

@NotNull
@Length(max=20)
private String lastName;

@NotNull
@Length(max=5)
private String standerd;

private static final long serialVersionUID = 1L;
In future, when we query (using JPQL) the table generated for this entity, we will often want to fetch all available rows at once; besides writing a custom query each time, it is good practice to define a named query on the entity. A named query is declared with the @NamedQuery annotation, which takes two properties: name, the name of the query, and query, the actual query to be executed. The annotation applies at the entity level, so it is placed on the entity class definition.
@Entity
@NamedQuery(name="Student.getAll", query="SELECT s FROM Student s")
public class Student implements Serializable {
...
So the final code for the entity looks like this:
package com.em.jpa.entity;

import java.io.Serializable;

import javax.persistence.*;
import javax.validation.constraints.NotNull;

import org.hibernate.validator.constraints.Length;

/**
 * Entity implementation class for Entity: Student
 */
@Entity
@NamedQuery(name="Student.getAll", query="SELECT s FROM Student s")
public class Student implements Serializable {

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private int id;

    @NotNull
    @Length(max=20)
    private String firstName;

    @NotNull
    @Length(max=20)
    private String lastName;

    @NotNull
    @Length(max=5)
    private String standerd;

    private static final long serialVersionUID = 1L;

    public Student() {
        super();
    }

    public int getId() {
        return this.id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getFirstName() {
        return this.firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return this.lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getStanderd() {
        return this.standerd;
    }

    public void setStanderd(String standerd) {
        this.standerd = standerd;
    }
}
Now our entity is ready, but to connect JPA to the database we need to configure a JPA persistence unit. This configuration goes in the persistence.xml file, located in the META-INF directory of the application. We will use the datasource we created earlier. In persistence.xml, create a persistence unit using the persistence-unit tag with a name attribute, and then specify the JNDI name of the datasource for this persistence unit using the jta-data-source tag. The relevant part of our persistence.xml looks like this:
<persistence-unit name="...">
    <jta-data-source>java:jboss/datasources/school</jta-data-source>
    ...
</persistence-unit>
As mentioned earlier, we are going to use Hibernate's schema auto-generation property (hbm2ddl.auto), so we need to specify this property for our datasource so that the system can operate on it. We use the properties and property tags for this purpose. Our complete persistence.xml looks like the following:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
        http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="...">
        <jta-data-source>java:jboss/datasources/school</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="update"/>
        </properties>
    </persistence-unit>
</persistence>
With this property set to update, we tell the system not to recreate the entities' tables from scratch every time the application is deployed, but only to update them with any changes required.
Our persistence unit is now ready. When you deploy the application, you will notice that a table named Student has been created in the database.
In the image above, note the properties of the fields; these are what we specified in our entity. The column names are the same as the fields of the entity class.
Please feel free to comment if you find an error in the program or a problem in the implementation. In the next part we will describe the use of EJB; we are going to divide the EJB part into two subparts, covering the Local and Remote interface implementations.
This shows up only when there is no sys/cdefs.h on a system at all, such as when building on a musl-libc host.
sys/cdefs.h is not present at all on musl libc systems, so perhaps this should point at bsd/sys/cdefs.h instead.
Example failed build output:
```
clang -std=c11 -I../include/ -O -g -Wall -Wextra -Wpedantic -Wno-missing-field-initializers -Werror -O3 -fstack-protector-strong -D_FORTIFY_SOURCE=2 -finstrument-functions -c -o bworker.o bworker.c
In file included from bworker.c:10:
In file included from /usr/include/bsd/stdlib.h:39:
/usr/include/bsd/libutil.h:43:10: error: 'sys/cdefs.h' file not found with <angled> include; use "quotes" instead
#include <sys/cdefs.h>
^
In file included from bworker.c:10:
/usr/include/bsd/stdlib.h:45:10: error: 'sys/cdefs.h' file not found with <angled> include; use "quotes" instead
#include <sys/cdefs.h>
^
In file included from bworker.c:11:
/usr/include/bsd/sys/tree.h:33:10: fatal error: 'sys/cdefs.h' file not found
#include <sys/cdefs.h>
^~~~~~~~~~~~~
3 errors generated.
make: *** [Makefile:16: bworker.o] Error 1
```
This appears to be a regression on commit db7470b048a14bdc69a34fbd192ec626e1786411
And looks like this is fixed in commit 11ec8f1e5dfa1c10e0c9fb94879b6f5b96ba52dd which I am now testing.
The fix has now been released as part of libbsd 0.9.0. Closing.
The CodeEval page explains in a few words what is that transformation and what it useful for. Even if could look a bit unclear at first, it is quite easy to write a function that operates the transformation. Here is my python 3 code for it:
def bw_transform(s):
    size = len(s)
    rots = sorted([s[i:size] + s[0:i] for i in range(size)])
    result = [rot[-1] for rot in rots]
    return ''.join(result)

The input line is rotated by one char at a time, and all these rotations are stored in a list that gets sorted. Then I extract the last character of each rotation into another list that, joined, gives the result.
However, our job is to reverse the result of the transformation, and that leads to a more complicated piece of code. Actually, I'm not sure the function I got could be considered a good result. It gets accepted by CodeEval, though.
Here is the test case I used to help me getting through the job:
def test_provided_1(self):
    self.assertEqual('Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo$',
                     solution('oooooooo$ ffffffff ffffffffuuuuuuuuaaaaaaaallllllllbbBbbBBb'))

def test_provided_2(self):
    self.assertEqual('James while John had had had had had had had had had had had a better effect on the teacher$',
                     solution('edarddddddddddntensr$ ehhhhhhhhhhhJ aeaaaaaaaaaaalhtf thmbfe tcwohiahoJ eeec t e '))

def test_provided_3(self):
    self.assertEqual('Neko no ko koneko, shishi no ko kojishi$',
                     solution('ooooio,io$Nnssshhhjo ee o nnkkkkkkii '))

def test_extra(self):
    self.assertEqual('easy-peasy$', solution('yyeep$-aass'))

As you can see from the tests, this implementation of the Burrows-Wheeler algorithm uses the trick of adding a marker, here a dollar sign, at the end of the line. That's why I defined a constant in my script:
EOL = '$'

Then I create a matrix, initialized as a list of empty lists; then, for each element, I repeatedly insert each character of the line at the beginning of each rotation.
But notice that, after each run of the inner for loop, I sort the rotations:
def solution(line):
    rots = [[] for _ in line]
    for _ in line:
        for ch, rot in zip(line, rots):
            rot.insert(0, ch)
        rots.sort()

So, in the end, rots contains all the rotations of the original string.
Now I should find out the right rotation. But that's easy, since I used the EOL marker to identify it.
for candidate in rots:
    if candidate[-1] == EOL:
        return ''.join(candidate)

It is just a matter of selecting the rotation that has it as its last character.
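Putting the pieces together gives a quick round-trip check; both functions are reproduced here so the snippet runs on its own:

```python
def bw_transform(s):
    size = len(s)
    rots = sorted([s[i:size] + s[0:i] for i in range(size)])
    return ''.join(rot[-1] for rot in rots)

EOL = '$'

def solution(line):
    # Table method: repeatedly prepend the transformed string as a new
    # column and re-sort; after len(line) rounds each row is a rotation.
    rots = [[] for _ in line]
    for _ in line:
        for ch, rot in zip(line, rots):
            rot.insert(0, ch)
        rots.sort()
    for candidate in rots:
        if candidate[-1] == EOL:
            return ''.join(candidate)

original = 'easy-peasy$'
encoded = bw_transform(original)
print(encoded)                        # -> yyeep$-aass
print(solution(encoded) == original)  # -> True
```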
CodeEval took quite a longish time to give its positive response on this solution. I think I should work more on it to get a better result. Anyway, I put the test case and the python script I have written on GitHub. | http://thisthread.blogspot.com/2017/02/codeeval-burrows-wheeler-transform.html | CC-MAIN-2018-43 | refinedweb | 468 | 62.07 |
OSL740 Lab 2
Contents
- 1 LAB PREPARATION
- 2 INVESTIGATION 1: SETUP FOR NESTED VIRTUAL MACHINES
- 3 INVESTIGATION 2: INSTALL NESTED VIRTUAL MACHINES (KVM)
- 4 INVESTIGATION 3: MANAGING VIRTUAL MACHINES (KVM)
- 5 INVESTIGATION 4: USING PYTHON TO AUTOMATE MANAGING VIRTUAL MACHINES
- 6 LAB 2 SIGN-OFF (SHOW INSTRUCTOR)
- perform a software update on your c7host VM by issuing the following command:
sudo yum update
- Using elevated privileges, install the virtualization software by issuing the command:
sudo:
sudo systemctl start libvirtd
NOTE: The most recent variants of CentOS and Fedora are using a service called firewalld that replaces iptables, however the iptables service is still in relatively common usage and knowing how to use it also works with firewalld. In this course we will concentrate on iptables.
- To disable and remove firewalld, issue the following commands:
sudo systemctl disable firewalld
sudo systemctl stop firewalld
sudo yum remove firewalld
- To install and enable the IPTables services, issue the following commands:
sudo yum install iptables-services
sudo systemctl enable iptables
sudo systemctl start iptables
- Start the graphical tool by selecting the menu options Applications > System Tools > Virtual Machine Manager or by typing the command virt-manager (without sudo!)
- Use elevated privileges to edit your GRUB configuration. Do not make the following changes on more than one entry!
- Insert the boot option: kvm-intel.nested=1 (for AMD processors kvm-amd.nested=1) at the end of the Linux kernel boot options.
linuxefi /vmlinuz-3.10.0-1062-1062.1.2.el7.x86_64.img
OR
linuxefi /boot/vmlinuz-3.10.0-1062-1062
- For AMD processors, check the /sys/module/kvm_amd/parameters/nested file. You should get the output
1
- If the kvm_intel directory doesn't exist, double-check your Processors => Virtualization Engine (Intel VT-x/EPT...) settings in VMware Workstation.
Answer the INVESTIGATION 1 observations / questions in your lab log book.
INVESTIGATION 2: INSTALL NESTED VIRTUAL MACHINES (KVM)
Part 1: Installing VM from a Network (Graphical)
- VM Details:
- VM Name (and hostname): centos1
- Boot media: Network installation
- CentOS Full Network Install URL:
- Seneca Lab: (NOTE: requires VPN)
- VM Image Pathname: /var/lib/libvirt/images/centos1.qcow2
- Memory: 2048MB
- Disk space: 15GB
- CPUs: 2
- Perform the following steps:
- Launch the KVM virtual machine manager by clicking Applications -> System Tools -> Virtual Machine Manager.
- When prompted, enter your password.
- Make sure that when you create your regular user account you check the box to make them an administrator.
- Complete the installation. Login to your regular user account, and perform a sudo yum update for the centos1 VM (reboot if required). Make certain to adjust your screen-saver settings if desired.
- and perform a yum update.
- Issue the following command to obtain the IPADDR for your centos1 VM to record in your lab2 logbook:
ip address show
- Seneca Lab: (NOTE: requires VPN)
- Set the root logical volume to 8 GiB and add a logical volume with a size of 2 GiB (mount point: /home, name: home; make certain the root and /home logical volumes have the ext4 file system).
- Complete the installation. Login to your regular user account.
- (using the command 'vi' instead of 'vim') and perform a yum update.
- Issue the following command to obtain and record your centos2 IPADDR in your lab2 logbook:
ip address show
- Home: ks=
(NOTE: Ensure your VPN is connected!)
- Restart your c7host machine and redo the VM setup for a new instance of the centos3 VM.
- What happens when the installation is finished?
- In a web browser, click the kickstart (KS) link above. This link is a text file. Read through it to find the following information (pay attention to lines starting with #) and record it in your Lab Logbook:
- turn off SELinux and perform a yum update.
- You'll notice something when you go to set SELinux to permissive. The kickstart file already did that for you. It could even have performed the switch from firewalld to iptables for you (but it didn't).
INVESTIGATION 3: MANAGING VIRTUAL MACHINES (KVM)
Part 1: Backing Up Virtual Machines
- Perform the following steps:
- Shut down your centos1, centos2, and centos3 VMs. For centos2 and centos3, which are CLI-only, you can issue the following command to shutdown:
poweroff. Please be patient, the VMs will shut down!
- In your c7host VM, open a new Terminal window.
- Use elevated privileges to list the size and names of files in
/var/lib/libvirt/images/
- What do these files contain?
- Use the command sudo -i and enter your password if prompted. You are now root until you use the command exit to return to your normal user account.
- Change to the images directory by issuing the following command:
cd /var/lib/libvirt/images/. Note that you did not need to use sudo, as you are already using elevated permissions.
- Make a compressed backup of your centos1.qcow2, centos2.qcow2, and centos3.qcow2 files to your regular user's home directory by issuing each command - one at a time (create backups directory within your regular user's home directory before running these commands):
gzip < centos1.qcow2 > ~YourRegularUsername/backups/centos1.qcow2.gz
gzip < centos2.qcow2 > ~YourRegularUsername/backups/centos2.qcow2.gz
gzip < centos3.qcow2 > ~YourRegularUsername/backups/centos3.qcow2.gz
- Compare the size of the compressed and original files (hint: use ls -lh). If file is very large (like 15GB), you didn't compress it and you need to remove that file and perform the previous step until you get it right!
- Once you are sure you have all three VMs backed up, use the exit command to revert back to your normal user.
- Start the centos3 VM.
- Make certain that you are in your VM and not in your main system!
- Wreck only your centos3 system! Try this command inside the centos3 virtual machine:
sudo rm -rf /* (ignore error messages).
- Shut down and restart the centos3 VM (you may need to use the Force Reset option to do so).
- When the machine restarts it will not boot since all system files have been removed!
- Use the Force Off option to turn centos3 back off.
- Restore the original image from the backup from your home directory to your images directory by typing
sudo -i command first (do not forget to exit when you are done), then this command:
gunzip < ~YourRegularUsername/backups/centos3.qcow2.gz > /var/lib/libvirt/images/centos3.qcow2
- Restart the VM. Is it working normally?
- You should also make a copy of the XML configuration file for each VM:
sudo virsh dumpxml centos3 > centos3.xml
- Examine the file
centos3.xml. What does it contain? What format is it in?
Part 2: Restoring Virtual Machines
- We will now learn how to download a compressed image file and XML configuration file and add it as a VM to the Virtual Machine Manager menu.
- Issue the following commands:
- Use gunzip with elevated privileges to decompress the qcow2 image file into the /var/lib/libvirt/images directory.
- Issue the command:
sudo virsh define centos4.xml
- What happened in the virtual manager window? In order.
- Login with the password ops235. Feel free to explore the new environment.
Part 3: Using the Command Line for VM State Management
You will continue your use of the Bash shell by examining commands that allow a Linux sysadmin to gather information about and manage their VMs:
sudo virsh list
sudo virsh list --all
sudo virsh list --inactive
- Now, shut-down your centos1 VM normally, and close the centos1 VM window.
- Switch to your terminal and issue the command:
sudo virsh list --all
- What do you notice about the state of centos1?
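Since virsh prints plain text, its output can also be consumed by a script. Here is a small parsing sketch; the helper name and the sample output are my own illustration, not captured from a real host:

```python
def parse_virsh_list(output):
    """Parse the body of `virsh list --all` output into {name: state}."""
    vms = {}
    for line in output.splitlines()[2:]:   # skip the two header lines
        parts = line.split()
        if len(parts) >= 3:                # Id, Name, then the state words
            vms[parts[1]] = ' '.join(parts[2:])
    return vms

sample = """ Id   Name      State
----------------------------
 1    centos1   running
 -    centos2   shut off
"""
print(parse_virsh_list(sample))  # -> {'centos1': 'running', 'centos2': 'shut off'}
```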
Answer INVESTIGATION 3 observations / questions in your lab log book.
INVESTIGATION 4: USING PYTHON TO AUTOMATE MANAGING VIRTUAL MACHINES
This week you have added some significant capabilities to your python scripting. The ability to run loops and make decisions makes your scripts much more powerful. In this investigation you will write a python script that backs up the centos1, centos2, and centos3 VMs, or lets the user specify which VMs they want backed up.
- In your bin directory, create the file backupVM.py, and populate with our standard beginning
#!/usr/bin/env python3
# backupVM.py
# Purpose: Backs up virtual machines
#
# USAGE: ./backupVM.py
#
# Author: *** INSERT YOUR NAME ***
# Date: *** CURRENT DATE ***
import os
currentuser = os.popen('whoami')
if currentuser.read() != 'root':
    print("You must be root")
    exit()
else:
    for machine in ('centos1','centos2','centos3'):
        print('Backing up ' + machine)
        os.system('gzip < /var/lib/libvirt/images/' + machine + '.qcow2 > ~YourRegularUsername/backups/' + machine + '.qcow2.gz')
- Try to run that script. You'll notice it does not work. No matter what you do, it always says you are not root.
- Modify the print statement that tells the user they must be root to also include the current username, then run the program again.
- It should print out root, but with an extra new-line. You may have noticed this in your other python scripts so far: the data we get from os.popen() has an extra new-line on the end. We will need to modify the string(s) it gives us a bit. See the side-bar for hints on how to do so.
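The side-bar isn't reproduced here, but the usual trick is Python's str.strip(), which removes the trailing newline that popen output carries. A minimal illustration (the 'root\n' value is simulated rather than read from whoami):

```python
raw = 'root\n'          # what os.popen('whoami').read() typically returns
assert raw != 'root'    # the comparison fails because of the trailing newline
clean = raw.strip()     # strip() removes surrounding whitespace, '\n' included
assert clean == 'root'  # now the comparison succeeds
print(repr(raw), '->', repr(clean))  # -> 'root\n' -> 'root'
```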
- Modify the if statement so it is just getting the current username, not the username and a newline. You can do this using several steps and several variables, but it can also be done in a single line.
- Now that the script recognizes you as being root (or at least running the script with root permissions), it should work. Notice how we've used the + to combine several strings together to pass to the os.system command. We did this because this script needs the python variable to be evaluated before the whole line gets handed over to os.system. If you left the variable names inside the quotes, python will ignore them as just being part of a string. By putting them outside of a string, and concatenating their value to that string, we can evaluate them and feed them into that command.
- Test your script to make sure it works. If it doesn't, go back and fix it. Do not continue until it successfully makes backups of your VMs.
- There is a weakness to this script as written. Every time you run it, it will make a backup of all three VMs. But what if you only made a change to one of them? Do we really need to wait through a full backup cycle for two machines that didn't change? As the script is currently written, we do. But we can make it better. We've provided the scripts with some comments below.
#!/usr/bin/env python3
# backupVM.py
# Purpose: Backs up virtual machines
#
# USAGE: ./backupVM.py
#
# Author: *** INSERT YOUR NAME ***
# Date: *** CURRENT DATE ***
import os
#Make sure script is being run with elevated permissions
currentuser = os.popen('whoami').read().strip()
if currentuser != 'root':
    print("You must be root")
    exit()

#The rest of this script identifies steps with comments 'Step <something>'.
#This is not a normal standard for commenting, it has been done here to link the script
# to the instructions on the wiki.

#Step A: Find out if user wants to back up all VMs

#Step B-1: use the existing loop to back up all the VMs
for machine in ('centos1','centos2','centos3'):
    print('Backing up ' + machine)
    os.system('gzip < /var/lib/libvirt/images/' + machine + '.qcow2 > ~YourRegularUsername/backups/' + machine + '.qcow2.gz')

#Step B-2: They don't want to back up all VMs, prompt them for which VM they want to back up

#Step C: Prompt the user for the name of the VM they want to back up

#Step C-1: If the user chose Centos1, back up that machine.

#Step C-2: If the user chose Centos2, back up that machine.

#Step C-3: If the user chose Centos3, back up that machine.
- Before the for loop that backs up each machine add a prompt to ask the user if they want to back up all machines. Use an if statement to check if they said yes (See comment 'Step A').
- if they did say yes, back up all the VMs using your existing for loop (Comment step B-1).
- If they didn't say yes, do nothing for now.
- Test your script to make sure it works. Check what happens if you say 'yes' to the prompt, and check what happens if you say things other than 'yes'.
- Now we have a script that asks the user if they want to back up all VMs, and if they say they do, it does. But if they don't want to back up every VM, it currently does nothing.
- Add an else statement at comment Step B-2 to handle the user not wanting to back up every VM. Inside that else clause (Comment step C) ask the user which VM they would like to back up (you can even give them the names of available VMs (Centos1, Centos2, Centos3).
- Now nest an if statement inside that else (Comments C-1, C-2, and C-3) so that your script can handle what your user just responded with. If they asked for Centos1, back up Centos1. If they want to back up Centos2, only back up Centos2, etc. Hint: You might want to use elif for this.
- Test your script again. You should now have a script that:
- Makes sure the user is running the script with elevated permissions.
- Asks the user if they want to back up every VM.
- If they want to back up every VM, it backs up every VM.
- If the user does not want to back up every VM, the script asks them which VM they do want to back up.
- If the user selected a single VM, the script will back up that one VM.
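The finished control flow can take many shapes. Here is one sketch, with the actual gzip call replaced by returning a plan so the logic can be exercised anywhere (the function name and return-a-list design are illustrative, not part of the lab):

```python
def plan_backups(backup_all, chosen=None):
    """Mirror the lab's decision logic: return the list of VMs to back up."""
    machines = ('centos1', 'centos2', 'centos3')
    if backup_all == 'yes':            # Step A / B-1: user wants everything
        return list(machines)
    elif chosen in machines:           # Step B-2 / C: a single, valid choice
        return [chosen]
    else:                              # unrecognized answer: back up nothing
        return []

print(plan_backups('yes'))             # -> ['centos1', 'centos2', 'centos3']
print(plan_backups('no', 'centos2'))   # -> ['centos2']
```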
LAB 2 SIGN-OFF (SHOW INSTRUCTOR)
Follow the submission instructions for lab 2 on Blackboard.
- Perform the Following Steps:
- Use the virsh start command to launch all the VMs (centos1, centos2, and centos3).
- Inside each virtual machine, run ip a on the command line. Open a Terminal window in centos1 to do so. You'll need the IP address of each machine for the next steps.
- On your c7host VM:
- Run the lab2-check.bash script in front of your instructor (must have all
OK messages)
- ✓ Lab2 logbook notes completed.
- Upload a screenshot of the proof listed above, the output file generated by the lab2-check.bash script, your log book, and your backupVM.py to Blackboard.
- Show a few examples of how loops can be used to error-check when prompting the user for data.
- What does the command rpm -qi centos-release do and why is it important?
- What is the difference between rpm -q centos-release and uname -a? | https://wiki.cdot.senecacollege.ca/wiki/OSL740_Lab_2 | CC-MAIN-2022-27 | refinedweb | 2,422 | 65.42 |
import "nsILocalFileMac.idl";
Definition at line 54 of file nsILocalFileMac.idl.
appendRelative[Native]Path
Append a relative path to the current path of the nsILocalFile object.
getCFURL
Returns the CFURLRef of the file object. The caller is responsible for calling CFRelease() on it.
NOTE: Observes the state of the followLinks attribute. If the file object is an alias and followLinks is TRUE, returns the target of the alias. If followLinks is FALSE, returns the unresolved alias file.
NOTE: Supported only for XP_MACOSX or TARGET_CARBON
getFSRef
Returns the FSRef of the file object.
NOTE: Observes the state of the followLinks attribute. If the file object is an alias and followLinks is TRUE, returns the target of the alias. If followLinks is FALSE, returns the unresolved alias file.
NOTE: Supported only for XP_MACOSX or TARGET_CARBON
getFSSpec
Returns the FSSpec of the file object.
NOTE: Observes the state of the followLinks attribute. If the file object is an alias and followLinks is TRUE, returns the target of the alias. If followLinks is FALSE, returns the unresolved alias file.
getRelativeDescriptor
Returns a relative file path in an opaque, XP format. It is therefore not a native path.
The character set of the string returned from this function is undefined. DO NOT TRY TO INTERPRET IT AS HUMAN READABLE TEXT!
initToAppWithCreatorCode
Init this object to point to an application having the given creator code. If this app is missing, this will fail. It will first look for running application with the given creator.
initWithCFURL
Init this object with a CFURLRef
NOTE: Supported only for XP_MACOSX or TARGET_CARBON NOTE: If the path of the CFURL is /a/b/c, at least a/b must exist beforehand.
initWithFile
Initialize this object with another file
initWithFSRef
Init this object with an FSRef
NOTE: Supported only for XP_MACOSX or TARGET_CARBON
initWithFSSpec
Init this object with an FSSpec Legacy method - leaving in place for now!
isPackage
returns true if a directory is determined to be a package under Mac OS 9/X
Not a regular file, not a directory, not a symlink.
launch
Ask the operating system to attempt to open the file. this really just simulates "double clicking" the file on your platform. This routine only works on platforms which support this functionality.
launchWithDoc
Launch the application that this file points to with a document.
openDocWithApp
Open the document that this file points to with the given application.
This will try to delete this file.
The 'recursive' flag must be PR_TRUE to delete directories which are not empty.
This will not resolve any symlinks.
Use with SetFileType() to specify the signature of the current process.
Definition at line 167 of file nsILocalFileMac.idl.
Definition at line 71 of file nsIFile.idl.
Returns an enumeration of the elements in a directory.
Each element in the enumeration is an nsIFile.
Definition at line 336 of file nsIFile.idl.
Definition at line 109 of file nsILocalFile.idl.
Definition at line 176 of file nsILocalFileMac.idl.
WARNING! On the Mac, getting/setting the file size with nsIFile only deals with the size of the data fork.
If you need to know the size of the combined data and resource forks use the GetFileSizeWithResFork() method defined on nsILocalFileMac.
Definition at line 227 of file nsIFile.idl.
Definition at line 228 of file nsIFile.idl.
fileSizeWithResFork
Returns the combined size of both the data fork and the resource fork (if present) rather than just the size of the data fork as returned by GetFileSize()
Definition at line 162 of file nsILocalFileMac.idl.
fileType, creator
File type and creator attributes
Definition at line 175 of file nsILocalFileMac.idl.
followLinks
This attribute will determine if the nsLocalFile will auto resolve symbolic links. By default, this value will be false on all non unix systems. On unix, this attribute is effectively a noop.
Definition at line 102 of file nsILocalFile.idl.
File Times are to be in milliseconds from midnight (00:00:00), January 1, 1970 Greenwich Mean Time (GMT).
Definition at line 218 of file nsIFile.idl.
Definition at line 219 of file nsIFile.idl.
Accessor to the leaf name of the file itself.
For the |nativeLeafName| method, the nativeLeafName must be in the native filesystem charset.
Definition at line 119 of file nsIFile.idl.
Definition at line 120 of file nsIFile.idl.
Definition at line 258 of file nsIFile.idl.
Definition at line 256 of file nsIFile.idl.
Create Types.
NORMAL_FILE_TYPE - A normal file. DIRECTORY_TYPE - A directory/folder.
Definition at line 70 of file nsIFile.idl.
Parent will be null when this is at the top of the volume.
Definition at line 327 of file nsIFile.idl.
Definition at line 257 of file nsIFile.idl.
Definition at line 210 of file nsIFile.idl.
Definition at line 211 of file nsIFile.idl.
Accessor to a null terminated string which will specify the file in a persistent manner for disk storage.
The character set of this attribute is undefined. DO NOT TRY TO INTERPRET IT AS HUMAN READABLE TEXT!
Definition at line 132 of file nsILocalFile.idl.., XP.
Definition at line 255 of file nsIFile.idl. | https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_local_file_mac.html | CC-MAIN-2016-44 | refinedweb | 850 | 60.31 |
How to better rasterize a plot without blurring the labels in Matplotlib?
To rasterize a plot in a better way without blurring the labels in Matplotlib, we can take the following steps.
Steps
Set the figure size and adjust the padding between and around the subplots.
Create a figure and a set of subplots.
Axis 0 – Fill the area between the curve with alpha and rasterized=False.
Add text to the axes.
Axis 1 – Fill the area between the curve with alpha and rasterized=True.
Add text to the axes.
Axes 2 and 3 – Fill the area between the curve without alpha and rasterized=True and False, respectively.
Add text to the axes.
To display the figure, use the show() method.
Example
import matplotlib.pyplot as plt
import numpy as np

plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True

fig, axes = plt.subplots(nrows=4, sharex=True)

axes[0].fill_between(np.arange(1, 10), 1, 2, zorder=-1, alpha=0.2, rasterized=False)
axes[0].text(5, 1.5, "Label 1", ha='center', va='center', fontsize=25, zorder=-2, rasterized=True)

axes[1].fill_between(np.arange(1, 10), 1, 2, zorder=-1, alpha=0.2, rasterized=True)
axes[1].text(5, 1.5, "Label 2", ha='center', va='center', fontsize=25, zorder=-2, rasterized=True)

axes[2].fill_between(np.arange(1, 10), 1, 2, zorder=-1, rasterized=True)
axes[2].text(5, 1.5, "Label 3", ha='center', va='center', fontsize=25, zorder=-2, rasterized=True)

axes[3].fill_between(np.arange(1, 10), 1, 2, zorder=-1, rasterized=False)
axes[3].text(5, 1.5, "Label 4", ha='center', va='center', fontsize=25, zorder=-2, rasterized=True)

plt.show()
Output
It will produce the following output −
Observe that, since we have not used any "alpha" on axes 2 and 3, the opaque fill (drawn at a higher zorder than the text) covers the labels, so they are not visible.
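An alternative worth knowing (my own addition, not part of the tutorial above) is Axes.set_rasterization_zorder: instead of passing rasterized= to every artist, you set a single zorder threshold, and vector backends such as PDF rasterize every artist drawn below it while text above the threshold stays as sharp vector output. A minimal sketch:

```python
import io

import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headlessly
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()

# Rasterize everything below zorder 0 (here, the filled area);
# the label at zorder 1 is kept as crisp vector text.
ax.set_rasterization_zorder(0)
ax.fill_between(np.arange(1, 10), 1, 2, zorder=-1)
ax.text(5, 1.5, "Label", ha="center", va="center", fontsize=25, zorder=1)

# Saving to a vector format produces a mixed raster/vector file.
buf = io.BytesIO()
fig.savefig(buf, format="pdf")
```

This only affects vector output formats; when saving to PNG the whole figure is rasterized anyway.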
User talk:McCormack
/Archive 2007 | /Archive Jan-Feb 2008
[edit] We need "non-technical" people helping to make the "technical" pages!
Hi McCormack -
I have made some suggestions that you may be interested in reading here:
The summary is that we need "non-technical" people helping to create the "technical" content sites. I am a "technical" guy myself, so unfortunately I can't help much with my own vision! But I would like to offer my assistance (I'm not sure how, I'm hoping you will have some ideas!) to pursue these ideas.
Daviddoria 14:02, 11 June 2010 (UTC)
[edit] Learning resources
Hi McCormack,
I am interested in helping to categorize pages, in order to highlight valuable learning resources. Only Cormac responded positively as well. Brent doesn't like the process.--Daanschr 08:20, 7 March 2008 (UTC)
-)
I have put some suggestions on Wikiversity talk:Learning resources. I agree that consensus based is better. Otherwise conflicts might arise that could have been avoided. At the moment Wikiversity is rather small, so we will surely have a lack of personnel. I will take a look at the subpage question. I don't know what DPL is.--Daanschr 12:43, 7 March 2008 (UTC)
My remark about age was a bit too pointy. I wanted to revert it now, but you already read it.--Daanschr 15:58, 7 March 2008 (UTC)
[edit] Japanese Wikiversity
Hello McCormack JApanese wikiversity was approved i will notify you when it will be created to change the main portal.--ZaDiak 16:49, 11 March 2008 (UTC)
[edit] Extension
ping, ----Erkan Yilmaz Wikiversity:Chat 22:23, 11 March 2008 (UTC)
[edit] In search of the simple life...
Better stucture, not more commands
- I am a teacher at Wikiversity because I like to teach. So to answer your question, "NO" I do not wish to be a Sys-Op.
- The reason is a total lack of time. Already, I am running as fast as I can.
- And, "Yes", the problem might go away if Wiki was designed with a better file structure system which is easier for teachers to work with. Robert Elliott 14:27, 12 March 2008 (UTC)
[edit] Adding some structure to my life
- A normal course with normal structure
- I understand that many people prefer more progressive methods of learning but I just want to do a traditional set of 10 courses, each containing 10 to 20 lessons, each with four to six lesson pages (separate steps that must be done in order)... along with separate pages for each pop quiz or practical example. All lessons are concluded with a page containing all of the completed assignments.
- Each of these courses is part of a large school department or other structure. So whatever I do, it must also reflect where I am in the structure of Wikiversity.
- Getting started
- When I write a course, I have no idea where it will go. I know the final goal but I do not know how to get the students there.
- First, I rough out a set of lessons (such as the course for film scoring for filmmakers) and then fill in the missing pages. As soon as it becomes clear that students are not understanding something, I have to reorder the lessons or the pages of the lesson and I have to add more lessons or pages of lessons. If this does not work, I write a parallel course (such as Film Scoring For Musicians) and incorporate pages from the other course which seem to work and add new lessons and lesson pages.
- The one thing that the designers of Wiki seem to ignore is that I must create all my rough work online where everyone can see it. These pages are either eventually cleaned up or abandoned. The course for film scoring for filmmakers has been abandoned and some of the pages from this course are now used in the film scoring course for musicians. Those pages are still out there somewhere. They should be in a folder called unneeded pages.
- I need folders, not sub pages
- What I need is a set of folders which I put pages in. This is exactly like a table of contents for a textbook. It must look like simple folders but in reality it generates the structure for my courses.
- Example
- Here is how it works.
- When I put a page in a course folder, that page becomes the introductory page of the course. If I do not like the page later, I can fix it or I can remove it and replace it with an entire new page.
- When I put a folder called "lessons" in the folder of the course, any pages that I put in this folder becomes a lesson of that course.
- But when I put a folder in the lesson folder, that becomes a folder for all of the lesson pages.
- When I put a page in a lesson folder, it becomes the introductory page of the lesson.
- When I put a page in the lesson page folder in the lesson folder, that page becomes the start of a page of an actual lesson.
- At any time, I can move a page from one folder to another and the page will take on new meaning. A lesson page in one lesson, if moved to a different lesson, becomes a lesson page of the other lesson. This should be easy and totally automatic.
- Navigation
- Then when I look at my pages, I want to see all of the navigation buttons fully populated with the correct addresses. Once I have done all the work of organizing my lessons, I don't want to have to start all over again creating a navigation system. That is what I have to do now for EACH course. Why make me do more work when I have already done it by defining lesson, lesson pages, pop quiz, example, completed assignments, etc.
- Name Changes
- When I create the name of the document (which then automatically appears in the navigation buttons), when I need to change the name (probably because I also need to reorganize the pages), I should be able just to type a name and that new name is reflected throughout the entire navigation of the course.
- When I write or build anything, I am like an artist carving a figure from stone. I keep chipping away until the shape seems correct. I will probably change the order and the names of my lesson pages half a dozen times. With the current system, the original pages continue to float around WikiLand forever. What a mess! ~~~~ Robert Elliott 09:27, 16 March 2008 (UTC)
- All of these are long term changes, not something which can be done in 2009. Slowly, there needs to be a project which defines the needs of the next generation of Wikiversity. That is where these comments should go. My comments are only about well structured lessons but that will be an important element of Wikiversity II. ~~~~ Robert Elliott 15:46, 24 March 2008 (UTC)
[edit] Re:
I've responded on my talkpage as best as I could. Terra What do you want? 10:40, 15 March 2008 (UTC)
[edit] Re:Please go ahead with your custodianship request
Hi there, thank you for the message you've placed on my talkpage if your fine with me going through with the request then I'll be happy to proceed. Terra What do you want? 18:24, 15 March 2008 (UTC)
[edit] well, thank you.
Thank you. I find this to be a worthwhile project, and I admire and appreciate your contributions in kind. --Remi 22:41, 15 March 2008 (UTC)
[edit] Help:User contributions
I've copied the User Contributions page from Meta and transferred it here; is this fine? It's on every wiki site which I've come across, and I have stated that the content was from a master copy. Terra Welcome to my talkpage 12:44, 20 March 2008 (UTC)
[edit] Grand schemes
- Sigh* I know - and you're absolutely right - and thanks a million! I'm running out the door at the moment, and on holidays tomorrow, but I'd like to extend this idea actually - have a better grand scheme for all development initiatives - perhaps linked from the community portal or even (main page)... Cormaggio talk 17:33, 20 March 2008 (UTC)
[edit] Thanks
for the welcome. I was zz =)
[edit] Your wish
"I would like to see sebmol retired from WV bureaucracy." May I ask why? sebmol ? 23:27, 21 March 2008 (UTC)
- Can I expect a response? sebmol ? 13:14, 24 March 2008 (UTC)
- Still too busy? sebmol ? 21:12, 2 May 2008 (UTC)
[edit] category trees
That looks highly interesting, I will definitely take a look at it and probably play with it some. I need to figure out its exact mechanics first, though. Thanks. : ) --Remi 18:05, 22 March 2008 (UTC)
- Do you know any way to display a category tree in multiple columns? e.g., for School:Psychology#Topics. -- Jtneill - Talk 12:39, 27 April 2008 (UTC)
- Thanks for the pointer to mw:Extension:CategoryTree. I'm not having much luck with trying out the example syntax from there though: School:Psychology#Topics. Any ideas? -- Jtneill - Talk 15:57, 27 April 2008 (UTC)
- FYI, I've moved my CategoryTree testing to User:Jtneill/Wikiversity/CategoryTree. -- Jtneill - Talk 16:13, 27 April 2008 (UTC)
[edit] Vision & Multimedia
Thanks for finding ways to harness momentum, ideas, etc. And for looking into the Multimedia options. I would like to see it done properly, so that the functionality would be available across appropriate WM foundation projects. At this stage it looks well out of my league. But perhaps we can start by initially deciding whether such functions are priorities for the WV and, if so, getting it into our mission/vision/goals, etc. I am happy to do whatever I can.
From a purely selfish POV, in transferring my materials from the university's confluence wiki system to wikiversity, I think the only part I can't currently replicate is the embedding of slideshare lectures (which are in .odp files). I am trying WikiEducator, where it looks like its possible, but very far from user-friendly or easy to work out the code. So, I will probably end up with the presentations embedded over there - unless perhaps something can be worked on the sandbox server for WV? The main reason I prefer embedding to sending students to slideshare is pedagogical - by embedding I can control and provide a clean screen/interface. Slideshare pages have too many adverts for my liking, but they do a great job of hosting, sharing, and showing presentations. Google have recently released their version of slidesharing, which has some way to catchup, but may eventually become the preferred option. -- Jtneill - Talk 00:49, 23 March 2008 (UTC)
[edit] using the template namespace for boxed-section content transclusions
What do you think about putting transcluded pages used for content boxes in the template namespace to avoid having so many content snippets in topic, portal, and school namespaces? Then these could always be forked and put in other namespaces. --Remi 07:54, 23 March 2008 (UTC)
- Hi again and thanks for your message. I didn't quite catch the context of your message - i.e. perhaps you could give a few examples? But "yes", in principle it's probably a good idea. --McCormack 08:06, 23 March 2008 (UTC)
- I'm suggesting that instead of having templates like Template:Department place pages like Topic:Elasticity/Quotes in the topic namespace, that they go in the template namespace, maybe using {{BASEPAGENAME}}. I would be glad to do it, but I think it would be a change compared to what has been done much of the time thus far. --Remi 08:20, 23 March 2008 (UTC)
- My feelings about the issue are not too strong. And unfortunately I am unable to be much help to you on those technical matters.
That is the code on Template:Introduction 0.4: [}}|200px|left]] maybe that might help. --Remi 08:38, 23 March 2008 (UTC)
- So, my concern is that using all the subpages for transcluded content in the topic, portal, and school namespaces seems to make the random functions less useful to varying degrees. Maybe this is okay or it can be viewed in another way. --Remi 19:07, 23 March 2008 (UTC)
[edit] Importing from Confluence
Appreciate your offer of help. Here's where I'm at:
- Someone's off-list advice suggested that xml wouldn't be much good for importing.
- So, I've been experimenting with html export. html via OO Writer comes through as MW syntax nicely, but its not a bulk solution. I have posted about this to the OO Writer forum and MW SOC 2008 suggestions (can find links if you want).
- Confluence at UC seems to break on large exports - I've notified them, but it's probably just too big.
- I've uploaded a 'manageable' 10-page zip file of an html export, with most of the typical elements e.g., hierarchical text-based pages, with one image.
- Let me know if I can do anything.
-- Jtneill - Talk 12:57, 23 March 2008 (UTC)
- Thanks, I'll try those wikificators, and keep exploring OO writer. Maybe if I had a bunch of flat html files in a folder I could get a bulk convert. Images could be stripped out, no problem. But at the end of the day I figured it was probably going to come down to pretty manual conversion. It never hurts to rework the material in the process of converting, so its not a bad option, just takes longer, but the final product at WV end is probably better for the extra effort. Thanks for checking it all out and appreciate your advice. -- Jtneill - Talk 13:40, 23 March 2008 (UTC)
[edit] the concept
I saw that this morning before I checked my messages and I thought and still think that as it is now looks extraordinary nice and quite useful. Good work =) --Remi 18:06, 24 March 2008 (UTC)
- It didn't seem appropriate to apply that to the templates currently in use. Perhaps something with icons would be useful for primary or secondary areas. --Remi 19:47, 24 March 2008 (UTC)
[edit] Using subpages
Hey, McCormack, thanks for the encouragement! You didn't come off as a wikidragon, but I'll tread carefully anyway ;-). You told me to ask if anything in your comment to me was unclear, so I thought that I'd ask you what gave the impression that I don't use subpages? In my Study of Genesis (currently my largest contribution) I use subpages for each of the lessons in the course. Moreover, all of my non-learning plan contributions to The Department of Biblical Studies and its Center of Biblical Survey are done entirely (to my knowledge :-) using subpages.
I was trying to think of where I might not use subpages (where they may be appropriate), and I noted that none of the studies listed on the Center of Biblical Survey are subpages of the center. If this is what you were referring to, I can see where it might be desirable to make them as such. It was my impression that the general design of Wikiversity was to have learning plans as top-level pages in the main namespace, with subpages for individual lessons, etc. Then, such top-level learning plans could be grouped together by "Topic" pages, and ultimately "school" and "portal" pages.
So what I did was to create a "topic" page for the Center of Biblical Survey, which served to link together the individual courses (in the top-level, main namespace). I had considered making them all subpages of the topic page, but that seemed to go against the general design of Wikiversity, and it would make it harder to group the courses under a different topic, should such grouping be desirable.
What I did do was create a navigation bar at the top of each course's main page, showing where in the School of Theology each course resided. Was this nav bar what caused it to seem like I didn't use subpages? At any rate, let me know where I could use improvement. And sorry for this lengthy response; I wanted to make a justification for my design decisions, in case they were what prompted the comment.
Thanks for looking out for Wikiversity (and me),
- --Opensourcejunkie 09:15, 25 March 2008 (UTC)
- Oh, I see...how sneakily superb a way to find such misdemeanors! :) Thanks again for working to keep Wikiversty in-check and pruned. cya 'round,
- --Opensourcejunkie 09:36, 25 March 2008 (UTC)
Hey, by the way. I've had a question about subpages that's been nagging me in the back of my mind, but I haven't taken the time to do the research; perhaps you can answer it. I have a page, Topic:Biblical Overview that I want to move to Topic:Biblical Survey because the name fits better with traditional universities. However, I've created a number of subpages underneath of Topic:Biblical Overview that would also need to be moved. Does the functionality also move subpages, or merely the page in question?
Any idea?
- --Opensourcejunkie 10:03, 25 March 2008 (UTC)
- Aight, thanks. Shouldn't be too difficult :-)
- --Opensourcejunkie 10:50, 25 March 2008 (UTC)
[edit] Re: Religious policies
Wow. I'm honored that you'd think of me. I would love to be so intimately involved in the inception of such a thing, especially as I have a vested interest ;-). Now, as I've never written a wiki policy draft before, I could use a hand with some information:
- you mentioned that a dialogue has been going on about the potential policy; do you know where I can go to view what's already been discussed?
- do you know of any policy writing tutorials/faqs etc., either on here or another wikimedia project? If not I can probably wing it, but such a resource would be helpful.
Thanks for this opportunity, McCormack!
- --Opensourcejunkie 13:19, 27 March 2008 (UTC)
- Replied: --Opensourcejunkie 13:43, 27 March 2008 (UTC)
- Replied: --Opensourcejunkie 13:46, 27 March 2008 (UTC)
- New Thread: --Opensourcejunkie 11:44, 29 March 2008 (UTC)
- Replied: --Opensourcejunkie 15:43, 30 March 2008 (UTC)
- Replied: --Opensourcejunkie 17:42, 30 March 2008 (UTC)
- Replied: --Opensourcejunkie 17:42, 30 March 2008 (UTC)
- I moved the (pertinent parts of the) discussion over to the policy's talk page. Also, I added a section defining religious scholarship.
- Replied: --Opensourcejunkie 12:09, 2 April 2008 (UTC)
- Replied: --Opensourcejunkie 12:54, 3 April 2008 (UTC)
- Replied: --Opensourcejunkie 13:38, 7 April 2008 (UTC)
[edit] Re:Templates for portals and schools
ping- Thuvack 09:26, 3 April 2008 (UTC)
[edit] Re:White backgrounds for tables
ping- Thuvack 06:15, 4 April 2008 (UTC)
[edit] Re:Participants lists
ping- Thuvack 15:07, 3 April 2008 (UTC)
[edit] Re: Getting the background around image thumbnails right
ping- Thuvack 16:52, 4 April 2008 (UTC)
[edit] barnstar
I think I never told it since I know you: thank you for your efforts in Wikiversity. Alone from looking at your user page one can see you are a valuable asset to Wikiversity. Please keep it up, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:14, 7 April 2008 (UTC)
[edit] Underconstruction Template
ping-Thuvack 09:04, 8 April 2008 (UTC)
[edit] Templates - Class
Could you explain how class="infobox" works, e.g., on Template:About Wikiversity? Where is that infobox code stored? -- Jtneill - Talk 16:07, 9 April 2008 (UTC)
[edit] Template - Mission statement
Can you think of a nice template for Wikiversity:Mission/Current? -- Jtneill - Talk 03:38, 11 April 2008 (UTC)
- A little bit more info - I would also like to be able to include the mission statement on other pages, e.g., on the OmahaUnited sisterproject interview page. -- Jtneill - Talk 03:43, 11 April 2008 (UTC)
- Cool. I've contacted Cormaggio for clarification, but have also gone ahead and created Template:Mission. I also want to create Template:Vision, but first we need a vision :) Wikiversity:Vision. Can you think of a groovy template for displaying Wikiversity:Slogan? Something big, bold, centred, maybe with WV logo? -- Jtneill - Talk 03:43, 12 April 2008 (UTC)
[edit] Re:McCormack was here.
Ping- Thuvack 13:21, 11 April 2008 (UTC)
[edit] Template Automation.
Hi. I think you have seen my new look pages for Electrical Engineering courses. I've got 1 problem. I created the following templates Template:Elec Eng Hierarchy-0 and Template:Elec Eng Hierarchy-1 I think it is a cool way to help the student/user see where the course is in the greater picture of what he/she wants to achieve. At this rate I will have to create 3 other Templates to complete the above. Is there a way I can automate the above template to achieve this: { Template ; level } i.e Create template Elec Eng Hierarchy which can accept an argument for the level where the course belongs. I'd be glad if you could help. Say does wikiversity support java code? I know how to achieve the above using Java, I just wouldn't know where to put the code. Thanks, I'll only see your response on monday. :-) Thuvack 12:28, 12 April 2008 (UTC)
- I've sort of figured out bits & pieces from Film making school and used that to create Template:Circuit Analysis Courses Content and Template:Electrical Engineering Orientation Courses Content; these have worked well, you can check Electric_Circuit_Analysis & Electrical Engineering Orientation courses. I do still think that the above template on hierarchy is necessary and I have not found a workaround for this problem as yet. Any ideas or help welcome.-Thuvack 06:21, 16 April 2008 (UTC)
[edit] Featured content
Could you help me to understand how 'featured content' works? I have seen Help:Portal#Updating featured content and I am interested in having a go at adding featured content (don't know what yet) to Portal:Pre-school Education. -- Jtneill - Talk 14:18, 12 April 2008 (UTC)
[edit] Wikiversity:Browse
I like the new layout which you've done, i've taken a Wikibreak on Wikipedia to be more active here. Shall I transfer my Help page contents in my subpage to replace the current Help:Contents and move it to Help:Contents/old. Terra 19:23, 14 April 2008 (UTC)
[edit] please...
I just created those so there would be some minimal content and the links would be blue. Please edit them and even just totally scrap what I started. I didn't realize you were intending to do something with them, especially so soon. --Remi 20:53, 14 April 2008 (UTC)
[edit] Jtneill custodianship
Hi, thanks for reminding me - done! Cormaggio talk 09:32, 16 April 2008 (UTC)
- Thanks for the nomination, support, etc. I'm underway. I think I'm ready to stuff beans up my nose and get wacked by trout. -- Jtneill - Talk 13:15, 16 April 2008 (UTC)
- Feel free to throw/suggest tasks; for now I'm working through How to be a Wikimedia sysop and will try to improve that resource as I go along. Is it really OK to try blocking one's self - or should I try on a test account? -- Jtneill - Talk 13:43, 16 April 2008 (UTC)
- The mentorship of 4 weeks have passed. Please evaluate James' performance. Thx, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 06:46, 17 May 2008 (UTC)
[edit] Blocking
OK, I blocked/unblocked myself and read over Wikiversity:Blocking policy. Could you suggest how to... "determine if the address or range is allocated dynamically." -- Jtneill - Talk 14:37, 16 April 2008 (UTC)
[edit] main page .5 horizontal scroll
here this are... --Remi 07:34, 21 April 2008 (UTC)
[edit] Main page
Aha - I see Remi's adding screenshots too! No, I've no objections - I think, as ever, we can tweak as we go. No, I haven't seen an FAQ (apart from the existing Wikiversity:FAQ) - where is it? Cormaggio talk 12:26, 21 April 2008 (UTC)
[edit] somewhat more on transclusions-
i posted the thought on colloquium, and i was curious what you thought:
Subpages used as soley as transclusions should not be given their own page in the namespace at which they are used as transclusions. They should be in the template namespace. They arguably just contribute to clutter, especially if they serve the purpose of soley being used as a transclusion and cannot hold their own and hold no particular use wholly as an individual page. --Remi 16:06, 21 April 2008 (UTC)
[edit] untagged
Just curious if you could help me understand use of the "untagged" tool. -- Jtneill - Talk 01:33, 23 April 2008 (UTC)
- FYI, thanks for the follow-ups on this one. Makes sense now. -- Jtneill - Talk 00:03, 26 April 2008 (UTC)
[edit] Random topic
"(removing random topic - it's embarrassing)" - What do you mean? --Remi 18:50, 25 April 2008 (UTC)
- Perhaps it would be best to try and establish broader consensus on the issue. I can see it both ways. Personally, topics seem more accessible given the new browse page, and the namespace isn't the cleanest space... yet, it could be a really awesome namespace to stumble around inside. --Remi 23:56, 26 April 2008 (UTC)
[edit]
Thanks for the feedback on this delete, McCormack. You're right - I need to check 'what links here'. Appreciate it. So now I've moved the content to another page and left additional comment User talk:134.67.15.6. -- Jtneill - Talk 00:06, 26 April 2008 (UTC)
[edit] SubPageList on MediaWiki 1.12
Hello McCormack, I use your nice extension for MW called "SubPageList" but it seems that it doesn't work anymore on MediaWiki 1.12. When I try to install the extension, I have the following error while loading it: Fatal error: Call to undefined function wfLoadExtensionMessages() In this thread, they said that you should change the way you call the function wfLoadExtensionMessages() in your script. Could you do that for us? Thanks in advance. -- Phil -- 26 april 2008
[edit] {{orphan}}
Hi, McCormack, actually I was just trying to work if there is any point to using this manual 'orphan' template, since orphaned pages are automatically listed in special pages. It seems a bit useless to me. What do you think? BTW - should all pages in the Template: namespace be categorised as at least 'Template' or a more specific type of template. Is that that the best way to organise and navigate to them? -- Jtneill - Talk 11:42, 26 April 2008 (UTC)
- Oh, btw, I didn't create this template, I just came across it, played with it, pondered it. It just seems to manually duplicate what is automatically organised. But I guess it does serve to point out to future visitors/editors that a page is orphaned, which they may otherwise not realise. No big deal. -- Jtneill - Talk 15:39, 26 April 2008 (UTC)
[edit] Categorising (Psychology)
I've tried to tidy up Category:Psychology to make better use of CategoryTree. Let me know if you have any suggestions. -- Jtneill - Talk 13:05, 28 April 2008 (UTC)
- Category:Psychology -- Jtneill - Talk 13:06, 28 April 2008 (UTC)
- That's odd - putting the link above is putting your talk page in the category?? -- Jtneill - Talk 13:07, 28 April 2008 (UTC)
[edit] Schools
Thanks for the warning. FYI, I think I've only changed the namespace for School:Psychology as an experiment as reported on Colloquium and one particularly 'dodgy' psychology sub-school which I've moved to the mainspace. I don't intend to go any further at this stage. -- Jtneill - Talk 13:05, 28 April 2008 (UTC)
[edit] topic, school box
It seems that more specificity probably wouldn't hurt. However, it also seems that we want to avoid information overload. So I'm fairly neutral on the matter. Done properly, it could probably be made prudent and useful. --Remi 18:51, 28 April 2008 (UTC)
[edit] Biology Portal
Thanks for creating the Biology portal. I'll help out as much as I can. It could take some time to get a vast amount of material out there though, I'm writing the Wikibook FHSST Biology while I'm making resources for Wikiversity. I'm also trying to get involved with other projects around the Wiki communities. I appreciate your help.
Cheers,
Wesley Gray 19:59, 28 April 2008 (UTC)
[edit] Columns
Do you know a way to create columns which are automatically even in length? Or must one manually specify and manage column breaks? I searched MW and meta and didn't find much - it's all based around tables from what I can see. -- Jtneill - Talk 10:31, 29 April 2008 (UTC)
- Found what I wanted: meta:Help:List. -- Jtneill - Talk 10:35, 29 April 2008 (UTC)
- Oh good, this worked - 3 columns for category tree: Psychology#Categories. -- Jtneill - Talk 10:46, 29 April 2008 (UTC)
[edit] Main page experiments
Thanks for taking a look & responding. Do you think I should remove or move those v0.6 pages I created? They were just really for my testing and not new versions. -- Jtneill - Talk 10:07, 30 April 2008 (UTC)
[edit] Blindspots
Are there any aspects of custodianship which you think I've overlooked so far? I've tried to 'tour' myself around and try out most functions I'm aware of. I probably could pay some more attention to working through some of the special pages - and cleaning stuff up. And I haven't been on IRC lately, but do intend to get back there eventually. Appreciate any thoughts. -- Jtneill - Talk 10:07, 30 April 2008 (UTC)
- Thanks for the feedback; that's bonza. -- Jtneill - Talk 12:22, 1 May 2008 (UTC)
- OK, so it took me all day to work out... ha ha... and i had to read in this context: 2058 state of the nation address by the honourable victor bitter jr.; president of the republic of australia. -- Jtneill - Talk 12:45, 1 May 2008 (UTC)
[edit] Filmmaking - Course level (answer to your question)
- Initial goal
- I have tried to design the course materials for the three filmmaking courses (Basic filmmaking-preproduction, editing, and scoring) for 16 year old students.
- Overall goal
- By keeping the three courses simple enough for high school kids who speak English, the courses are now used by:
- 1. International high school graduates who are unable to enter local trade schools because of cost, availability or the lack of English skills. (30%)
- 2. First year film students at regular universities who do not have access to these specific areas. (5%)
- 3. High school students who need to create a demo reel to be accepted to film school. (15%)
- 4. 12-year-old kids who just want to have a bit of fun. (50%)
The last group usually completes only the first two lessons. Still, it is enough to make them happy. ~~~~ Robert Elliott 01:53, 1 May 2008 (UTC)
- Special Note
- The first two lessons of the basic filmmaking course are extremely popular with people waiting for airplanes all around the world. Don't ask me why!!! Maybe you should have a category for "Things to do when you are bored." Robert Elliott 02:09, 1 May 2008 (UTC)
- Update May 22, 2008
- I recently looked at the requirements for the filmmaking courses to be offered by high schools in the State of Utah. They have a complete description of what they want for a FINE ARTS COURSE in filmmaking. It is the opposite of my course. Therefore, clearly my course is NOT fine arts filmmaking. My filmmaking course at Wikiversity is either commercial art or a trade school course based on the criteria of the State of Utah for high school courses. Robert Elliott 01:11, 23 May 2008 (UTC)
[edit] Gadgets
I have been checking out gadgets in user preferences here and on WV. I'm curious which, if any, of these you use/recommend? -- Jtneill - Talk 14:52, 1 May 2008 (UTC)
- Other than 'maybe' WikEd on WP (for complicated pages), I haven't yet found a gadget which lasts beyond a short period of novelty. I like the principle of keeping the interface as similar as possible to that for newcomers. -- Jtneill - Talk 00:13, 3 May 2008 (UTC)
[edit] Talk link in your signature
Curious if there's any particular reason you don't have a talk link in your sig? -- Jtneill - Talk 14:52, 1 May 2008 (UTC)
Narrative Filmmaking = Replacements for school plays
- Not an easy fit
- Currently, narrative filmmaking is very hard to categorize. It does not fit well with traditional high school courses.
- School productions
- Instead, narrative filmmaking will eventually be offered in high school drama departments in the same category as a school play or a school musical, and in the music department in the same category as a school concert.
- Rather than something which is taught, it will be something which is done. (But before that happens, step-by-step procedures for schools to follow need to be developed, in the same way that schools buy a complete package of step-by-step instructions when they obtain the rights to a play such as Disney's musicals. My lessons do not go that far. That will require another ten years.)
- Not IT or fine art
- In contrast, "Information Technology" is linked with "Multimedia", not "narrative filmmaking". "Art and design" in high school refers mostly to "fine art", into which "documentary filmmaking" fits better. Narrative filmmaking should be associated with "commercial art", but few high schools offer commercial art, just fine art. (If you are not sure of the difference, fine art is made for the appreciation of the creator; commercial art is designed for the appreciation of the viewer. Fine art is spiritual; commercial art is a craft.)
- Vocational
- Or to put this another way, in Germany, narrative filmmaking would be taught in vocational school along with automobile design and commercial cooking. Narrative filmmaking fits much better into the vocational category. Narrative filmmaking is a craft. Robert Elliott 00:37, 2 May 2008 (UTC)
Templating
Wikiversity:Mission
Could you help explain how to template, say, Wikiversity:Mission in such a way that it can be incorporated, if desired, into a box on the new portal pages? This is more of an experimental learning question on my part, but could be useful in some places e.g., Portal:Wikiversity. -- Jtneill - Talk 00:17, 3 May 2008 (UTC)
Edit Page
For this template, Template:Edit page, how could it be changed so that one could add the parameter 0 to switch off the text "(and any subpages)", and 1 to switch it on? I haven't done this stuff before, but am interested. -- Jtneill - Talk 01:55, 3 May 2008 (UTC)
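A minimal wikitext sketch of the kind of switch being asked about, assuming the ParserFunctions extension is available and using an unnamed parameter (the surrounding wording and the default value here are illustrative only, not the actual template text):

```wikitext
<!-- Hypothetical fragment for Template:Edit page: passing 0 hides the
     phrase; passing 1 (or nothing, since 1 is the default) shows it -->
You can edit this page {{#ifeq: {{{1|1}}} | 1 | (and any subpages) |}}.
```

Called as {{Edit page|0}} the phrase would be suppressed; {{Edit page}} or {{Edit page|1}} would show it.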
Portal:Pre-school Education
I'm pondering Portal:Pre-school Education... and thinking that a two-column format might be more user-friendly for this page. What do you think and if that sounds appropriate, how? -- Jtneill - Talk 03:48, 3 May 2008 (UTC)
- Thanks for the kids-only fork; that works well. I'm enjoying exploring it. I like the create a box input box. Small mystery - if one clicks create a page with no text in the box, it seems to go into editing the main page... Wouldn't want to see a toddler scoring you a trout... :) -- Jtneill - Talk 13:54, 3 May 2008 (UTC)
Where is the list of completed courses?
- Just a quick question. Where is the list of completed courses (or courses which are working well enough to be listed as completed courses by their instructor)? Robert Elliott 04:37, 4 May 2008 (UTC)
- I now see the templates and the symbols for completed courses at Help:Resources by completion status.
- Question: How should courses which are ready for testing be listed?
- Suggestion:
- "This course is ready for a test drive." - - 99% complete? (with a picture of a bumpy ride?) Robert Elliott 22:00, 4 May 2008 (UTC)
- Another suggestion -- I think that a CATEGORY for each of the templates should be included inside the template. (Sorry if that is not clear.)
- Example: the template {{Template:100%done-2}} should include the category of [[Category:Completed course]] (or something like that) so that a List of Completed Courses is automatically generated. (Again, I am not sure if this makes sense. If not, let me know and I will try again.) Robert Elliott 22:22, 4 May 2008 (UTC)
- Correction -- I think that the link should not be to the help page but rather to the list of completed lessons category.Robert Elliott 03:29, 5 May 2008 (UTC)
- Huh? -- I do not think that automatic categorizing is working correctly. You say "Your suggestion about auto-categorising is already there in the system." but this is not producing the results that I expected. How does this work? Things seem to be appearing on the wrong pages or not at all. Robert Elliott 05:48, 6 May 2008 (UTC)
- High expectations -- Maybe I had different expectations for how this feature works. Question: How do I refer to the list of completed courses in my pages? [[Category:Completed resources|Click here to see the list of completed courses]] does not seem to work.
- I will try experimenting with different ideas for a while. If it does not work, we can always roll back. Robert Elliott 06:43, 6 May 2008 (UTC)
- Ahhh! -- Yes, I did not understand about namespace. Thanks.
- Yes, I would like to be able to point people to the page of completed lessons but the simple approach does not work. Obviously, there must be an easy way to include that in my text but I cannot think of it at the moment. The goal is for people to click on the word complete in the {{complete}} template and take people to the page of completed lessons. Robert Elliott 14:08, 6 May 2008 (UTC)
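For what it's worth, a minimal wikitext sketch of the two mechanisms under discussion: auto-categorising from inside a template, and linking to a category rather than joining it (the leading colon is the standard trick for making a plain link). The template name and wording are illustrative only:

```wikitext
<!-- Hypothetical fragment for a completion template such as Template:100%done-2:
     the includeonly part categorises only the pages that transclude the template,
     not the template page itself -->
<includeonly>[[Category:Completed resources]]</includeonly>
This resource is [[:Category:Completed resources|complete]].
```

With this shape, the word "complete" in the banner would take readers to the auto-generated category listing.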
- My Thoughts - Here is an example of what I am thinking about.
- 1. If we make this template more useful for students, teachers will accept it quicker.
- 2. For the most part, only students will see this.
- 3. The help page can be announced in other locations which will be more effective. (Example to follow soon.)
- Note that I use the word "students". Half of my students do NOT speak English. They know the word "student" but they don't understand the word "learner". The foreign students are my best students so I do what they need most. Robert Elliott 00:02, 7 May 2008 (UTC)
- My Plans - This is what I will be doing.
- 1. I like the work that you have done. I think that this will be extremely useful.
- 2. I will continue to experiment and to promote your marking system to identify completed resources.
- 3. However, my goal is very specific. I want to create a list of Traditional courses which are complete or ready for testing. This list is for a very select audience (the average person who comes to Wikiversity to learn). Therefore, I will be creating a parallel list which will be based on your list. It should complement what you are doing. (This will be clearer when I get more done.) Robert Elliott 14:46, 9 May 2008 (UTC)
- Something looks odd -- I am testing the Category:Resources by completion status and the Category:Completed resources. One works OK but one does not seem to. When I select the + sign, I see the complete list on the Category:Completed resources page but not on the Category:Resources by completion status page. Robert Elliott 21:01, 9 May 2008 (UTC)
- Category trees -- Now I see why programmers like categories so much. There is a lot you can do with it. The challenge is to make it meaningful and easy to read for students. Robert Elliott 03:58, 10 May 2008 (UTC)
Extension:Gnuplot
Hi McCormack, just wondering if you had any initial thoughts about mw:Extension:Gnuplot. It could have some potential for WV? -- Jtneill - Talk 17:23, 4 May 2008 (UTC)
- Well, I thought it could potentially save time drawing plots externally, then exporting, uploading, etc. for teaching statistics. Maybe there's an external online graphing solution which could be embedded. Not a major priority, but I'm looking for other ways of presenting data/graphs - and if it could be done dynamically, so that others can change the data, even better for learning. -- Jtneill - Talk 04:35, 5 May 2008 (UTC)
Random mascot welcome
Project boxes
They are way cool :). Thank you. Just trying to work out stacking and preventing overlapping e.g., -- Jtneill - Talk 01:03, 9 May 2008 (UTC)
- btw, this looks different in FF and IE. -- Jtneill - Talk 01:24, 9 May 2008 (UTC)
- Any ideas for layout here Level of measurement (robelbox + project box)? It looks fine in IE, but overlapping in FF? -- Jtneill - Talk 03:08, 10 May 2008 (UTC)
- Excellent initiative, James! It's great to see the metadata efforts being revived in such a practical and potentially "addictive" way. :-) Cormaggio talk 12:15, 11 May 2008 (UTC)
- I think maybe {{stub}} and {{welcome and expand}} both contain "lost at wv?" e.g., see Colour. -- Jtneill - Talk 01:43, 26 May 2008 (UTC)
- Actually, I meant {{uncat}} and {{welcome and expand}}. Is there some way of using "IF" with parser functions to detect if another template is being used on a page? Just curious. -- Jtneill - Talk 01:45, 26 May 2008 (UTC)
edit summaries and IRC chat
"Please do not write my name at all in your edit summaries" <-- I'm sorry you got upset because of my reference to your user name. Maybe I can figure out how to refer to your edits using the version numbers for article revisions. I'm also sorry you got upset because of my comment about a vacation....I should have put a :) after "I think you need a vacation". --JWSchmidt 18:13, 9 May 2008 (UTC)
Main Page/News
I added an announcement to Wikiversity:Announcements, but didn't see it on the main page. Checking Main Page/News, it seems this is done separately? Any way to transclude from announcements? -- Jtneill - Talk 04:45, 10 May 2008 (UTC)
- Thanks; I can see value in keeping announcements and main page news separate. -- Jtneill - Talk 21:16, 11 May 2008 (UTC)
barnstar
I thought that between the mainpage, all the work on templates you've been doing, the browse page, and a multitude of other things that you have done, you deserve a barnstar. I found this nice green one, so... keep up the good work. : ) --Remi 02:18, 12 May 2008 (UTC)
Cat tree
I can see both sides. What if "Category:Paths by user type" was open by default? It may give more emphasis to headings, but then also get the message across that, "these things expand!". --Remi 08:08, 12 May 2008 (UTC)
Portal reform
FYI:
- merged & redirected portal:sociology into school:sociology (all material transferred, and double/single redirects tidied; thanks for the prompt to check this)
- deleted portal:TestPortal (consisted of single anonymous test edit) -- Jtneill - Talk 10:15, 12 May 2008 (UTC)
- Portal:Thought content has been merged with and redirected to Topic:Thought -- Jtneill - Talk 10:31, 12 May 2008 (UTC)
- Portal:Social Theory has been moved to Topic:Social theory (reflecting the nature of the content). -- Jtneill - Talk 10:35, 12 May 2008 (UTC)
Software issue
You certainly did not make it clear that was how you were using those tags prior to this! I did not laugh at your pun, and I did not find it funny. I certainly don't feel welcome to say any more about that proposed categorization scheme. --HappyCamper 16:58, 17 May 2008 (UTC)
:)
thanks. : p --Remi 08:56, 18 May 2008 (UTC)
Sisterproject boxes layout
Hi; I noticed that perhaps the tweaks to get project boxes to line up may have made the layout of e.g., Template:wikipedia a little wonky for stand-alone use; e.g., T-test#See also - the box now seems to push the text down? -- Jtneill - Talk 12:00, 19 May 2008 (UTC)
- OK, try this - User:Jtneill/Sandbox#Template troubleshooting hope you see a difference too. -- Jtneill - Talk 14:33, 19 May 2008 (UTC)
Traduction
Hello, comrade. Were you able to talk with someone about doing the translation work? You can contact me about anything. D'artagnan
Transcluded subpages in the topic namespace
More on that. --Remi 05:11, 22 May 2008 (UTC)
- I think what you are saying is maybe what Wikibooks does. The disadvantage to that, if that is what Wikibooks does, is that one cannot open a bunch of random pages in tabs then. Yet... this may not matter, or perhaps it can be done in a different manner so as not to have this limitation. --Remi 06:10, 22 May 2008 (UTC)
arabic topic
If you think that Bahaa may attempt to reference that page other than from his username (User_talk:Bahaa#Hello), you don't think keeping it would create an undue load for the servers, and you don't think it would cause any other problems, then I see no need to delete it. I doubt Bahaa would miss it though, but I could be wrong. --Remi 07:13, 22 May 2008 (UTC)
Becoming a custodian
Hi there, I wondered if it would be possible to put me on as an apprentice custodian? Businesslawyer 13:45, 22 May 2008 (UTC)
- Ok then thanks for telling me. Businesslawyer 15:40, 22 May 2008 (UTC)
"Template:Robelbox/C1" broken?
I am working with "Template:Robelbox/C1". I have no problem with using it normally. But when I put it inside another template, the new template looks great by itself, yet when used someplace, it does not work correctly. I can create an example for you if you wish, but since you have just been working on the "Template:Robelbox/C1" template, I assume you already know what the problem might be. Robert Elliott 01:02, 23 May 2008 (UTC)
- Try working with {{Template:Film Scoring-Sound of Fear-Completed Assignments-2}} Click here to see the original. Robert Elliott 01:29, 23 May 2008 (UTC)
- See bottom of Tonto Silver, not the top. Near the top, I have the box included. At the bottom, I use the template. At the bottom, my browser does not show the whole template, only the title of the first box and a surrounding line around the bottom with nothing in the middle. (P.S. I like the simpler box you showed me. Thanks.) Robert Elliott 10:45, 23 May 2008 (UTC)
Thanks
Thanks for the supportive (& expert) 'thrust' into custodianship; I would otherwise probably have bided my time forever and possibly descended into asking increasingly annoying questions, rather than trying to help and solve problems :). So, I'm on board now, but still much to learn of course. I sent the email I've been waiting to send for some time to the vice-chancellor yesterday, letting him know that I am officially moving all my teaching materials off-site. It seems to have rattled a few cages, so I am meeting with the powers that be next week to explain further. I am still really only half-engaged here until I've got my PhD submitted (very close), so I may go a little quiet, but then we might have some more substantial conversation - or at least I'll find you a serious picture :). -- Jtneill - Talk 02:45, 23 May 2008 (UTC)
Robelbox
I tried to earn a trout earlier by editing {{Robelbox}}. Check the edit history. Weirdly it looks OK, but it was playing up and showing "Template loop errors" for the examples, and the examples were showing in pages using Robelbox transclusion - including the main page ;). I reverted it, but then within seconds or minutes all the edit history pages looked OK again. What do you think happened? And do you want to have a go at including the suggested edits, with the list of colours and another example using a theme? -- Jtneill - Talk 02:06, 26 May 2008 (UTC)
- I worked this out now, I think; I had left a dodgy line in there. -- Jtneill - Talk 02:59, 26 May 2008 (UTC)
custodianship
Hi! I have created a new subpage User:Gbaor/Custodianship, where you can leave me some notes what I should do/read in order to become full time custodian. --Gbaor 04:49, 26 May 2008 (UTC)
Big trouble
I have done something bad with Topic:Statistics:OpenStat, but I have no more time to fix it. :( Can you please check it?--Gbaor 13:28, 26 May 2008 (UTC)
- Accidentally moved the subpages too; the good news is that Topic:Statistics:OpenStat/Intro and the other subpages still exist. --Gbaor 13:32, 26 May 2008 (UTC)
subpage list
Would you point me to an example of where subpage list is in action? I cannot find one on the wikis listed at the extension page at mediawiki.org. Thanks. --Remi 09:11, 27 May 2008 (UTC)
Tardis
What do you think? Should/could it go to Wikiversity:Tardis; or should it stay as a user subpage but not be listed as 'completed'? Of course it's not "completed", but the "concept" is done sufficiently that it "works" and is "usable". Maybe there is a better completion status category? -- Jtneill - Talk 11:40, 31 May 2008 (UTC)
Done (used Template:Incomplete instead) -- Jtneill - Talk 12:29, 31 May 2008 (UTC)
revert?
Hi James - why this revert? I thought it was perfectly fair comment. I'm going to revert it back if you don't mind... Cormaggio talk 14:54, 31 May 2008 (UTC)
- :-) Well, I wouldn't treat something as borderline vandalism just because it didn't meet a certain standard of discourse, and was added by an anon. Jokey, matey, and slightly opaque though it is, I still think it provokes comment in a good way. Hopefully my own comment will provoke further, more thoughtful comments... Cormaggio talk 16:02, 31 May 2008 (UTC)
Those templates...
They look like they could probably use a facelift. Not that they have not served their purpose...
Help:Resources by quality looks interesting. Maybe those templates could go on talk pages, like similar templates on Wikipedia do. But... --Remi 06:35, 1 June 2008 (UTC)
Colors...
I probably won't play with it more at this moment... but thank you. :) --Remi 10:42, 1 June 2008 (UTC)
- So I am... Teaching_with_Applied_Academics/How is not displaying Template:Lost. Why might this be? Do the servers need to update? --Remi 10:49, 1 June 2008 (UTC)
You broke it!
Template:75%done-2 is not displaying correctly on the Help:Resources_by_completion_status page. Robert Elliott
IRC meeting
Hi James - just notifying you of a suggested time/date for an IRC meeting about 'Wikiversity learning'. Could you indicate if you would be available, or would prefer a different time/date? Cormaggio talk 14:44, 1 June 2008 (UTC)
- Thanks for the quick feedback - perhaps we should again consider two meetings (though even this is difficult to accommodate everyone's availability). Cormaggio talk 14:56, 1 June 2008 (UTC)
Clarification I guess :)
Shows some of the activity with other pages already deleted. I think I was hasty in trying to find sd template - a better description would have been "Xrumer bot IP" I think. Thanks & regards --Herby talk thyme 12:52, 2 June 2008 (UTC)
category maintenance
Hi! Today I tried to pull together all the category maintenance stuff on WV into Category:Category maintenance. Three special pages in this category are still redlinks, and I cannot figure out why. Any ideas?--Gbaor 11:16, 4 June 2008 (UTC)
Re: Action research
Hi James, hmm, interesting question! I think it could well be the first step of an action research cycle. That would be the question around action research - "what kind(s) of action can be taken, in order to address what need, and what does the process and effect of the action tell us about the context, and what further action does this suggest?" So, if a 'plan of action' is seen in this context, then it constitutes a type of action research - there's really not a "high" or "low"-brow delineation here (although, action research is itself seen as a bit low-brow by some realms of the academic world - and almost as academia's saviour by others). So, does this correspond with the way in which you've developed that resource? Cormaggio talk 20:50, 9 June 2008 (UTC)
Remind me again please...
Why are we keeping subpages in the topic namespace? If we use a bunch of pages in the Template name space and include them, will they not still be examined when the topic namespace is searched? --Emesee 03:36, 15 June 2008 (UTC)
advice needed
Hi! I am looking for advice here (initial question). Thank you!--Gbaor 10:54, 15 June 2008 (UTC)
Wikiversity:Random
Leaving the message where you did is probably fine. I think you are quite skilled with making MediaWiki do all sorts of interesting things; the page is rather interesting. --Emesee 08:46, 16 June 2008 (UTC)
Quiz layout
Whadyareckon: quiz example. It seems to me that there is maybe too much blank space between the last response item and the "submit" button. How could this be reduced? -- Jtneill - Talk 14:32, 18 June 2008 (UTC) (PS Are you waiting for a 100 talk threads before archiving?? :))
- While I'm here with this.... is there any way to adjust the quiz question numbering? - I'm going to end up with all questions numbered 1 if I use the technique at the link above. -- Jtneill - Talk 14:37, 18 June 2008 (UTC)
Just thought...
It's been some time since I visited your talk page. Just thought I should say hi!... I am working on finishing the last 4 pages of the Electric Circuit Analysis course. It is interesting to see a few people already spell-checking the course. Thanks for all the help thus far ;-) Thuvack 15:27, 18 June 2008 (UTC)
Re:
In regard to your comment on my talk page - I wouldn't mind going through the vote. I haven't used the Custodian tools in any negative way, and used them when needed, such as an author requesting a page to be erased or a page to be imported. I know SB will strongly oppose, judging from the comments, but this is to be expected in a custodianship vote, and someone is bound to be supportive, neutral or opposed. I will cease creating the categories and see if I could do other things on the site - but I wouldn't mind going through the vote; if need be, the voting could be as long as 10 days, per your suggestion on the Custodian noticeboard. Terra 17:54, 20 June 2008 (UTC)
User talk:195.194.136.252
Hi. Are you forgetting to log in? McCormack
Oops, yes as I jump between (class) machines at this frantic time of year. Paulmartin42 16:04, 24 June 2008 (UTC)
DM
Problem uploading images - McCormack
Hi McCormack,
Terribly sorry to post this here but I'm not having any luck in the wikiworld on my own!
I ran into the same error you did as noted on this page...
I have posted a reply to your previous comment on that page but would imagine that you don't check back there regularly (why would you?!)
Can you tell me how you resolved that issue - that is - where in SpecialUpload.php I need to insert those lines (see copy below)? Also, do I need to change anything or can I simply copy and paste that text exactly as is?
$src = $_FILES['wpUploadFile']['tmp_name'];
$dst = $somewheresafe . $_FILES['wpUploadFile']['name'];
$res = move_uploaded_file($src, $dst);
$this->mTempPath = $dst;
Again, sorry to post this on your talk page. Your time and any help you can offer are very much appreciated!!!
Cheers
Stanbridge
Wikiversity:Policies
Can I ask what you are referring to? Please be direct with me, as I am still a relatively new user.
Circuit Analysis
ping --:-)Thuvack 15:38, 2 July 2008 (UTC)
chat
Can we talk when you go online again? See you, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 16:17, 8 July 2008 (UTC)
Template:Wikipedia
I'm trying to work out using the parameters for this template, e.g., I want something like
or just
. Clues? -- Jtneill - Talk - c 15:41, 9 July 2008 (UTC)
Statistics
Mainly tertiary. -- Jtneill - Talk - c 05:08, 21 July 2008 (UTC)
100th unarchived message award
McCormack, I've been waiting to award you with something special, and since I'm not seeing too many trouts or paperbags in the pipeline, I thought I would take this long awaited opportunity to congratulate you on the grand occasion of your 100th unarchived talk message. I have not seen this achieved before - will you go for 200? -- Jtneill - Talk - c 05:08, 21 July 2008 (UTC)
- Anyone can do 200, let's do 2000 :-) ----Erkan Yilmaz uses the Wikiversity:Chat (try) 15:45, 21 July 2008 (UTC)
Plotting extension
When you get a chance, can you check this page - User:Jtneill/Graphing. I pasted some code from Wikipedia which doesn't work here, so I'm guessing it needs an extension. Do you know which one? If it's already on WP, I'm guessing it could be a relatively simple exercise to get it implemented on WV?? -- Jtneill - Talk - c 14:04, 24 July 2008 (UTC)
- The timeline extension is installed (same version), see Special:Version: EasyTimeline (Version r37843)
- At de.WV once also that problem appeared and the next day it worked out just like that, see de:Wikiversity:Cafeteria/Archiv/2008/01#Timeline
- Perhaps it needs some initialization by the developers, or it is some cache problem? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 15:53, 24 July 2008 (UTC)
- I'm feeling slow today. --McCormack 16:51, 24 July 2008 (UTC)
- Well, slow worked :) I went to bed and on looking at this in the morning, it works! So, same kind of situation as Erkan reports. Is this a one-off, I wonder, and from now should be OK? We'll see - I'll try it out some more. I didn't realise until seeing this code that the timeline extension could be used for graphing! Will stare at the code and see more about what's possible. -- Jtneill - Talk - c 21:46, 24 July 2008 (UTC)
categories
One user made some observations here, and I think then began to analyze the technicalities of the category display (see here). I remember you made a small how-to for newcomers. Do you remember where that was? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 14:22, 3 August 2008 (UTC)
- found it, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 14:58, 3 August 2008 (UTC)
Featured page templates
Just wondering if you could explain the templates that should be applied to featured and proposed featured pages. I had a quick look at a small sample of featured pages (Web design, Technical writing, and Economic Classroom Experiments) and got the impression that there wasn't a consistent template applied to indicate them as featured and to add the appropriate category. I could have spent longer poking around, but figured I'd touch base instead. -- Jtneill - Talk - c 14:25, 3 August 2008 (UTC)
Educational Media Awareness Campaign
Hi McCormack,
I just wandered by WV and noticed this. I think it's really brilliant. What is its status? Can the Wikimedia Commons community help or be more involved somehow? We have some pages that would be useful for this like Schemes. It would be cool if it could be 'mirrored' on Commons too. If it's active and ongoing I would like to ask someone to create a feed for it too. --Pfctdayelise 04:05, 4 August 2008 (UTC)
- Good to see this being recognised! - it is a fabulous resource. -- Jtneill - Talk - c 12:18, 5 August 2008 (UTC)
Unlicensed image uploads
Thanks for being onto that and the suggested strategy - she's been very responsive and it prompted me to start some "image uploading/licensing education" in tutorials this afternoon - more next week. For Image:Self.gif I added {{PD}}. Do you think that is all that is needed? I also went looking for something like w:Template:Copy to Wikimedia Commons on here, but couldn't find anything. So I copied that template here to Template:Copy to Wikimedia Commons, but it's going to need some tweaking by the looks to get it working. If you have a chance could you take a quick look? I can work it out, but you can probably work it out faster. I'll delete any unlicensed images (if they haven't been already) once I catch my tail. -- Jtneill - Talk - c 12:18, 5 August 2008 (UTC)
Social psychology (psychology) homepage
Thanks for the comments about use of collapsible boxes on the home page layout for Social psychology (psychology). I'm mulling it over. User:Leighblackall also had some suggestions I'm considering. But basically the reason is that I find vertical scrolling with students leads to more of them falling off the back of the truck... -- Jtneill - Talk - c 12:18, 5 August 2008 (UTC)
- BTW, I'm still mulling this over - I haven't yet had feedback from students, so I'll try to get some. -- Jtneill - Talk - c 00:53, 10 August 2008 (UTC)
Electric circuit Analysis User
Hi there. A student e-mailed me their results and other queries concerning Circuit Analysis. I have posted their results on the course pages. The user calls himself Ozzimotosan. This does not appear to be a registered user, but he insists it is a registered user. Could you check this out for me? Thanks --Thuvack | talk | Blog 12:19, 5 August 2008 (UTC)
- I have requested the user to edit my talk page. Here, please have a look. --Thuvack | talk | Blog 13:27, 5 August 2008 (UTC)
Just saying 'hi'
Hey, McCormack - OpenSourceJunkie here. I'm just saying 'hi' - I know I was AWOL for a while, but I had a VERY busy summer :) hopefully I'll be back to contributing on a semi-regular basis, although things are still a mite hectic.
--Opensourcejunkie 22:37, 8 August 2008 (UTC)
Category:Lectures
With this category, I'm thinking it would be better if we could list subcategories by course. Maybe we could do this by adding a parameter to {{Lecture}} to allow switching off of category listing in Category:Lectures and instead just list the main lecture page in that category. Then specific lectures could have their own course lecture category. Hopefully you know what I mean - I'm happy to experiment with a way for Social psychology, but figured you might have some ideas/suggestions. -- Jtneill - Talk - c 00:52, 10 August 2008 (UTC)
- Thanks for working on this so fast; just noting currently there are some extra characters appearing "COK" when transcluded e.g, Personality/Lectures/Dispositional perspectives. I realise you're in the middle of it - but I went looking for vandalism at first! :) -- Jtneill - Talk - c 04:47, 10 August 2008 (UTC)
- Well I was thinking of finding a tin of spam for you for placing rude words on my lectures! :). But I'm still not sure if it's doing (or helping to do) some better organising of Category:Lecture. e.g., it lists all social psychology lectures, whereas I was hoping to just list social psychology, with the lectures in a sub-category. Is there some parameter I can add to use of {{lecture}} to indicate "don't use category"? Or is it perhaps just that it's already doing this and the cache needs to update the category listing? -- Jtneill - Talk - c 05:17, 10 August 2008 (UTC)
- OK, I'll wait to see the job queue catch up - ta. rootname/Lectures makes sense to me - but that's probably because that's the structure I've used - it may be less intuitive to others. Maybe let's see how it goes as you've created it. Note I have the same type of structure also for Category:Tutorials and I guess ultimately all resource types. Would rootname/Resource type make sense more broadly? -- Jtneill - Talk - c 05:26, 10 August 2008 (UTC)
- What do you think about capitalisation, e.g., /lectures vs. /Lectures? I've tended to use capitals and didn't really think it mattered very much, but although I created Category:Personality/Lectures, you can see at Personality/Lectures/Dispositional perspectives that it's still asking for Category:Personality/lectures to be created? -- Jtneill - Talk - c 05:39, 10 August 2008 (UTC)
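A minimal wikitext sketch of the category-suppressing parameter discussed in this thread (the parameter name "nocat" is hypothetical, and this assumes the ParserFunctions extension is available):

```wikitext
<!-- Inside Template:Lecture (sketch): emit the default category only
     when the caller has NOT passed a non-empty nocat parameter -->
{{#if: {{{nocat|}}} | | [[Category:Lectures]] }}
```

A page could then call {{lecture|nocat=1}} to stay out of Category:Lectures and instead carry its own course-specific lecture category.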
Please have a look and comment
Hi there. I have started my long awaited contribution towards the Generation of Electrical Power topic. Please help out or comment on the following problems:
- Citation/Refference:
Your help is much valued thanks.--Thuvack | talk | Blog 14:32, 11 August 2008 (UTC)
- McCormack, I've just had a look at the page problem which Thuvack's been having, and he's right - there is a problem with the page. It may depend on what type of browser you are using - I'm using Safari and the bottom of the page seems to be incorrect, even in preview. Try downloading via [1], which leads you to the Safari page on Apple - it may then show up on the browser. I don't know what code to use to prevent the template from acting up. Dark Mage 18:58, 11 August 2008 (UTC)
Sub-pages in Lesson pages
Hi there. As you know I am busy with some major course development work currently. I found that for circuit analysis people generally refrained from editing because of the daunting mark-up used. I have decided to use sub-pages with easy edit links. Each lesson page would therefore end up with at least 10 sub-pages (a sub-page per part). Is this a desirable way of developing courses, or is there a downfall to using sub-pages? --Thuvack | talk | Blog 14:46, 12 August 2008 (UTC)
[edit] Page fomatting
Can we discuss this please, thanks --Thuvack | talk | Blog 06:47, 13 August 2008 (UTC)
[edit] Wikversity proposal
I'd like to contact you via email re Wikiversity_talk:Introduction_Overhaul_Taskforce/copy. I have developed my own version of Wikiversity on my own website and am looking for some one to review it unofficially. If all goes well, it should not take more than 20 minutes to assess its potential. Since I am still learning how to secure the site, I'd prefer not to disclose its address here.
Its main features are:
- Uniform navigation within and between namespaces.
- Templates for consistent look and feel.
- Includes course schema for classifying and presenting course material.
This is a radical departure from Wikipedia and yet should be intuitive to both gurus and newbies alike. If you're interested (or any other curator) I can post my email here (or you can look it up?). PeterMG 11:31, 14 August 2008 (UTC)
[edit] Podcasting lectures
This is on my 'challenges'/to do list. Currently, I can record lecture audio and video using the university service - but the sad story is that they only provide this in .wma and .wmv - the system is homegrown and almost 20 years old! I've noticed some other lecturers taking in their own recording equipment, which is looking like a more viable option than long-awaited institutional change. Meantime, several people are eyeing off the iTunesU eye-candy and I'm having a bit of trouble trying to convince them that this is not such a good idea. But the weakness in my argument is that I can't yet demonstrate .ogg podcasting as a viable alternative. So, among other things, I'm looking for a digital audio-recorder (and software) which will help me create .ogg podcasts, etc. The uni is switching from WebCT to Moodle next year and I'm about to start playing on the new moodle site. Not sure yet if it offers a podcasting module. Interested in ANY suggestions. I've played some with Audacity, but otherwise not very experienced with digital audio. -- Jtneill - Talk - c 11:37, 20 August 2008 (UTC)
[edit] Total bytes in a page and its subpages
Just curious if you know of any way to total the bytes in a page and its subpages? The specific context would be to add another column to Social psychology (psychology)/Participants#Profile list. Some participants are adding content into user subpages. -- Jtneill - Talk - c 11:44, 20 August 2008 (UTC)
- Just for a quick way to sort/scan/organise students according to "volume" of contributions in their user area. Of course size doesn't equal quality, but its helps me (besides recent changes) to keep an eye on those who are active - using a sortable table. Yes, word count might help, but I realise now to mostly work within available tools and pick one's proposals for changes carefully. Maybe I need to look up basic mathematical functions - there must be a way to add say user page size and user talk page size? -- Jtneill - Talk - c 13:04, 20 August 2008 (UTC)
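- (A note on the mechanics, in case it helps: the {{PAGESIZE:}} magic word, available since MediaWiki 1.13, returns a page's size in bytes, and {{#expr:}} can add two of those values together. A rough sketch, with hypothetical page names:

```
{{#expr: {{PAGESIZE:User:Example|R}} + {{PAGESIZE:User talk:Example|R}} }}
```

The |R flag returns the raw, unformatted number so that #expr can parse it.)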
[edit] Re:E-Mail
I've responded. Dark Mage 10:40, 28 August 2008 (UTC)
[edit] Wikiversity:Student union
I was shocked that this page which should be used to welcome new participants was filled to the brim with confusing meta-discussion about deletionists and meta issues. It seems almost conduct unbecoming to do something that counter productive. Is there something that can be done to prevent people from getting angry and using various pages at this project for revenge? Salmon of Doubt 13:43, 5 September 2008 (UTC)
[edit] Re: Wikiversity:Requests for Deletion#Terry Ananny
"Doubtless anti-deletionist elements will oppose this nomination and treat it as a cause celebre"
I am just smiling - thanks that is the first smile of the day for me. I wish you will also get one today, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 10:47, 6 September 2008 (UTC)
[edit] Let's talk...
Hello McCormack, are you willing to meet in chat so we can talk alltogether ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:45, 6 September 2008 (UTC)
[edit] Resource Project Boxes
Just wanted to make sure you saw this. The Jade Knight 10:58, 10 September 2008 (UTC)
- Many thanks. The Jade Knight 04:56, 11 September 2008 (UTC)
- I prefer fewer words: how about this: {{history|{{{1}}}|{{{2}}}}}, where {{{1}}} is what category it goes into (defaults to some History sorting cat), and {{{2}}} is the resource name it displays (defaults to "History")? The Jade Knight 06:42, 11 September 2008 (UTC)
- I see. Tell me when you've finished with tinkering this time around. The Jade Knight 06:58, 11 September 2008 (UTC)
- Did you update these? How do they work now? The Jade Knight 09:49, 13 September 2008 (UTC)
[edit] colloquium post and disruptive users
McCormack: Thanks for you post to my user page re: this. I was in fact looking at a different set of edits than the ones you have now pointed out - so thank you for that. I had intended to reply more fully to the issue and your concerns but have recently had a couple of days away from WV at a conference, and since reading Cormaggios post, I'd now prefer to try to get some time to consider my response/opinion in regards to all this. Its all seeming a bit unfortunate now and I'm weighting up whether keeping out of it might be the best option. Countrymike 20:39, 10 September 2008 (UTC)
[edit] Subject boxes
I just noticed that use of the Template:Psychology now says "(subcategorised into Category:Psychology resources)." (e.g., Aggression) in the box. Since the last edit was June and the text isn't in the template, I'm guessing it must be some other more generic template change (perhaps you're playing?). But ideally, how can it be deleted - I hope it won't stay because I like my project boxes with little text :) and this I think will be confusing and unnecessary for most readers, etc. -- Jtneill - Talk - c 06:16, 11 September 2008 (UTC)
- Yes, all good, ta. You know I'm just trying to keep the trouts from your door :0. -- Jtneill - Talk - c 06:30, 11 September 2008 (UTC)
[edit] Category Spy
Really neat tool, there. Thanks. The Jade Knight 07:51, 11 September 2008 (UTC)
- There's evidently an infinite category loop between Categories>Wikiversity>Wikimedia>Content>Categories. The Jade Knight 07:59, 11 September 2008 (UTC)
[edit] Image deletion
Hi James. I don't see how Image:Deletionism.png is a candidate for speedy deletion. However, I can see it bothers you. Would you be prepared to participate in Learning from conflict and incivility, and share your feelings? Hope you're keeping well, Cormaggio talk 14:13, 11 September 2008 (UTC)
- There was uploaded a new version of this media file: [2]: see edit summary please. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:36, 11 September 2008 (UTC)
[edit] Portals
McCormack, you seem to be experienced in understanding templates - if so, would it be possible for you to assist me on Wikiversity:Student Union with the new layout template? Most of the links are still claiming to be redirected from the Portal page which was moved to the new one; however, I've had a look at the coding and it seems fine. Could you have a look at the coding itself and see if it's possible to stop the redirect message from appearing, which keeps saying this: (Redirected from Portal:Student Union/Page2)? All of it should now be in the new Student Union page and the coding should all be in the correct subpage. Dark Mage 19:12, 11 September 2008 (UTC)
[edit] MATLAB
Do you know who the professor is for the MATLAB course? I want to explain to him/her about students using subpages for their notes. I could hunt it out, but I think you might already have been in touch? -- Jtneill - Talk - c 01:29, 12 September 2008 (UTC)
- Ah, yes, exactly (about MATLAB) - but our friends with Eas4200c user names are all studying MATLAB and posting their notes - most are going into user pages and user subpages, but a good chunk are also landing in the main space. I've been shifting some to user subpages, but there's more than I can keep up with, so perhaps we need to address the issue at a slightly higher level. Anyway, now you know what I mean :). -- Jtneill - Talk - c 04:49, 12 September 2008 (UTC)
[edit] Three Profiles)
[edit] Move log
What happened there with Topic:Body langauge - we moved it at the same time?? Special:Log/move. I can't quite work out what to do/undo. -- Jtneill - Talk - c 12:20, 13 September 2008 (UTC)
- I see you've sorted it! -- Jtneill - Talk - c 12:22, 13 September 2008 (UTC)
[edit] G'day Mc
It's been great to chat on IRC! - here's the link to my 'case study', and I'd appreciate your thoughts on whether or not the material meets your recent proposals :-) Privatemusings 20:52, 14 September 2008 (UTC)
[edit] CSD
Your page Main page learning project/www draft main page is a candidate for speedy deletion and is blank. Do you have any objections to the removal of this page? Ottava Rima (talk) 02:30, 15 September 2008 (UTC)
- I deleted it. Thanks for pointing this out. --McCormack 04:11, 15 September 2008 (UTC)
- Sure thing. I wanted to try and clean out the CSD category. :) Ottava Rima (talk) 13:22, 15 September 2008 (UTC)
[edit] Category:Resources by type and Category:Class notes
Just seeking yr thoughts about Resources by type. What do you think of the "class notes" (e.g., my social psych students - but at least they're in user subpages, the MATLAB course, etc.) that are appearing? For a quick thing, I added Category:Class notes to such a page, then added that cat to Category:Resources by type. Then I noticed that this is a hidden category, which I haven't yet encountered (but realise I should learn about). And is there a better way? Really, I probably should create a resources by type template for a project box with auto-categorisation. Or maybe then I thought about tweaking an existing one e.g., Template:Course to have a parameter which could be renamed to "class notes". Anyway, that's the process my mind went through before arriving here for further advice just in case there is trout waiting to leap out the water :). -- Jtneill - Talk - c 15:05, 16 September 2008 (UTC)
[edit] Mentor
If you would like to become my mentor, I would appreciate it. I would also appreciate seeing you around the IRC channel more often to discuss things with. The specific topic of conversation? Whatever. :) 23:28, 21 September 2008 (UTC)
Note, the one astronomy picture showing the big bang and the expansion of the universe was great to see on the main page today. Great pick. Ottava Rima (talk) 16:46, 3 October 2008 (UTC)
[edit] G'day M)
[edit] Here is a friendly notice
that in a discussion between John Schmidt's talk page and mine we mentioned your work. Hillgentleman | //\\ |Talk 23:27, 20 October 2008 (UTC)
[edit] Invitation to the Wikipedia course project
Hello,
I have noticed that you have developed nice course material related to Wikipedia. Would you like to contribute to the Wikipedia university level course as well? You may for example add your name to the list of interested course developers, and fact-check the quizzes. Kind regards. Mange01 09:55, 18 November 2008 (UTC)
[edit] Nobody can be free if they need anybody else to tell them what "free" means
You mean to say telling people what freedom means is actually not that difficult (e.g. explaining Higher-order volitions) but most people don't realize it, unless they are told? Why do you phrase it in Two's complement? --fasten 11:13, 31 December 2008 (UTC)
[edit] TestWiki
I found your name on the participant list of Quiz. What do you think about TestWiki? Would that be useful for Wikiversity? --fasten 11:13, 31 December 2008 (UTC)
[edit] Threaded Discussions
Hello! I was checking around Wikiversity and saw that Wikiversity allows threaded discussions as learning projects.
However, is it possible that we have a programmed page - a dynamic one - which updates the order of the discussions as and when they are edited? I am referring to the standard format of discussion forums. A little more convenience translates into regular participation of a lot of students who are here not to learn as much as they want to help/share. We already have templates that add a page into a category etcetera, but will this one be possible?
On a closing note, I was actually relieved that discussions qualify as teaching resources, given the fact that Wikibooks is pretty stringent about the definition of textbooks. They won't allow a book of questions and answers! --Thewinster 06:50, 3 January 2009 (UTC)
[edit] Icon in Robelbox
Dear Sir, I need to ask you about a problem with the Robelbox icon. I placed two Robelboxes side by side in a table that has a td value of 50% of the page. Everything is fine, but when the browser (IE) window is stretched or shortened with the mouse, the Robelbox icons jump out of position - I mean a displacement happens. Is there any method to prevent this situation? I will follow the changes on this page here to find your reply. Thank you.
[edit] Project incubator update on HHF
Please check Wikiversity:Project_incubator for an update on HHF. I have made a forum-like structure for discussing topics in Mathematics, Physics, Chemistry and Biology. I have a target audience for this forum, so that won't be a problem. We have a definite use for this forum, but there are some things that are required for its smooth operation. Please feel free to brainstorm on the project incubator about this project. In the meantime, you can have a look at HHF. --Dharav talk 12:58, 24 August 2009 (UTC)
[edit] Please unprotect my picture so I can edit the copyright
Hello my friend. I have a copyright notice on one of my technical writing pictures, but because you linked to it on one of your protected pages, I cannot now edit the picture metadata to say that it's mine and copyright free...ugh.
Thanks, TWFred 18:34, 25 January 2010 (UTC)
Addendum...it's the picture of me and the pretty woman. Thanks!
[edit] how is it going?
see also here, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 20:03, 31 March 2010 (UTC)
[edit] Hello
Where have you been? Ottava Rima (talk) 15:00, 29 July 2010 (UTC)
[edit] You are invited to register for the Wikiversity Assembly
-.
[edit] You are invited to name a proxy
-.
[edit] Comments:41, 12 September 2011 (UTC)
[edit] PLE feedback
Hello again McCormack, since you were involved from begin here in regards to PLE's would you mind sharing your experiences? Thanks, ----Erkan Yilmaz uses the Wikiversity:Chat + Identi.ca 05:17, 13 February 2012 (UTC) | http://en.wikiversity.org/wiki/User_talk:McCormack | crawl-003 | refinedweb | 13,766 | 70.84 |
Making the Impossible Possible with Tachyon: Accelerate Spark Jobs from Hours to Seconds
Barclays Data Scientist Gianmario Spacagna and Harry Powell, Head of Advanced Analytics, describe how they iteratively process raw data directly from the central data warehouse into Spark and how Tachyon is their key enabling technology.
Cluster computing and Big Data technologies have enabled analysis of, and insights into, large datasets. For example, a big data application might process data in HDFS, a disk-based distributed file system. However, there are many reasons to avoid storing your data on disk, such as data regulations or reducing latency. If you need to avoid disk reads/writes, you can use Spark to process the data and temporarily cache the results in memory.
There are a number of use cases where you might want to avoid storing your data on disk in a cluster. In those cases, our configuration of Tachyon makes the data available in memory over the long term and shares it among multiple applications.
However, in our environment at Barclays, our data is not in HDFS, but rather, in a conventional relational database management system (RDBMS). Therefore, we have developed an efficient workflow in Spark for directly reading from an RDBMS (through a JDBC driver) and holding this data in memory as a type-safe RDD (type safety is a critical requirement of production-quality Big Data applications). Since the database schema is not well documented, we read the raw data into a dynamically-typed Spark DataFrame, then analyze the data structure and content, and finally cast it into an RDD. But there is a problem with this approach.
Because the data sets are large, it can take a long time to load from an RDBMS, so loading should be done infrequently. Spark can cache the DataFrame in memory, but the cached data in Spark is volatile. If we have to restart the Spark context (for example due to an error in the code, null exceptions or changes to the mapping logic) we will then have to reload the data, which could take (in our case) half an hour or more of downtime. It is not unusual to have to do this a number of times a day. Even after we have successfully defined the mapping into typed case classes, we still have to re-load the data every single time we run a Spark job, for example if there is a new feature we want to compute, a change in the model, or a new evaluation test.
We need an in-memory storage solution.
Tachyon is that in-memory storage solution: an in-memory storage layer for data which any Spark application can access in a straightforward way through the standard file system API, just as it would HDFS. Tachyon enables us to do transformations and explorations on large datasets in memory, while enjoying simple integration with our existing applications.
In this article, we first present how our existing infrastructure loads raw data from an RDBMS and uses Spark to transform it into a typed RDD collection. Then, we discuss the issues we face with our existing methodology. Next, we show how we deploy Tachyon and how Tachyon greatly improves the workflow by providing the desired in-memory storage and minimizing the loading time at each iteration. Finally, we discuss some future improvements to the overall architecture.
Previous Architecture
Since the announcement of DataFrame in Spark 1.3.0 (experimental) and its evolution in recent releases (1.5.0+), the process of loading any source of data has become simple and nicely abstracted.
In our case, we generate parallel JDBC connections which partition and load a relational table into a DataFrame. The DataFrame rows are then mapped into case classes.
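As a sketch, the mapping step looks roughly like this (the case class and column names below are hypothetical placeholders, not our actual schema):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

case class Transaction(id: Long, amount: Double) // hypothetical schema

// df is the DataFrame loaded through the JDBC reader
val typed: RDD[Transaction] = df.rdd.map { (row: Row) =>
  Transaction(row.getAs[Long]("TXN_ID"), row.getAs[Double]("TXN_AMT"))
}
```

The getAs calls are where the dynamically-typed rows are cast into the type-safe case class fields.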
Our methodology allows us to process raw data directly from the source and build our code even though the data is not physically available on the cluster disks.
The following is our typical iterative workflow:
Setting up the JDBC Drivers
In order to create a JDBC source DataFrame, you must distribute the JDBC drivers jar (or jars) to each node of your cluster. Please ensure that the drivers must be available when you instantiate the JVM for your job. You cannot simply specify those jars as you would for normal dependencies even though it might be possible to distribute them using resource managers such as YARN.
We use a script to copy those files to the local file system of each node and then submit the Spark job by specifying both the executor and driver extraClassPath properties. Make sure that spark.driver.extraClassPath does not work in client-mode. Spark documentation says:
Note: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-class-path command line option or in your default properties file.
Thus the following is an example of properly setting the drivers:
--driver-class-path "driver_local_file_system_jdbc_driver1.jar:driver_local_file_system_jdbc_driver2.jar" --conf "spark.executor.extraClassPath=executors_local_file_system_jdbc_driver1.jar:executors_local_file_system_jdbc_driver2.jar"
The JDBC driver class must be visible to the primordial class loader on the client session and on all executors. This is because Java's DriverManager class does a security check that results in it ignoring all drivers not visible to the primordial class loader when one goes to open a connection. As the Spark documentation suggests, one convenient way to do this is to modify compute_classpath.sh on all worker nodes to include your driver JARs.
If you are using the Spark Notebook, in addition to the Spark properties you also need to make the driver available to the JVM that runs the back-end server.
You can do this by setting EXTRA_CLASSPATH before starting the notebook:
export EXTRA_CLASSPATH=path_to_the_first_jar:path_to_the_second_jar
DataFrame Partitions
Once we have successfully set up the JDBC drivers, we can use the read.jdbc API of DataFrame to load a particular table from the source. See the Spark API documentation for details.
The default configuration only requires you to specify:
- url, the JDBC url to connect to (e.g. “jdbc:teradata://hostname.mycompany.com/,,”)
- dbtable, either the table name or a query wrapped between parenthesis
- driver, the driver class name (e.g. “com.teradata.jdbc.TeraDriver”)
This will create one single partition and perform a SELECT * on all of the rows and columns through a single connection. This setting might be fine if the table is small enough but is not scalable for large data.
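For example, a minimal single-connection load might look like the following (the Teradata URL and driver are placeholders matching the examples above):

```scala
val props = new java.util.Properties()
props.setProperty("driver", "com.teradata.jdbc.TeraDriver")

// One partition, one connection: fine for small tables only
val df = sqlctx.read.jdbc(
  url = "jdbc:teradata://hostname.mycompany.com/",
  table = "<TABLE>",
  properties = props
)
```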
In order to parallelize the query we have two options:
- Partitioning by uniform ranges of a specified column; or
- Partitioning by custom predicates.
Partitioning By Uniform Ranges
We need to specify extra parameters:
- columnName, the column used for partitioning. The contents of this column must be numbers.
- lowerBound, the minimum value of the column we want to select.
- upperBound, the maximum value of the column we want to select.
- numPartitions, into how many sub-ranges we want to split. Note that even though Spark can distribute the collection over hundreds of partitions, the data warehouse may set limits on the number of parallel queries from the same user or on the allocated spool space. We suggest keeping this value low and repartitioning into more folds after the data has been loaded from the JDBC source.
- connectionProperties, specify optional driver specific properties for tuned optimizations.
Starting from Spark 1.5.0+ :
sqlctx.read.jdbc(
  url = "<URL>",
  table = "<TABLE>",
  columnName = "<INTEGRAL_COLUMN_TO_PARTITION>",
  lowerBound = minValue,
  upperBound = maxValue,
  numPartitions = 20,
  connectionProperties = new java.util.Properties()
)
Partitioning By Custom Predicates
In some cases there is no uniform numerical column to partition on. In other cases we might simply want to filter the data using custom logic.
To do this, specify an array of strings where each string represents a predicate to be inserted in the WHERE statement.
For example, let’s suppose we are interested in partitioning on a specific ranges of dates, we could write it as follows:
val predicates = Array(
  "2015-06-20" -> "2015-06-30",
  "2015-07-01" -> "2015-07-10",
  "2015-07-11" -> "2015-07-20",
  "2015-07-21" -> "2015-07-31"
).map { case (start, end) =>
  s"cast(DAT_TME as date) >= date '$start' " +
  s"AND cast(DAT_TME as date) <= date '$end'"
}

sqlctx.read.jdbc(url = "<URL>", table = "<TABLE>", predicates = predicates, connectionProperties = new java.util.Properties())
This will generate a bunch of SQL queries with a WHERE statement that looks like:
WHERE cast(DAT_TME as date) >= date '2015-06-20' AND cast(DAT_TME as date) <= date '2015-06-30'
See the Spark JDBC documentation for details.
Union of Tables
Suppose now that our raw data spans over multiple tables, each with the same schema. We could first map them individually and then concatenate them into a single DataFrame using the unionAll operator:
def readTable(table: String): DataFrame = ??? // load one table, e.g. with a sqlctx.read.jdbc call as shown above

List("<TABLE1>", "<TABLE2>", "<TABLE3>").par.map(readTable).reduce(_ unionAll _)
The .par is a Scala feature that simply means the individual readTable function calls can happen in parallel rather than sequentially. The Scala framework will automatically spin up one thread for each call, based on the idle CPUs.
Issues With Existing Architecture
The main problem with our existing methodology is that the Spark cache is volatile across different jobs. Even though Spark provides caching functionality, every time we restart the context, update the dependency jars or re-submit the job, the loaded data is dropped from memory and the only way to restore it is to reload it from the central warehouse.
The following is a chart showing the time to load different tables (12 partitions each) from our data warehouse into a Spark cluster of 6 nodes, as a function of the row count:
As we can see from the chart, the loading process may take minutes or hours depending on the data volume and how busy the database is. Considering that we have on average dozens of restarts per day, we cannot just rely on the Spark cache alone.
We would like to:
- Cache the raw DataFrame so we can iterate until we find the right mapping configuration.
- Cache the typed RDD for interactive exploratory analysis.
- Quickly fetch our intermediate results for responsive access and sharing the data between different applications.
In short, we want an in-memory storage system.
Tachyon as the Key Enabling Technology
Tachyon is an in-memory storage system that solves our issues and enables us to take the current deployment to the next level.
We set up Tachyon in our Spark cluster and configured no under file system (which may be Amazon S3, or HDFS, or other storage systems).
Since Tachyon is being used as an in-memory distributed file system, we can use it as storage for any text format and/or efficient data formats (such as Parquet, Avro, Kryo) together with compression algorithms (such as Snappy or LZO) to reduce the memory occupation.
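For instance, in Spark 1.5 a DataFrame can be written to Tachyon as Snappy-compressed Parquet by setting the SQL codec property first (the path is a placeholder):

```scala
// Codec options include "uncompressed", "snappy", "gzip" and "lzo"
sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy")
dataframe.write.parquet("tachyon://master_ip:port/mydata/mydataframe.parquet")
```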
To integrate with our Spark applications, we simply have to call the load/save APIs of both DataFrame and RDD and specify the path URL including the Tachyon protocol.
By having the raw data (whether as a Parquet-format DataFrame or Kryo-serialized case classes) immediately available on our Spark nodes at any time, we can now be agile and quickly iterate with our exploratory analysis and evaluation tests. We are now able to efficiently design our model and build our MVP directly from the raw source without having to face complicated and time-consuming data plumbing operations.
The following is the diagram of our workflow after deploying Tachyon and loading the data for the first time:
The orange arrows show stages in our workflow where the intermediate results/data are loaded/stored into Tachyon for convenient, in-memory access.
Configuring Tachyon
In our environment, we have configured Tachyon to use only the top memory tier and to work with the tmpfs mounts, typically available in unix systems under /dev/shm.
On the Tachyon master node:
1. We change the tachyon-env.sh conf file as following under the Linux settings:
export TACHYON_RAM_FOLDER="/dev/shm/ramdisk"
export TACHYON_WORKER_MEMORY_SIZE=${TACHYON_WORKER_MEMORY_SIZE:-24GB}
In the TACHYON_JAVA_OPTS we leave the default configuration with:
-Dtachyon.worker.tieredstore.level.max=1
-Dtachyon.worker.tieredstore.level0.alias=MEM
-Dtachyon.worker.tieredstore.level0.dirs.path=${TACHYON_RAM_FOLDER}
-Dtachyon.worker.tieredstore.level0.dirs.quota=${TACHYON_WORKER_MEMORY_SIZE}
See the Tachyon tiered storage documentation for details.
As for the under file system configuration, we leave it empty; see the Tachyon under storage documentation.
2. We copy the new configuration to each worker:
./bin/tachyon copyDir ./conf/
3. We format Tachyon:
./bin/tachyon format
4. We deploy Tachyon without mount option (does not require root access):
./bin/tachyon-start.sh all NoMount
The following is the architecture diagram of all of the involved components in our configuration:
Using Tachyon With Spark
Writing and reading Spark DataFrames to and from Tachyon are very simple.
To write a DataFrame to Tachyon:
dataframe.write.save("tachyon://master_ip:port/mydata/mydataframe.parquet")
Note that Parquet is the default format when saving a DataFrame.
To read a DataFrame from Tachyon:
val dataframe: DataFrame = sqlContext.read.load("tachyon://master_ip:port/mydata/mydataframe.parquet")
See the Spark 1.6.0 documentation for details.
Similarly, writing and reading Spark RDDs to and from Tachyon are very simple.
To write an RDD to Tachyon:
rdd.saveAsObjectFile("tachyon://master_ip:port/mydata/myrdd.object")
Make sure that the default serializer is Kryo for RDDs.
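Kryo can be made the default serializer when the Spark context is created, e.g.:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sc = new SparkContext(conf)
```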
To read an RDD from Tachyon:
val rdd: RDD[MyCaseClass] = sc.objectFile[MyCaseClass] ("tachyon://master_ip:port/mydata/myrdd.object")
Make sure that MyCaseClass is the same class as when the RDD was cached into Tachyon, otherwise it will throw a class version error.
Making the Impossible Possible
We sped up our Agile Data Science workflow by combining Spark, Scala, DataFrame, JDBC, Parquet, Kryo and Tachyon to create a scalable, in-memory, reactive stack to explore the data and develop high quality implementations that can then be deployed straight into production.
Thanks to Tachyon, we now have the raw data immediately available at every iteration and we can skip the costs of loading in terms of time waiting, network traffic, and RDBMS activity. Moreover, after the first ETL, we save the normalized and cleaned data in memory, so that the machine learning jobs can start immediately, allowing us to run many more iterations per day.
By configuring Tachyon to keep data only in memory, the I/O cost of loading and storing into Tachyon is on the order of seconds, which in our workflow scale is simply negligible.
Our workflow iteration time decreased from hours to seconds. Tachyon enabled something that was impossible before.
Future Developments
The presented methodology is still an experimental workflow. Currently, there are some limitations and room for improvement. Here, we present some of the limitations and potential for future work.
- Since we have not specified an under file system layer, Tachyon can only handle as much data as fits into the allocated space. The tiered storage feature in Tachyon can solve this issue.
- Setting up the JDBC drivers, the partitioning strategy and the case class mapping is a big overhead and is not very user-friendly.
- Memory resources are shared between Spark and Tachyon, so to avoid duplication and unnecessary garbage collection, some fine-tuning is required.
- If a Tachyon worker fails, the data is lost, since we configured no tiered storage nor under file system, and Tachyon does not know how to re-load the data from a JDBC source. This is fine for our use case, since our goal is an in-memory storage layer to use as a long-term cache, but it could be a very important improvement for future developments.
Nevertheless, we believe that the described methodology, combined with Tachyon, is a game-changer for effectively applying agile data science in large corporations.
***
About the authors:
Gianmario Spacagna is a Data Scientist at Barclays Personal and Corporate Bank, building scalable machine learning and data-driven applications in Spark and Scala.
Harry Powell leads the Data Science team at Barclays Personal and Corporate Bank.
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/Accelerate-In-Memory-Processing-with-Spark-from-Hours-to-Seconds-With-Tachyon | CC-MAIN-2021-39 | refinedweb | 2,688 | 51.48 |
gethugepagesize man page
gethugepagesize — Get the default huge page size
Synopsis
#include <hugetlbfs.h>
long gethugepagesize(void)
Description
The gethugepagesize() function returns the default huge page size used by libhugetlbfs. This will be either the system default, or a valid value set by the environment variable HUGETLB_DEFAULT_PAGE_SIZE.
If the system does not support any huge page sizes an error is returned.
Return Value
On success, the default huge page size is returned. On failure, -1 is returned and errno is set appropriately.
Errors
- ENOSYS
The system does not support huge pages.
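Example

A minimal program using gethugepagesize() (an illustrative sketch, not from the library's own documentation; compile and link against libhugetlbfs, e.g. with -lhugetlbfs):

```c
#include <stdio.h>
#include <hugetlbfs.h>

int main(void)
{
    long size = gethugepagesize();

    /* On failure -1 is returned and errno is set, so perror applies */
    if (size < 0) {
        perror("gethugepagesize");
        return 1;
    }
    printf("Default huge page size: %ld bytes\n", size);
    return 0;
}
```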
See Also
libhugetlbfs(7)
Authors
libhugetlbfs was written by various people on the libhugetlbfs-devel mailing list.
Referenced By
get_hugepage_region(3), get_huge_pages(3), hugetlbfs_find_path(3), hugetlbfs_unlinked_fd(3), libhugetlbfs(7).
March 7, 2012 | https://www.mankier.com/3/gethugepagesize | CC-MAIN-2018-05 | refinedweb | 123 | 51.44 |
Intro To C++/Making statement
Controlling with if and else
The C++ if keyword performs a basic conditional test: if something is true, then perform an action. Its syntax looks like this:
if(test-expression){statements-to-perform-when-true}
We may want to perform a different action if that condition is false. This is possible by appending an else clause. The if-else statement looks like this:
if(test-expression){statements-to-perform-when-true} else{statements-to-perform-when-false}
#include <iostream>
using namespace std;

int main(){
    int i=18;
    if(i>17){
        cout << "It's a nice temperature." << endl;
    }
    return 0;
}
#include <iostream>
using namespace std;

int main(){
    int i=4;
    if(i<15){
        cout << "Too cold, turn on the heater." << endl;
    }
    else if(i>=15 && i<25){
        cout << "It's a nice temperature." << endl;
    }
    else{
        cout << "Too hot! Please turn on the air conditioner" << endl;
    }
    return 0;
}
Controlling with switch
When we have multiple cases to check, using if-else statements repeatedly is not efficient. In this case, switch is more useful. The switch works in a different way: the value given in parentheses is compared against the case values in braces, and the statements of the matching case are executed. Each statement finishes with a semicolon, and each case ends with break;, which makes execution leave the switch block once the matching case's statements have run. The switch syntax is as follows:
switch(variable-value){
case value1: statements-to-be-executed;break;
case value2: statements-to-be-executed;break;
...................
case valuen: statements-to-be-executed;break;
default:statements-to-be-executed; }
#include <iostream>
using namespace std;

int main(){
    int season=2;
    switch( season ) {
        case 1: cout << season << ":spring with sprout";break;
        case 2: cout << season << ":cool summer with ice tea";break;
        case 3: cout << season << ":colorful autumn with fallen leaves";break;
        case 4: cout << season << ":snowy winter and hot chocolate";break;
    }
    return 0;
}
Looping
A loop is a piece of code in the program that repeats automatically. The three types of loops in C++ programming are the for loop, the while loop, and the do-while loop. Their syntaxes are as follows:

for(initialization; test-expression; update){statements-to-be-executed}

while(test-expression){statements-to-be-executed}

do{statements-to-be-executed} while(test-expression);
#include <iostream>
using namespace std;

int main(){
    cout << "Fibonacci sequence by for loop" << endl;
    int a=0, b=1;
    for(int i=0; i<10; ++i){
        cout << a << endl;
        int next = a + b;   // each term is the sum of the previous two
        a = b;
        b = next;
    }
    return 0;
}
#include <iostream>
using namespace std;

int main(){
    cout << "Fibonacci sequence by while loop" << endl;
    int a=0, b=1, i=0;
    while(i<10){
        cout << a << endl;
        int next = a + b;
        a = b;
        b = next;
        ++i;
    }
    return 0;
}
#include <iostream>
using namespace std;

int main(){
    cout << "Fibonacci sequence by do while loop" << endl;
    int a=0, b=1, i=0;
    do{
        cout << a << endl;
        int next = a + b;
        a = b;
        b = next;
        ++i;
    } while(i<10);
    return 0;
}
Declaring & Defining Functions
In C++, a function is a group of code that provides some functionality to the program. When a function is called from the main program, the statements in the function are executed. Functions make program code easier to read and maintain, and help several programmers work together. Tested functions can be reused. Each function needs to be declared before it is used, typically at the start of the program. The syntax of a function prototype declaration is as follows:
return-data-type function-name(argument-data-type list);
A function can be defined outside the main function, after its declaration. The syntax of a function definition looks like this:
return-data-type function-name(arguments-data-type list){statements-to-be-executed}
#include <iostream>
using namespace std;

float trianglearea(float, float);

int main(){
    float x,y,z;
    cout << "Enter the width of triangle area:" << endl;
    cin >> x;
    cout << endl << "Enter the height of triangle area:" << endl;
    cin >> y;
    z = trianglearea(x,y);
    cout << endl << "The dimension of triangle area:" << z << "sq.ft" << endl;
    return 0;
}

float trianglearea(float width, float height){
    return ((width/2)*height);
}
In the OOPs concepts guide, we learned that object-oriented programming is all about objects. The eight primitive data types byte, short, int, long, float, double, char and boolean are not objects. Wrapper classes are used for converting primitive data types into objects, for example int to Integer. Let's take a simple example to understand why we need wrapper classes in Java.
For example: while working with collections in Java, we use generics for type safety, like this: ArrayList<Integer> instead of ArrayList<int>. Integer is the wrapper class of the primitive type int. We use a wrapper class in this case because generics need objects, not primitives. There are several other reasons you would prefer a wrapper class to a primitive type; we will discuss them as well in this article.
Why we need wrapper class in Java
1. As I mentioned above, one of the reasons we need wrapper classes is to use them with the Collections API. On the other hand, wrapper objects occupy much more memory than primitive types. So use primitive types when you need efficiency and use wrapper classes when you need objects instead of primitive types.
The primitive data types are not objects so they do not belong to any class. While storing in data structures which support only objects, it is required to convert the primitive type to object first which we can do by using wrapper classes.
Example:
HashMap<Integer, String> hm = new HashMap<Integer, String>();
So for type safety we use wrapper classes. This way we are ensuring that the keys of this HashMap will be of Integer type and the values of String type.
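For instance, here is a short sketch (the class and method names are illustrative, not from the original article); since Java 5, autoboxing converts the int keys to Integer automatically:

```java
import java.util.HashMap;

public class WrapperMapDemo {
    // builds a map with int keys; each key is autoboxed to an Integer
    static HashMap<Integer, String> numberNames() {
        HashMap<Integer, String> hm = new HashMap<Integer, String>();
        hm.put(1, "One");   // the int literal 1 becomes Integer.valueOf(1)
        hm.put(2, "Two");
        return hm;
    }

    public static void main(String[] args) {
        System.out.println(numberNames().get(1)); // prints: One
    }
}
```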
2. Wrapper class objects allow null values while primitive data types don't.
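A quick sketch of the difference (the names here are illustrative):

```java
public class NullableDemo {
    // A wrapper reference may be null, e.g. for a "not yet known" value.
    // A primitive could not: "int count = null;" would not compile.
    static String describe(Integer value) {
        return (value == null) ? "no value yet" : value.toString();
    }

    public static void main(String[] args) {
        Integer unset = null;                             // allowed
        System.out.println(describe(unset));              // prints: no value yet
        System.out.println(describe(Integer.valueOf(7))); // prints: 7
    }
}
```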
Let's take a few examples to understand how the conversion works:
Wrapper Class Example 1: Converting a primitive type to Wrapper object
public class JavaExample{
    public static void main(String args[]){
        //Converting int primitive into Integer object
        int num=100;
        Integer obj=Integer.valueOf(num);
        System.out.println(num+ " "+ obj);
    }
}
Output:
100 100
As you can see, both the primitive and the object hold the same value. You can use obj in place of num wherever you need to pass the value of num as an object.
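For example, obj can be passed to any method that expects an Object. A sketch (the helper method below is hypothetical, not part of the original example):

```java
public class ObjectArgDemo {
    // a hypothetical helper that accepts any Object
    static String label(Object o) {
        return o.getClass().getSimpleName() + " = " + o;
    }

    public static void main(String[] args) {
        int num = 100;
        Integer obj = Integer.valueOf(num);
        System.out.println(label(obj)); // prints: Integer = 100
    }
}
```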
Wrapper Class Example 2: Converting Wrapper class object to Primitive
public class JavaExample{
    public static void main(String args[]){
        //Creating Wrapper class object
        Integer obj = new Integer(100);
        //Converting the wrapper object to primitive
        int num = obj.intValue();
        System.out.println(num+ " "+ obj);
    }
}
Output:
100 100 | https://beginnersbook.com/2017/09/wrapper-class-in-java/ | CC-MAIN-2018-05 | refinedweb | 421 | 52.09 |
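Since Java 5, both conversions also happen implicitly. This sketch relies only on standard autoboxing and unboxing, which the compiler translates into the same valueOf() and intValue() calls shown above:

```java
public class AutoboxDemo {
    public static void main(String[] args) {
        Integer obj = 100;  // autoboxing: compiler inserts Integer.valueOf(100)
        int num = obj;      // unboxing: compiler inserts obj.intValue()
        System.out.println(num + " " + obj); // prints: 100 100
    }
}
```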
def isOne[A](a: A) = a == 1
is a little bit odd. If you know it’s numeric, then you can do
def isOne[A: Numeric](a: A) = Numeric[A].toInt(a) == 1
instead. If it’s not numeric, then I’m not sure I get the point of the method.
For migration purpose, we can drop the usage of == in all case class code generation and in Scala collections before other actions.
Another idea is making == a placeholder, whose implementation must be introduced by explicit importing.
import numericEquality._

Double.NaN == Double.NaN // false

(Double.NaN, 1) == (Double.NaN, 1) // Compilation error because no NumericEquality for Tuple2

null == Double.NaN // Compilation error because no NumericEquality for Null
import structuralEquality._
Double.NaN == Double.NaN // true
(Double.NaN, 1) == (Double.NaN, 1) // true
null == Double.NaN // false
import referentialEquality._
Double.NaN == Double.NaN // Compilation error because Double is not an AnyRef
(Double.NaN, 1) == (Double.NaN, 1) // false
null == Double.NaN // Compilation error because Double is not an AnyRef
structuralEquality can be implemented as equals with null checking, which is useful in case class code generation and Scala collections.
I agree it’s silly, but the point is that someone can write it and it will behave differently from what they expect because of boxing which should be just an implementation detail.
Current:
// Scala
Set(1, 1.0).size // 1
Set(1, 1.0, 1L).size // 1
// Scala.js
Set(1, 1.0).size // 1
Set(1, 1.0, 1L).size // 1
1.isInstanceOf[Double] // true
1.isInstanceOf[Long] // false
// Java
Set<Object> set = new HashSet<>();
set.add(1);
set.add(1.0);
set.size(); // 2
set.add(1L);
set.size(); // 3
// Kotlin
setOf(1, 1.0).size // 2
setOf(1, 1.0, 1L).size // 3
// Kotlin-js
setOf(1, 1.0).size // 1
setOf(1, 1.0, 1L).size // 2, because Long in Kotlin is not mapped to any JavaScript object.
(1 as Any) is Double // true
(1 as Any) is Long // false
I think getting rid of cooperative equality is a good idea, because the behavior above in Scala would become the same as in Java and Kotlin on the JVM.
Though the new behavior above for collections in the JVM and JavaScript becomes inconsistent, it is not bad to align with the platform (JVM/JavaScript) collection behavior.
As some people have commented, reporting 1 == 1.0 as a compile error is a good idea IMHO. Kotlin doesn't allow you to compare two AnyVal instances (though Kotlin doesn't have this type concept).
// kotlin
1 == 1.0 // compile error
1 as Any == 1.0 // false
1 == 1.0 as Any// false
1 as Int == 1.0 // compile error
fun <T> isOne(t: T): Boolean {
return t == 1
}
isOne(1) // true
isOne(1.0) // false
I think this is a better design, because if we can't compare 1 to 1L, people will not be confused about why 1 == 1L is inconsistent with (1: Any) == (1L: Any), or whether Set(1, 1L).size is 1 or 2. And we can get rid of 'A' == 65 as sjrd mentioned.
Though I know 1 == 1L compiles has better semantic consistency, the Kotlin way looks better in practice.
If some people still want to compare 1 to 1L, we can let he/she import something, and use this feature. We can also group this feature and any2stringadd into the same package to tell people the features in this package is not strict.
BigInt(1) == 1 // true looks like a bug to me, because BigInteger.valueOf(1) == 1 is false in Scala.
I think we need to look at it in another way. It’s not boxing that is to blame, but the interaction of overloading with type abstraction. Have a look at the example with === I gave towards the beginning of the thread. It behaves in exactly the same way as == without co-operative equality, yet there is no boxing. So this is a fundamental property (or shortcoming, depending how you want to look at it) of systems that mix type abstraction and overloading.
I agree that it would probably be good to get rid of cooperative equality.
But I also agree with critiques of the fact that (1 == 1L) != ((1:Any) == 1L). This looks pretty surprising to me, so I’m in favor of disabling or at least warning against the numeric == overloads. As long as these overloads were just an optimization of the more general cooperative ==, they were fine –– but without cooperative equality, their semantics becomes problematic.
It doesn’t matter that it makes perfect sense or that it’s regular from the point of view of language experts. It’s a problem because it flies in the face of intuition. It’s a problem for the same reason that the case classes with === described above are an anti-pattern. People should be discouraged from using anti-patterns (such as unrestricted implicit conversions, which are now behind a feature flag), so lifting that anti-pattern into the language does not seem to be going in the right direction.
In IntelliJ, I can’t even control-click on == in 1 == 0.5. It is (or at least, it’s close to being) a built-in method, so a reasonable amount of magic is expected, but surprising behaviors are not. I’d wager that most current Scala users, irrespective of expertise level, don’t even know that these overloads exist. The only reason I knew is because I’ve been working on a metaprogramming framework where that turned out to be relevant. That I only learned about them through metaprogramming is a good thing, because it means that overloading == was truly an implementation detail that people generally need not worry about. Making that gruesome detail suddenly relevant to all Scala users by making its behavior counter-intuitive will only work to add to the mental overhead that Scala developers already have to carry while working with the language.
Personally, I like statically-typed languages because the compiler and libraries can relieve some of the cognitive burden from my mind, not because they can add to it (via surprising overloads).
Going away from this might be ‘ok’ for library code, but not for user code. It’s a punch in the face of dynamic-feel applications where you don’t want the user to be confronted with 1f versus 1.0 versus 1.
Any user code that does not comprehend the difference between 1.0f, 1.0 and 1 is bound to be broken. I see making
def f: Float
f == 1L
fail to compile be a very, very good thing. For example,
scala> def f: Float = 1.2f - 1
f: Float
scala> f * 5 == 1
res5: Boolean = false
scala> f * 5
res6: Float = 1.0000002
equality on floating point numbers is an anti-pattern for most users, who don’t know enough about them to avoid traps like the above.
Users will be confronted with 1.0f vs 1 whether they like it or not.
x == 150L returns true in Java. It won’t if one side uses new Long though. Auto-boxing uses Long.valueOf and 150 is guaranteed by the spec to be a flyweight instance because its magnitude is small. It would break if the Long was large enough, and this is something that is a problem in Java.
Anyhow, most of the problems with java are due to auto-boxing peculiarities that are not relevant for Scala if implemented well.
you are thinking that a user is someone writing a regular Scala program. But that’s not the case in a REPL, in a DSL, in a Jupyter-type worksheet etc. And I was specifically talking about float, double, int, i.e. leaving away an f suffix.
best, .h.h.
Absolutely. The boxing side of things is behind the scenes and not fundamental. In Java’s case, auto-boxing sometimes causes a surprise switch from reference equality to numeric equality, Scala can avoid that.
We know a few important things:
This leads to a simple solution:
final def == (that: Any): Boolean =
if (null eq this) null eq that else this equals that
And thus, for AnyRef, == is simply short-hand for null-safe equivalence.
And here is where the dilemma lies. For reference types, the ‘secondary’ notion of equivalence is a valid equivalence relation. This makes a default implementation for equals simple. However, IEEE numeric equivalence (and partial order) is not suitable for collection keys, be it for hash-based or order based key identification. There is no default implementation that can satisfy both notions of equivalence for numeric values.
If we want to avoid Java’s issues where == means one thing in one place, and something entirely different in another (numerics vs references; numerics get boxed and suddenly == means something else) then the choice for == on numerics should be the same as for references: a null-safe equivalence relation. AnyVal types are not null, but are sometimes boxed and could be null at runtime, so the null-safe aspect is at least important as an implementation detail. But otherwise, it can be the same as with AnyRef, we just need to specify equals for each numeric type N as
def equals(that: Any): scala.Boolean = that match {
case that: N => compare(that) == 0
case _ => false
}
Or in English, equals is consistent with the total ordering if the type matches, and false otherwise. On the JVM this is effectively the contents of each Java boxed type’s equals method.
The above defines == to be a null safe equivalence. This is incompatible with IEEE’s == for floating point numbers. It however is consistent with IEEE ==, <, <= for integer types. I propose that we implement the IEEE754 total ordering for floating point numbers in these cases ( What Java’s compareTo on the boxed types do). In short, NaN == NaN. After all, most users would expect that. Also, it is very fast – comparison can be done by converting to integer bits and then comparing the integers - at the assembly level just using a two’s complement integer compare op.
I would not find it strange to specify that ==, <=, >=, <, and > in scala represent equivalence and total order by default. It is what we do for reference types, why not numerics? That breaks away from Java but I suspect it is more intuitive to most users and more consistent in the language IMO. It is certainly more useful for collections.
For users who are more advanced with floating point, they can pull out the IEEE tools. It is only with floating point types where there is a gap and we need parallel notions of equality to go with partial order.
The users that truly want IEEE semantics for numeric operations on floating point values must know what they are doing to succeed in writing an algorithm that depends on NaN != NaN anyway. For them, switching to some other syntax for IEEE will not be difficult. Perhaps there are different symbols, perhaps different names, or perhaps an import will switch a scope to the IEEE definitions.
1.0 == 1L
true
false
(1.0: Any) == 1L
(1.0: Any).equals(1L)
Double.NaN == Double.NaN
Double.NaN.equals(Double.NaN)
1.0F < Float.NaN
1.0F > Float.NaN
Set(1.0F, 1, 1L).size
1
3
Map(Double.NaN -> "hi").size
Map(Double.NaN -> "hi").get(Double.NaN)
None
Some(hi)
TreeMap(Double.NaN -> "hi").get(Double.NaN)
(1.0, 1) == (1, 1.0)
Some(1.0) == Some(1)
List(1, 1.0) == List(1, 1)
BigInt(1) == 1
UnsignedInt(1) == 1
Left(1) == Right(1)
List(1) == Vector(1)
The proposal boils down to a couple rules for consistency:
Two values of different nominal types are never equal. This holds for case classes today, the proposal makes it work consistently with tuples, case classes, and plain numeric types. The compiler can error when a result is guaranteed to be false due to mismatched types. It would be consistent with Valhalla. I don’t have an opinion on what to do with List(1) == Vector(1), that is more collection design than language.
For use cases where we want to compare across types in a cooperative way (perhaps the DSL / worksheet use case mentioned) one can either provide different methods, or use an import to switch the behavior. Or perhaps there are better ideas.
equals is consistent with == This leaves the definition for == as short-hand for null-safe equals – an equivalence relation – consistent with Ordering for a type. The consequence is that NaN == NaN and the default behavior is conformant to use of values as collection keys. Every other option I thought of was just far more inconsistent overall. Give up on NaN != NaN and the rules are clean and consistent. Otherwise you have to carve out an exception for floating point numbers and have collections avoid using == in some cases, or make equals inconsistent with ==.
Combined, these two rules would make it much simpler to extend numeric types, and add things like UnsignedInt – there is no quadratic explosion of complexity if equivalence is not cooperative.
kudos for making the table, which really is the most important thing on this thread
imo the desired outcome would be for those samples to not compile, but the other result is okay too
i’m sure this would not have much impact even on unityped programs (contentious NaN cases included)
isn’t this trivially implemented by not exposing .equals and requiring a Equal typeclass and a === op? as per scalaz.
That would require each collection to store the Equal typeclass, which is impossible for java.util collections.
In this tutorial we will learn how to compare two ArrayLists. We will use the contains() method to check whether elements of one ArrayList are present in the other.
public boolean contains(Object o)
It returns true if the list contains the Object o else it returns false.
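A minimal sketch of contains() on its own (the class name and list contents are illustrative):

```java
import java.util.ArrayList;

public class ContainsDemo {
    static ArrayList<String> greetings() {
        ArrayList<String> al = new ArrayList<String>();
        al.add("hi");
        al.add("bye");
        return al;
    }

    public static void main(String[] args) {
        System.out.println(greetings().contains("hi"));    // prints: true
        System.out.println(greetings().contains("hello")); // prints: false
    }
}
```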
Example:
In this example we have two ArrayLists, al1 and al2, of String type. We have compared these ArrayLists using the contains() method and stored the comparison results in two further ArrayLists (al3 and al4).
package beginnersbook.com;
import java.util.ArrayList;
public class Details {
    public static void main(String [] args) {
        ArrayList<String> al1 = new ArrayList<String>();
        al1.add("hi");
        al1.add("How are you");
        al1.add("Good Morning");
        al1.add("bye");
        al1.add("Good night");

        ArrayList<String> al2 = new ArrayList<String>();
        al2.add("Howdy");
        al2.add("Good Evening");
        al2.add("bye");
        al2.add("Good night");

        //Storing the comparison output in ArrayList<String>
        ArrayList<String> al3 = new ArrayList<String>();
        for (String temp : al1)
            al3.add(al2.contains(temp) ? "Yes" : "No");
        System.out.println(al3);

        //Storing the comparison output in ArrayList<Integer>
        ArrayList<Integer> al4 = new ArrayList<Integer>();
        for (String temp2 : al1)
            al4.add(al2.contains(temp2) ? 1 : 0);
        System.out.println(al4);
    }
}
Output:
[No, No, No, Yes, Yes]
[0, 0, 0, 1, 1]
What is the logic in above code?
If an element of ArrayList al1 is present in al2, then "Yes" is added to al3 and 1 to al4. However, if the element is not present, "No" is stored in al3 and 0 in al4.
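If you only want the common elements themselves rather than Yes/No flags, the standard retainAll() method is an alternative worth knowing. Here is a sketch (the class name and list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RetainDemo {
    // returns a new list holding the elements of a that also occur in b
    static List<String> common(List<String> a, List<String> b) {
        List<String> result = new ArrayList<String>(a);
        result.retainAll(b);  // drops every element not contained in b
        return result;
    }

    public static void main(String[] args) {
        List<String> al1 = Arrays.asList("hi", "bye", "Good night");
        List<String> al2 = Arrays.asList("Howdy", "bye", "Good night");
        System.out.println(common(al1, al2)); // prints: [bye, Good night]
    }
}
```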
Hello sir,
What is temp here??
it's only a reference variable; it holds each element of the ArrayList in turn during the iteration.
Hi Chaithanya,
can you explain these following code..Actually I didn’t understand what you are trying to explain..
ArrayList al3= new ArrayList();
for (String temp : al1)
al3.add(al2.contains(temp) ? “Yes” : “No”);
System.out.println(al3);
Thanks in Advance
That's the enhanced for loop introduced in Java 5.0.
A new ArrayList is initialized as al3.
Each element of the al1 ArrayList is placed in temp by the for-each loop.
Then the content of temp is compared with the second ArrayList al2:
if present, "Yes", else "No". This is done using the ternary conditional.
That "Yes" or "No" is then added to al3 and finally the output is displayed.
How can we use Question mark and colon between yes and no without using inverted commas?
Secondly error comes when i remove question mark. | http://beginnersbook.com/2013/12/how-to-compare-two-arraylist-in-java/ | CC-MAIN-2016-50 | refinedweb | 407 | 60.01 |
27 March 2012
One of the difficulties in learning JavaScript for anyone coming from a classical object-oriented programming (OOP) language background is adjusting to how objects are defined and extended. Obviously, JavaScript has neither a traditional concept of classes nor classical inheritance. Many JavaScript frameworks try to overcome this deficiency by providing a syntax for defining classes and methods for handling inheritance and composition. CoffeeScript and similar languages that compile to JavaScript also provide a more traditional syntax for handling this as well. Clearly, despite assertions that trying to force JavaScript into a classical object model isn't really a good idea, the issue keeps coming up.
I recently came across Minion, a solution created by Taka Kojima. It is a lightweight JavaScript library for defining classes with support for classical style object-oriented inheritance and composition, among other features. While it currently requires Node.js to compile and can work out of the box with Node, you can use Minion in the browser. In this tutorial, I revisit the simple example and model that I created for a JavaScript prototype inheritance tutorial that I posted previously on my blog. I recommend that you read this post, in which I describe how I built simple Portal turret objects, because it will help you better understand the example used in this article. This article covers how I rebuilt the example with Minion and how this approach compares with a straight JavaScript (that is, framework-less) solution.
To use Minion in your browser-based application, you first need to download it via the Node Package Manager (npm) and compile it via Node.js. Now that npm comes with Node and Node has easy-to-use installers even for Windows, this is pretty simple to accomplish.
Use npm to download Minion:

npm install minion -g

Then compile it:

minion src minion.js
This will generate the JavaScript file, minion.js (a non-minified version), that you can copy to your scripts folder and include in your web application. See the Minion getting started guide for additional details, including how to generate a minified version. Minion also gives you the option to minify the JavaScript containing your classes when finished, though I do not cover that in this tutorial.
The most basic class syntax in Minion simply has a property, an initialization method, and another basic method. The following class (Weapon.js) doesn't do much but it will serve as a basis for demonstrating how Minion handles inheritance later on.
minion.define("portal", {

    Weapon : minion.extend("minion.Class", {

        fireType : "auto",

        init: function(){
        },

        getFireType: function(){
            return this.fireType;
        }

    })

});
Classes are created using the minion.define() function, in which you define the namespace (in this case it is "portal") and then the class. Every class in Minion must, at some level, extend the base class, which is named minion.Class. Inside the minion.extend() call, you see a fairly typical way of writing pseudo-classes in JavaScript. Much like traditional OO languages, Minion expects that every class is written in its own file and provides mechanisms to let you organize these in folders much the way you might see classes organized in ActionScript or Java.
As I said, at some level, every class needs to inherit from minion.Class, but your classes can inherit from each other provided this requirement is fulfilled somewhere up the chain. For example, my MachineGun.js and HeatSeekingMissile.js classes, which provide the weaponry of my Portal turrets, both inherit from Weapon.js above. Here is my MachineGun.js class:
minion.define("portal", {

    MachineGun : minion.extend("portal.Weapon", {

        weaponType : "machine gun",

        init: function(){
            this.__super();
            this.getFireType();
            this.weaponType += " (" + this.fireType + ")";
        },

        getFireType: function(){
            this.fireType = this.__super();
        }

    })

});
Note that not much changed from the prior class example except that this class extends portal.Weapon. To access the super class, you call the __super() method from within your class methods. In init() this will invoke the init() method of the super class; in getFireType() it will invoke getFireType() of the super class and so on. You can pass arguments with your __super() call as well. I didn't find a means of directly accessing the super properties other than via a method call nor a way to call a super method other than the one of the same name. Still there are some fairly easy and obvious workarounds to this by building accessor methods for the values you require.
Minion can also handle classes that have one or many instances of another class. For example, my Turret.js class comprises a weapon, which by default is a machine gun (MachineGun.js). Here is the code for this class:
minion.define("portal", {

    require : [
        "portal.MachineGun"
    ],

    Turret : minion.extend("minion.Class", {

        laserEye : true,
        isEnemyInSight : false,
        isFiring : false,
        isKnockedOver : false,

        motionDetectedArr : ["Target acquired","There you are","I see you","Preparing to dispense product","Activated"],
        knockedOverArr : ["Critical error","Shutting down","I don't hate you","Hey, hey, hey","Malfunctioning"],
        ignoredArr : ["Are you still there"],

        machineGun : null,

        init: function(){
            this.machineGun = new this.__imports.MachineGun();
        },

        arm: function() {
            return this.machineGun.weaponType + " armed";
        },

        fire : function(){
            this.isFiring = true;
            console.log("firing");
        },

        motionDetected : function(){
            if (this.isKnockedOver) {
                return "I am knocked over";
            }
            this.isEnemyInSight = true;
            this.fire();
            return this.motionDetectedArr[Math.floor(Math.random() * this.motionDetectedArr.length)];
        },

        knockedOver : function() {
            this.isFiring = false;
            this.isEnemyInSight = false;
            this.isKnockedOver = true;
            return this.knockedOverArr[Math.floor(Math.random() * this.knockedOverArr.length)];
        },

        ignored : function(){
            if (this.isKnockedOver) {
                return "I am knocked over";
            }
            this.isEnemyInSight = false;
            this.isFiring = false;
            return this.ignoredArr[Math.floor(Math.random() * this.ignoredArr.length)];
        }

    })

});
While the code for this class is rather long, it really doesn't stray much from the prior examples and is, in fact, quite similar to the code from my framework-less sample. Because Turret is composed of MachineGun, I needed to add the snippet near the top that tells Minion to require the portal.MachineGun class. You can specify any number of required classes here in the same manner. Once this is done, I can access a special scope named __imports that contains the imported classes. You can see that I assign this.machineGun as a new MachineGun() in this way within the init() function:
this.machineGun = new this.__imports.MachineGun();
Now that my classes are complete, I want to use them in the same sample page as my framework-less example. Most everything about the code remains the same except for loading in the classes via Minion. Here is the HTML:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Minion Sample</title>
<script src="js/minion.js"></script>
<script>
    var rTxt, turret, heatSeekingMissile;

    function load() {
        rTxt = document.getElementById('responseTxt');
        minion.configure({
            classPath : "com"
        });
        minion.require(
            ["portal.Turret", "portal.HeatSeekingMissile"],
            function(Turret, HeatSeekingMissile){
                turret = new Turret();
                rTxt.value = turret.arm();
                heatSeekingMissile = new HeatSeekingMissile();
            }
        );
    }

    function motionDetected() {
        rTxt.value = turret.motionDetected();
    }

    function knockedOver() {
        rTxt.value = turret.knockedOver();
    }

    function ignored() {
        rTxt.value = turret.ignored();
    }

    function modTurret() {
        turret.heatSeekingMissile = heatSeekingMissile;
        turret.arm = function() {
            return this.heatSeekingMissile.weaponType + " and " + this.machineGun.weaponType + " armed";
        };
        rTxt.value = turret.arm();
    }
</script>
</head>
<body onload="load()">
    Response: <input name="responseTxt" id="responseTxt" type="text" size="60" /><br />
    <input type="button" name="motionBtn" value="Motion Detected" onclick="motionDetected()"/>
    <input type="button" name="knockedBtn" value="Knocked Over" onclick="knockedOver()"/>
    <input type="button" name="ignoredBtn" value="Ignored" onclick="ignored()" />
    <input type="button" name="missilesBtn" value="Heat Seeking Missiles!" onclick="modTurret()" />
</body>
</html>
Take a look at the load() function. It defines for Minion the classpath for my JavaScript files, which is just the relative location of the base file folder containing all my class scripts. In this case, I've placed them in a folder named com. Next, it informs Minion to require any classes that my application needs, which in this case is simply the Turret and HeatSeekingMissile classes (the latter for reasons I'll cover momentarily).
Now, despite being a more classical object-oriented way of defining classes, inheritance, and composition, Minion does not change the dynamic nature of JavaScript. In my prior example, I illustrated this by modding a heat seeking missile onto my Turret. As you can see in the modTurret() function above, I can still dynamically add heatSeekingMissile objects to my class defined via Minion.
In my testing I ran into an interesting issue. Specifically, the loaded JavaScript was cached by the browser (Chrome), so my changes were not reflected. I regularly needed to clear the cache after changes to the JavaScript classes for them to take effect. This can be corrected, however, by appending a timestamp to file calls via the configure() method. Simply modify the configure method shown above as follows:
minion.configure({ classPath : "com", fileSuffix : Date.now() });
This article has not touched on all the features of Minion, just the basics you'd need to build your classes. The framework provides a means of defining static properties and methods, creating singletons and static classes, and publishing and subscribing to events and notifications triggered within your model. You can get more details about these in the getting started guide. While Minion is relatively new, it seems pretty robust as a start and the author informs me that it is in use in some large projects, including one for Toyota. Still, the documentation was limited as I developed these classes, so even this simple example did take some trial and error. I do think it is worth checking out if you are looking for a way to structure your JavaScript application in a more classical OO manner, though I am staying out of the debate on the value of this.
If you would like to run the example from this tutorial, you can find it on my site here. The source code is available with the sample files for this article.
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe. | http://www.adobe.com/devnet/html5/articles/pseudo-classical-object-oriented-programming-in-javascript-with-minion.html | CC-MAIN-2013-48 | refinedweb | 1,665 | 50.02 |
findfiles - Utility to locate files containing specific content
findfiles [switches]
Very often when you are programming in any programming or scripting language, you want to find out how a particular function works or whether a particular property is settable, or any of a number of other questions. In many cases, you can find the answers to your questions by looking at the source code of the application or tool you're using. This is sometimes referred to as "code shopping," particularly when what you really hope to find is a method that does exactly what you want to do. The PythonCard findfiles tool is designed to support you in these efforts. Type in a string for which to search, tell findfiles the directories (yes, you can have more than one) in which to search for files containing that string, and send findfiles off to locate files with that specific content. Scroll through the list of files, each with a line reproducing part of the located line for each occurrence in the file, find the one you think is what you are looking for, and double-click the line. Voila! The PythonCard codeEditor tool opens and scrolls instantly to the line you've selected.
-p      Show property editor
-m      Show message watcher
-l      Enable logging
-s      Show shell
-n      Show namespace viewer
-d      Show debug menu
The findfiles utility uses classic Unix grep (regular expression) searches. The grep utility uses a technique called regular expression matching to locate information. In regular expressions, some characters have a special meaning. If you want to search for any of these special characters in the strings you supply in findfiles, you’ll have to escape them by preceding them with a backward slash (\) character. While there are many such characters in regular expressions, the ones with which you will need to be most careful are: question mark (?), asterisk (*), addition/concatenation operator (+), pipe or vertical bar (|), caret (^) and dollar sign ($). To search for a dollar sign in the target directories, for example, put "\$" into the search field. (Putting in a $ by itself will crash findfiles fairly reliably.) On a Debian system, you can see the manpages for grep(1) or regex(7) for more information on grep and regular expressions.
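As a quick illustration (in Python, the language PythonCard itself is written in — this snippet is generic and not part of findfiles), the standard re module can perform this escaping for you rather than doing it by hand:

```python
import re

# A literal search string containing regex metacharacters:
literal = "total$ (net)*"

# re.escape() backslash-escapes every metacharacter, so the
# resulting pattern matches the literal text only:
pattern = re.escape(literal)

# The escaped pattern finds the literal string safely:
assert re.search(pattern, "row: total$ (net)* end") is not None
```

Unescaped, the same string would be interpreted as a pattern — the $, (, ), and * would all take on their regular-expression meanings.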
Although it is considered to be stable, this is still development-level software. Please report bugs in this or any PythonCard component to the Debian Bug Tracking system using reportbug(1).
This manpage was written by Kenneth J. Pronovici <pronovic@debian.org>, for use by the Debian project. Content was based on previously-existing PythonCard documentation in other forms.
codeEditor(1), resourceEditor(1), | http://huge-man-linux.net/man1/findfiles.html | CC-MAIN-2018-47 | refinedweb | 438 | 59.33 |
1 #!/usr/bin/python -tt
2 # vim:et:ai:sw=4
3 # $Header: /Users/collin/pycalc/RCS/calc.py,v 0.1 2008/01/10 01:11:34 collin Exp $
4 #
5 # simple calculator. Read and process an expression per line.
6 import sys
7
8 def do_one_expression(expr):
9 # For now, let us just see if we can print a line
10 print(aline)
11 return
12
13 if __name__ == '__main__':
14 for aline in sys.stdin:
15 do_one_expression(aline)
16 sys.exit(0)
17
18 # $Log: calc.py,v $
19 # Revision 0.1 2008/01/10 01:11:34 collin
20 # initial revision. this one double-spaces the input.
21 #
22 # $EndLog$
OK, let me go through this. You can safely ignore lines 1-5.
I want to read all the lines from "stdin" -- i.e., the sort of usual input for the program -- which I remembered in Python is called "sys.stdin"; hence I have to type "import sys".
Then I write the sort of workhorse routine, "do_one_expression" -- for this first version, this routine will simply print its input parameter and return. Remember, I'm just re-learning the syntax.
Finally, lines 13-16 will send "aline" to the workhorse routine, until we run out of input, at which point we exit.
What is that "__name__" business on line 13? Not needed here, strictly speaking, but this is useful if you ever want to run the Python debugger on the code. Line 14 is Python's way of iterating over all the lines in the standard input.
Line 16 is probably superfluous, but it's my habit and it doesn't seem to hurt anything.
So I ran this program, feeding it a file that looked like this:
line1
two
number three

and it gave me this:

line1

two

number three

That's right, it was double-spaced.
I remembered then that when you read from sys.stdin the way I do here, you get the line with its terminating newline character. Since the trailing '\n' was the obvious culprit for the double-spacing, instead of "do_one_expression(aline)" I tried saying "do_one_expression(aline.rstrip())" but nothing changed.
What happened? Take a look at line 10; it prints the (global) variable "aline" -- not the parameter "expr"! Having fixed that, the program now looks like this:
1 #!/usr/bin/python -tt
2 # vim:et:ai:sw=4
3 # $Header: /Users/collin/pycalc/RCS/calc.py,v 0.2 2008/01/10 01:17:25 collin Exp $
4 #
5 # simple calculator. Read and process an expression per line.
6 import string
7 import sys
8
9 def do_one_expression(expr):
10 # For now, let us just see if we can print a line
11 print(expr)
12 return
13
14 if __name__ == '__main__':
15 for aline in sys.stdin:
16 do_one_expression(aline.rstrip())
17 sys.exit(0)
18
19 # $Log: calc.py,v $
20 # Revision 0.2 2008/01/10 01:17:25 collin
21 # Solved two problems: First, the subroutine do_one_expression was
22 # printing aline, not its parameter! Second, we now kill trailing
23 # blanks from the string read.
24 #
25 # Revision 0.1 2008/01/10 01:11:34 collin
26 # initial revision. this one double-spaces the input.
27 #
28 # $EndLog$
By the way, the lines with the yellow background are lines that were added or changed from the previous version.
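Since version 0.2 leans on rstrip(), here is a quick standalone check of what it actually strips -- trailing whitespace, including the newline that reading from sys.stdin leaves on each line:

```python
line = "123 456 +\n"

# rstrip() with no argument removes all trailing whitespace,
# including the newline:
assert line.rstrip() == "123 456 +"

# Leading whitespace is left alone (strip() would remove both ends):
assert "  spaced  \n".rstrip() == "  spaced"
```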
Okay -- now it can read lines and print them out without double-spacing them. How about doing something useful? Let's see if we can break the line into tokens. Since we are not going to do anything very fancy, maybe something like the string routine "split" will work. If you're not sure (as I wasn't) whether "split" would handle multiple spaces or tabs, you could try this:
% python
Python 2.3.5 (#1, Aug 22 2005, 22:13:23)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1809)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> " oneword after_tab after_spcs 17 21 ".split()
['oneword', 'after_tab', 'after_spcs', '17', '21']
>>>
Cool. It even strips leading and trailing whitespace! So if I put in a string like "123 456 +" then we should be able to "split" it and get an array containing "123", "456", and "+".
We could do something like "atoi(3)" on each of the numeric strings (to convert strings like "123" into a number, i.e., 123), and look at the operator ("+" in this case) to decide what to do with the numbers.
I thought I'd use a sort of RPN syntax, so that I could put only numbers on the stack (rather than numbers and operators), and I wouldn't have to deal with parentheses. So the flow for do_one_expression would be something like this:
split the expression into 'tokens'
for all tokens in the expression (the line):
if it's numeric
transform into a number
push it onto the stack
if it's "+"
pop 2 numbers off the stack
add them
push the sum onto the stack
if it's "-"
do the analogous operations...
When tokens exhausted, print the top item from stack.
I looked in the Python library reference manual (in your distribution or google it) for "atoi" and found:

    Deprecated since release 2.0. Use the int() built-in function.

So that's what I'll use for converting strings to numbers. Here's the simple integer calculator:
1 #!/usr/bin/python -tt
2 # vim:et:ai:sw=4
3 # $Header: /Users/collin/pycalc/RCS/calc.py,v 0.3 2008/01/10 01:35:50 collin Exp $
4 #
5 # simple calculator. Read and process an expression per line.
6 # Make that an RPN calculator. No (parens)
7
8 import string
9 import sys
10
11 def do_one_expression(expr):
12 tokens = expr.split()
13 nstk = []
14 for atoken in tokens:
15 if atoken.isdigit():
16 nstk.append(int(atoken))
17 continue
18 if atoken == '+':
19 second = nstk.pop()
20 first = nstk.pop()
21 nstk.append(first + second)
22 continue
23 if atoken == '-':
24 second = nstk.pop()
25 first = nstk.pop()
26 nstk.append(first - second)
27 continue
28 # If we get here, unsupported operator.
29 print "** Unknown operator: '" + atoken + "'; ignored."
30 continue
31 # Done with tokens.
32 print "Problem:" , expr
33 print " answer:", nstk.pop()
34 return
35
36 if __name__ == '__main__':
37 for aline in sys.stdin:
38 do_one_expression(aline.rstrip().strip())
39 sys.exit(0)
40
41 # $Log: calc.py,v $
42 # Revision 0.3 2008/01/10 01:35:50 collin
43 # this one does addition and subtraction, integers only
44 #
45 # Revision 0.2 2008/01/10 01:17:25 collin
46 # Solved two problems: First, the subroutine do_one_expression was
47 # printing aline, not its parameter! Second, we now kill trailing
48 # blanks from the string read.
49 #
50 # Revision 0.1 2008/01/10 01:11:34 collin
51 # initial revision. this one double-spaces the input.
52 #
53 # $EndLog$
As you can see, lines 12-33 form the bulk of the changes. Line 12 splits the line into tokens, as described earlier. The stack is kept in a Python "list", which is defined and initialized at line 13. (It gets newly minted every time do_one_expression gets called.)
Line 15-17 handle the case where the "token" is a number. I determined (by experiment) that the string function "isdigit" (line 15) returns
True when the entire string consists only of digits. So if atoken is all-numeric, it represents a number, and so int(atoken) is its numeric value. We push it onto the end of the list by calling nstk.append() (line 16). The continue statement at line 17 says that's it for this pass through the loop, so we go back to the top of the loop.
Lines 18-22 process the addition operation. If we see that the present token is a '+', then we pop two elements off the stack (lines 19-20), and push (or append) their sum onto the stack in line 21.
Now in 19-20 it doesn't matter which order I pop the elements off the stack, but in lines 24-25 it does. Imagine how this code handles the expression "7 3 -":
- we push 7 onto the stack;
- we push 3 onto the stack;
- when we see the "-" we pop 3... we set second = 3, and likewise set first = 7;
- then, executing line 26, we append (or "push") first-second (i.e., 7-3→4) onto the stack.
Because addition is commutative, lines 19-21 could have been collapsed into a single line:

    nstk.append(nstk.pop() + nstk.pop())

But since subtraction is not commutative, it would not be safe to replace lines 24-26 with such a line.
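To make the ordering point concrete, here is the pop sequence for "7 3 -" as a standalone Python sketch:

```python
nstk = []
nstk.append(7)       # push 7
nstk.append(3)       # push 3

second = nstk.pop()  # pops 3, the most recently pushed value
first = nstk.pop()   # pops 7

# first - second is the intended 7 - 3 = 4; popping in the
# opposite order would have computed 3 - 7 = -4 instead.
assert first - second == 4
```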
Line 29 handles the case where neither a number nor a known operator was supplied. Rather than completely giving up, though, the program continues. Hence if the user types "7 4 should_be_3 -" we'll still eventually print the answer.
Next time I'll describe how to make the program handle floating-point numbers, and also how to make it a little more interactive. | http://collinpark.blogspot.com/2008/01/simple-python-program-part-1-revised.html | CC-MAIN-2018-13 | refinedweb | 1,493 | 76.62 |
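As a preview of the floating-point change (this is just one possible approach, not necessarily the one the next installment uses), the isdigit() test could be replaced by attempted conversion:

```python
def to_number(token):
    """Return the numeric value of token, or None if it isn't a number."""
    try:
        return int(token)      # try integers first, to keep them exact
    except ValueError:
        pass
    try:
        return float(token)    # fall back to floating point
    except ValueError:
        return None            # not a number at all: must be an operator

assert to_number("123") == 123
assert to_number("2.5") == 2.5
assert to_number("+") is None
```

This also happens to accept signed input like "-3", which isdigit() rejects.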
Red Hat Bugzilla – Bug 1028966
require openjdk version which solves the memory leak in RHEV-M: service ovirt-engine gets OOM killed after few days of uptime
Last modified: 2014-03-26 11:24:01 EDT
Created attachment 822361 [details]
/var/log/messages
Description of problem:
possible memory leak in RHEV-M: service ovirt-engine gets OOM killed after few days of uptime. Last OOM on our setup occurred on Sunday, when there is no load on it (the system is idle over the weekend). When used, the load is quite low (~ 4 concurrent manual users) and the VM has plenty of memory for such a load (6 GB, recently increased from 4 GB, which was itself increased from the 3 GB that sufficed for all previous versions).
Version-Release number of selected component (if applicable):
is21 / rhevm-backend-3.3.0-0.31.beta1.el6ev.noarch
How reproducible:
always on our setup in given version
Steps to Reproduce:
1. Use RHEV-M lightly
Actual results:
ovirt-engine service gets OOM'd
Expected results:
ovirt-engine has +- constant memory requirements under a given load over time
Additional info:
duplicate of bug 1026100?
looks like a duplicate of Bug 1026100
David - is this a physical rhevm host or a VM - and what is the amount of memory assigned to it ?
Can you please verify the workaround mentioned in comment #23 of the above Bug ?
(In reply to Barak from comment #5)
> looks like a duplicate of Bug 1026100
>
> David - is this a physical rhevm host or a VM
VM hosted in another RHEV
> - and what is the amount of
> memory assigned to it ?
>
currently 6 GB, (specified in Description...)
> Can you please verify the workaround mentioned in comment #23 of the above
> Bug ?
I just added it to engine configuration but I will see the results in day or two.
The situation here is similar to bug 1026100, but not exactly the same. In this case the database has a total of 81 processes and consumes a total of 2.31 GiB, that is very similar to the other bug.
But in this case the engine is consuming 3.24 GiB of RAM. Unlike in the other bug, where the engine was consuming only 440 MiB. This isn't normal, as the heap of the engine is configured for a maximum of 1 GiB, and as this happens after a long time running it certainly looks like a leak of memory or threads.
I suggest the same that in the other bug, reduce the minimum number of connections:
ENGINE_DB_MIN_CONNECTIONS=1.
(In reply to Juan Hernández from comment #7)
...
> I suggest the same that in the other bug, reduce the minimum number of
> connections:
>
> ENGINE_DB_MIN_CONNECTIONS=1
>
already done (comment 6)
OK, I've raised the memory of the VM to hopefully ample 20 GB and I'll keep eye on RAM usage in following days.
RHS-C bug 1030460 looks very similar to this one.
After analyzing a machine that reached a point where the engine is consuming more than 9 GiB of RSS I found the following:
* Generated a heap dump as follows:
# ps -u ovirt
PID TTY TIME CMD
1665 ? 00:00:00 ovirt-engine.py
1677 ? 01:21:54 java
1691 ? 00:00:31 ovirt-websocket
# su - ovirt -s /bin/sh
-sh-4.1$ jmap -dump:format=b,file=/var/log/ovirt-engine/dump/juan-20131118.dump 1677
The size of this dump is 972 MiB. Still need to analyze its content, but the size of a dump is always larger than the heap in use, so this means that it isn't the heap that is consuming the RAM, so we can probably discard a memory leak inside the engine.
* Generated a thread stack dump as follows:
# kill -3 1677
# cp /var/log/ovirt-engine/console.log /var/log/ovirt-engine/dump/juan-20131118.txt
There are 196 threads in that dump. Taking into account that each thread has a default stack size of 1 MiB this will consume 200 additional MiB. Still far from the total 9 GiB, so we can also discard a thread leak.
* Generated a memory map as follows:
# pmap 1677 > /var/log/ovirt-engine/dump/juan-20131118.pmap
The interesting thing from this memory map is that there are many mappings of approx 64 MiB:
# grep 65536K *.pmap | wc -l
87
That means that more than 5 GiB of RAM are dedicated to this purpose. This gets us really close to the 9 GiB.
These 64 MiB mappings are probably generated by the glibc allocator: it has been reported that the glibc allocator (malloc and friends) may over-allocate memory for applications with many threads:
I don't have a conclusion yet, will continue studying the issue.
The GNU C library has a MALLOC_ARENA_MAX environment variable that allows control of the number of 64 MiB memory areas that are created in x86_64 machines.
These areas are created to avoid contention when threads use the locks needed to coordinate access to the "malloc" function. In Java, objects aren't allocated with "malloc" but with the Java-specific allocator, so this contention shouldn't be a problem.
David Jaša is already running the job after adding the following to /etc/sysconfig/ovirt-engine:
export MALLOC_ARENA_MAX=1
If this setting has a positive effect in the longevity and use of memory of the engine then we should probably add it by default.
The change in MALLOC_ARENA_MAX did have an effect, but not very positive. The engine doesn't generate now those 87 memory areas, but only one. I think this is the main malloc arena. But that area is growing continually. This is from the output of "pmap":
0000000000cc3000 1666792K rw--- [ anon ]
00000000aff80000 1311232K rw--- [ anon ]
The second line corresponds to the Java heap, and it isn't growing. But the first line corresponds (I believe) to the main malloc arena, and it is continually growing since the Java virtual machine was started. It is currently, after only one day, taking 1.6 GiB. The only explanation I can find for this is an abuse of direct memory buffers in the Java code or a leak in the Java virtual machine native code. As far as I know we don't use direct buffers in the engine.
I continue studying the issue.
After analyzing the Java heap dump I don't find a large amount of direct memory buffers, this confirms that this is a leak generated in native code. In order to get more information I would suggest to run the engine under the control of the valgrind memcheck tool:
1. Stop the engine:
# service ovirt-engine stop
2. Install valgrind and the debug info for the OpenJDK package:
# yum -y install valgrind yum-utils
# debuginfo-install java-1.7.0-openjdk
3. Replace the Java launcher with an script that runs the Java application under the control of the valgrind memcheck tool:
# which java
/usr/bin/java
# readlink -f /usr/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java
# cd /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin
# mv java java.original
# cat > java <<'.'
#!/bin/sh
exec /usr/bin/valgrind --tool=memcheck --leak-check=full /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java.original $@
.
# chmod +x java
# restorecon java
You may need to adjust the location of the Java virtual machine in the above steps.
4. Truncate the console.log file:
# > /var/log/ovirt-engine/console.log
5. Start the engine:
# service ovirt-engine start
6. Run the engine for a while, till it generates a noticeable leak of native memory (you can use the pmap command and check the size of the memory area described in comment #12), then stop it:
# service ovirt-engine stop
This should write to the /var/log/ovirt-engine/console.log information about the memory leaks caused by calls to the malloc family of functions. Hopefully will be able to extract some useful information from there.
Remember to restore the original Java launcher when finished.
Valgrind won't be the way to go - just jboss initialization takes over 10 minutes and engine initialization on top of it another 10+ - meaning that engine-initiated ldap connections and host connections time out, as well as client-initiated connections to web UIs.
It seems that monitoring *alloc*'s as suggested by Pavel could be the way to go.. Pavel, do you have some handy strace invocation that could help, please? :)
You could try the following:
1. Install the glibc-headers, glibc-utils and gcc packages.
2. Create a trace.c file with the following content:
---8<---
#include <mcheck.h>
void __trace_start() __attribute__((constructor));
void __trace_stop() __attribute__((destructor));
void __trace_start() {
mtrace();
}
void __trace_stop() {
muntrace();
}
--->8---
3. Compile it as follows:
# gcc -fPIC -shared -o /usr/share/ovirt-engine/bin/libtrace.so trace.c
(You can do the compilation in a different machine to avoid installing the above packages in the machine that is running the engine.)
4. Create a "java_with_malloc_trace.sh" script in the same directory where the current "java" executable is (/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin, for example). Use this content:
---8<---
#!/bin/sh
# This is the location of the malloc trace file, needs to be a place
# where the engine can write:
export MALLOC_TRACE=/var/log/ovirt-engine/malloc.trace
# This is needed to enable tracing:
export LD_PRELOAD=/usr/share/ovirt-engine/bin/libtrace.so
# Run the Java virtual machine:
exec /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java $@
--->8---
5. Modify the engine start script to use "java_with_malloc_trace.sh" instead of just "java". In the /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py file near line 256:
---8<---
#self._executable = os.path.join(
# java.Java().getJavaHome(),
# 'bin',
# 'java',
#)
self._executable ="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64/jre/bin/java_with_malloc_trace.sh"
--->8---
6. Restart the engine. It should generate a *huge* amount of information in the /var/log/ovirt-engine/malloc.trace file. Hopefully we will be able to analyze it with the mtrace script included in the glibc-utils package.
OK, the engine is now running with this library loaded and malloc.trace log file is filling with messages..
(In reply to Juan Hernández from comment #17)
OK, the command is running in crontab every 5 minutes; in addition, the date + time is logged before each run.
After one day running with the native memory tracking enabled, the memory mapping that concerns us has already grown to approximately 2 GiB:
# pmap -x 8979
Address Kbytes RSS Dirty Mode Mapping
0000000000400000 4 4 0 r-x-- java
0000000000600000 8 8 4 rw--- java
0000000001dfd000 1978264 1978136 1978136 rw--- [ anon ] <---
00000000aff80000 1311232 766464 766464 rw--- [ anon ]
I believe that this is the malloc arena, and there is only one because we are using MALLOC_ARENA_MAX=1.
This mapping doesn't appear in the list of mappings generated from the native memory tracker, and the total use of memory reported by the native memory tracker doesn't take into account this area.
I am attaching a pmap and the output of the native memory tracker.
Created attachment 829178 [details]
Output of pmap -x after one day running
Created attachment 829187 [details]
Output of the native memory tracker after one day running
Created attachment 829233 [details]
Thread dump after one day running
strace might not be the best thing for this problem because syscalls to allocate mem are used to get a quite big amount of RAM, ie it's not used for each malloc() and free(). It would be better to use mtrace as mentioned above, let me check..
(In reply to Juan Hernández from comment #24)
mtrace is not thread-safe:
As far as I can see, we haven't backported the fix. In order to pin-point the leak, we'd need deeper stack traces anyway, mtrace only logs the immediate caller.
I can confirm that the memory leak is in the PKCS11 library and is probably this bug:
After more than 12 hours running there is no increase in memory consumption with PKCS11 disabled. I think this is enough to prove that PKCS11 is the cause of the problem.
Disabling PKCS11 is a suitable workaround for the rime being.
The solution I am proposing for this is to add to the engine startup scripts support for using a custom java.security file:
Then use a custom java.security file that disables the PKCS#11 provider:
To verify that after this changes the provider isn't actually loaded use the following command:
# su - ovirt -s /bin/sh -c "jmap -histo $(pidof ovirt-engine)" | grep sun.security.pkcs11.SunPKCS11
Before this changes it should list at least an instance of the SunPKCS11 provider, something like this:
309: 67 4824 sun.security.pkcs11.SunPKCS11$P11Service
432: 69 2208 sun.security.pkcs11.SunPKCS11$Descriptor
1278: 1 144 sun.security.pkcs11.SunPKCS11
After the changes the list should be empty.
Note that the SunPKCS11 provider is enabled by default since version 1.7.0.11-2.4.0.2.el6 of the java-1.7.0-openjdk package, so users can start to see this issue even with previous versions of the engine.
Note also that the data ware house component will also probably need to disable the PKCS#11 provider, probably using the same custom java.security file.
Hello Itamar,
Why isn't this solved by an update to rhel base? Why do we need to workaround java issues introduced by rhel?
Thanks,
Alon
*** Bug 1035789 has been marked as a duplicate of this bug. ***
(In reply to Juan Hernández from comment #27)
Engine with this workaround applied stays around 1 GB of resident memory even after 5 days of uptime.
Why isn't a bug opened against rhel openjdk? opening.
Alon, please take the bug and add the reference to the patch you suggested.
I added the gerrit id.
However, I am not the owner of this issue, I wrote a patch that summarize the comments I had for the initial patches and were not applied as alternative method.
From this point on I think you should take care of the bug, as you know better the solution that will be accepted and its implications. I'm explicitly asking you to take the bug. But if you don't want I will continue owning it, no problem.
Reopening as fix will be provided at rhel level, needs a different patch.
Before all ACK, can someone explain how in single channel for rhev we can have different dependencies for rhel-6.4 and rhel-6.5? Or we force people to upgrade to rhel-6.5?
(In reply to Alon Bar-Lev from comment #43)
> Before all ACK, can someone explain how in single channel for rhev we can
> have different dependencies for rhel-6.4 and rhel-6.5? Or we force people to
> upgrade to rhel-6.5?
In the classic RHN model (and new subscription manager model AFAIK) minor releases are shipped as a rolling release using a single repo (or channel). The openjdk is only provided by the base RHEL 6 channel and as such all a 6.4 user would be required to do to get this latest 6.5 version is a simple `yum update *openjdk* -y'. Obviously by requiring the 6.5 version in our .spec we simply automate this during our own upgrade process.
Hi Lee,
I am feeling uneasy to force upgrade working systems to newer jdk only because rhel-6.5 introduced this.
What do you think about adding a conflict with java-1.7.0-openjdk-1.7.0.45-2.4.3.3 ?
Alon
With:
Conflicts: java-1.7.0-openjdk = 1:1.7.0.45-2.4.3.3.el6
# yum --disableplugin=versionlock --disablerepo=java update rhevm
<snip>
Error: rhevm conflicts with 1:java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
# yum --disableplugin=versionlock update rhevm
<snip>
Dependencies Resolved
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Updating:
java-1.7.0-openjdk x86_64 1:1.7.0.45-2.4.3.4.el6_5 java 26 M
rhevm noarch 3.2.5-0.0.6.20131210git70fc6ff.root.el6ev temp 1.1 M
<snip>
-------
# engine-upgrade
Checking for updates... (This may take several minutes)...[ DONE ]
12 Updates available:
* java-1.7.0-openjdk-1.7.0.45-2.4.3.4.el6_5.x86_64
* java-1.7.0-openjdk-devel-1.7.0.45-2.4.3.4.el6_5.x86_64
* rhevm-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-backend-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-config-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-dbscripts-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-genericapi-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-notification-service-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-restapi-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-tools-common-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-userportal-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
* rhevm-webadmin-portal-3.2.5-0.0.6.20131210git70fc6ff.root.el6ev.noarch
Checking for updates... (This may take several minutes)...[ DONE ]
During the upgrade process, RHEV Manager will not be accessible.
All existing running virtual machines will continue but you will not be able to
start or stop any new virtual machines during the process.
Would you like to proceed? (yes|no): yes
ok, is27. running for couple of days, no oom killed and memory usage is ok.
Private + Shared = RAM used Program
1.0 GiB + 844.0 KiB = 1.0 GiB   java

*** Bug 1072537 has been marked as a duplicate of this bug. ***
02 June 2011 20:12 [Source: ICIS news]
HOUSTON (ICIS)--US fuel ethanol inventories were down in the last week of May despite a small gain in daily production, the US Energy Information Administration (EIA) said on Thursday.
Ethanol stocks stood at 20.23m bbl in the week ended 27 May, down 2.7% from 20.80m bbl in the previous week.
Meanwhile, US production of the biofuel rose by 0.8% to 909,000 bbl/day in the same period, the EIA said.
US ethanol output in May averaged 893,250 bbl/day, up 1.2% from 882,800 bbl/day in April.
Unlike in Java, you aren't forced to wrap a try/catch block around statements such as Transport.send(msg), though you can if you want.
Groovy also provides Ant integration through its AntBuilder class. This class follows a special DSL-style pattern of coding supported by Groovy's built-in builder facilities and bundled builder classes. Builders are most useful for creating any kind of nested data structure, whether this be a straight XML file, a hierarchical object structure, or nested widgets in a GUI.

AntBuilder builds Ant projects that contain a nested set of build instructions. It interfaces directly with the Ant API, so the normal Ant XML build file is skipped altogether. The advantage is that AntBuilder code tends to be less verbose than the equivalent XML and can make use of Groovy features such as conditional statements and loops during construction of the Ant project.
Ant supports sending emails through its mail task. Using AntBuilder, the notification example looks like this:

    port = 1025
    fixture = new EmailFixture(port)
    ant = new AntBuilder()
    ant.mail(mailhost:'localhost', mailport:"$port", subject:'Successful build'){
        from(address:'cruise@mycompany.org')
        cc(address:'partners@mycompany.org')
        to(address:'devteam@mycompany.org')
        message("Successful build for ${new Date()}")
    }
    fixture.assertEmailArrived(from:'cruise@mycompany.org', subject:'Successful build')
Writing this class was the easiest of all the solutions and has yielded short, easy-to-maintain code. But we could do a lot more if we wanted. Suppose we wanted to send attachments of the build artifacts along with our notification email. Also suppose we wanted the content of the message to be HTML instead of the simple text content used in previous examples. Sound like a lot of work? It really isn't. Here is what the result would look like:
results = [Unit: '898 Tests, 0 Failures, 0 Errors',
           Integration: '45 Tests, 0 Failures, 0 Errors',
           Acceptance: '75 Tests, 0 Failures, 1 Errors']
ant = new AntBuilder()
writer = new StringWriter()
today = new Date()
allOk = results.every{ entry ->
    entry.value.contains(' 0 Errors') && entry.value.contains(' 0 Failures')
}
new groovy.xml.MarkupBuilder(writer).html {
    body {
        h1(allOk ? 'Successful' : 'Failed' + " build for $today")
        table {
            tr { th('Test Suite'); th('Result') }
            results.each { suite, result ->
                numbers = result.split('').grep(~/\d| /).join()
                passed = numbers.endsWith('0 0 ')
                tr {
                    td(suite)
                    td(bgcolor: passed?'green':'red', result)
                }
            }
        }
    }
}
ant.mail(mailhost:'localhost', messagemimetype:'text/html',
         subject:"Build notification for $today"){
    from(address:'cruise@mycompany.org')
    to(address:'devteam@mycompany.org')
    cc(address:'partners@mycompany.org')
    message(writer)
    attachments(){
        fileset(dir:'dist'){
            include(name:'**/*.zip')
        }
    }
}
Here, build results would typically be extracted from logfiles; for our purposes, we just used a list. We have also ignored the test fixture (but see the next section, where we change the test fixture to use a real email server).
If you point this to a real mail server, you will receive an email similar to the one shown in Figure 1.
Figure 1. Screenshot of received email
To wrap up, we are going to look at testing our code against a real email server instead of the Wiser mock server. We'll simply replace the code inside our email fixture with code to access a real email server using the POP3 protocol. We'll use the Canoo WebTest open source testing tool. It is written as an Ant extension, so we can follow the same coding style as shown for sending our email via Ant. Here is what the result would look like:
def ant = new AntBuilder()
def webtest_home = System.properties.'webtest.home'
ant.taskdef(resource:'webtest.taskdef'){
    classpath(){
        pathelement(location:"$webtest_home/lib")
        fileset(dir:"$webtest_home/lib", includes:"**/*.jar")
    }
}
ant.testSpec(name:'Email Test'){
    steps {
        emailSetConfig(server:'localhost', password:'password',
                       username:'devteam@mycompany.org', type:'pop3')
        emailStoreMessageId(subject:'/Build notification/', property:'msg')
        emailStoreHeader(property:'subject', messageId:'#{msg}', headerName:'Subject')
        groovy('''def subject = step.webtestProperties.subject
                  assert subject.startsWith('Build notification')''')
        emailMessageContentFilter(messageId:'#{msg}')
        verifyText(text:'Failed build')
    }
}
Some points of interest:
- The '/' symbols surrounding 'Build notification' indicate to WebTest that we should match as a regular expression (ignoring the changing date part of the subject in our case).
- The property msg is used to store the message id of the message we are interested in. The syntax #{msg} is used to refer to the stored id, to distinguish it from Ant properties prefixed with the '$' symbol.
- The groovy step allows you to run Groovy code from within any Ant script. So here, we are calling Groovy to Ant back to Groovy again. We don't need to here (because we are in Groovy, so we can just write Groovy code), but this illustrates that we can.
That wraps up our quick tour of some of the common ways to send emails. I certainly haven't tried to provide you with an exhaustive list of ways to send emails from Groovy. For instance, you could use platform-specific means just as easily if portability across platforms is not a high priority. As an example, on Windows, you could write some code that uses Groovy's Scriptom module to talk directly to Outlook and ask it to send an email for you.
Groovy allows you to send emails easily. This isn't because the Groovy language designers particularly set out to make this task easy but because they designed into the language a cohesive set of features that can leverage one another. This design makes the language productive in many scenarios. In our examples, we made use of some of Groovy's features that support Agile development, its unsurpassed Java integration, the neat syntax for closures, and the great support for writing your own DSLs, including the builder concept. Groovy is not alone in having many of these features. Other languages have them as well. Groovy does, however, package these features in a very cohesive way, and this puts the fun back into programming. Even mundane tasks like sending emails are a pleasure with Groovy!
django-youtube 0.1
==================

Youtube API wrapper app for Django. It helps to upload, display, delete, and update videos from Youtube.
Django Youtube is a wrapper Django app around the Youtube API. It helps you implement the most frequent API operations easily.
The main functionality is to use the Youtube API to upload unlisted videos and show them on the website, as social web sites do.
Basically, it implements video features on a website using Youtube. In order to achieve this goal, you need a developer account on Youtube and must use it to authenticate and upload videos into this account.
Django Youtube is designed to work with the built-in 'contrib.auth' app, although you can modify the views.py to work without authentication.
Please feel free to contribute.
Features
--------
1. Retrieve specific videos
2. Retrieve feed by a user
3. Browser based upload
4. Authentication to reach private data
5. Admin panel ready
6. Supports i18n
Features not yet implemented
--------------------------------
1. Retrieve feeds (most visited etc)
2. Direct upload
3. oAuth authentication
Dependencies
------------
gdata python library ()
Installation
------------
Add 'django_youtube' folder at your Python path.
Add 'django_youtube' to your installed apps
Add following lines to your settings.py and edit them accordingly
YOUTUBE_AUTH_EMAIL = 'yourmail@gmail.com'
YOUTUBE_AUTH_PASSWORD = 'yourpassword'
YOUTUBE_DEVELOPER_KEY = 'developer key, get one from'
YOUTUBE_CLIENT_ID = 'client-id'
Optionally you can add following lines to your settings. If you don't set them, default settings will be used.
# url to redirect after upload finishes, default is respected 'video' page
YOUTUBE_UPLOAD_REDIRECT_URL = '/youtube/videos/'
# url to redirect after deletion video, default is 'upload page'
YOUTUBE_DELETE_REDIRECT_URL = '/myurl/'
Add Following lines to your urls.py file
(r'^youtube/', include('django_youtube.urls')),
Don't forget to run 'manage.py syncdb'
Usage
-----
Go to '/youtube/upload/' to upload video files directly to Youtube. When you upload a file, the video entry is created on Youtube, a 'Video' model instance that includes the video details (video_id, title, etc.) is created in your db, and a signal is sent that you can add your logic to.
After a successful upload, it redirects to the page specified by YOUTUBE_UPLOAD_REDIRECT_URL; if no page is specified, it redirects to the corresponding video page.
The Youtube API is integrated into the 'Video' model. In order to change information of the video on Youtube, just save the model instance as you normally do; django_youtube will make the necessary changes using the Youtube API.
API methods can be used separately. Please see 'api.py' for info about the methods. Please note that some operations require authentication. API methods will not do more than one operation, i.e., they will not call the authenticate method, so you will need to authenticate manually. Otherwise API methods will raise 'OperationError'. Please see 'views.py' for a sample implementation.
You can use views for uploading, displaying, deleting the videos.
You can also override templates to customise the HTML. The 'Iframe API' is used for displaying the videos for convenience. Please see the Youtube API Docs () to implement other player APIs in your template files. Other options are the 'Javascript API' and the 'Flash API'.
Signals
-------
The 'video_created' signal is sent after the video upload finishes and the video is created successfully. You can also choose to register for the 'post_save' event of the 'Video' model.
Following is an example of how you process the signal
from django_youtube.models import video_created
from django.dispatch import receiver
@receiver(video_created)
def video_created_callback(sender, **kwargs):
"""
Youtube Video is created.
Not it's time to do something about it
"""
pass
- Author: Suleyman Melikoglu
- License: BSD licence, see LICENCE.txt
- Package Index Owner: laplacesdemon
- DOAP record: django-youtube-0.1.xml | https://pypi.python.org/pypi/django-youtube/0.1 | CC-MAIN-2017-30 | refinedweb | 581 | 60.92 |
Thread: AS2 and random images
AS2 and random images
Hey all,
I'm new to Flash and AS (I'm using 2.0), and I have a question or two. I would like to have a section of my Flash file generate, at random, a picture I provided to load on my first page. I am having trouble doing this. I can make a .fla file using a .xml file that will play just fine; however, it does not seem to work when I copy the frames into my other .fla file. The coding does not seem to work together. I put in a stop(); call so it will not just play all my keyframes; however, this seems to put an end to my random pictures. Does anyone know how to have a section of my page play the random pictures but not run the timeline looping through all my pages? If anyone can help, that would be wonderful.
here is my code:
[CODE]
import mx.transitions.Tween;
import mx.transitions.easing.*;
[/CODE]
Published by the College of Tropical Agriculture and Human Resources (CTAHR) and issued in furtherance of Cooperative Extension work, Acts of May 8 and June
30, 1914, in cooperation with the U.S. Department of Agriculture. Charles W. Laughlin, Director and.
Cooperative Extension Service
AgriBusiness
Dec. 1998
AB-12
Rev. 6/99
Economics of Ginger Root Production in Hawaii
Kent Fleming1 and Dwight Sato2
1Department of Horticulture
2Cooperative Extension Service, Hilo
This publication examines the economics of producing ginger root (Zingiber officinale Roscoe) in Hawaii's major ginger-growing area, the eastern half of the Big Island. The economic analysis is based on a computer spreadsheet budget for managing a ginger root enterprise and uses information gathered from knowledgeable growers and packers and from research and extension faculty and publications of the College of Tropical Agriculture and Human Resources (CTAHR), University of Hawaii at Manoa. The production data used in the model are typical for a small ginger root farm in the late 1990s. However, the economic model is flexible, including over 100 variables, any of which can be changed by the user to accommodate individual ginger root farming situations.

This budget has a wide range of uses, but it is primarily intended as a management tool for growers of edible ginger. Growers who enter their own farm data will find the model useful for
• developing an end-of-the-year economic business analysis of their ginger root enterprise,
• projecting next year's income under various cost-structure, production, and marketing scenarios,
• considering the economic impact of business environment changes (e.g., regulatory or wage rate changes),
• determining the economic benefit of adopting new technology, and
• planning new or expanded operations.
Assumptions
The first step in determining profitability is to establish some overall production and economic assumptions. The farm in this example is five acres. For horticultural reasons, ginger is usually grown in a rotation system in which one year of ginger production is followed by three years in which the land is not used for ginger. Therefore, the annual ginger root crop comes from only 25% of the land. Some growers simply move to new rented land each year. The model accommodates either system. The average cost of hand labor is assumed to be $6 per hour, with machine labor at $8 plus 33% in "benefits" (e.g., FICA, etc.). Payment for the crop is received two months after delivery. The desired rate of return on equity capital is 6%, and the bank interest rate is 9% for debt capital and 10% for working capital.
Gross income
It is assumed that the example ginger farm sells 90% of its marketable production as mature ginger root, with about 80% selling as Grade A. Packers report that the proportion of Grade A has been slightly but steadily increasing over the years. "Young ginger," a specialty product of limited demand, accounts for 5% of the marketed production sold. The season price averages about 50% higher than the Grade A price, but the yield is significantly lower (Nishina et al., p. 3). (The production costs might be slightly lower, although in this study they are assumed to be the same regardless of grade.) Nishina et al. reported that growers normally keep back about 5% (assuming a 1:20 "seed":crop ratio) of their production for the next season's planting, although one grower interviewed reported retaining 10% of one season's production for the next season's "seed." This grower plants more densely and obtains a higher yield. In this study we follow the 5% described by Nishina et al.

Mature ginger root yields vary substantially from year to year, primarily because of plant disease incidence.
Since 1980 the yields have ranged from a high of 50,000 pounds per acre of marketable ginger root (1997/98 season) to a low of 27,500 (1993). The Hawaii Agricultural Statistics Service (HASS) bases its 1998 Outlook Report on "the most recent 3-year average of 47,300 pounds per [harvested] acre" (HASS, p. 3). Our example uses a most-recent-5-year weighted average yield of 46,200 pounds per harvested acre. All growers interviewed believed that their marketable yields, and those of other growers they knew, were greater than those reported by HASS. The marketable yield figure used in this study should be viewed as a conservative estimate. Growers should enter the yield that they believe reflects their situation.

The price per pound received by growers and used in this study is the weighted average price received for all grades of ginger root marketed throughout the season. The HASS reported price is the Grade A price, the major but not the sole component of the weighted average price. The weighted average price will be close to but usually lower than the Grade A price. This fact perhaps accounts for the growers' common observation that they never receive a price quite as high as that reported by HASS. As with the annual yields, the Grade A prices have fluctuated considerably since 1980, ranging from a low of 40¢ per pound (1997) to a high of 92.3¢. The most recent 5-year weighted average Grade A price is 68.1¢ per pound. (HASS does not project Grade A prices, although using its method for estimating yield, its price estimate would be about 67.3¢ per pound.) In light of both the 1997/98 year's exceptionally low Grade A price and the feelings of packers that the industry will not again experience the recent high prices, the estimated Grade A price used in our model is adjusted downward by 20% to a more conservative 54.5¢ per pound. Given the marketing pattern of the example farm, the weighted average price comes out to be 53.4¢ per pound. The resulting gross income is $24,674 per harvested acre or $30,843 for the whole ginger enterprise.
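These figures can be cross-checked with a quick calculation (a sketch using only the rounded numbers quoted in the text, so the totals differ slightly from the report's unrounded ones):

```python
# Gross income = per-acre yield x weighted average price,
# scaled to the 1.25 harvested acres on the 5-acre, 1-in-4 rotation farm.
yield_lb_per_acre = 46_200
weighted_price = 0.534          # dollars per pound (rounded in the text)
harvested_acres = 5 * 0.25

per_acre_income = yield_lb_per_acre * weighted_price
farm_income = per_acre_income * harvested_acres
print(per_acre_income)   # ~24,671 per harvested acre (text: $24,674)
print(farm_income)       # ~30,839 for the enterprise (text: $30,843)
```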
Operating costs
Operating costs are all the costs directly associated with growing and harvesting the ginger crop. All costs are expressed as costs per harvested acre and per farm and as a percentage of gross income. The various percentages of gross income can be viewed as the number of cents from each dollar generated by ginger sales that are spent on a particular operating expense. For example, 9.3¢ of every dollar of revenue is spent on methyl bromide and plastic sheeting. This item is a major component of the land preparation cost. In this example farm, the land preparation activity is the single largest growing cost, constituting 13.5% of the total growing expenditure. Land preparation costs are likely to increase further as the proposed deadline for the elimination of methyl bromide approaches.

Total growing costs take one-third of the gross revenue; harvesting activities absorb another quarter. Hired labor is the single most significant operating input, consuming over one-quarter of the gross income. Labor is about evenly divided between growing and harvesting activities. The example farm uses a custom operator to provide the machinery operations associated with land preparation and planting. If he did not, the itemized labor cost would be higher (as would his machinery ownership costs). Overall, $23,026, three-quarters of the gross income from this example ginger farm, is expended on total operating costs.

This budget includes two overhead costs that are often overlooked. The first is the cost of working capital (often an operating loan). The second is the cost of retaining ownership of an already delivered crop, as opposed to being paid for it upon delivery to the buyer. Ginger growers typically wait one to three months for payment. In the example farm, payment is deferred two months, reducing the net price 1.7% (0.9¢ per pound). This deferred payment is a hidden cost of marketing, but in effect it functions like a commission. If one's cost of operating capital was 12% and payment was not received for three months, the financial impact would be doubled.
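The 1.7% figure can be verified directly, since deferring payment costs roughly the annual working-capital rate prorated over the months waited (a sketch using the rates stated above):

```python
# Cost of waiting for payment = annual working-capital rate x (months / 12).
working_capital_rate = 0.10
months_deferred = 2

discount = working_capital_rate * months_deferred / 12
print(round(discount * 100, 1))        # 1.7 (% of the gross price)

# At 12% and three months, the impact roughly doubles, as the text notes:
print(round(0.12 * 3 / 12 * 100, 1))   # 3.0
```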
Gross margin
The gross margin is the gross income minus the total operating (or "variable") costs. Therefore the gross margin for the whole enterprise is $7,475. It represents the total amount available to pay the ownership (or "fixed") costs of production. Gross margin resembles another frequently used term, "return over cash costs." It is what farmers popularly refer to as their "profit," because it is close to the return to their management and investment (if there is no debt associated with the farming operation). If one were to deduct depreciation and rent, farm gross margin would approximate "taxable income."

Gross margin is a good measure for comparing the economic and productive efficiency of similar sized farms. More importantly, it represents the bare minimum that a farm must generate in order to stay in business. (Even if a farm were to lose money overall, a positive gross margin would enable it to continue to operate, at least in the short run.) But gross margin is not a good measure of a farm's true profitability or long-term economic viability.

*The "capital recovery charge" method consists of calculating an annual loan payment, using the historic cost minus the salvage value as the principal, the "life" as the term, and the average cost of capital as the interest rate. To this amount is added the cost of holding the asset's salvage value, using the owner's opportunity cost or desired return on capital. If the asset is already fully depreciated (i.e., the capital has already been recovered), enter zero for historic cost.

**If one were to set the "desired return on owner equity" (in the assumptions section above) to zero, the indicated "return to management" would in fact be the frequently used "management and investment income" (M.I.I.), the return to the owner/manager for his or her management and capital investment.
Ownership costs
These costs are the annualized costs for those productive resources that last longer than the annual production cycle. For example, because capital items last more than one production cycle, they have to be amortized over their "useful lives." In the economic analysis, a "capital recovery charge" is calculated for all capital items. This charge is an estimate of what it costs the producer to own the capital assets for one year.* The example farm's total annualized capital cost is $6,554, just over one-fifth of the farm's gross income. It would be higher if custom machinery services were not utilized, because additional machinery would need to be owned.
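The capital recovery charge described in the footnote reduces to an amortized annual payment on (historic cost minus salvage value) plus an opportunity charge on the salvage value. Here is a sketch with hypothetical numbers, not the example farm's actual assets:

```python
def capital_recovery_charge(historic_cost, salvage, life_years,
                            cost_of_capital, opportunity_rate):
    """Annualized cost of owning a capital asset, per the footnote above."""
    principal = historic_cost - salvage
    # standard annual loan payment on the depreciable portion over the asset's life
    payment = principal * cost_of_capital / (1 - (1 + cost_of_capital) ** -life_years)
    # plus the cost of holding the salvage value at the owner's opportunity rate
    return payment + salvage * opportunity_rate

# hypothetical machine: $10,000 cost, $2,000 salvage, 5-year life,
# using the study's 9% cost of capital and 6% desired return on equity
print(round(capital_recovery_charge(10_000, 2_000, 5, 0.09, 0.06), 2))  # ~2176.74
```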
“The bottom line”
Total cost includes all cash costs and all opportunity
costs. Any return above total cost is economic profit.
Because economic profit considers all costs, a manager
would understandably be satisfied with his or her busi-
ness’ performance if economic profit were zero or
greater. Economic profit is the single best measure of
true profitability. Economic profit serves as a “market
signal” to indicate how attractive the enterprise is for
potential investors and for potential new entrants into
the industry.
The only problem with the economic profit concept
is that it may be confusing to hear that one should be
satisfied with an “economic profit of zero,” or it may be
intuitively difficult to grasp the meaning of a “negative
economic profit.” Perhaps a more easily understood
“bottom line” term is “return to management.” In a typi-
cal year, this example ginger farm manager receives a
return (before income taxes) of $1,742 for his or her
managerial efforts,** that is, 5.6% of the gross income.
Because this return to the management resource is
slightly greater than the resource’s value (using the “rule
of thumb” for the value of management, 5% of the gross
income, which in the example farm would be $1,542),
we can say the business is in fact profitable. (Of course,
this farm manager also would receive additional com-
pensation for any of the manual farm labor which he or
she provided.).
Risk
Our model's particular production scenario appears marginally adequate. However, the ginger market includes considerable foreign competition. Prices have generally been good for ginger root, but the 1997/98 average price of ginger dropped to 40¢ per pound, an all-time low. Despite excellent yields, the price was below the break-even point, and generally ginger farming was not economically profitable. In addition to abruptly fluctuating prices, ginger root is relatively susceptible to serious disease problems (Nishina et al.), providing an ever-present possibility for a cultural problem to sharply reduce yields. In 1993, for example, the average yield dropped to 27,500 pounds per acre.

Risk is inherent in all of agriculture, but the ginger root industry appears to be more exposed to risk than many other Hawaii agricultural endeavors. A review of the HASS summary of prices and yields reveals considerable ginger root price and yield volatility with relatively little correlation between the two variables.
Economics of ginger root production in Hawaii—cost-and-returns spreadsheet
This research was funded by the County of Hawaii, Department of Research and Development, and the University of Hawaii at Manoa, College of Tropical Agriculture and Human Resources. Mention of specific products or practices does not imply an endorsement by these agencies or a recommendation in preference to other products or practices.
Builders
Hibernate Criteria Builder
The Hibernate Criteria Builder allows you to create queries that map to the Hibernate Criteria API. There are equivalent builder nodes for most of the criteria within the Hibernate Expression class. For more info see the Builders Reference.
The builder can be used standalone by passing a persistent class and sessionFactory instance:
new grails.orm.HibernateCriteriaBuilder(User.class, sessionFactory).list {
    eq("firstName", "Fred")
}
Or an instance can be retrieved via the createCriteria method of Grails domain class instances:
def c = Account.createCriteria()
def results = c {
    like("holderFirstName", "Fred%")
    and {
        between("balance", 500, 1000)
        eq("branch", "London")
    }
    maxResults(10)
    order("holderLastName", "desc")
}
The results variable above cannot be typed as a List since it may actually be a proxy.
By default the builder returns a list of results, but can be forced to retrieve a unique result using the "get" node:
def c = Account.createCriteria()
def a = c.get {
    eq("number", 40830994)
}
Builders may contain control structures. The following example shows how to build a query that will match any item in a list.
def branchList = [ "London", "Newmarket", "Cambridge" ]
def c = Account.createCriteria()
def results = c.list {
    or {
        for (b in branchList) {
            eq("branch", b)
        }
    }
}
OpenRico Builder
Grails also provides an OpenRico builder, grails.util.OpenRicoBuilder, which can be used to render Ajax responses such as this auto-complete example:
def result = google.doSearch();
new grails.util.OpenRicoBuilder(response).ajax {
    object(id:"googleAutoComplete") {
        for (re in result.resultElements) {
            div(class:"autoCompleteResult", re.URL)
        }
    }
}
It should be noted that at the moment, OpenRico requests are not handled by the Grails Ajax tags; there is no OpenRico implementation in JavascriptTagLib. | http://docs.codehaus.org/display/GRAILS/Builders | crawl-002 | refinedweb | 247 | 50.02 |
Created on 2008-01-04 05:50 by lpd, last changed 2008-01-07 19:17 by georg.brandl. This issue is now closed.
In the following, dir(Node) should include the name 'z', and Node.z
should be 'Node'. However, dir(Node) does not include 'z', and Node.z is
undefined (AttributeError). This is directly contrary to the Python
documentation, which says "metaclasses can modify dict".
class MetaNode(type):
def __init__(cls, name, bases, cdict):
cdict['z'] = name
type.__init__(name, bases, cdict)
class Node(object):
__metaclass__ = MetaNode
print dir(Node)
print Node.z
When designing a metaclass, you should override __new__, not __init__:
class MetaNode(type):
def __new__(cls, name, bases, cdict):
cdict['z'] = name
return type.__new__(cls, name, bases, cdict)
class Node(object):
__metaclass__ = MetaNode
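For reference, the same fix can be checked under Python 3 syntax, where the metaclass is passed as a keyword argument rather than through the __metaclass__ attribute:

```python
class MetaNode(type):
    def __new__(mcls, name, bases, cdict):
        cdict['z'] = name                     # mutate the dict before the class is built
        return type.__new__(mcls, name, bases, cdict)

class Node(metaclass=MetaNode):
    pass

print('z' in dir(Node))   # True
print(Node.z)             # Node
```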
Please reopen this issue as a documentation bug.
The documentation for __new__ in section 3.4.1 says:
__new__() is intended mainly to allow subclasses of immutable types
(like int, str, or tuple) to customize instance creation.
The documentation for metaclasses in 3.4.3 says nothing about __new__
vs. __init__. It says the metaclass will be "called" for class creation,
and it says the metaclass can be any "callable". This would imply the
use of __call__ rather than __new__ or __init__.
I think 3.4.1 should say:
__new__() is intended mainly to allow subclasses of immutable types
(like int, str, or tuple) to customize instance creation. It is also
used for custom metaclasses (q.v.).
I think 3.4.3 should be reviewed in its entirety to replace the
misleading language about "called" and "callable" with language that
explicitly mentions __new__.
I'll look into it.
Actually, "called" and "callable" are OK, if the documentation says
somewhere that the normal effect of "calling" a type object is to invoke
__new__. The places I looked first (sections 3.1, 3.3, and 3.4.1) do not
say this. 5.3.4 does say that the result of calling a class object is a
new instance of that class, but it doesn't mention __new__. So perhaps
it would OK to just add something like the following to 3.4.3:
Note that if a metaclass is a subclass of type, it should override __new__, not __call__.
This should now be appropriately explained in the trunk, r59837. I also
added an example of using __new__ in a metaclass. | https://bugs.python.org/issue1734 | CC-MAIN-2018-05 | refinedweb | 404 | 77.13 |
16 August 2010 17:10 [Source: ICIS news]
PRAGUE (ICIS)--
If successful, Ciech would likely use zloty (Zl) 300m ($95.8m, €75.0m) received from the EBRD to help it repay a Zl 400m tranche of debts worth Zl 1.34bn, which are owed to a group of banks that has set a payment deadline for the end of March 2011, Wood & Company added.
The proceeds could not be used for investment because “Ciech’s agreement with the bank consortium, binding until the end of 2011, states Ciech can not take any new debt on and that any proceeds from a share capital increase, [which Ciech is currently considering,] must be used to reduce the overall level of external debt”, the bank said.
“Securing long-term financing is one of the key risk factors for Ciech and thus in our view news about the possible engagement of EBRD could trigger a positive market reaction,” it added.
The application, backed by the Polish treasury ministry, had passed the preliminary EBRD approval stages, Wood & Company noted.
Ciech confirmed that last week it submitted to the EBRD the application for financing, the terms and use of which will be subject to further negotiations and discussions.
In another move aimed at reducing debt risk, Ciech had signed a deal with Poland-based ING Bank Slaski on converting its outstanding foreign currency options into debt of Zl 64m.
The company was last year hit by negative foreign currency options caused by the financial crisis.
Ciech hopes to qualify for a relaunch of its privatisation by paying down its debts and restructuring its four divisions into two.
($1 = Zl 3.13/€1 = Zl 4.00) | http://www.icis.com/Articles/2010/08/16/9385498/ciech-seeks-ebrd-financing-to-help-with-debt-worries.html | CC-MAIN-2014-10 | refinedweb | 281 | 67.38 |
When I started using .Net Core and xUnit I found it difficult to find information on how to mock or fake the Entity Framework database code. So I’m going to show a minimized code sample using xUnit, Entity Framework, In Memory Database with .Net Core. I’m only going to setup two projects: DataSource and UnitTests.
The DataSource project contains the repository, domain and context objects necessary to connect to a database using Entity Framework. Normally you would not unit test this project. It is supposed to be set up as a group of pass-through objects and interfaces. I’ll setup POCOs (Plain Old C# Object) and their entity mappings to show how to keep your code as clean as possible. There should be no business logic in this entire project. In your solution, you should create one or more business projects to contain the actual logic of your program. These projects will contain the objects under unit test.
The UnitTest project specaks for itself. It will contain the in memory Entity Framework fake code with some test data and a sample of two unit tests. Why two tests? Because it’s easy to create a demonstration with one unit test. Two tests will be used to demonstrate how to ensure that your test data initializer doesn’t accidentally get called twice (causing twice as much data to be created).
The POCO
I’ve written about Entity Framework before and usually I’ll use data annotations, but POCOs are much cleaner. If you look at some of my blog posts about NHibernate, you’ll see the POCO technique used. The technique of using POCOs means that you’ll also need to setup a separate class of mappings for each table. This keeps your code separated into logical parts. For my sample, I’ll put the mappings into the Repository folder and call them TablenameConfig. The mapping class will be a static class so that I can use the extension property to apply the mappings. I’m getting ahead of myself so let’s start with the POCO:
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal? Price { get; set; }
}
That’s it. If you have the database defined, you can use a mapping or POCO generator to create this code and just paste each table into it’s only C# source file. All the POCO objects are in the Domain folder (there’s only one and that’s the Product table POCO).
The Mappings
The mappings file looks like this:
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace DataSource.Repository
{
    public static class ProductConfig
    {
        public static void AddProduct(this ModelBuilder modelBuilder, string schema)
        {
            modelBuilder.Entity<Product>(entity =>
            {
                entity.ToTable("Product", schema);
                entity.HasKey(p => p.Id);
                entity.Property(e => e.Name)
                    .HasColumnName("Name")
                    .IsRequired(false);
                entity.Property(e => e.Price)
                    .HasColumnName("Price")
                    .IsRequired(false);
            });
        }
    }
}
That is the whole file, so now you know what to include in your usings. This class defines an extension method on the ModelBuilder object. Basically, it’s called like this:
modelBuilder.AddProduct("dbo");
I passed the schema as a parameter. If you are only using the DBO schema, then you can just remove the parameter and force it to be DBO inside the ToTable() method. You can and should expand your mappings to include relational integrity constraints. The purpose in creating a mirror of your database constraints in Entity Framework is to give you a heads-up at compile-time if you are violating a constraint on the database when you write your LINQ queries. In the “good ol’ days” when accessing a database from code meant you created a string to pass directly to MS SQL server (remember ADO?), you didn’t know if you would break a constraint until run time. This makes it more difficult to test since you have to be aware of what constraints exist when you’re focused on creating your business code. By creating each table as a POCO and a set of mappings, you can focus on creating your database code first. Then when you are focused on your business code, you can ignore constraints, because they won’t ignore you!
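As a sketch of that idea (the OrderLine table and its properties are hypothetical here, not part of the sample project), a foreign-key constraint could be mirrored inside the same kind of mapping class:

```csharp
modelBuilder.Entity<OrderLine>(entity =>
{
    entity.ToTable("OrderLine", schema);
    entity.HasKey(l => l.Id);

    // Mirror the database FK from OrderLine.ProductId to Product.Id so
    // LINQ queries that misuse the relationship surface problems while
    // you write them, instead of at run time against SQL Server.
    entity.HasOne(l => l.Product)
        .WithMany()
        .HasForeignKey(l => l.ProductId);
});
```

HasOne/WithMany/HasForeignKey are the standard Entity Framework Core relationship mapping calls; the names of the navigation and key properties above are assumptions for illustration.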
The EF Context
Sometimes I start by writing my context first, then create all the POCOs and then the mappings. Kind of a top-down approach. In this example, I’m pretending that it’s done the other way around. You can do it either way. The context for this sample looks like this:
using DataSource.Domain;
using DataSource.Repository;
using Microsoft.EntityFrameworkCore;

namespace DataSource
{
    public class StoreAppContext : DbContext, IStoreAppContext
    {
        public StoreAppContext(DbContextOptions<StoreAppContext> options)
            : base(options)
        {
        }

        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.AddProduct("dbo");
        }
    }
}
You can see immediately how I put the mapping setup code inside the OnModelCreating() method. As you add POCOs, you’ll need one of these for each table. There is also an EF context interface defined, which is never actually used in my unit tests. The interface will be used in the actual code of your program. For instance, if you set up an API you’re going to end up using an IOC container to break dependencies. In order to do that, you’ll need to reference the interface in your code and then you’ll need to define which object belongs to the interface in your container setup, like this:
services.AddScoped<IStoreAppContext>(provider => provider.GetService<StoreAppContext>());
If you haven’t used IOC containers before, you should know that the above code will add an entry to a dictionary of interfaces and objects for the application to use. In this instance the entry for IStoreAppContext will match the object StoreAppContext. So any object that references IStoreAppContext will end up getting an instance of the StoreAppContext object. But IOC containers are not what this blog post is about (I’ll create a blog post on that subject later). So let’s move on to the unit tests, which is what this blog post is really about.
The Unit Tests
As I mentioned earlier, you’re not actually going to write unit tests against your database repository. It’s redundant. What you’re attempting to do is write a unit test covering a feature of your business logic, and the database is getting in your way because your business object calls the database in order to make a decision. What you need is a fake database in memory that contains the exact data you want your object to query, so you can check and see if it makes the correct decision. You want to create unit tests for each tiny little decision made by your objects and methods, and you want to be able to feed different sets of data to each test, or you can set up a large set of test data and use it for many tests.
Here’s the first unit test:
[Fact]
public void TestQueryAll()
{
    var temp = (from p in _storeAppContext.Products
                select p).ToList();

    Assert.Equal(2, temp.Count);
    Assert.Equal("Rice", temp[0].Name);
    Assert.Equal("Bread", temp[1].Name);
}
I’m using xUnit and this test just checks to see if there are two items in the product table, one named “Rice” and the other named “Bread”. The _storeAppContext variable needs to be a valid Entity Framework context and it must be connected to an in-memory database. We don’t want to be changing a real database when we unit test. The code for setting up the in-memory data looks like this:
var builder = new DbContextOptionsBuilder<StoreAppContext>()
    .UseInMemoryDatabase();
Context = new StoreAppContext(builder.Options);

Context.Products.Add(new Product { Name = "Rice", Price = 5.99m });
Context.Products.Add(new Product { Name = "Bread", Price = 2.35m });
Context.SaveChanges();
This is just a code snippet; I’ll show how it fits into your unit test class in a minute. First, a DbContextOptionsBuilder object is built (builder). This gets you an in-memory database with the tables defined in the mappings of the StoreAppContext. Next, you define the context that you’ll be using for your unit tests using builder.Options. Once the context exists, you can pretend you’re connected to a real database. Just add items and save them. I would create classes for each set of test data and put them in a directory in your unit tests (usually I call the directory TestData).
Now, you’re probably thinking: I can just call this code from each of my unit tests. Which leads to the thought: I can just put this code in the unit test class initializer. Which sounds good; however, the unit test runner will call your object each time it calls a test method, and you end up adding to an existing database over and over. So the first unit test executed will see two rows of Product data, the second unit test will see four rows. Go ahead and copy the above code into your constructor and see what happens. You’ll see that TestQueryAll() will fail because there will be 4 records instead of the expected 2. So how do we make sure the initializer is executed only once for the whole test class, while still running before the first unit test call? That’s where the IClassFixture comes in. This is an interface that is used by xUnit and you basically add it to your unit test class like this:
public class StoreAppTests : IClassFixture<TestDataFixture>
{
    // unit test methods
}
Then you define your test fixture class like this:
using System;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;

namespace UnitTests
{
    public class TestDataFixture : IDisposable
    {
        public StoreAppContext Context { get; set; }

        public TestDataFixture()
        {
            var builder = new DbContextOptionsBuilder<StoreAppContext>()
                .UseInMemoryDatabase();
            Context = new StoreAppContext(builder.Options);
            Context.Products.Add(new Product { Name = "Rice", Price = 5.99m });
            Context.Products.Add(new Product { Name = "Bread", Price = 2.35m });
            Context.SaveChanges();
        }

        public void Dispose()
        {
        }
    }
}
Next, you’ll need to add some code to the unit test class constructor that reads the context property and assigns it to an object property that can be used by your unit tests:
private readonly StoreAppContext _storeAppContext;

public StoreAppTests(TestDataFixture fixture)
{
    _storeAppContext = fixture.Context;
}
What happens is that xUnit will call the constructor of the TestDataFixture object one time. This creates the context and assigns it to the fixture property. Then the initializer for the unit test object will be called for each unit test. This only copies the context property to the unit test object context property so that the unit test methods can reference it. Now run your unit tests and you’ll see that the same data is available for each unit test.
One thing to keep in mind is that you’ll need to tear down and rebuild your data for each unit test if your unit test calls a method that inserts or updates your test data. For that setup, you can use the test fixture to populate tables that are static lookup tables (not modified by any of your business logic). Then create a data initializer and data destroyer that fills and clears tables that are modified by your unit tests. The data initializer will be called inside the unit test object initializer and the destroyer will need to be called in an object disposer.
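A minimal sketch of that per-test initializer/destroyer pairing in xUnit follows (the OrderTests class, its test, and its data are hypothetical, not part of the sample solution): the constructor runs before every test method and Dispose() runs after it.

```csharp
using System;
using System.Linq;
using DataSource;
using DataSource.Domain;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class OrderTests : IDisposable
{
    private readonly StoreAppContext _context;

    public OrderTests()
    {
        // xUnit runs the constructor before EVERY test method:
        // rebuild the rows the tests are allowed to modify.
        var builder = new DbContextOptionsBuilder<StoreAppContext>()
            .UseInMemoryDatabase();
        _context = new StoreAppContext(builder.Options);
        _context.Products.Add(new Product { Name = "Rice", Price = 5.99m });
        _context.SaveChanges();
    }

    public void Dispose()
    {
        // Runs after EVERY test method: destroy the mutated data.
        _context.Products.RemoveRange(_context.Products);
        _context.SaveChanges();
    }

    [Fact]
    public void TestInsert()
    {
        _context.Products.Add(new Product { Name = "Beans", Price = 1.50m });
        _context.SaveChanges();
        Assert.Equal(2, _context.Products.Count());
    }
}
```

Static lookup tables can still live in the shared TestDataFixture, since nothing mutates them; only the mutable tables need this rebuild cycle.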
Where to Get the Code
You can get the complete source code from my GitHub account by clicking here. | http://blog.frankdecaire.com/2017/04/02/dot-net-core-in-memory-unit-testing-using-xunit/ | CC-MAIN-2018-05 | refinedweb | 1,907 | 63.09 |
--
Mats
--
Mats
Further, if you follow the rule "never use sizeof() while reading a file format" then you eliminate a whole class of errors, like the aforementioned, as well as assuming a particular endianness and integer width. Without sizeof(), you are forced to read a single byte at a time, which forces you to deal with endianness and width from the very outset instead of ignoring them.
These are the structs provided by Microsoft, which equal the plain-type versions shown below them:
Code:
typedef struct tagBITMAPFILEHEADER {
WORD bfType;
DWORD bfSize;
WORD bfReserved1;
WORD bfReserved2;
DWORD bfOffBits;
} BITMAPFILEHEADER, *PBITMAPFILEHEADER;
I remember I have used these once to read bitmap files. Microsoft itself uses this way too...
Code:
typedef struct tagBITMAPFILEHEADER {
unsigned short bfType;
unsigned long bfSize;
unsigned short bfReserved1;
unsigned short bfReserved2;
unsigned long bfOffBits;
} BITMAPFILEHEADER, *PBITMAPFILEHEADER;
typedef struct tagBITMAPINFOHEADER{
unsigned long biSize;
long biWidth;
long biHeight;
unsigned short biPlanes;
unsigned short biBitCount;
unsigned long biCompression;
unsigned long biSizeImage;
long biXPelsPerMeter;
long biYPelsPerMeter;
unsigned long biClrUsed;
unsigned long biClrImportant;
} BITMAPINFOHEADER, *PBITMAPINFOHEADER;
And GameDev has this as a tutorial too:
So, your argument is "If Microsoft does it, it must be right?" ;)
And by the way, MS also does:
Code:
#include <pshpack2.h>
typedef struct tagBITMAPFILEHEADER {
....
pshpack2.h in turn does:
Code:
#pragma pack(2)
[It's a bit more complex than that - it tests for different compiler versions and a bunch of other #if type things before it does this, but that's what it amounts to].
You would have seen that if you read the line just above the header you copied and pasted from WinGDI.h - or did you just copy'n'paste from some other place that doesn't show the context of the structure definition?
--
Mats | http://cboard.cprogramming.com/cplusplus-programming/95340-loading-bitmap-2-print.html | CC-MAIN-2016-18 | refinedweb | 308 | 51.68 |
Chicago GNU/Linux User Group Planet (Planet/2.0) 2011-02-23T07:17:23+00:00

Account Hackers Have No Mojo 2011-02-22T22:07:19+00:00 <p>Today while I was writing some code, I got an instant message from a friend of mine I haven’t spoken to in a while. At first I figured it was his dumb self because he can’t spell worth a crap, or is actually pretty dumb in many cases. I shrugged it off and thought nothing of it. After another line or so the next message threw me for a loop. He wanted me to go to some website and try something. OK, I haven’t talked to you in over a year, and that is the first thing out of your keyboard? I responded with something about porn, and the next response from him is what gave it away. He used ‘<em>plz</em>‘ instead of ‘<em>please</em>‘. Sorry Matt, but you aren’t hip to the Internet chat lingo. After that, I responded letting them know I was on to them, and after a little research I knew it wasn’t Matt at all.</p> <p>Here is the conversation in its entirety. Thought it was kind of funny, especially since many people would have fallen for this. FYI, the website he wanted me to look at was, of course, revealed by Google to be one of those phishing and virus sites full of the other crud Windows users have to deal with.</p> <p><a href=""><img src="" alt="Yahoo! Messenger Hacked" title="sm_yahoo_im_hacked" width="308" height="319" class="aligncenter size-full wp-image-994" /></a></p> <p>Either his password was insanely simple, which I don’t think it was, or he will be calling me within the next couple of days stating something along the lines of, “Can you fix my computer, I think I have a virus?”</p> <p><strong>UPDATE:</strong> After that conversation I filed a report on Yahoo!, just like any good contributor does. I gave them my system information and all of the details letting them know I didn’t have to worry about clicking links. Well it seems they throw that information out and use some USER_AGENT sniffing instead. Boy did they get that all wrong. First off, here is a snippet of what they replied to me with; of course you can tell it is computer generated:</p> <pre>Dear Richard, Thank you for writing to Yahoo! Messenger. I understand that you have received an Instant Message or Messages containing a suspicious link or links. The links appear to have been sent by one or more of your contacts, but were actually sent by a malicious third party. Please do not click these links or download the associated EXE files. Remember, we always recommend that you never click suspicious links or download executable files sent from anyone including your contacts. Also, keep in mind that we are working to identify the source of the issue as well as to take down the sites that are the destination of these links. To remove and prevent further infection, please update your anti-virus software.</pre> <p>I told them previously in my report that I was using Linux and had nothing to worry about. Typically this helps with the pre-generated email responses, but in this case it didn’t. Then it went on and detailed the conversation I had with my hacked friend. After that though is what got me, and that was their information about my computer I used to contact them. Here is that:</p> <pre>Unknown
OS: unknown
Browser: Default Browser 0
REMOTE_ADDR: xxx.xxx.xxx.xxx
REMOTE_HOST: xxx-xxx-xxx-xxx.somerouter.insomelocation.onsomenetwork.net
Date Originated: Tuesday February 22, 2011 - 13:47:01
Cookies: disabled
AOL: yes</pre> <p>Umm, for one I am not using <em>AOL</em>, and the last I checked, you couldn’t use it with Linux. If their sniffing were correct, it should have looked something like:</p> <pre>ShakaDoobie
OS: Linux (probably either Ubuntu or Kubuntu, as the WordPress sniffers pick this up)
Browser: Default Browser 0 (should say Google Chrome, and it isn't my default browser)
...
Cookies: enabled
AOL: hell no!</pre> <p>Ahh the fun and excitement I tell you. OK, you can go back to doing whatever you were doing now that I wasted 5 minutes of your time.</p> <p><a href="">Account Hackers Have No Mojo</a></p>

Urinal model tag:,2011-02-15:,blog/entry;2011/2/15/duchamp-fountain-urinal-model 2011-02-19T22:22:24+00:00 <p>In December <a href="">Rob Myers</a> contacted me about a commission of making a urinal in <a href="">Blender</a> as <a href="">CC BY-SA 3.0</a>. He wanted to get it 3d printed via <a href="">Shapeways</a>, etc. I agreed to it with moderate enthusiasm. Most of the things I do are more <a href="">gobliny</a> or <a href="">monsterish</a>. But, I figured, Rob Myers is such an awesome free culture advocate and a good friend, it would be a challenge to do something different, and that it would be pretty awesome to see a model of mine 3d printed. Besides, how long could an object that looked so simple take?</p> <p>Well it ended up taking about 5 times longer than I expected. But the results, I thought, were pretty good:</p> <p> <img src="" alt="urinal render" /> </p> <p>As usual, I neglected blogging about cool things once I'd done them, but Rob Myers pushed it all over the place. First a post on his blog called <a href="">Freeing Art History: Urinal</a>. Then he <a href="">uploaded it to Thingiverse</a> and... super cool... <a href="">BotFarm</a> (from the MakerBot people!) <a href="">printed one</a>. It looks super cool. Click that last link. Click it!</p> <p>But last, and most awesomely, <a href="">Rob got his Shapeways urinal print</a>, which looks super awesome. And guess what? <a href="">BoingBoing picked it up!</a> <i>Holy cow, I'm on BoingBoing!</i></p> <p>Anyway, the <a href="">urinal.blend</a> is available if you want to open it in Blender (also <a href="">CC BY-SA 3.0 Unported</a> licensed). Hopefully you can have fun using it (digitally or physically)! I also have some <a href="">renders</a> from <a href="">alternate</a> <a href="">angles</a> up if you want to look at those.</p> <p>Anyway, sometimes when I do things, I think "maybe I should blog about or promote these things". But then I feel like they aren't that impressive, don't matter too much, and sometimes I lose enthusiasm for putting them out there (which is somewhat ironic since a good portion of my life is about encouraging other people to put things out there in a free-as-in-freedom manner). I guess maybe the biggest thing I've learned from this is that maybe I should be more confident and enthused about showing the cool things I've done. Thanks Rob, for giving me an opportunity to learn that. :)</p> <p><b>PS:</b> I mentioned that most of my 3d modeling involves monsters, spaceships, robots, etc, and that here was an excuse to do something different. But that didn't stop me from making a <a href="">spaceshipified version</a>. :)</p> cwebber DustyCloud Brainstorms Christopher Webber's boring blog. 2011-02-23T07:17:13+00:00

Gonna speak on Blender at PyCon 2011 tag:,2011-02-19:,blog/entry;2011/2/19/gonna-speak-on-blender-at-pycon-2011 2011-02-19T17:48:10+00:00 <p>Are you going to <a href="">(the US) PyCon</a> this year? I am! And I'm pretty excited about it, since I will also be <a href="">presenting on Blender's new Python API</a>!</p> <p>The <a href="">talk lineup</a> looks really great this year. If you're planning to go, and you read this, maybe consider <a href="">contacting me</a>; maybe we could say hello, potentially having one or more interesting conversations!</p> cwebber DustyCloud Brainstorms Christopher Webber's boring blog. 2011-02-23T07:17:13+00:00

Linux and GMail Part III – Thunderbird 2011-02-18T03:20:43+00:00 <p>OK, as you can probably tell now, I have been wasting a lot of time playing with GUI email clients. Why you ask? Simple, I am nuts, like that wasn’t obvious! Like I did in <a href="">Part I</a> and <a href="">Part II</a>, I am going to do the same this go round, but with <a href="">Mozilla Thunderbird</a> instead.</p> <p>First off, I am using version <code>3.1.9~hg20110206r5951</code> from the <a href="">Ubuntu Mozilla Team Daily Builds PPA</a>. Forgot I added that PPA to check out Firefox, so because of that, I have the version of Thunderbird that I do.</p> <p>A few annoyances I noticed:</p> <ul> <li>Unsubscribing from an IMAP folder does not hide that folder; you can still see it in the list, which is annoying</li> <li>I don’t use local folders, so I had to download <a href="">Mail Tweak</a> just to hide it. Mail Tweak has about 50 or so other tweaks built into it, but I am only using one of the tweaks</li> <li>You have to try a few shitty extensions until you find the right one</li> </ul> <p>I have 2 GMail accounts set up, and there are different folder views you can use. I was using the <em>Unified Folders</em> <a href="">IMAP IDLE</a>.</p> <p><a href=""><img src="" alt="Thunderbird" title="sm_tbird" width="308" height="191" class="aligncenter size-full wp-image-991" /></a></p> <p>It doesn’t look too bad in KDE. Of course it doesn’t fit in look-wise, but that is easily overlooked when it comes to functionality, speed, and usability. I have installed the <a href="">Zindus</a> <a href="">Timothy Richardson</a>.</p> <p>So, are you a Thunderbird user? Am I missing anything? Any extension that is a must have? Any tips or tricks I need to know? Speak up in the comments and let me know.</p> <p><strong>NOTE:</strong> Inbox zero!!!</p> <p><a href="">Linux and GMail Part III – Thunderbird</a></p>

Linux and Gmail II – Zimbra Desktop 2011-02-17T01:35:22+00:00 <p>Just the other day I posted about <a href="">Linux and Gmail</a> in reference to clients other than a web browser. I had noted trying out Evolution, KMail, Thunderbird, and of course Mutt which I use daily already. Well, <a href="">one of the comments</a>, by <a href="">David Fraser</a>, was about the Zimbra Desktop. I don’t think I have ever used a Zimbra client but I am fairly certain I have used their backend products in the past. Anyways, I went ahead and downloaded <a href="">Zimbra Desktop</a>, and after a fairly simple installation, have it up and running.</p> <p>The installation was fairly simple. You extract the tarball, then <code>sudo ./install.pl</code>.</p> <p><a href=""><img src="" alt="Zimbra Desktop" title="sm_zimbra" width="308" height="191" class="alignnone size-full wp-image-988" /></a></p> <p>I hit the <em>Reply</em> shortcut, the <em>r</em> key, and wouldn’t you know, my reply was ready to be created.</p> <p>Then there is the <em>Social</em> tab. This does not belong in an <em>Email</em> client at all. I don’t need Twitter, Facebook, or Digg in my email client. Also, one of the columns that were displayed to me under the <em>Social</em> tab was a Twitter trending topic, <a href="">#verysexy</a>.</p> <p>Oh, and one more thing, it is actually a web client, and uses <a href="">Prism</a>, which Mozilla is discontinuing and rolling the good stuff into Chromeless. So, keep an eye out, and if I think it is worthy of more discussion, I will add more to the comments, update this post, or create a new post in the future.</p> <p><a href="">Linux and Gmail II – Zimbra Desktop</a></p>

Thing-A-Day #14: Knitting group and final coat of paint 2011-02-15T04:43:02+00:00 <p>Mondays are when the PSOne knitting group meets, so I went and put some rows on my scarf. I also put the last bits of paint on my cabinet. One more day left on that project until I reveal what it’s for!</p> <p>After examining the dry paint from yesterday I firmly believe the foam roller is the way to go. I think I could sand that out in no time and get a nice finish. However, I think I would need to prep the wood better than I did to make this particular piece look as nice as I want, so it’s just not going to happen this time.
Oh well, another project!</p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00

Linux and GMail 2011-02-15T01:15:34+00:00 <p>I am a Mutt user, and as their site states:</p> <blockquote><p>All mail clients suck. This one just sucks less</p></blockquote> <p>I just went in and did an <code>apt-cache search gmail</code> and this is what it told me:</p> <p><img src="" alt="" title="apt-cache-search-gmail" width="469" height="265" class="size-full wp-image-986" /></p> <p>It sure would be nice to have something like <a href="">sparrow</a> or <a href="">Mailplane</a> for Linux. If you want to be opportunistic, there you go. I think an application like either of those 2 would be great.</p> <p>That’s all, just wanted to have a little fun today and it has been a while since I blogged, so I figured I would bother you all really quickly <img src="" alt=":)" class="wp-smiley" /> </p> <p><strong>EDIT:</strong> I Google’d <em>mac gmail</em> and realized they are as bad as Linux when it comes to the notifiers too. I didn’t Google <em>windows gmail</em> because they don’t matter anyways <img src="" alt=":)" class="wp-smiley" /> </p> <p><a href="">Linux and GMail</a></p>

Thing-A-Day #13: Lathe Stuff & Painting 2011-02-14T05:03:31+00:00 <p>
It’s just some 1 inch holes in a scrap of 2×4, but it does the job well enough. </p> <p><a title="Tool Rack by tsaylor, on Flickr" href=""><img src="" alt="Tool Rack" width="500" height="281" /></a></p> <p>Finally, I put what is hopefully the last coat of paint on the secret cabinet. Like I said in the last post, I think my sanding and use of a brush was a problem. I got a sander that could take my higher grit sandpaper and used some 400 on it to smooth out the painted faces. It worked very well, some other people commented that it feels like plastic rather than wood. I think I probably should have started even lower than 400, but 400 did the job. I also got a foam roller and did most of the painting with that. It left a nice finish and any bubbling that occurred during application had dissipated by the time I got back around to that side to examine it. I still had to use the brush on the inner corners, but it’s greatly reduced. Maybe that’s what those foam brushes are for. Anyway, this isn’t actually the last application of paint because the frame for the door is painted on both sides. I’ll have to get the other side painted tomorrow. However, this is hopefully the last time I paint over existing paint.</p> <p><a title="Secret Cabinet Paint Job by tsaylor, on Flickr" href=""><img src="" alt="Secret Cabinet paint job" width="500" height="281" /></a></p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Thing-A-Day #12: More paint on the secret cabinet 2011-02-14T03:13:59+00:00 <p>I put another coat of paint on the secret cabinet project. The reason this is taking so long is that I’d like to get it smoothed out to a mirror finish. I’m starting to get frustrated with that though. I keep putting on more paint and sanding but I don’t think I’m making any progress. Here are some of the problems I think I’m having:</p> <ol> <li>I’m using a paint brush. When I bought the paint I asked the Home Depot guy what the best way to get a really smooth finish was. 
He said either a Purdy brand brush or a foam roller. I went with the brush since the cabinet is small and the tray is more of a hassle. I think that was a mistake. A brush will always leave stroke marks in the surface, and that has to be sanded out. I’m going to switch to a foam roller tomorrow.</li> <li>I’m not sanding properly. I don’t know what grit of paper to use at which level of roughness, and committing the elbow grease to try something for it to be a failure is eating up too much time. I’m going to get a sander and try some lower grits.</li> </ol> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Thing-A-Day #7 – #11: Scarf knitting 2011-02-14T03:05:37+00:00 <p>Due to an explosion of work this week I was unable to do any making besides scarf knitting on the weeknights. I’ve made some nice progress on the black portion of the scarf and I’m going faster than I used to. Maybe I’ll actually be done with it in time to put it away in a box for next winter!</p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Thing-A-Day #6: More knitting on the scarf 2011-02-07T03:52:01+00:00 <p>I’m crunched at work lately but I made time to knit for a while again. Hopefully I can finish this scarf by the end of February. I also bought some new yarn and needles to make the next project go a bit faster.</p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Thing-A-Day #5: More progress on the secret cabinet 2011-02-06T04:01:24+00:00 <p>Here’s some more progress on the secret cabinet project. The door is framed and both are painted. Also, the plexiglass is cut to size for the door, but I’m not going to drill and nail it on until the painting is done. 
Here’s a current picture.</p> <p><a href="" title="Secret cabinet project 2 by tsaylor, on Flickr"><img src="" width="500" height="375" alt="Secret cabinet project 2" /></a></p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Thing-A-Day #4: More Knitting 2011-02-05T03:51:32+00:00 <p>Work crushed me today and I had a lot of other stuff to do so I just did some knitting again tonight. I feel like I’m making good progress on the scarf but there’s just a ton of rows to go through. No picture today, it looks pretty much the same as last time.</p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Thing-A-Day #3: Knit Scarf 2011-02-04T05:54:41+00:00 <p>I’ve been working on this scarf for a while with the PSOne knitting group. I didn’t have time today to get back in to working on the secret cabinet project so I knit on this for a while.</p> <p><a title="Thing-A-Day #3, work on my scarf by tsaylor, on Flickr" href=""><img class="alignnone" src="" alt="Thing-A-Day #3, work on my scarf" width="375" height="500" /></a></p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Thing-A-Day #2: Progress on a cabinet 2011-02-03T05:03:23+00:00 <p>Today I worked on a larger project. I don’t want to reveal the whole thing until it’s done but it’s basically a cabinet. So far I cut the sides of the cabinet, routed out a groove to hold the back, and screwed it all together. Here’s a picture:</p> <p><a href="" title="secret cabinet project 1 by tsaylor, on Flickr"><img src="" width="500" height="375" alt="secret cabinet project 1" /></a></p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 February is Thing-A-Day month! Thing #1: Snowshoes 2011-02-02T04:08:53+00:00 <p>February is <a href="" target="_blank">thing-a-day month</a>, and I’m participating. 
Well, I’m not blogging with them, but I’m making something every day and blogging here about it.</p> <p>In honor, or perhaps in defiance, of the snowpocalypse setting in tonight, I made snowshoes. It’s just a rectangular wooden frame with a small platform to stand on and wrapped in fabric to add surface area. It didn’t work great, but it worked well enough. You can see the effectiveness in the photoset linked below:</p> <div class="wp-caption aligncenter"><a href="" target="_blank"><img title="Snowshoe victory!" src="" alt="" width="375" height="500" /></a><p class="wp-caption-text">Snowshoe victory!</p></div> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Blizzard of 2011 2011-02-02T03:45:06+00:00 <p>As I sit here writing this right now, we here in Chicago are hunkered down with the expectation of receiving more than 24 inches (60cm), of snow. Hoping I can get this posted before the power totally goes out here due to the wind. The wind is making it even worse. We have sustained winds.</p> <p><strong>EDIT:</strong> Forgot to add a link to pics I will be updating of our storm just in case you were interested. Head over to my <a href="">Picasa Pictures</a> and enjoy. Getting ready to head out and try to stream via U-Stream, because it is nuts right now.</p> <p>Well, I was looking at the state of documentation in both <a href="">KDE</a> and <a href="">Kubuntu</a> and realized it really needs a lot of help. Now that I am starting to have a bit more time available, I am looking at making my way back into contributing to both projects again. I have missed doing the work and hanging out with everyone online.</p> <p.</p> !</p> <p><a href="">Blizzard of 2011< I found the perfect project to dive into Haskell tag:blogger.com,1999:blog-3916802132854520262.post-1349234404922009183 2011-01-30T00:00:44+00:00 I've had a recurring project in my life, a modular software synth. <a href="">See here</a> to get an idea of what I'm doing. 
The difference being that with software synths, you're not limited to how many components you have and how they're configured. I somehow thought I came to this revelation on my own years ago, but there's plenty of this sort of thing out there, such as <a href="">SuperCollider</a>, which sounds like it's fairly popular.<br /><br />I've been trying to get into Haskell, but have been struggling to get myself out of my Python comfort zone. A friend of mine actually told me about SuperCollider recently, and I realized that my own synth would be the perfect project to get me started on Haskell, and one day last week I got inspired to get started. I found a simple example to start with of a sine wave being played through PulseAudio and I went from there. <a href="">Here's my repo</a>.<br /><br />Unfortunately, you need PulseAudio to run this. I'm working on either getting Alsa output or file generation working soon.<br /><br />When I was making this a few times before, I made it in C++, the latest instance being several years ago. Amazingly enough, despite still being a novice in the language, I found that doing this in Haskell is easier. The infinite lazy lists work perfectly as signals. Before, I considered each component to be an object that had a value, and input signals. And I had to have a global "tick" that conveyed info between items. (And I thought that was really neat at the time.) Now I just have components be functions that "output" (return) infinite lists, and take infinite lists as inputs. It all sortof just sorts itself out.<br /><br />Speed is sacrificed to be sure, at least so far, but real-time synths have been made for Haskell, so I bet I can profile it and optimize it significantly.<br /><br />Also you will notice that everything is hard coded! I sortof like it that way, it's amusing, particularly when it starts making beat sequences (which is a point I got to in my old version), but I'll probably make an interface at some point.
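The components-as-lazy-streams idea translates naturally to Python generators. As a hedged sketch of the pattern only (this is not code from the linked repo; the sample rate and all names here are my own):

```python
import itertools
import math

RATE = 8000  # assumed sample rate; the post doesn't specify one

def sine(freq):
    """A component is just a function returning an infinite stream of samples."""
    for n in itertools.count():
        yield math.sin(2 * math.pi * freq * n / RATE)

def mix(a, b):
    """Wire two components together by averaging their streams sample-by-sample."""
    for x, y in zip(a, b):
        yield (x + y) / 2

# Consumers pull samples lazily, like taking from an infinite Haskell list:
signal = mix(sine(440.0), sine(660.0))
samples = list(itertools.islice(signal, 4))
```

Note there is no global tick here: each generator advances only when something downstream asks for the next sample, which is the property the lazy lists provide for free.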
Or maybe not, Haskell is a nice interface.<br /><br />Here's why I'm writing now of all times though. On top of being functional with the lazy infinite lists and such, Haskell also has a type system from Nazi Germany. This is actually an advantage, though. I have some trouble remembering all the unit conversions involved in the oscillators, when I'm dealing with cycles, seconds, and samples. So today, I made a type framework that provided functions that did the conversions properly. When I was writing out an improved version of my oscillator function, I used these types.<br /><br />It took me a long time to figure out exactly how I wanted it all to work, and how to make it work. I would start on an expression, and then realize that I was adding different units, and Haskell wouldn't let me do it. Or sometimes the compiler told me so. But eventually I got through it. <a href="">This is the monstrosity that resulted.</a><br /><br />And the kicker: I used this to make a new version of the square wave oscillator, and it sounded exactly the same as the old one, <i>the first time I ran it.</i><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 Qt in the land of Gnome-based desktops: The issue of copyright in Free software 2011-01-23T17:43:31+00:00 <p>Recently Mark Shuttleworth wrote about how <a href="">Qt will become part of the Ubuntu 11.10 desktop</a>.</p> <p>The question is: How do you make that happen? All technical matters aside, how do you encourage GNOME developers to consider using Qt for their applications?</p> <p>To me, one major consideration in further developing the Qt-to-GNOME bridge, and encouraging developers to use Qt in GNOME-based desktops, involves copyright.
I think the GNOME project, and its large group of developers, would be more likely to embrace Qt if Canonical did not put the dconf binding work (or other such Qt/GNOME integration work) under strict Canonical ownership via their <a href="">contributor agreement</a>.</p> <p>The issue is that the contributor agreement gives all copyrights of the work (even contributions made by non-Canonical employees) to Canonical, and permits Canonical to relicense the work (even make it proprietary) at their discretion. To me, this would present a considerable risk for the GNOME developers and for the GNOME project.</p> <p>The folks at Canonical have not yet indicated whether or not the contributor agreement applies, or will apply, to the early Qt/dconf binding work. My thinking is that, if Canonical is disinclined to have the larger GNOME project use Qt, Canonical will request full copyright ownership of any Qt/dconf work. Thus, Canonical would “own the bridge” between the land of Qt and the land of GNOME, and anyone who wants to use that bridge would have to do so knowing that it could eventually be made proprietary.</p> <p>Moreover, I think it would be a bit ironic if Canonical put the Qt/dconf work under their contributor agreement. As I understand it, Canonical’s main justification for requiring copyright assignment is that they “wrote the code” for that project, and would like to maintain ownership of it. While folks at Canonical may have done the initial Qt/dconf bindings work, a primary reason that Canonical is even able to safely use Qt in their business is because Nokia <a href="">opened up Qt, and removed the copyright assignment requirement</a> from Qt contributions.
Surely the Qt codebase, along with all of its associated tools, is much larger than any binding work (no matter how significant), so Canonical’s reasoning wouldn’t seem to be as applicable here.</p> <p>However, if Mark and the rest of the folks at Canonical actually want GNOME developers to embrace Qt on equal footing with GTK, they will either donate out the Qt/GNOME integration work to the larger GNOME community, or they will push the integration work upstream to Qt.</p> <p>I’m hopeful that the folks at Canonical will choose either of the latter two options and make their initial Qt/Gnome integration work available under the same copyright-free terms that Qt has been made available to them. I agree with Mark when he writes, “it’s the values which are important, and the toolkit is only a means to that end.” While it may ruffle some feathers initially, having Qt as a viable option for development in GNOME-based desktops can only improve the free software ecosystem by giving developers more choices in the tools that they are able to use.</p> <p>As a closing note, some of what I’ve written here is speculation and opinion, but if I’ve misunderstood anything, or if anyone can shed further light on this topic, please share a note in the comments.</p> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 Testing Multiple Login Sessions Simultaneously tag:blogger.com,1999:blog-3916802132854520262.post-3365435037514887745 2011-01-02T18:04:36+00:00 <div>One annoyance in developing websites is that you sometimes have to log in and out all the time to test interaction between multiple users.</div><div><br /></div><div>Have you ever visited or administered a website (say, example.com) which lets you visit "www.example.com" or "www2.example.com", etc, and doesn't forward to "example.com"? Did you ever try logging in at one subdomain, and then switch to another? You'll be logged out, it's a different login session.
If you needed to test something remotely with multiple users logging in at once, that's a nice trick to use.</div><div><br /></div><div>Now let's do the same thing locally (*nix systems only afaik, sorry):</div><div><br /></div><div><b>In /etc/hosts you should see:</b></div><div><br /></div><pre>127.0.0.1 localhost</pre><div><br /></div><div><b>Add the following:</b></div><div><br /></div><pre>127.0.0.1 localhost2<br />127.0.0.1 localhost3<br />127.0.0.1 localhost4<br /></pre><div><br /></div><div>And so on for however many you need. Now each one will access your site with a different session, so you can log in as a different user for each.</div><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 RelaxNG, Entities, and Namespaces 2010-12-19T16:32:19+00:00 <h2>Entities?</h2> <p>Before I start, though, if anyone is wondering what I’m talking about when I say, “entities,” they are a handy variable-like feature of XML and a couple of other markup languages. For example, they allow you to type something like <code>&exaile;</code> into your document, and then have it magically parsed as:<br /> <code><pre class="brush: xml"><guiseq><gui>Applications</gui><gui>Multimedia</gui><gui>Exaile</gui></guiseq></pre></code><br /> The final, rendered result would be a familiar GUI click-path like, <code>"Click Applications > Multimedia > Exaile."</code> While entities have their limitations and are not ideal for all use-cases, they serve a purpose. <a href="">This web page</a> gives a good overview of entities and how to use them.</p> <h2>Using Entities in Mallard / RelaxNG documents</h2> <p>To set up and use entities in your XML-based document, you basically need three things.
You need a file that contains the entities you want to use*, you need to declare where those entities are tracked, and you need to actually use the entities in your document.</p> <h3>The Entities File</h3> <p>Previously, step one was very easy. You would just create a file that contained values like this:<br /> <code><pre class="brush: xml"><?xml version="1.0" encoding="UTF-8"?> <!-- MENUS --> <!ENTITY abiword '<guiseq><gui>Applications</gui><gui>Office</gui> <gui>AbiWord</gui></guiseq>'> <!ENTITY about-me '<guiseq><gui>Applications</gui><gui>System</gui> <gui>Users and Groups</gui></guiseq>'></pre></code><br /> With Mallard and RelaxNG, however, the namespace of the parent node is no longer inherited by the entities**, so you now need to declare the namespace within each entity, like this:</p> <p><code><pre class="brush: xml"><?xml version="1.0" encoding="UTF-8"?> <!-- MENUS --> <!ENTITY abiword '<guiseq xmlns=""><gui>Applications</gui> <gui>Office</gui><gui>AbiWord</gui></guiseq>'> <!ENTITY about-me '<guiseq xmlns=""><gui>Applications</gui> <gui>System</gui><gui>Users and Groups</gui></guiseq>'></pre></code></p> <h3>Out of the Woods</h3> <p>Once you make it past step one, the rest is a walk in the park.</p> <p>Step two is to declare your entities in the start of your documentation files. I modified a <a href="">DocBook 5 example</a> to come up with this:<br /> <code><pre class="brush: xml"><?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE page [ <!ENTITY % entities-xubuntu SYSTEM "libs/xubuntu.ent"> %entities-xubuntu; ]> <page xmlns="" type="topic" style="task" id="music"></pre></code></p> <h3>Wrapping Things Up</h3> <p>Step three, using your entities in your documents, is no different than what you would have done with DocBook 4 or any other XML-based syntax.
For example, typing <code>&abiword;</code> will be parsed as:<br /> <code><pre class="brush: xml"><guiseq><gui>Applications</gui><gui>Office</gui><gui>AbiWord</gui></guiseq></pre></code><br /></p> <p>and will cause “<code>Applications > Office > AbiWord</code>” to magically appear in your rendered documentation.</p> <p>If you have any questions, corrections, or suggestions (as I’m sure you’re all keen to be chatting about XML entities), feel free to leave me a note in the comments.</p> <p>* I’m using external entities, which requires a separate file, but you can also use named or character entities.<br /> ** Apparently the namespace of the parent node <a href="">is no longer inherited by the entities</a>, so you need to declare the namespace in the entity itself.</p> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 Cryptonomicon: A Lesson for my Hyper-Logical Friends tag:blogger.com,1999:blog-3916802132854520262.post-3634689946251510003 2010-12-19T11:04:55+00:00 <div>I'm currently reading Cryptonomicon by Neal Stephenson. I'm not very acquainted with literature at large, so forgive me if I'm being ignorant here, but it seems that this book is unique or among very few that are in wide release and yet somewhat esoteric. That is to say, anybody can appreciate it, but I think it speaks specifically to computer programmers and mathematicians, and may not be 100% understood by those who are unfamiliar with certain mathematical and engineering concepts, and who don't share that mentality. Then again, the purpose could be to provide some insight to outsiders who may want to understand the hyper-logical nerd mentality. Tom Wolfe seems to do a similar thing, for instance, with the investment bankers in Bonfire of the Vanities.</div><div><br /></div><div>Though I think Neal Stephenson must have a closer personal connection with this mentality. It's a great book for a nerd because it's literature we can really relate to.
It's told from the perspective of those of us who try to make logical sense of everything, see patterns all around us, and are confused by strange things like social niceties.</div><div><br /></div><div>All in all I think it teaches an important lesson to nerds and non-nerds alike. I only just now crossed the 1/3 way mark (it's like 1100 pages), but I just came across some particular dialog which I think is particularly insightful. In this scene, Randy Waterhouse pulls Eberhard Föhr aside during a business meeting, and explains to him why, for their own legal protection, information has been withheld from them by one of their business partners, Avi. Eberhard, being of this nerd mindset, is frustrated that his business partners are not behaving logically. Randy, being of the same mindset but somewhat more enlightened, explains to Eberhard the realities of dealing with illogical people, but he does so in logical terms that Eberhard can relate to. This conversation is amusing like a lot of things in this book, because it demonstrates how us analytical types like to deconstruct everything.</div><div><br /></div><div>Rather than risk inviting Neal Stephenson's lawyers (I have no idea how likely a scenario this is, but I don't care to do the research right now) I'll just invite you to <a href="">read this page via Google Books.</a></div><div><br /></div><div><br /></div><div>I appreciate a couple things about this passage. Firstly, I appreciate that Randy's character is sort of an enlightened techie, who we should aspire to, who respects the qualities of other sorts of people, even if he doesn't understand their mentality. Business people clueless about technology, idealistic designers with a vision, techies who can't design a usable interface to save their life, we should all accept our own limitations of understanding, respect the others, and occasionally yield our own ideals for the sake of other ones.
(ex: if "doing it right" means taking twice as long, and failing in the market, what use is your ideally laid out code if nobody's going to use it?)</div><div><br /></div><div>The other thing I like about this passage is, as I mentioned above, the logical way that it approaches illogical people. Some nerds have a tendency to refuse to approach the world in anything other than a logical manner. Normal People may try to explain to them that the world, particularly other individuals, aren't rational at all, and we should stop seeing things so logically. I include myself in this group of nerds, so honestly, this line of argument is ridiculous to me. The universe is logical. But, I think that sometimes we as nerds are just Doing It Wrong, and we can take a cue from Randy here.</div><div><br /></div><div>What we need to do is to appreciate that the fact that people act irrationally, out of emotion, is just a condition of the world. Just as we accept that animals are irrational, or that the sun is hot. It's a datum. Further, accept that you yourself, the nerd, are also emotional, particularly when people don't act logically. This frustration with others' illogical behavior is based on an expectation for people to act contrary to their nature. You're ignoring a data point. You're mad at the sun for being hot. You're a non-techie who's mad at your computer for doing something other than exactly what you told it to. Now look who is being irrational? I'm going to agitate a little and propose that we are in fact being hypocritical here.</div><div><br /></div><div>The main problem I think we sometimes have is the distinction between Logic and Logical Faculties. The expectation of perfection in Logic is not the same as expecting a human to have perfect Logical Faculties. The universe works by rational laws. People are part of the universe, so their workings are rationally explainable. 
But this is entirely distinct from their Logical Faculties being able to perfectly model the world around them. Furthermore, people's Logical Faculties being able to model the world around them is distinct from their ability to defend it from any of their Emotional Faculties getting in the way. We humans are but animals who happen to possess a limited amount of logical faculties.</div><div><br /></div><div>Expecting people to act in a rational straightforward manner is like expecting a computer to compute beyond its capacity. A problem may be Logically solvable. There is a perfect Logical progression toward the answer. If we treated computers the same way we sometimes treat other humans, we would demand that we should be able to stick the problem into a computer and get an instant output. But again, Logical Faculties are in limited supply. Somehow we don't seem to have a problem accepting this in computers. In fact, we have entire sub-fields of computer science, taking RAM, HD, and time limitations as data, and creating a whole new set of Logical problems. Why not accept the same limitations and challenges in humans?</div><div><br /></div><div>Perhaps it's that there is one fundamental difference between computers and humans, which is that our departure from being perfect logic solvers is not just in our processing capabilities, but also, as Randy pointed out in the passage linked above, in our interfaces. Human interfaces are more like neural networks than serial connections. To gain access to the Logical Faculties, one must enter a pattern that is accepted by the neural network. The patterns include such things as social niceties and innuendo. Some of us have simpler interfaces than others. (And as Randy described, some may even require other humans to act as intermediate interfaces. When I worked at Oracle, there was a guy who was fluent in both Engineer and Customer, and intermediated all conversation. 
I understand this is a common thing to have in a company.)</div><div><br /></div><div>And you, the nerd, are a neural network, at your core, not a Turing machine. You operate in that domain. That means you have the natural ability, however impaired by years sitting in front of the computer, to interface with other neural networks, if you would just accept your nature. This is in fact the only way you can communicate with other humans, so you might as well accept it for what it is. You may try to approximate a Turing machine, but your neural network nature will still show on occasion. For instance, as I pointed out above, when you are frustrated about others not behaving like Turing machines.</div><div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 Promise me you won't fail like this 2010-12-15T06:08:15+00:00 <p><strong>UPDATE:</strong> after just over 2 days, all of my whining may have paid off. I woke up 12/15 to find my GMail working again. I would like to think a Google employee who contacted me on Twitter got it fixed or at least escalated for me.</p> <p><img src="" alt="" title="gmail_fail" width="500" height="138" class="alignnone size-full wp-image-924" /></p> <p>If you can’t read what that says, here it is:<br /> <strong>Temporary Error(500)</strong><br /> <cite>We’re sorry, but your Gmail account is temporarily unavailable. We apologize for the inconvenience and suggest trying again in a few minutes.</cite></p> <p>Well, 2 days later, not a few minutes, it is still dead. So I did what everyone else would do in a situation like this, I head to <em>Tech Support</em>.</p> <p>So I post my problem initially with a subject of <em>Temporary Error(500) – Numeric Code: 93</em>. The response I got back:</p> <blockquote><p>Did you check this link? Have you done this?</p></blockquote> <p>At this point I have steam coming out of my ears.
How in the hell could you ask me that question if you read my initial post? How? I don’t get it. Supposedly this person is a ‘Level 4′.</p> <p>OK, my venting, or dribble, is done here. Everyone enjoy your day and the rest of your week!</p> <p>NOTE: I have another GMail account for cycling stuff that works perfectly. I have tried no less than 6 other browsers, 2 other operating systems, cleared cache, history, cookies, and candy bars.</p> <p><a href="">Promise me you won't fail like this</a> is a post from <a href="">Richard A. Johnson</a>'s <a href="">blog</a>.</p> That’s the last time I trust Ubuntu to upgrade correctly 2010-12-09T00:18:37+00:00 <p>I host <a href="" target="_blank">Barcamp Chicago’s website</a>. It’s a custom Django site on Ubuntu. I recently upgraded the server to the latest LTS and later discovered my Postgresql database was gone. Postgresql had gone from 8.3 to 8.4 in the upgrade, but since it didn’t warn me about needing to migrate the data I assumed it took care of that. That was a mistake. I had three databases on Postgresql 8.3 and none of them were present anymore. I read on a forum that I could reinstall 8.3, but that person was working from Karmic, not Lucid like I was. I ultimately had to:</p> <ol> <li>add the Karmic repositories to apt,</li> <li>shut down 8.4,</li> <li>install 8.3,</li> <li>go through the normal data dump procedure for upgrading Postgresql manually,</li> <li>uninstall 8.3,</li> <li>start 8.4,</li> <li>and load all the data again.</li> </ol> <p>After that the CLI showed my databases were present so I relaunched the barcamp site, but it still wasn’t connecting. A little more googling revealed that Postgresql likes to increment the port number it listens on when there are two versions installed on the same machine.
That was indeed the problem, so I changed the port number back, restarted it and Apache, and finally I’m back to where I started.</p> <p>I should have known better than to trust Ubuntu to migrate the data, but even if I did that myself I’d never expect a minor version upgrade to listen on a different port when that upgrade disables the old version anyway.</p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 The blurry line between the web and the desktop 2010-12-07T13:52:50+00:00 <p>I implement an HRIS web-app for my dayjob, and one of my clients was having trouble with our application today. She said that she had uploaded some graphics to the site, and that now they were gone. This sounded strange to me – I’ve never seen any content just disappear from our site. That being said, I checked a few things from my end, took down some notes, and explained that I would look into it a bit further and call her right back.</p> <p>About 5 minutes later, I got an email from her saying that she had fixed her problem. She had rebooted her computer, and now the pictures were back. Apparently she either thought that rebooting her computer had fixed an issue that had been occurring at our data center (?!?), or that the issue was caused by something on her own computer (and that rebooting her entire PC had somehow fixed it).</p> <p>I’ve never seen such confusion between web and desktop-based apps before, and wonder if others have ever seen the same thing.</p> <p>She is actually one of my favorite clients, so I think I’m going to ask her a few more questions about her computing problems from yesterday.
Then I’ll probably need to explain how clearing a browser’s cache and cookies can be a means of “rebooting” a web application.</p> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 Flourish Open Source Conference – Call for Speakers 2010-11-23T01:54:03+00:00 <p>The University of Illinois at Chicago is once again planning to host their <a href="" target="_blank">Flourish Open Source conference</a>, and have put out a <a href="" target="_blank">call for speakers</a>. The event will be held April 1-3, 2011, on the university’s campus on the near Northwest side of Chicago.</p> <p>Though the conference is run by student volunteers, you wouldn’t know it judging by how well it is run. This will be its fifth year, and each event has been well-organized, informative and fun.</p> <p>For those who are interested in speaking:</p> <p>Presentations are typically an hour long (including Q&A) and discuss<br /> open-source-related matters of technical, community, or industry importance.<br /> Past presentations have tackled a diverse array of topics – from kernel hacking and programming languages, to community/project management and women in open-source. If they get several proposals around a particular topic, they may opt to build a panel discussion.</p> <p>Workshops are usually three hours long, and explore a particular topic in an<br /> intensive, hands-on environment. In the past, Flourish has offered workshops<br /> on Android, Websphere, Erlang, Processing, Plone and Drupal. The organizers<br /> provide all necessary connectivity.</p> <p>The organizers will be accepting proposals via their <a href="" target="_blank">speaker proposal</a> page up through the Christmas holiday.</p> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 Ubuntu Chicago Maverick Release Party Today!
2010-11-21T17:18:40+00:00 <p><strong><u>WHO</u></strong><br />Ubuntu Chicago LoCo Team</p> <p><strong><u>WHAT</u></strong><br />Maverick Release Party</p> <p><strong><u>WHEN</u></strong><br />Today! Sunday, November 21, 2010 from 3PM until 6PM</p> <p><strong><u>WHERE</u></strong><br /><a href="">Pumping Station One</a><br />3354 N. Elston<br />Chicago, IL 60618<br /><br /><small><a href="">View Larger Map</a></small></p> <p>Please stop by and say hi. We will have some snacks, some CDs, and a lot of fun! Hope to see you there!</p> <p><a href="">Ubuntu Chicago Maverick Release Party Today!</a> is a post from <a href="">Richard A. Johnson</a>'s <a href="">blog</a>.</p> Emacs appointment notifications via XMPP tag:,2010-11-21:,blog/entry;2010/11/21/emacs-appointment-notifications-via-xmpp 2010-11-21T16:48:37+00:00 <p> Since I've started using Emacs' appointment notifications with orgmode, I've wished that I could get notifications via XMPP. I think it's the most sensible system to use; I have it running on my desktop, my phone, and my laptop, and the whole issue of "figuring out which device to send this notification to" has already been evaluated and solved by the XMPP community long long ago (back when everyone called XMPP Jabber, even ;)). </p> <p> I initially thought I'd use a <a href="">SleekXMPP</a> bot connected to emacs via D-Bus, but then I decided that maybe I would eventually want to add more commands to this that integrated more closely with emacs, so maybe I should use emacs lisp directly. I had heard of <a href="">Jabber.el</a> but thought that it was mainly aimed at users who want a client, and that writing a bot in it would end up cluttering up my emacs with extra UI stuff I don't want. Then I was pointed at <a href="">Steersman.el</a>, and that seemed like a cleanly written bot, so I decided to give it a shot.
</p> <p> I was running a newer version of JabberEl than the copy of Steersman's code I looked at, so it took a little bit to figure out how to adjust for the multi-account code, but once I did that the implementation happened fairly quickly. Here's the relevant code: </p> <p> </p><pre>;; Copyright (C) 2010 Christopher Allan Webber. (require 'jabber) (load-file "~/.emacs.d/emacs-jabberbot-login.el") (defun botler->appt-message-me (min-to-app new-time appt-msg) "Message me about an upcoming appointment." (let ((message-body (format "Appointment %s: %s%s" (if (string-equal "0" min-to-app) "now" (format "in %s minute%s" min-to-app (if (string-equal "1" min-to-app) "" "s"))) new-time appt-msg))) (jabber-send-sexp (jabber-find-connection "thisbot@example.org") `(message ((to . "sendto@example.org") (type . "normal")) (body () ,message-body))))) ; I don't care when people come online to my bot's roster. (setq jabber-alert-presence-hooks nil) (setq appt-display-format 'window) (setq appt-disp-window-function 'botler->appt-message-me) (setq appt-delete-window-function (lambda ()))</pre> <p> Adjust "thisbot@example.org" with your bot's JID and "sendto@example.org" with who you want to send messages to. You can replace emacs-jabber-bot-login.el with whatever you want to login with, but you probably want to setq jabber-account-list and then run (jabber-connect-all). Note that if you're connecting with a self-signed cert with Jabber.el you'll need to do: </p> <p></p><pre>(setq starttls-extra-arguments '("--insecure")) (setq starttls-use-gnutls t)</pre> <p> I haven't yet figured out how to whitelist my own self-signed cert, and passing in --insecure makes me feel like a monster, but it works for now. Maybe it's about time I finally got my ssl cert signed for dustycloud.org. </p> <p> Anyway! It works, and I've been successfully getting appointment messages from my emacs session over IM for the last week, and it's pretty great.
Next up, configuring things so that I can retrieve my agenda over IM when I request it and be able to IM myself new tasks and events. </p> cwebber DustyCloud Brainstorms Christopher Webber's boring blog. 2011-02-23T07:17:13+00:00 allCombinations: leaveOut tag:blogger.com,1999:blog-3916802132854520262.post-1410413499752719095 2010-11-15T01:43:00+00:00 <b>leaveOut</b><br /><br />Ok, on my previous post I brought up a small Python module I was inspired to throw together while writing tests. It's still sitting in a gist, though I'll probably move it to a real repo before too long:<br /><br /><a href=""></a><br /><br />So I admit, as I was posting it, it occurred to me that to a large extent this stuff could be replaced with a nested for loop. For instance, this:<br /><br /><pre>for lst in allCombinations([1, 2, oneOf(3,4), oneOf(5,6)]):</pre><br />can be pulled off with:<br /><br /><pre>for x in (3, 4):<br />    for y in (5, 6):<br />        lst = [1, 2, x, y]</pre><br />Not a huge gain necessarily on my part. So as I was using it in my testing I realized I once again had engineered something for a tiny use that, neat as it is, could have been done much faster by brute force. But then, I realized another thing I could add that would make my code much more concise. I've added another keyword called "leaveOut". It lets you opt to not have the element show up at all.
Here's an example:<br /><br /><pre>allCombinations([1,2, oneOf(3, leaveOut), oneOf(4, leaveOut)])</pre><br />This will return:<br /><br /><pre>[ [1, 2, 3, 4], [1, 2, 3], [1, 2, 4], [1, 2] ]</pre>And of course, the "leaveOut" case will omit dictionary entries and object data members as well.<br /><br /><b>BTW</b><br /><br />I should also mention another use case I thought of, "leaveOut" aside, that might be a real pain to do without an aid such as allCombinations, which is dynamically created structures, with an arbitrary number of variables:<br /><br /><pre>allCombinations( [ oneOf(1, 2) ] * x )</pre><br />I've just generated all possible lists of either 1 or 2, of an arbitrary length, which can be set at runtime. Or how about something a bit more fun:<br /><br /><pre>allCombinations( [ oneOf( *range(y) + [leaveOut] ) for y in range(x) ] )<br /></pre>Taking all combinations of lists of length x, where each element can equal any integer from zero to its index, and then adding combinations where items are omitted. Not horribly useful, but complicated.<br /><br />To do these in a standard way you'd need x for loops, which you can't do directly. (I bet you could do it with recursion).<br /><br /><span>Fixes</span><br /><br />I'll also mention that I fixed a couple general errors. oneOf on Data members had a big bug. And now if you don't have oneOf in your structure, allCombinations just returns a list containing only the original structure, instead of looping to death.<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 allcombinations - generating combinations of python structures tag:blogger.com,1999:blog-3916802132854520262.post-6526394923062999111 2010-11-12T15:53:52+00:00 Alright, on a whim I decided to make another tricky thing in Python.
This one is less of a hack, and is more likely to be useful.<br /><br />So let's say you want to do something with all combinations of... something.<br /><br /><pre>[5, 6, oneOf(7,8,9), oneOf(10, 11, "shazaam")]</pre><br />So you want to turn this structure into all the possibilities represented within:<br /><br /><pre>[<br />[5, 6, 7, 10],<br />[5, 6, 7, 11],<br />[5, 6, 7, "shazaam"],<br />[5, 6, 8, 10],<br />[5, 6, 8, 11],<br />[5, 6, 8, "shazaam"],<br />[5, 6, 9, 10],<br />[5, 6, 9, 11],<br />[5, 6, 9, "shazaam"],<br />]</pre><br />Well with allcombinations, you can do just that:<br /><br /><pre>from allcombinations import allCombinations<br /><br />allCombinations( [5, 6, oneOf(7,8,9), oneOf(10, 11, "shazaam")] )</pre><br /><a href="">Here it is.</a> The gist includes a more complicated example.<br /><br /><span>Features</span><br />The oneOf should be able to reside almost anywhere in your expression. It can be in a list (as seen here), in a dict, or even in the attribute of an object. It can also reside in a list within a dict within an object's attribute, etc, as long as it's nowhere within an unsupported container type.<br /><br /><span>Limitations:</span><br />This will only work if oneOf resides in a list, dict, or an object's attributes. It won't work anywhere within a set, or any other structure I haven't thought of. If you try it in something unsupported, the oneOf object should just stick around in all your combinations.<br /><br />I'm probably actually going to use this, particularly (again) for testing. Anyone else think they'd find it useful? Should I package it?<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 Testing (Django) views with pyquery tag:blogger.com,1999:blog-3916802132854520262.post-2090118035059062933 2010-11-12T10:39:11+00:00 <a href="">PyQuery</a> is basically what it sounds like.
Using <a href="">jQuery</a> syntax, you can query and even manipulate XML files. Obviously we don't (yet!) have Python in the browser, so it's not useful in the same domain, but it can help out in dealing with XML in general, in the same way as, say, lxml, but without having to learn about things like ElementTree for simple cases. It's particularly good for XHTML because jQuery (and thus PyQuery) uses CSS syntax for class= and id=. Which brings me to how I'm using it:<br /><pre><br />from django.test.client import Client<br />from pyquery import PyQuery<br />from django.test.testcases import TestCase<br /><br />...<br /><br />class TestSomeViews(TestCase):<br /><br /> def testAView(self):<br /><br /> client = Client()<br /><br /> ...<br /><br /> response = client.get("/someurl/")<br /><br /> self.assertTrue("expected text" in PyQuery(response.content)("#someid").html() )<br /></pre><br />(If you're unfamiliar with testing Django views, <a href="">see this</a>.)<br /><br />For some basic tests, you can just search the entire response html, and not have to worry about where it shows up. But suppose you're searching for a username in a particular part of your response. You're pretty likely to find that username elsewhere on the page, so you have to select out the part of the file you expect it. I think this is much easier than using a regex.<br /><br />So what this bit of PyQuery does is find the tag with the id of "someid" (presumably there's only only one, being an id), and returns the html within that tag. (If you search for a class that returns multiple tags, it seems that a simple call to .html() will only return the contents of the first one. This very well may match jQuery's behavior, I'm admittedly not that familiar, but just a head's up.) 
For more details look at the <a href="">PyQuery API</a>.<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 Linux Software Developer Required 2010-11-10T05:42:12+00:00 <p>My buddy Curtis is at it again with his awesome company, <a href="">Bluecherry</a>..</p> <p>Well Curtis hit me up on IRC this evening, and it seems my <a href="">previous post where he was looking for a Qt dev</a> worked out well for him. He asked me again to help out, and of course I am more than happy. Seems I am doing better at getting other people employed than I am with getting myself employed. Oh well.</p> <p:</p> <ul> <li>Prior experience in Linux based software design / implementation including design</li> <li>Extensive knowledge of Ubuntu, including building / maintaining Debian packages</li> <li>Extensive knowledge of the Video4Linux2 and ALSA sound API</li> <li>Prior experience with gstreamer and RTSP</li> <li>Prior experience with SQLite, Postgres and Mysql</li> <li>Excellent verbal and written communication skills</li> <li>Linux operating system development (device and kernel level) recommended</li> <li>Strong knowledge of C, PHP, Javascript required. Knowledge of Perl and Python suggested</li> <li>Strong knowledge of Apache2 and prior experience in writing PHP modules</li> <li>Played a leading role in the design and develop of previous client / server based applications</li> <li>Previous work with and understanding of working with video / audio formatting / codecs including MPEG4 and H.264</li> <li>Internet and operating system security fundamentals</li> <li>Sharp analytical abilities and proven design skills</li> <li>Strong sense of ownership, urgency, and drive</li> <li>Demonstrated ability to achieve goals in a highly innovative and fast paced environment</li> </ul> <p>Sound like your type of job? 
If so, head on over to the <a href="">Monster page for the position, Linux video surveillance software developer</a>.</p> <p>There are great perks, so hurry up while it lasts!</p> <p><a href="">Linux Software Developer Required< Xfce 4.8pre1 is released 2010-11-07T19:21:47+00:00 <p>Today the Xfce team released the first official pre-release build of what will later become Xfce version 4.8.</p> <p>From the release announcement:</p> <p>This release incorporates major changes to the core of the Xfce desktop<br /> environment and hopefully succeeds in fulfilling a number of long time<br /> requests. Among the most notable updates is that we have ported the<br /> entire Xfce core (Thunar, xfdesktop and thunar-volman in particular)<br /> from ThunarVFS to GIO, bringing remote filesystems to the Xfce desktop.<br /> The panel has been rewritten from scratch and provides better launcher<br /> management and improved multi-head support. The list of new panel<br /> features is too long to mention in its entirety here. Thanks to the new<br /> menu library garcon (formerly known as libxfce4menu, but rewritten once<br /> again) we now support menu editing via a third-party menu editor such as<br /> Alacarte (we do not ship our own yet). Our core libraries have been<br /> streamlined a bit, a good example being the newly introduced libxfce4ui<br /> library which is meant to replace libxfcegui4.</p> <p>Perhaps the most important achievement we will accomplish with Xfce 4.8<br /> is that, despite suffering from the small size of the development team<br /> from time to time, the core of the desktop environment has been aligned<br /> with today’s desktop technologies such as GIO, ConsoleKit, PolicyKit,<br /> udev and many more.
A lot of old cruft has been stripped from the<br /> core as well, as has happened with HAL and ThunarVFS (which is still<br /> around for compatibility reasons).</p> <p>There will be several additional pre-releases prior to the final release in January. Read the <a href="" target="_blank">full release announcement</a> (best viewed in Firefox) for more information about this particular pre-release build.</p> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 Project News & Status Updates 2010-10-27T02:43:03+00:00 <p>Here’s a somewhat quick run-down of some projects in which I’ll be participating, and some other projects that, while I might not be a direct participant, I am curious to watch develop.</p> <p><span><strong>Xfce Updates</strong></span></p> <p>I see the LXDE project get a good amount of attention lately, in large part (I think) because it uses somewhat less memory than Xfce. Xfce is still going strong, though, and plans are in the works for the eventual release of Xfce 4.8.</p> <p>Jérôme Guelfucci recently provided a brief update on <a title="Xfce 4.8 status updates" href="" target="_blank">what’s going on with Xfce</a>, and one of the big things is a push for updated documentation. I’ll be contributing to that, and will likely be borrowing some of the user-help topic “stubs” that have been put together by the GNOME Documentation team. I’ll be sure to share any relevant topic stubs with them, too.</p> <p>Jannis Pohlman has also started the process of <a href="" target="_blank">forming an Xfce foundation</a>.
Jannis notes that this would make Xfce a legal entity with a board of directors, and that it would help to raise funds through sponsors and other contributors for hackfests and other events.</p> <p><span><strong>My Documentation Projects</strong></span></p> <p>Lately I have been continuing work on <a href="" target="_blank">gedit documentation</a> and have also done some initial work on updating the <a href="" target="_blank">Ubuntu Packaging Guide</a>. I should have the gedit docs well-drafted within another week or so, but I welcome suggestions and contributions with regards to the Packaging Guide.</p> <p>Thus far, I’ve drafted the Packaging Guide in Mallard, and although Mallard is XML-based, it is much simpler than DocBook. It is not difficult to learn, and you can draft-up a nice-looking, topic-focused documentation set with it rather quickly. I also know that there was a UDS session about the Packaging Guide today, so I welcome any feedback that resulted from that session, too.</p> <p><span><strong>Other Documentation Projects of Note</strong></span></p> <p>In non-Linux-help news, there are a couple of interesting DITA-related projects that I’ve been wanting to mention. If you haven’t heard of it before, DITA* stands for the <a href="" target="_self"><em>Darwin Information Typing Architecture</em></a>, an XML-based syntax originally developed (and later open-sourced) by IBM. The toolkit that processes the syntax, the DITA Open Toolkit, is Java-based, though, which I think has somewhat slowed its adoption in the Linux community. (Currently, only OpenSUSE packages DITA and the DITA Open Toolkit, but their implementation is a bit broken, perhaps due to an outdated version of Saxon in the OpenSUSE repositories.)</p> <p>When people ask me what the big deal is about DITA**, I like to point them to <a href="" target="_blank">this white paper</a> (PDF). 
It seems to provide a pretty clear picture of what DITA can help you do, even if it does make it look easier to implement than it is in real life.</p> <p>There are a couple of DITA tools on the horizon that look to make it a bit easier to work with, though. A group of Drupal developers are working with DITA developers to build <a href="">a Drupal-based DITA authoring platform</a>. From what I can tell, it will be released under an open-source license. They are just in the planning stages now, but I’ve relayed the Ubuntu Documentation / Ubuntu Manual / Ubuntu Learning Team’s requirements from what we talked about this past summer when considering the Ubuntu Learning Center.</p> <p>Also, Don Day, the chair of the OASIS DITA Technical Committee, has put together a Free-as-in-Freedom web-based DITA platform. It’s in its early stages, too, but you can get a look at it <a href="">here</a>. You can log in as a guest, and then select <strong>topic tools</strong> from the bottom of the page to have a go at editing the document.</p> <p><span><strong>A Docs Conference? In Ohio?</strong></span></p> <p>Finally, there is word on the street about the possibility of a docs conference in Cincinnati during the first weekend in June. I’ve expressed interest in helping with planning and organizing that conference. For now I will keep my calendar open, and will post more news here as conference plans solidify.</p> <p>*Whenever you do a Google search for DITA, it’s typically a good idea to exclude the phrase “Von Teese” from your search query. That is, unless you want your documentation searches to also include results for a fabulous burlesque dancer / entertainer. 
If you do want dancer / entertainer results in your documentation search queries, then make sure to include the phrase “Von Teese” in your queries.</p> <p>** Generally, people do not ask me about DITA.</p> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 Django Model Validation tag:blogger.com,1999:blog-3916802132854520262.post-1235413319916005675 2010-10-27T02:15:04+00:00 I'm really excited about model validation, in Django circa 1.2. It's going to save me from the disastrous hack of ModelForms I've used so far. One wants something to validate data before saving it, since validating it <span>by</span> saving it is apparently a Bad Idea, since you may have already made changes to the database by the time the error is thrown.<br /><br />I wanted this done automatically, and Django only had form validation until 1.2, so I hacked forms into some sort of all-purpose wrapper for models. I've since learned to program Django like an adult, and model validation came around just in time anyway. But there's a problem I have with it.<br /><br />First I should point out that there's a few different things that get validated, but the two I'll point out are A) checking that required fields are set and B) whatever I put into the clean() function. Judging by some errors I've gotten while debugging/testing, Django seems to be checking B before A.<br /><br />Now, when I use a ModelForm, sometimes I want to use the commit=False option when I save. This returns a model that hasn't been committed to the database. Sometimes there's extra data I want to add to the model that the form didn't supply. Sometimes that data is in fact necessary for the model to be valid. So clearly Django shouldn't check A, and it doesn't. Here's the funny thing though: it does check B. Why would it do that? 
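To make the scenario concrete before going further, here is a framework-free sketch of the ordering problem: the custom clean() logic ("B") can run while required fields ("A") are still unset, so clean() has to be written defensively. The class below is a made-up illustration of the pattern, not Django's actual implementation:

```python
class Sketch:
    """Toy model: 'customer' and 'quantity' are both required fields."""
    REQUIRED = ("customer", "quantity")

    def __init__(self, **fields):
        self.__dict__.update(fields)

    def clean(self):
        # The "B" pass: custom validation. Guard with hasattr() because
        # this can run before the required-field check has complained.
        if hasattr(self, "quantity") and self.quantity < 1:
            raise ValueError("quantity must be positive")

    def full_clean(self):
        self.clean()  # B runs first, mirroring the surprising order
        missing = [f for f in self.REQUIRED if not hasattr(self, f)]
        if missing:   # the "A" pass: required fields
            raise ValueError("missing required fields: %r" % missing)

obj = Sketch(quantity=3)  # customer arrives later, as with save(commit=False)
obj.clean()               # passes, thanks to the defensive hasattr() guard
obj.customer = "alice"
obj.full_clean()          # now everything validates
```

Because B tolerates the missing attribute, the partially built object survives the early pass, and the A pass still catches genuinely missing fields later.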
I can understand checking when I call is_valid(), since I can control when that's called, making sure I added everything first.<br /><br />So far the consequences of checking B early have been trying to access members that haven't yet been set, in my clean() function. So in clean() I just check for those items, and let it pass if they don't exist. I figure, yeah, it'll pass certain tests when it shouldn't. But if it's really running through validation (not just B as in the save (commit=False) case) that means it'll catch the missing members on the A pass anyway.<br /><br />Maybe I got something wrong, but I thought it was a weird design decision.<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 This past Sunday I went to see the ugliest football game I have... 2010-10-25T19:50:46+00:00 <img src="" /><br /><br /><p>This past Sunday I went to see the ugliest football game I have seen in a long time, Bears vs Redskins. At least I was hanging out with a good friend so the game didn’t totally suck.</p> <p><a href="">Getting Ready for the Opening Kick Off< UDS Remotely 2010-10-25T18:17:00+00:00 <p>So, as you probably already know, since it is quiet in Orlando, I am not at UDS. Because of that, I am tuning in remotely. Thanks to Harald for the <a href="">Amarok script</a> that is working amazingly right now. Thus far, here is my little summary of what I have witnessed today:</p> <ul> <li>Mark announced Unity by default, in turn uniting a bunch of pissed off people on Twitter. Psst!
<em>gnome-desktop-environment</em> will still be there</li> <li>The live video feed of Mark looked black and white, except Ubuntu was aubergine</li> <li>Gobby (<em>gobby-0.5</em> is the package you want to use by the way, or <em>kobby</em> as it actually works with this UDS) can’t make up its mind on if it wants to be up or down</li> <li>Kubuntu is thinking about going with the stable Kontact/KMail, stable being version 3.5.10, until KMail2 is alive and well</li> <li>Scott brought up default browsers, in turn causing every Kubuntu developer to bring out some fangs and claws</li> <li>Oh and concerning the live video feed, it is true about the camera adding 5 or more pounds, but I also learned it also removes 5 or more hairs on your head at the same time</li> </ul> <p>Working remotely, I also realized something else, and this concerns the plenaries. Remotely, these would be best at the end of the day, as then I would only have to worry about the lunch break as a disruption for remote participation. Instead of 2 hours of disruption, there would only be an hour. Also, I remember the plenaries after lunch while being at UDS physically. At times, they were hard to stay awake for after having a belly full of food. I think having them the last hour of the day as the closer would be good, as it brings everyone to the same location at that time.</p> <p>Also, playing the UDS live audio streams along with <a href="">Severed Fifth</a> is quite amusing.</p> <p><a href="">UDS Remotely</a> is a post from <a href="">Richard A. Johnson</a>'s <a href="">blog</a>.</p> Wherein I Go From One Business To Three 2010-10-25T01:18:29+00:00 <p>A few months ago I decided that I wasn’t going to tolerate excuses any more and it was time to get started with my own business.
Since that time nothing has gone as I planned, but my quest to owning a business couldn’t be developing any better. I have my hands in three different businesses right now.</p> <p>The first is <a href="" target="_blank">RuggedScents</a>. I mentioned this a little bit in my last post. <a href="" target="_blank">Sacha De’Angeli</a> and I started this business half as a goof to enter in the BARcompany competition at <a href="" target="_blank">BARcamp Chicago</a> 2010. Surprisingly we won the competition, so we used the prize money to develop a production process and start selling our line of masculine colognes. We have the process figured out for our first product; Smoque, a campfire scented cologne. Our products will be available for purchase soon.</p> <p>The next one to come along was <a href="" target="_blank">Maker Tees</a>, though it has its roots in some things I did much earlier in the year. Pumping Station: One wanted to sell logo t-shirts, but nobody was interested in making that happen. As they were crippled by indecision, a group of members including myself stumbled upon a separate awesome t-shirt idea, the “Sir, I Practise Hacking” shirt. I had no choice but to have those shirts made, and since I was doing it already I made the Pumping Station: One shirts too. I carried around a big box of shirts, selling them in person until I broke even, then I temporarily dropped out of the t-shirt business. The process was so simple and appealing though that I decided to take it larger scale and sell a wide variety of maker themed shirts, and seek to sell shirts on behalf of hackerspaces.</p> <p>Finally, I’m a big part of another venture that’s just getting some momentum. A few months ago I had the idea that a social network for hackerspace members could be very popular and useful. 
I mentioned this to a fellow PS:One member Jordan Bunker, and that mention along with some discussions he had with the <a href="" target="_blank">Space Federation</a> led to us working out a set of features, a few possible business models, and deciding to actually build it. We will be building this software and developing the community around it in the coming months, and with sponsorship from the Space Federation we may actually get paid to make it.</p> <p>This is definitely a very exciting, and far too busy, time in my life.</p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 The Cast of The League on FX did a live tour to promote the show... 2010-10-20T21:26:57+00:00 <img src="" /><br /><br /><p>The Cast of The League on FX did a live tour to promote the show and stopped in Chicago. The show was great and you must get your ass out to see them if they come to your town.</p> <p><a href="">The Cast of The League< OfflineIMAP and Byobu hacks 2010-10-18T15:55:42+00:00 <p>Just a quick post showing a couple of hacks I have done using <a href="">OfflineIMAP</a> and <a href="">Byobu</a> this easy. 
So here we go.</p> <p><strong>OfflineIMAP</strong><br /> First, here is my <code>~/.offlineimaprc</code> configuration:</p> <div class="wp_syntax"><div class="code"><pre class="bash">[general]
metadata = ~/.offlineimap
accounts = GMAIL
maxsyncaccounts = 1
ui = Noninteractive.Quiet

[Account GMAIL]
localrepository = LocalGmail
remoterepository = RemoteGmail

[Repository LocalGmail]
type = Maildir
localfolders = ~/.maildb/GMAIL
#restoretime = no

[Repository RemoteGmail]
type = Gmail
remotehost = imap.gmail.com
remoteuser = your_gmail_login@gmail.com
remotepass = your_gmail_password
ssl = yes
realdelete = no</pre></div></div> <p>To fire off OfflineIMAP, I use a cronjob:</p> <div class="wp_syntax"><div class="code"><pre class="bash">*/5 * * * * $HOME/bin/cron-run-offlineimap.sh</pre></div></div> <p>And my <code>~/bin/cron-run-offlineimap.sh</code> looks like this:</p> <div class="wp_syntax"><div class="code"><pre class="bash">#!/bin/sh
ps aux | grep "\/usr\/bin\/offlineimap"
if [ $? -eq "0" ]; then
    logger -i -t offlineimap "Another instance of offlineimap running. Exiting."
    exit 0
else
    logger -i -t offlineimap "Starting offlineimap..."
    chmod +x $HOME/.byobu/bin/1234_OFFLINEIMAP
    offlineimap
    logger -i -t offlineimap "Done offlineimap..."
    chmod -x $HOME/.byobu/bin/1234_OFFLINEIMAP
    exit 0
fi</pre></div></div> <p>You can see that this script changes the file mode bits to executable when it runs, and removes the executable bit when it finishes, on the <code>~/.byobu/bin/1234_OFFLINEIMAP</code> file, which is a Byobu script.</p> <p><strong>Byobu</strong><br /> Here is what <code>~/.byobu/bin/1234_OFFLINEIMAP</code> looks like:</p> <div class="wp_syntax"><div class="code"><pre class="bash">#!/bin/sh
printf "\005{= rw}IMAP\005{-}"</pre></div></div> <p>So now every time my OfflineIMAP cronjob runs, I will get <span>IMAP</span> in my Byobu bar.</p> <p>A super simple hack that lets me know when OfflineIMAP is running. Another reason I use this is because sometimes OfflineIMAP hangs, and when it does, I will know this if <span>IMAP</span> stays displayed in Byobu after a minute or so. Then I can check <code>/var/log/syslog</code> to see exactly when OfflineIMAP started. Normally OfflineIMAP runs for about a minute on my server every check. This could all be streamlined into one script as well with Byobu, but I know you don’t want to fire off processes or other things that may cause resource hogging.</p> Richard Johnson Richard A. Johnson - Blog Archives Contains all blog posts from. They could be personal, they could be about Linux, heck they could even be about you! 2011-02-23T05:17:19+00:00 View from my first ever USMNT soccer match. Action on the Field...
2010-10-13T20:35:36+00:00 <img src="" /><br /><br /><p>View from my first ever USMNT soccer match.</p> <p><a href="">Action on the Field< Significant Whitespace in Python Data Structures tag:blogger.com,1999:blog-3916802132854520262.post-5861468665941316699 2010-10-05T11:49:12+00:00 I recently wrote a program in Python for parsing files. I'm still pretty naive when it comes to functional programming, but I'm excited about it, so I wanted it to be more functional in style. It had a complicated data structure representing the file structure, instead of a loop with a bunch of if-thens. By Python standards I may have gone a bit overboard. Guido probably would not have approved of my code (not to mention what follows in this blog post).<br /><br />So as a result, most of the program became whitespace irrelevant. Huge dicts of lists of tuples, etc. It made me think that relevant whitespace might become handy for data too. And while talking on IRC about it this morning I realized I could sort of hack it using decorators and generators. So here's what it looks like:<br /><br /><a href="" target="_blank"></a><br /><br />So I have two examples:<br /><br /><ul><li><span>example.py</span> - This shows how you can define a more complicated structure with whitespace instead of a bunch of ){(}[].<br /></li></ul><ul><li><span>inlinefunc.py</span> - This demonstrates a sort of side-effect benefit. You can have multi-line functions inline in a list (or tuple or dict). Usually you're stuck with lambdas, and of course that starts to look confusing too.</li></ul>I'd like to clean up the syntax. Obviously having to have a decorator before and a yield after isn't great. I'll have to think about how I could do that.
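For readers who haven't opened the gist, the inline-function half can be approximated with an ordinary registering decorator. This is a hypothetical sketch of the general shape of the trick, not the gist's actual code:

```python
def into(container):
    """Decorator factory: append the decorated function to a container."""
    def deco(fn):
        container.append(fn)
        return fn
    return deco

handlers = []  # the data structure the multi-line functions end up in

@into(handlers)
def shout(msg):
    msg = msg.upper()
    return msg + "!"

@into(handlers)
def whisper(msg):
    return "(" + msg.lower() + ")"

print([f("Hey") for f in handlers])  # → ['HEY!', '(hey)']
```

Each function here is several statements long, which a lambda could not express, yet it still lands inside the list.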
Maybe make an even dirtier hack by doing introspection.<br /><br />Any thoughts?<div class="blogger-post-footer"><img width="1" height="1" src="" alt="" /></div> Dan noreply@blogger.com Ill Logic Tech tag:blogger.com,1999:blog-3916802132854520262 2011-02-15T22:17:16+00:00 Astronomers help! 2010-09-30T02:13:15+00:00 Dear [...]<img alt="" border="0" src="" width="1" height="1" /> Freddy Martinez Technological Freedom Freedom -- Ubuntu -- Free Software -- Life : Not neccesarily in that order 2011-02-23T07:17:21+00:00 Moved to teh Deklabbs (DeKalb) tag:,2010-09-11:,blog/entry;moved-to-dekalb 2010-09-11T16:43:44+00:00 <p>About a month ago Morgan and I left our wonderful apartment in Andersonville, Chicago, IL and moved most of our things to our new apartment in DeKalb, IL (or as my friend Miles calls it, "the Deklabbs", a nonsense name that's just silly enough to stick). Morgan has started a graduate studies and teaching assistantship program at Northern Illinois University, and since I telecommute, there was no real reason not to make the move. Meanwhile my good friend <a href="">Lunpa</a> is currently moving into our old apartment in Andersonville. Oh Andersonville, I miss you.</p> <p>The Dekalbbs aren't too bad of a place to live. It's a small college town, has a nice food co-op, etc etc. Except that I really don't know much of anyone. I've only found one other person in the area who is interested in programming and free software, and it's unclear if I can attend the university GLUG except as a presenter. In Chicago, my group of friends (aside from people I met at college and work) WAS the free software community.</p> <p>Which all in all means I'll probably be back now and then to attend usergroups. After all, teh Deklabbs is only about two hours away from Chicago, and I have several friends who have offered me couch-space if I need somewhere to crash. Chicago, you haven't gotten rid of me quite yet.</p> <p>Overall, DeKalb is not bad so far.
It's a small, quiet, beautiful town, not too far from Chicago... oh and the rent is cheap. The rent is <i>so cheap</i>.</p> cwebber DustyCloud Brainstorms Christopher Webber's boring blog. 2011-02-23T07:17:13+00:00 BARcamp Chicago 2010 and Why I Haven’t Built a Hero’s Fountain 2010-09-03T04:00:37+00:00 <p><a href="" target="_blank">RuggedScents</a>, a campfire scented cologne put together by <a href="" target="_blank">Sacha De’Angeli</a> and myself. Check out his write-up of <a href="" target="_blank">how the idea came about</a>!</p> Tim Saylor Tim Saylor Web Developer 2011-02-15T05:17:22+00:00 Open Letter to our newest community blogger: UofI President Mike Hogan 2010-08-31T01:41:23+00:00 <p>Congratulations on your position and your new blog. I find it refreshing that you have taken the time out of your busy schedule to share with students and community members what you believe to be in the interest of our community and our young people. This is, of course, not without some sense of irony… I have a hard time grasping how a man making $620,000 in the days of budget cuts, tuition hikes, and unpaid furloughs could possibly have any clue as to what would be best for either our students or our community. That said, it does seem like you have quite a knack for looking out for what’s best for you and the other wealthy elites and blue-bloods in our community, so maybe it's not your ability to act in the best interest of a population or organization that should be in question, but perhaps your ability to do so for people other than the rich, connected, and yourself.</p> <p>I would like to extend my sincerest congratulations for managing to land such a lucrative position on the backs of working-class Illinois residents.
I don’t know how you did it, but I suppose at least I must concede that you’ve displayed an ability to succeed in an endeavor we all had hoped was impossible: fooling the whole of the State of Illinois into making another rich, connected person even more rich and connected at tax-payer expense.</p> <p>I look forward to seeing how you will spin what is best for yourself and your connections into what is best for students, the Champaign-Urbana community, and the state of Illinois as a whole. I wish you the best, but in all reality I do hope you are ashamed of yourself.</p> Mike Stemle manchicken here... Rantings of a Questionably Sane Chicken 2010-08-31T02:18:40+00:00 Duck, Duck, Gnu: Mallard and DocBook 5 support in Emacs 2010-08-29T20:31:30+00:00 <p.</p> <div id="attachment_300" class="wp-caption alignleft"><a href=""><img class="size-full wp-image-300 " title="emacs" src="" alt="" width="192" height="128" /></a><p class="wp-caption-text">From flickr user tsmall. Attribution-ShareAlike 2.0 Generic license</p></div> <p.</p> <p>Thus, to take full advantage of the latest in <a href="" target="_blank">duck</a>-<a href="" target="_blank">based</a> documentation technologies, I needed to modify my .emacs file and the nxml-mode files themselves. What follows is an overview of exactly what I did in the hopes that others can make use of the same changes, too.</p> <p><span><strong>An nXML-mode foundation</strong></span></p> <p>Before we get started, though, you should know that much of what follows is derived from information on <a href="" target="_blank">this website</a>. 
I encourage you to visit the site, as it provides an introduction to how nXML-mode is configured, and enough of an introduction to using nXML-mode to make you at least modestly productive right away.</p> <p><strong><span>Part One: Setting up your .emacs file</span></strong></p> <p.</p> <p><pre class="brush: xml">;; '("/usr/share/emacs/site-lisp/nxml/")))
;;--
;; Make sure nxml-mode can autoload
;;--
(load "/usr/share/emacs/site-lisp/nxml/rng-auto.el")
;;--
;; Load nxml-mode for files ending in .xml, .xsl, .rng, .xhtml .page
;;--
(setq auto-mode-alist
      (cons '("\.\(xml\|xsl\|rng\|xhtml\|page\)\'" . nxml-mode)
            auto-mode-alist))</pre></p> <p.)</p> <p><span><strong>Part Two: Modifying the nXML-mode configuration files</strong></span></p> <p>The second step is to modify the nXML-mode configuration files themselves. This will add the appropriate nXML-mode secret sauce to handle Mallard and DocBook 5 documents.</p> <p>Fortunately for you, I grabbed the latest nXML-mode source files from the <a href="" target="_blank">nXML-mode website</a>, and made the appropriate modifications myself. You can download my customized nXML-mode files <a title="nxml-mode files" href="" target="_blank">from this archive</a>.</p> <p>Here’s what I changed from the default setup:</p> <ul> <li>I added the docbookxi.rnc and mallard-1.0.rnc schemas to the schemas directory</li> <li>I modified the schemas.xml file, thus including the docbookxi.rnc and mallard-1.0.rnc schemas as part of the XML-validation process.</li> </ul> <p>Note that I have used the DocBook 5 “docbookxi.rnc” schema which allows for <a href="" target="_blank"> xinclude</a> functionality. If you want to use the schema that does not allow for use of xinclude features, you’ll need to adjust the files accordingly.
The current DocBook 5 schemas are <a href="" target="_blank">available here</a>.</p> <p><strong><span>Part Three: Copy your nXML-mode files to the proper location<br /> </span> </strong.</p> <p><span><strong>Wrapping up</strong></span></p> <a href="" target="_blank">nXML-mode website</a>, and view some of the nXML-mode information and resources that are available as you get started.</p> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 My ZaReason Laptop 2010-08-28T02:35:41+00:00 <p>First off let me make a quick apology to Earl over at <a href="">ZaReason</a> for publishing this write up a bit late. Right after I received the new laptop, I had my daughter for the end of the summer, so needless to say, I decided to spend time with her. Once again sorry Earl.</p> <p!</p> <p!</p> <p? <img src="" alt=":)" class="wp-smiley" /> </p> <p><strong>Unboxing</strong><br /> <div id="attachment_881" class="wp-caption alignnone"><a href=""><img src="" alt="zareason box" title="zareason_1a" width="300" height="206" class="size-medium wp-image-881" /></a><p class="wp-caption-text">The nicest box in the industry</p></div></p> <div id="attachment_882" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_1b" width="300" height="225" class="size-medium wp-image-882" /></a><p class="wp-caption-text">Inside the ZaReason Box</p></div> <div id="attachment_883" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_1c" width="300" height="225" class="size-medium wp-image-883" /></a><p class="wp-caption-text">The <a href="">Kubuntu</a> CD</p></div> <div id="attachment_884" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_1d" width="300" height="225" class="size-medium wp-image-884" /></a><p class="wp-caption-text">The ZaReason Open Hardware Warranty. 
This rocks, keep reading to find out more!</p></div> <div id="attachment_885" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_1e" width="300" height="225" class="size-medium wp-image-885" /></a><p class="wp-caption-text">The ZaReason Quick Start Paper</p></div> <p><strong>Sexy Is The Name</strong><br /> <div id="attachment_886" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_2a" width="300" height="137" class="size-medium wp-image-886" /></a><p class="wp-caption-text">The ZaReason Notebook Lid, it needs stickers doesn't it?</p></div></p> <div id="attachment_887" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_2b" width="300" height="194" class="size-medium wp-image-887" /></a><p class="wp-caption-text">ZaReason!!! Stickers!!!</p></div> <div id="attachment_888" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_2c" width="300" height="94" class="size-medium wp-image-888" /></a><p class="wp-caption-text">The ZaReason Notebook Right Side: DVD burner and a lonely USB port.</p></div> <div id="attachment_889" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_2d" width="300" height="64" class="size-medium wp-image-889" /></a><p class="wp-caption-text">The ZaReason Notebook Back: Security lock spot, a plugged hole, some rectangle plastic thing I haven't figured out (yet?), power, VGA, HDMI, and USB times two.</p></div> <div id="attachment_890" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_2e" width="300" height="119" class="size-medium wp-image-890" /></a><p class="wp-caption-text">The ZaReason Notebook Left Side: Gigabit Ethernet, headphone jack, microphone jack, funky card slots.</p></div> <div id="attachment_891" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_2f" width="297" height="300" class="size-medium wp-image-891" /></a><p 
class="wp-caption-text">The ZaReason Notebook Opened: Anyone order a real keyboard?</p></div> <p><strong>Hey, where did the Windows key go?</strong><br /> <div id="attachment_892" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_3" width="94" height="89" class="size-full wp-image-892" /></a><p class="wp-caption-text">The Ubuntu Key</p></div></p> <p><strong>This baffles me, no Window’s sticker either</strong><br /> <div id="attachment_893" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_4" width="300" height="224" class="size-medium wp-image-893" /></a><p class="wp-caption-text">Energy, NVIDIA, and Ubuntu</p></div></p> <p><strong>Open Hardware What?</strong><br /> <div id="attachment_894" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_5a" width="300" height="55" class="size-full wp-image-894" /></a><p class="wp-caption-text">The ZaReason Open Hardware Warranty</p></div></p> <p>In a recent post by Jono Bacon concerning <a href="">his new ZaReason laptop</a>, he talks about this and that, and says:</p> <blockquote><p>Zareason are a company that I think really gets Open Source.</p></blockquote> <p>Jono, you know I love you, but let me fix this for you. ZaReason is a company that really gets the meaning of being <strong>OPEN</strong>. ZaReason provides you with what they refer to as the <em>Open Hardware Warranty</em>. What exactly does this mean? Just look at the next picture to see what they say.</p> <div id="attachment_895" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_5b" width="350" height="66" class="size-full wp-image-895" /></a><p class="wp-caption-text">The ZaReason Open Hardware Warranty</p></div> <p>That’s right, you are free to tinker with your hardware. Go ahead, open up the case, there is no <em>Warranty Void if Seal Broken</em> sticker like everyone else uses. 
Heck, they even provide you with a small ZaReason screw driver to do just this. So I did what anyone else would do in this case, I opened it up!</p> <div id="attachment_896" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_5c" width="300" height="225" class="size-medium wp-image-896" /></a><p class="wp-caption-text">The ZaReason Notebook Hardware</p></div> <p>I am in love! I didn’t void my warranty! If I broke it, I fix it, but if I didn’t break it, then ZaReason will fix it. How kick ass is that?</p> <p><strong>Up and running</strong><br /> <div id="attachment_897" class="wp-caption alignnone"><a href=""><img src="" alt="zareason" title="zareason_6" width="350" height="75" class="size-full wp-image-897" /></a><p class="wp-caption-text">BLOOOOOOOOOOOO!</p></div></p> <p.</p> <p.</p> <p><strong>Conclusion</strong><br /> <a href="">KDE Software Compilation</a> out of the box! I am super happy and super in love with my new machine!</p> <p!</p> <p>If you have any questions please feel free to ask in the comments.</p> Richard Johnson Richard A. Johnson - Blog Archives Contains all blog posts from. They could be personal, they could be about Linux, heck they could even be about you! 2011-02-23T05:17:19+00:00 Upcoming FLOSS-related Events in the Midwest 2010-08-21T15:35:02+00:00 <p>Here’s a quick rundown of some upcoming Free and Open-Source software events in the Chicagoland / greater-midwest area.</p> <p><strong>Saturday and Sunday, August 21 and 22, 2010</strong>: <a title="Barcamp Chicago" href="" target="_blank">Barcamp Chicago</a></p> <ul> <li>Running all day and night this Saturday and Sunday</li> <li>215 E. 
Ohio Street in downtown Chicago</li> <li>Numerous presentations, and a “BarCompany” startup hackathon</li> </ul> <p><strong>Sunday, August 29</strong>: Ubuntu Chicago’s <a href="" target="_blank">Ubuntu Global Jam</a></p> <ul> <li>Socialize and work with the team on bug-triaging and fixing, documentation, and more!</li> </ul> <p><strong>Friday through Sunday, September 10th through 12th</strong>: <a href="" target="_blank">Ohio Linux Fest</a></p> <ul> <li> Keynote presentations by Stormy Peters, Executive Director of the GNOME Foundation, and Christopher “Monty” Montgomery, creator of the open-source audio format Ogg Vorbis</li> <li>Tracks dedicated to getting started in Free and Open-Source software, Linux security, and so much more.</li> <li>For Ubuntu users, there will be an “Ubucon” during the morning and mid-afternoon hours on Friday.</li> </ul> <p><strong>Saturday and Sunday, October 2nd and 3rd</strong>: <a href="" target="_self">Barcamp Milwaukee</a></p> <ul> <li>A wide range of sessions relating to Free and Open Source software and general geekery.</li> <li>Like Barcamp Chicago, this event will run throughout the entire weekend, including overnight activities between Saturday and Sunday.</li> </ul> <p><strong>Saturday and Sunday, October 22nd and 23rd</strong>: <a href="" target="_blank">Erlang Camp</a>, Chicago</p> <ul> <li>A weekend devoted to learning how to confidently put Erlang code into production for your company, group, or individual project.</li> </ul> <p><strong>Events at the Chicago hacker space</strong>, <a href="" target="_blank">Pumping Station One</a></p> <ul> <li>Check their <a href="" target="_blank">calendar</a> for their ongoing list of events and activities</li> </ul> Jim Campbell Notes from the mousepad user help, free and open source 2011-02-23T05:17:24+00:00 Kubuntu and Kubuntu Netbook 10.04.1 Released 2010-08-18T03:15:56+00:00 <p><!--3e67ae7cd0f24b9db9ab33b35bf07760--></p> <p><img src="" alt="" /></p> <p>Along with the latest point 
release of <a href="">Ubuntu</a>, the <a href="">Kubuntu</a> developers have been busy whipping up the 10.04 release in to a shape good enough to present you all with the latest point release. This release will bring along with it any security fixes, bug fixes, or updated packages or applications that have been made available since the original 10.04 release.</p> <blockquote><p>Both.</p></blockquote> <p><strong>Note to those already running 10.04:</strong> There is nothing you need to do. As long as you have been doing your regularly scheduled updates then you are running the same as 10.04.1.</p> <p>Please visit the website under <a href="">Get Kubuntu</a> to see your options for obtaining this latest release.</p> <p>Thank you for listening, and we will now return you to your regularly scheduled programming.</p> Richard Johnson Richard A. Johnson - Blog Archives Contains all blog posts from. They could be personal, they could be about Linux, heck they could even be about you! 2011-02-23T05:17:19+00:00 | http://feeds.feedburner.com/PlanetChicagoGLUG | crawl-003 | refinedweb | 17,771 | 61.67 |
Created on 2006-06-07 03:09 by zseil, last changed 2006-06-07 07:00 by tim_one.
Recently test_exceptions was reporting random
memory leaks. It seems that this was caused
by the pickling part of testAttributes, which
uses a random integer for pickling protocol.
With this patch applied, I couldn't reproduce
the leak anymore, although I didn't run it 666
times.
The patch also includes a fix for the pickling
part of the test (previously it was checking
the attributes of the original exception, not
the unpickled one), removes another unused
import and moves all the imports to the
top of the file.
Logged In: YES
user_id=31435
Woo hoo! This appears to cure the oddball "leaks" for me
too, but no idea why. Outstanding anyway ;-)
I fiddled the patch a bit to exercise both pickle and
cPickle, and to use pickle.HIGHEST_PROTOCOL instead of the
hardcoded 2. Checked in as revision 46705 on the trunk.
Thank you! | http://bugs.python.org/issue1501987 | crawl-002 | refinedweb | 161 | 64.61 |
Python 3.2 won't import cookielib
I've searched everywhere for this and just can't seem to find the answer. I checked my python version and it is version 3.2. When I try to import
cookielib
, I get:
ImportError: No module named cookielib
I saw that in Python 3.0 it was renamed to
http.cookiejar
and that it will automatically import
cookielib
.
I thought that maybe there was some wild error in my python config, so I decided to try and import
http.cookiejar
like this one
import http.cookiejar
. It does not work, and I get an error:
EOFError: EOF read where not expected
.
This is not the error I was expecting, because it
import http.cookies
only imports fine.
Anyone have a solution to this problem? What am I missing?
Full error:
source to share
Automatic renaming business only applies if you are using 2to3 . Therefore you need
import http.cookiejar
.
The error
EOFError: EOF read where not expected
is only outputted by Python sorting. This is most likely caused by a race condition fixed in Python 3.3 where multiple processes were trying to write concurrently with the pyc file. Removing all .pyc files can be a workaround.
source to share
My initial guess is that you have a corrupted library file. Inside your Python installation, browse
lib/python3.2/http/cookiejar.py
and scroll down to the bottom. Mine (Python 3.2.2) ends up in a method definition
save()
with
finally: f.close()
If you see anything else, your installation is likely to be corrupted and I recommend reinstalling it.
source to share | https://daily-blog.netlify.app/questions/1895357/index.html | CC-MAIN-2021-43 | refinedweb | 272 | 69.68 |
Okay for the life of me I cannot get the right output for my program. I need to get the total number of pennies in dollar amount for the output of Total Pay. This is my program. The directions are:
Write a program that calculates the amount a person would
earn over a period of time if his or her salary is one penny the first day,
two pennies the second day, and continues to double each day. The program should then show the total pay at the
end of the period. The output should be displayed in a dollar amount, not the number
of pennies. Do not accept a number less than 1 for the number of days worked.
Code java:
import java.util.Scanner; import java.text.DecimalFormat; public class PenniesForPay { public static void main(String[] args) { int numDays; int pennies = 1; int day = 1; double totalSalary = 0.01; Scanner keyboard = new Scanner(System.in); DecimalFormat formatter = new DecimalFormat("#,###.##"); System.out.print("Please enter the number of days worked: "); numDays = keyboard.nextInt(); while (numDays < 1) { System.out.print("Enter the number of days worked: "); numDays = keyboard.nextInt(); } System.out.println(" "); System.out.println("Day " + " Pennies(earned)"); System.out.println("------------------------"); while (numDays > 0) { System.out.println(day + " = " + " " + formatter.format(pennies)); pennies *= 2; totalSalary += pennies / 100; day++; numDays--; } System.out.println(" "); System.out.println("Total pay: $" + formatter.format(totalSalary)); } }
--- Update ---
totalSalary should be total number of pennies / 100....However its not picking up only day 30 of pennies which is 536,870,912 pennies and then dividing it? | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/36693-cannot-get-right-output-missing-something-please-help-printingthethread.html | CC-MAIN-2016-30 | refinedweb | 258 | 52.46 |
I am trying to complete an assignment for my beginner's computer science. I have to write a program that proves that every even number greater than four is the sum of two prime numbers. I prompt the user for the top end of the range (the lowest being four) and then prove Goldbach's theory. However, I have to determine which numbers of the factoring are prime in another function. Here is what I have so far:
import sys def main(): print "This program will verify Goldbach's Conjecture from 4 to a number of your choosing." num = input("What should the top end of the range be? ") findPrimeAddends(num) def isPrime(n): for n in range(4,int(num**0.5)+1): if(num % n == 0): return False return True,0 def findPrimeAddends(num): for n in range(4, num + 1): x = n/2 (a,b) = isPrime(n) if(a > b): sys.stdout.write("%d = %d + %d\n" % (x,a,b)) main()
Right now, I keep getting an error because I haven't defined "num" in the isPrime function. Can anyone help me out? | https://www.daniweb.com/programming/software-development/threads/389479/prove-modification-of-goldbach-s-conjecture | CC-MAIN-2017-17 | refinedweb | 186 | 72.26 |
Hello, I am trying to help get the project Adva-CMS () up to date and ready for Rails 2.2. Adva relies heavily on Engines and uses a fork that Sven Fuchs made to get timestamped migrations to work. I see that the incompatibility with rails 2.2 has been addressed on the edge branch on github:... My question is, what state is edge Engines in? and is the timestamped migrations issue fixed in edge Engines? I would like to re-integrate the base Engines repo into Adva-CMS if I could, but if it is not feasible, I could patch Sven's fork and keep using that. Please advise
on 2008-11-05 23:25
on 2008-11-06 12:02
The simplest way to check the state of edge engines is to check out the edge branch and run the tests: $ git clone git://github.com/lazyatom/engines.git $ git checkout edge $ rake test RAILS=edge At the moment, there seems to be an issue with the templating - one of the ActionMailer unit tests is failing, and a bunch of functional tests are also failing. The functional failures are due to the removal of a method we were relying on to determine what was being rendered ('first_render'). I suspect that these errors might be hiding another minor template error. The branch which contains Samuel Williams' patch for timestamped migrations ('use_rails_own_migration_mechanism') isn't merged with edge or master yet, although I expect to merge it soon (potentially this afternoon). If you have any problems running the tests, let me know. Hope that helps! James
on 2008-11-06 23:01
well... I get one unit test error and 6 functional errors. The unit test error is is in ./test/unit/action_mailer_test.rb:34:in `test_should_be_able_to_create_mails_from_plugin' > > ActionView::MissingTemplate: Missing template > plugin_mail/mail_from_plugin.erb > The functional test errors are all the same: ActionView::TemplateError: undefined method `first_render' for > #<ActionView::Base:0x236b670> > Here is a pastie of the full test results If anyone has any idea about these that would be great... in the mean time, I have been trying to track them down. Thanks all, Ned
on 2008-11-07 22:18
I have a question... the first error, `test_should_be_able_to_create_mails_from_plugin', seems to be cause by there being a missing template in /test/plugins/test_plugin_mailing/app/views/plugin_mail/. the error is: ActionView::MissingTemplate: Missing template plugin_mail/mail_from_plugin.erb in view path What confuses me, is that there is a standard mail_from_plugin.text.plain.erb template. I copied this template into a file called mail_from_plugin.html.erb and the same test passed. Does anyone have any idea why this is? Also I am not sure if there is a reason that just the plain text template is in there and not the html.erb template. Is this test testing particularly for the plain text template? Ned
on 2008-11-07 22:40
The second series of errors were all related to #first_render being removed from ActionView::Base. i noticed a ticket was opened in lighthouse about this so I responded there:... however, I thought I would continue this conversation here as well. Hi, I tried globally changing the <%= self.first_render %> to <%= self.instance_variable_get(:@_first_render) %> this seems to work pushing the errors to failures: <"namespace/app_and_plugin/a_view (from app)"> expected but was <"namespace/app_and_plugin/a_view.html.erb (from app)"> etc. This seems acceptable if the test condition was just changed from assert_response_body 'alpha_plugin/a_view' to assert_response_body 'alpha_plugin/a_view.html.erb' would that still satisfy the test? are the file extensions a problem?
on 2008-11-09 16:54
I just resolved this bug, by implementing a specific mechanism for determining the path of the template that was rendered. This is cleaner than relying on ActionView's internal state, and should hopefully be more future-proof! Now to track down that action mailer issue... Thanks for your help with this! James
on 2008-11-09 17:06
The test was testing the 'default' behaviour, but the template overspecified that the email should be text/plain. The simplest way to 'fix' this test was to just rename the default template to '.erb', rather than '.text.plain.erb', as this seems to be how Rails 2.2 expects ActionMailer to be used. So... we're now passing all tests against Edge Rails! In other words, the engines plugin should now be compatible with Rails 2.2 (seems like it always was, but the tests were a bit 2.1-specific). Thanks again for your investigations here, James
on 2008-11-10 00:06
Awesome! Thanks a lot.
on 2008-11-10 10:38
Great news - thanks for this James, Ned, et al :) 2008/11/9 James Adam <james@lazyatom.com>:
on 2008-11-10 15:47
Wow! This is really great news to wake up to first thing on monday! Great news and well done! N. | http://www.ruby-forum.com/topic/170133 | CC-MAIN-2013-20 | refinedweb | 809 | 67.25 |
Lifecycle of a Request-Response Process for a Spring REST API
Lifecycle of a Request-Response Process for a Spring REST API
The steps involved in the lifecycle of a request process and how the request is mapped to the appropriate controller method and then returned to the client..
Developing a REST API or microservice using the Spring Boot framework accelerates the development process, and allows API developers to only focus on writing the core business logic and not worry about all the underlying configurations and setup. This article describes the steps involved in the lifecycle of a request process and how the request is mapped to the appropriate controller method and how a response is returned to the client.
In order to create a REST API to serve a client with a list of users, the tasks involved are
- Create a class with the
@RestControllerannotation. Due to the annotation, this class will be auto-detected through classpath scanning and the methods annotated with
@RequestMappingannotation will be exposed as HTTP endpoints. When an incoming request matches the requirements specified by the
@RequestMappingannotation, the method will execute to serve the request.
For our example of a users API, the controller class will look like this:
@RestController @RequestMapping("/users") public class UserController { @Autowired UserService userService @RequestMapping(method = RequestMethod.GET) public List<UserDTO> findAllUsers() { return userService.findAllUsers(); } }
From a developer’s perspective, the flow to fetch the list of users from the database can be viewed as below:
However, with Spring doing a lot of work for us behind the scenes, the lifecycle of the entire process for making an HTTP request to a resource to serving the response back to the client in either XML/JSON format involves many more steps.
This article describes the entire request to response lifecycle with steps which are managed by Spring.
When a user makes a request for a resource, for example:
Request:
Accept:
application/json
This incoming request is handled by the DispatcherServlet, which is auto-configured by Spring Boot. While creating a project through the Spring Boot framework, and when we mention the Spring Boot Starter Web as a dependency in pom.xml, Spring Boot’s auto-configuration feature configures
dispatcherServlet, a default error page, and other dependent jar files.
When a Spring boot application is run, the log will have a message like this:
[ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
DispatcherServlet is the front controller and all incoming request goes through this single servlet.
The process from a request to response is shown in the below flow chart:
The blocks in the green are the ones which are implemented by developers.
In our request for
/users resources, the activities below are performed in each step:
In Step 1, the dispatcher servlet will intercept the request for the resource
/users.
In Step 2, the servlet determines the handler for the request (a good link on this topic).
In Step 3, Spring checks which controller method matches the incoming lookup path of the “/users” request. Spring maintains a list of all mapping registries fetched from the
@RequestMappingof the controller class and iterates the list to look for the matching method in the controller class implemented by the developer.
In Step 4, after determining the right method it executes the controller method.
Step 5 returns an ArrayList of users.
The response type accepted by the client can be either JSON or XML. Therefore, Step 6 does the job of marshaling the Java object to the response type requested by the client. Spring takes the ArrayList of users and uses the message converter method to marshal it to the type requested by the client. If the converted message is not available, then the client will get a 406 error. In the case of users, as the requested type is JSON, thus a JSON object for users is returned as a response.
Conclusion
Understanding the lifecycle of the request and response process and other classes involved helps one to understand the issues better and troubleshoot it more easily. To check the process lifecycle, open the Eclipse Open Type DispatcherServlet class and add a breakpoint at the
doDispatch method.
Thanks a lot for reading this article! If you have any questions or suggestions, then please drop a note and I'll try to answer your question. }} | https://dzone.com/articles/lifecycle-of-a-request-response-process-for-a-spri | CC-MAIN-2018-51 | refinedweb | 724 | 57 |
Hi,
The new CLion 2018.1 EAP, build 181.4096.19 is now available! Patch update from the previous EAP build will be available shortly.
This build brings:
- C++17: structured bindings
- Editor improvements: breadcrumbs for C/C++
- Code transformations: Unwrap/remove code blocks
- WSL: custom paths for CMake, compiler and debugger
- Debugger: hex values
Download CLion 2018.1 EAP
C++17: structured bindings
Structured binding introduced in C++17 is a convenient and compact way of binding a list of identifiers to a set of objects. The C++ language engine in CLion is now aware of this feature, thus handles it correctly and ensures more accurate code analysis with less false positives:
Editor improvements: breadcrumbs for C/C++
While reading the code or navigating through it, it’s good to easily keep track of your location within the code base. Breadcrumbs were created exactly for that type of navigation. These small markers at the bottom of the editor shows your current location inside namespaces, classes, structures, functions, and lambdas. Click on them to navigate to the corresponding code element:
Code transformations: Unwrap/remove code blocks
When editing complicated code with lots of nested statements, you sometimes need to accurately remove the enclosing parts. To avoid manual changes (that can break the code by accident), use Code -> Unwrap/Remove… action (Ctrl+Shift+Delete on Linux/Windows, ⌘⇧⌦ on macOS). It suggests the options depending on where your caret is:
In CLion for C and C++ you can now unwrap the following control statements:
if,
else,
while,
do...while,
for,
try...catch, or just remove the enclosing statement (for example, when you’d like to extract a part of a ternary operator expression).
WSL: custom paths for CMake, compiler and debugger
We continue our work on WSL support in CLion. In this EAP we’ve addressed an issue with the custom paths to CMake, compiler and debugger executables. You can now provide any custom path to these tools in Build, Execution, Deployment | Toolchains settings for WSL.
Debugger: hex values
We are glad to share that we’ve started working on a top-voted debugger feature: hexadecimal formatting for numerical variables! It’s not there yet, but CLion now shows hex for simple types (int, long, char, etc.). Hexadecimal format for float or double is not yet available. However, we’ll be glad to get your feedback at this stage of development. To enable the feature, turn on cidr.debugger.value.numberFormatting.hex setting in Registry.
- In Find Action dialog (Shift+Ctrl+A on Linux/Windows, ⇧⌘A on macOS) type Registry; open Registry, type cidr.debugger.value.numberFormatting.hex (or just hex) to search for the setting and turn it on.
- In Build, Execution, Deployment | Debugger | Data Views | C/C++ settings turn on showing hex values for numbers:
The setting is also available from the Debug tool window.
You can now see hexadecimal values in the debug tool window:
and in the editor (in Inline Variables View):
Hexadecimal formatting for simple types is available for both debuggers (GDB, LLDB) on all platforms, including remote debug case and WSL.
That’s it! Full release notes are available by the link.
Download CLion 2018.1 EAP
Your CLion Team
JetBrains
The Drive to Develop
How can I add custom breadcrumbs? I have a C project and only the last one is shown.
Thanks.
There is no such API / option available.
Now even more amazing tools that I won’t be able to live without.
Great job!
Thank you for your support!
I don’t know where to ask…
I am writting CLion plugin, how i can dig into CLion toolchain settings from OpenApi? Thank!
Piotr, what does your plugin need specifically?
When support remote develop toolchain from SSH ?
WSL was the first step in this direction. So hopefully this year we will come up with some solution.
But where are binary values? When I want to check which flags are set in CSR, binary values with separators for bytes are more readable then hex.
.gdbinit/.lldbinit helps me just right as of now.
As long as the setting is not per-variable, there’s little difference.
Not yet there. This is just a first step to this task.
Do you plan to work on latency in this release cycle? In my opinion latency is killing auto-completion feature: you never know what will be faster : type method name manually or wait 2-3 seconds until Clion will offer an auto-completion list.
Second that! It’s been a disaster since 2017.
It’s in progress. Not sure the changes will be merged before the release. There is a huge work in progress started this cycle, but probably we’ll merge to 2018.2, not 2018.1
Anastasia,
This is a great start and looking forward to refinements. Couple of comments:
1) This is a style thing, but I would like an option for the letters in the hex values to be capitalized.
2) Having all the data view present makes reading a bit clumsy. It would be nice if I can specified how the view is display and only have that show. Being able to right-click on the variable and specify how the data should be view would be great, IMHO.
3) Is there a reason why “0x” is not same color as the rest of the number? I think it being a consistent color would look better, just seems odd to me.
Mark
Mark, thank you for the feedback, we really appreciate that!
Your suggestions on the value formatting (1 and 3) are in progress now, we’re are going to address that before the feature leaves it’s experimental state.
We’ll consider your request for selective per-variable formatting when coming up with the roadmap for the next release.
+1
Mark, regarding items 1 & 3, what you describe would comply with convention, but each would slow down the reading of values. Its easier for the mind to distinguish from among lower case letters than capital letters, and its easier for the mind to overlook the 0x prefix when it is a different shade. I believe the UI experts made the right call here.
Thanks for the hex view.
Now it works much better with my ARM embedded development plugin.
A new version is released, under review in the plugin repository
Cool! Thanks
I love the breadcrumb idea. However, I find it incredibly frustrating that I keep trying to navigate to alternative places from the breadcrumb bar with the mouse, but it’ll only let me navigate within the current tree. I want to be able to right-click a part of a breadcrumb and choose a parallel navigation point from there. For example, if I’m in a class method, then right-clicking on the method should give me the option to go to another method on the class. If I click the class, then right-clicking should have an option to go to another item in the class’ namespace, etc.
If somehow the functionality is already there and I’m missing it, please let me know
I guess Structure View (Alt+7 on Linux/Windows, Cmd+7 on macOS) fits your case better. You can turn on autoscroll from/to source there.
We’ve already met the DataGridComboBoxColumn, which allows you to bind a list of items to a ComboBox that is displayed whenever the user wishes to edit the cell. When the cell is not being edited, the selected item from the ComboBox is displayed as an ordinary TextBlock.
This feature would be nice in a more general way. For example, a common field in a database is a date, so it would be nice if a DataGrid cell could display a date as text, but switch to a DatePicker control when the user wishes to edit the date. The DataGridTemplateColumn provides just this sort of feature.
Let’s look at how we could implement the date display/edit column. Going back to our comic book database, suppose we wished to store the date that we last read a particular issue. We insert a DateRead column in the MySQL database. This column has the MySQL data type of ‘date’. As before, we load the data from the ‘issues’ table into a DataTable and use data binding to select which column from the DataTable is displayed in each column of the DataGrid. In the case of a template column, however, we need to specify the two cell states (displaying and editing) separately. The XAML looks like this:
<DataGridTemplateColumn Header="Date read">
  <DataGridTemplateColumn.CellTemplate>
    <DataTemplate>
      <TextBlock Text="{Binding DateRead, Converter={StaticResource DateConverter}}"
                 HorizontalAlignment="Center"/>
    </DataTemplate>
  </DataGridTemplateColumn.CellTemplate>
  <DataGridTemplateColumn.CellEditingTemplate>
    <DataTemplate>
      <DatePicker SelectedDate="{Binding DateRead}" />
    </DataTemplate>
  </DataGridTemplateColumn.CellEditingTemplate>
</DataGridTemplateColumn>
The CellTemplate part specifies what each cell displays when it’s not being edited. In this case we insert a TextBlock which is bound to the DateRead field. (We’ll look at the converter in a minute.) This means that the value of DateRead is displayed and since it’s a TextBlock (as opposed to a TextBox), there’s no way to edit it, which is fine.
The CellEditingTemplate shows what is displayed when the user double-clicks on the cell in order to edit its contents. In this case we display a DatePicker, which is a control with a small text area and a calendar icon which, when clicked, displays a calendar month. Clicking on the header of a DatePicker allows the user to choose another month or year, and a particular date is selected by clicking on the date’s day number. The selected date is then displayed in the text area. When the cell loses focus (when the user clicks somewhere else), the cell goes back to its TextBlock incarnation. The binding on the DatePicker ensures that the selected date is transmitted back to the DataTable.
That’s really all there is to using a template column. Remember that you can put any layout you like inside a DataTemplate, so if you wanted to get fancy you could even put in a Grid and then enclose multiple controls inside the Grid. Although the power is there, of course, putting in too much clutter makes for a bad user interface, so usually template columns use only one or two controls as we’ve done here with the DatePicker.
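A hypothetical sketch of such a multi-control editing template — the two-column Grid layout and the Clear button are purely illustrative, not part of the comic-book example:

```xml
<DataGridTemplateColumn.CellEditingTemplate>
  <DataTemplate>
    <Grid>
      <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="Auto" />
      </Grid.ColumnDefinitions>
      <!-- The DatePicker does the actual editing; the Button is an extra control
           sharing the cell, just to show that a template can hold several. -->
      <DatePicker Grid.Column="0" SelectedDate="{Binding DateRead}" />
      <Button Grid.Column="1" Content="Clear" />
    </Grid>
  </DataTemplate>
</DataGridTemplateColumn.CellEditingTemplate>
```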
One final note concerning dates. Although we’re using a ‘date’ data type and not a ‘datetime’ type, for some reason, dates always seem to come attached to a time as well (usually 00:00:00), and unless we do something about it, the date and time get displayed in the TextBlock. This sort of thing is easy to fix using a converter for a data binding. We’ve seen data converters before, so this one doesn’t really introduce anything new. The idea is that we strip off the time portion of the date and return just the date portion. The C# code is:
class DateConverter : IValueConverter
{
    // Strip off the time part of the dateTime
    public object Convert(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        if (value != null && value.ToString().Length > 0)
        {
            string dateTime = value.ToString();
            int space = dateTime.IndexOf(" ");
            string date = dateTime.Substring(0, space);
            return date;
        }
        string empty = "";
        return empty;
    }

    public object ConvertBack(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
We search for the blank that separates the date from the time and then use Substring() to get the date part of the string. The ConvertBack() bit should never be used.
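As an aside, string surgery isn’t the only option. If the bound value actually arrives as a DateTime rather than a string, the body of the Convert() method above could — as a sketch — format the date portion directly:

```csharp
public object Convert(object value, Type targetType, object parameter,
    System.Globalization.CultureInfo culture)
{
    // When the bound value is a DateTime, return only its date part,
    // formatted for the current culture.
    if (value is DateTime)
    {
        return ((DateTime)value).ToShortDateString();
    }
    return "";
}
```

And on .NET 3.5 SP1 or later you can often skip the converter entirely by putting a format on the binding itself, e.g. Text="{Binding DateRead, StringFormat=d}".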
Very informative. I am trying to accomplish a ComboBox in a DataGridTemplateColumn. I can display the data on the GUI, but after selection, on click of Save, I am not able to read the data via C# code. Please note that my code generates the GUI at runtime and reads data from it at runtime. Here is the method:
private DataGridTemplateColumn AddDataGridComboBoxColumn(XmlNodeList _childnodeList, string header, DataTable table, List<string> data = null)
{
var comboTemplate = new FrameworkElementFactory(typeof(ComboBox));
List<string> items = new List<string>();
for (int count = 0; count < _childnodeList.Count; count++)
{
items.Add(_childnodeList.Item(count).InnerText.ToString());
}
comboTemplate.SetValue(ComboBox.ItemsSourceProperty, items);
comboTemplate.SetValue(ComboBox.SelectedItemProperty, items);
DataGridTemplateColumn column = new DataGridTemplateColumn()
{
Header = header,
CellTemplate = new DataTemplate() { VisualTree = comboTemplate },
CellEditingTemplate = new DataTemplate() { VisualTree = comboTemplate }
};
DataColumn dtcolumn = new DataColumn();
dtcolumn.Caption = header;
dtcolumn.ColumnName = header;
dtcolumn.DataType = typeof(ComboBox);
table.Columns.Add(dtcolumn);
//row[header] = items;
return column;
}
Please let me know what i am missing here. My data row always comes with count 0 without any data.
for (int j1 = 0; j1 < VisualTreeHelper.GetChildrenCount(item); j1++)
{
DependencyObject gridchild_child = VisualTreeHelper.GetChild(item, j1);
DataGrid data = (DataGrid) ((TabItem)gridchild_child).Content;
DataView table = (DataView)data.ItemsSource;
Any help will be appreciated…
Hi ,
Instead of a TextBlock in the DataTemplate, I am using a custom TextBlock control, but I am unable to assign text or do any work with this custom TextBlock control.
My control is “DataTextBlock”:
[TemplatePart(Name = DataTextBlockName, Type = typeof(TextBlock))]
public class DataTextBlock : Control
{
}
Please Help in doing so. | https://programming-pages.com/2012/04/23/datagridtemplatecolumn-for-the-wpf-datagrid/ | CC-MAIN-2018-26 | refinedweb | 951 | 56.55 |
any possibility to get a list of the pairs of atoms that are matched
using the align command?
cheers,
marc
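One possible route — a sketch, assuming a PyMOL build where align can store its sequence alignment in a named object and cmd.get_raw_alignment() is available:

```python
# Inside PyMOL this would look roughly like:
#
#   cmd.align("mobile", "target", object="aln")
#   raw = cmd.get_raw_alignment("aln")
#
# get_raw_alignment is assumed to return one list per alignment column,
# each holding (object-name, atom-index) tuples. The pairing logic below
# is demonstrated on mock data of that shape.

def atom_pairs(raw_alignment):
    """Flatten a raw alignment into (atom_a, atom_b) pairs, skipping
    columns that do not match exactly two atoms."""
    pairs = []
    for column in raw_alignment:
        if len(column) == 2:
            pairs.append((column[0], column[1]))
    return pairs

raw = [  # mock stand-in for cmd.get_raw_alignment("aln")
    [("mobile", 12), ("target", 15)],
    [("mobile", 13), ("target", 16)],
    [("mobile", 14)],  # unmatched column
]

print(atom_pairs(raw))
```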
Dear Simon and other pymolers,
I have realised that the solution to getting cctbx working with pymol on
windows I posted no longer works with recent builds of cctbx. I have
found that the following solution works:
-->
3. Download pymol built against python 2.4 (but not including its own
python) and install in the default location
-->
(You cannot use the latest beta versions which include their own version
of python to the best of my knowledge)
4. Create 2 files (use notepad or wordpad or any other text editor) and
save in the C:\Program Files\Delano Scientific\PyMOL directory:
a) pymol.cmd
@python -x "%~f0" %* & exit /b
import cctbx
import pymol
b) run.cmd
CALL C:\cctbx_build\setpaths_all.bat
CALL "C:\Program Files\Delano Scientific\PyMOL\pymol.cmd"
5. One other thing: it's important to have Python in your PATH variable
(which you can access by going to Control Panel | System | Advanced |
Environment Variables). Just add C:\python24 to the end of the PATH
variable, separated by a semicolon.
Hopefully this should work OK... I know it is working on at least one
other system than my own. Let me know if it works for you. I'll post
this up on the wiki ASAP.
Cheers
Roger
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Hello,

I had forgotten to mail the patch to the gcc-patches mailing list, so I'm doing it now. This is in reference to bug no 8081.

Analysis of the fix:

When a nested function returning a structure of variable size is called, the compiler is unable to create temporary space on the stack frame for storing the return value. This was observed when a slightly modified version of the program compiled successfully.

Modifications: declaration of a block b, and assignment of the function's return value to b.

int main (int argc, char **argv)
{
  int size = 10;
  int i;
  typedef struct
  {
    char val[size];
  } block;
  block b;

  block retframe_block ()
  {
    return *(block *) 0;
  }

  b = retframe_block ();
  return 0;
}

The obvious reason for its successful compilation is that there was no need to assign temporary space on the stack frame, since the returned block was written to the address of block b. This suggested dynamically allocating the temporary space on the frame when a nested function is called without assignment.

2003-09-02  Sitikant Sahu  <sitikants@noida.hcltech.com>

	* calls.c (expand_call): Allocate dynamically on stack for
	variable size structure return (PR 8081).

<<PR8081FIX.txt>>

Thanks and Regards,
Sitikant Sahu,
MTS, System Software Group,
HCL Technologies Ltd.,
A-11, Sector-16, Noida 201301, UP, India,
Tel : +91 120 2510701/702/813 (Extn: 3165)
Attachment:
PR8081FIX.txt
Description: Text document | http://gcc.gnu.org/ml/gcc-patches/2003-09/msg00464.html | CC-MAIN-2015-11 | refinedweb | 258 | 54.73 |