text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
This is your resource to discuss support topics with your peers, and learn from each other.
03-04-2009 09:58 AM
I have an application that creates a PhoneCallLog instance which I then add to the PhoneLogs container.);
After invoking addCall() I receive the callback callLogAdded() in my PhoneLogListener implementer. The problem is when I access my phone's call log list manually and delete the call log (added using the above code) I don't receive the callback callLogRemoved() in my Listener class though the listener is registered. On restarting the device, the call log reappears in the list which implies it never got deleted in the first place (though the UI gave a different picture) and hence the call back was also not received by my listener perhaps. Also instead of adding a phone call log through my code I got it added via the phone itself by making an outgoing call. When I delete the call log in this case I am getting a call back.
I am using ver 4.2.1 and also tries using ver 4.6 but the problem persists. Is this a known issue in blackberry or I have missed something. Plz help...
03-04-2009 10:50 AM
You can take help from this link:
Press the kudos button to thank the user who helped you.
If your problem was get solved then please mark the thread as "Accepted solution".
03-05-2009 01:25 AM
03-11-2009 11:30 AM
I haven't been able to reproduce this behaviour. Please verify that you are viewing the call logs within the Phone application. To do so start the Phone app, select Options from the menu, click on General Options and check the setting for Phone List View. This should be set to Call Log.
If this is correct, what is the full 4 digit BlackBerry handheld software version are you testing on? You can find this under Options, About on the BlackBerry Smartphone.
03-12-2009 02:06 AM - edited 03-12-2009 02:18 AM
The phone list view was already set as Call Log in my case. So the call logs were being viewd within the Phone application. The 4 digit software version of my Blackberry 8800 is 4.2.1.68.
Unfortunately the bug is replicable whenever I run this code:
public class CallLogTester extends Application implements PhoneLogListener {
public static final long app_guid = 45614127246L;
private static CallLogTester tester;
public static void main(String[] args) {
try {
RuntimeStore appReg = RuntimeStore.getRuntimeStore();
synchronized (appReg) {
if (appReg.get(app_guid) == null) {
CallLogTester tester = new CallLogTester();
appReg.put(app_guid, tester);
PhoneLogs.addListener(tester);
}
}
testCallLog();
tester.enterEventDispatcher();
}
catch(Exception ex) {
System.exit(1);
}
}
private static void testCallLog() {);
int num_of_normal_calls =
phoneLogs.numberOfCalls(PhoneLogs.FOLDER_NORMAL_CA
LLS);LLS);
for (int i = 0; i < num_of_normal_calls; i++) {
phoneLogs.deleteCall(i--, PhoneLogs.FOLDER_NORMAL_CALLS);
num_of_normal_calls =
phoneLogs.numberOfCalls(PhoneLogs.FOLDER_NORMAL_CA
LLS);LLS);
}
System.exit(0);
}
public void callLogAdded(CallLog arg0) {
System.out.println("Callback is received here when addCall()" +
" is invoked");
}
public void callLogUpdated(CallLog arg0, CallLog arg1) {
}
public void callLogRemoved(CallLog arg0) {
System.out.println("Callback is not received here when" +
" deleteCall() is invoked");
}
}
In testCallLog() I simply add a call log entry using phoneLogs.addCall(phoneCallLog) api for which I receive a callback in callLogAdded(). Next I delete the same call by invoking the deleteCall() api. However in this case I am not receiving a callback in callLogRemoved() which I should. When I check the call log entry from the phone application UI, I can't see the entry but on restarting the phone, the entry reappears implying that it never got deleted in the first place.
This bug is always replicable unless a new call log gets added from the phone application directly instead of my code as shown.
This looks like an issue with the Blackberry platform. Can you plz verify and confirm?
Thanks.
03-17-2009 01:12 PM
03-18-2009 03:24 AM
I tried removing the System.exit(0) call. I even tried performing the operations in testCallLog() in a different thread but the problem persists. I have no doubt about this. Can you please try to replicate using the code I posted earlier.
If you note the testCallLog() method in my code I am simply adding a new call log and then deleting it. If I check the phone's call log list it won't show me the deleted call log, however it does when I restart my device which means the call log never got deleted actually, perhaps the reason I didn't received the call back. This happens assuming you've added the call log through the code and that the call log was not added automatically via the phone app.
03-24-2009 04:13 PM
03-25-2009 09:50 AM
04-24-2009 03:05 AM | https://supportforums.blackberry.com/t5/Java-Development/Issue-with-PhoneLogListener-not-receiving-callbacks/td-p/178109 | CC-MAIN-2017-09 | refinedweb | 810 | 65.22 |
:
:
>>IMAGE:
:
PingBack from
One general issue I have with Windows is that there is far too much text on the screen.
Windows Vista has only made this worse. For example, when I first logged in and got the welcome center, my first reaction was to minimize it. The sheer amount of text and number of links totally turned me away from it. To this day nothing has convinced me to read it.
The Control Panel has gotten wordier, and so have many of the applets. The more text you put up, the less likely it is that I’m going to read it, and I doubt I’m alone. Too much text is scary and distracting. And when it just repeats what the heading says, I find it annoying that the design of the program caused me to waste time reading something I didn’t even need to read.
Now, I know your team was not involved with the above, but I’m bringing this up because that’s what this new tab page reminded me of. You can put as much text as you want on that new tab page, but I really think that the fewer words you use, the better the experience will be.
When I open a new tab, I want to do something. I’m with you so far. So expect that I already know what I want to be doing next. I don’t want to see a screen with a million options.
You said you’ve toned it down a bit, but I still think it’s too many words.
Put my focus in the address bar so I can start typing. Put my most recently closed tabs and my most often used bookmarks there, too, fine, but I don’t need all these words.
Personally, I think the words in green are enough. the black text underneath is not necessary. Or, display that when I hover over the green text.
I know you guys are working hard out there and I know it’s impossible to please everyone. Thanks for listening.
That’s an exciting way on new Tab IE 8 doing!
I think this can be more customizedable, such as set an preview pictures when "reopen closed tab".
all this new tab page could include more information! not just TAB introduction!
I like the new "New Tab" page, but if I can make a small suggestion to look into having the tab that being reopened to load in the newly created new tab, that way there won’t be a need to close the "New Tab" page after the closed tab was reopened.
what i don’t get is when clicking a website under the reopen closed tab the website open in a new tabs. How does a user set it up to reopen website directly on the same tabs. a option should be available?
Top 10 favorite website should be available in New tab page.
I like to make the new tab page my homepage.
(I know this comment does not fit here. But not sure where else to convey this)
I really love the new features in IE. IE seems to finally cope up with other browsers in the market. There are a couple of things that I believe could improve user experience a lot.
If you ask anyone using firefox for why he uses it, the non-intrusive download manager would definitely be there. IE needs to improve the old fashioned pop-up download manager.
The new InPrivate browsing is really helpful. But a missing feature bothers me and most people in using it. Though new cookies are blocked, old cookies are still allowed to be read. Effectively even though IE does collect history information, other sites do. e.g google web history. This is very well taken care of in chrome. The InCognito mode in chrome really hides "everything".
I would love to see subscribed RSS feeds on that page. Or maybe show the last 5 unread items. Feeds are my starting point with every browser session.
Good job on the new tab page except for the large sized "What do you want to do next?". Remember this is a tab in IE, not Media Center. Reminds me of "Where do you want to go today?" LOL. Nearly every user I know would like to have that title proportionately reduced in size. Also instead of writing help on such topics as this one which users can figure out by using, it would be better if the team prioritized writing context sensitive help (using HTML help) for IE’s advanced options and all the security settings in a zone.
The new tab page still looks visually distracting, especially the heading section – reminds the old slogan of Microsoft "Where do you want to go today?" 🙂
The key problem – the new tab page looks like a webpage by itself even though, by definition, I intend to go somewhere else. Too much of text and therefore distracting. Do we really need all those explanations what each function does on the front page? Especially when the browser is going to be used daily and the users WILL know what each function does?
That’s a classic design trap – to design for the newcomers while the golden medium is to design for the intermediates. MS Office is a good example. Some learning curve is involved but once you pass it, the interface feels very efficient for daily tasks.
Choosing a truly non-obtrusive minimalistic design for the new tab page in IE8 could remedy the problem. My suggestion: list all three choices as a single (nicely, visually) bulleted list. Then add an "explain" link somewhere near, which, when clicked, would expand the section with definitions.
Are you going to do something about your Tools->Configuration screen. it’s about time you upgraded it
I think that ‘new tab’ creates a very good additional functionality. I personally do not use the ‘new tab’, but I think it is very convenient for users, especially if they work with many applications and Web pages simultaneously.
Why is it no longer possible to open a new browser window (new browsing session) with the tabs from the previous browsing session automatically opened?
e.g: say i have a number of tabs open on a browsing session then close the ie window (i.e. end session by clicking "X" on the upper far right of the window), it would be fantastic for ie to give me the option to remember these tabs, such that on my next session they automatically open.
Even better was if ie "remembered" this preference, and did not bother asking me whether i wanted to this each time.
You can easily implement this on the exit dialogue box that appears if the user exits when there are multiple tabs open OR on tools > internet options > [should be on general tab] > [should see tabs section] > settings.
Interesting you talk about re-discoverability, which I’ve been finding a lot better in IE8b2.
It could be better still. Instead of a tiny "Restore Last Session (or lose those links forever?)" dialog, you could bring up those session’s links as a page, headed by each separate IE window with tabs beneath grouped by tab colors, and each heading (including the "Your Last Links" top header) can be clickable to release the lot.
Keep it compact! Resist the urge to extract summary text, etc. Don’t make the usual MS mistake of creating something that looks great in a 3-item demo, but becomes an unscalable mess with 50+ items in the real world (rarely do I have anything less than 20-30 tabs open at once).
On "scalability", we need a UI to see how many pages we have open in a given IE window, and across all windows (for those who for some reason prefer multiple windows to tabs)
As it is, most IE8 sessions end in an unclosable mess, because IE seems to take forever to close large numbers of tabs. IE8b2 in particular is prone to tar-pit slowdowns (not actually complete lockups) of this kind – either with System and IExplore each at 50% CPU, or more commonly with Idle at 90%, but still feeling as if "something is hogging IE" (XP SP3, 3rd-party enhancements disabled, set to Prompt on active content, often many such prompts left pending).
When resuming such sessions, the same thing usually happens. What URLs? I can’t tell you, because IE goes ga-ga before they are shown, and that "Resume?" dialog gives no info on what I am about to "resume". Report problem web page? I can’t, because IE’s Tools menu is as ga-ga as everything else. See the problem?
The problem’s exploitable, too, e.g. create a page that (via whatever by-design or exploit of your choice) first squirts new code into the system, then clogs up IE. Every time user starts IE and chooses "Resume", you can update your code again, to stay ahead of av detections.
I really like the new Reopen closed tabs feature but I dislike having to open the New Tab page to get to it.
Is there a shortcut for reopening the last closed tab?
Reopening tabs from the New Tab page opens yet another tab so if I’ve opened a new tab just to get to the list of closed tabs the sequence of actions becomes Ctrl+T, Reopen tab using mouse, click on the new tab so that the close button appears, then close it and finally switch back to the reopened tab. Surely there must be a more efficient way?
@Johan Appelgren
Ctrl+Shift+T will reopen the most recently closed tab (excluding tabs you already reopened).
You can also right-click on any tab to access a similar list of recently closed tabs in a context menu. It’s under the "Recently Closed Tabs" entry at the bottom.
The address bar dropdown lists the most recently visited sites, that too without the titles. And to take advantage of the search, one has to type something in, which means you have to move your hand from the mouse to the keyboard.
So yeah, getting a list of most visited sites isn’t as easy as should be. Please consider putting the list of most frequently visited sites in the new tab page.
Or at least make the address bar dropdown entries pinnable (and show page titles when the dropdown is manually opened without typing/pasting anything into the address bar.
And like someone said above, a list of unread feeds with maybe a snippet (like what Visual Studio does with the "News").
How embarassing. 🙂 Hadn’t noticed the menu items in the tab context menu. Thanks!
This was something that I’d turned off right after installing beta 2, but upon reading this, I’ve gone and re-enabled it.
I especially appreciate the things you’ve done to protect privacy – I might often have something I wouldn’t want seen over the shoulder.
Having said that, give us a way to delete just a single item from that list of recently closed tabs (similar to what is offered in the address bar)
This "new tab" page is plain ugly. The early version looks much better.
Hello everyone! One of the features we improved in IE8 is the ‘new tab’ page, which is the page you see
The new tab page is good. Specially I like the accelerators tied up with clipboard.
It would be nice if you present a list of most visited sites, like Chrome does. I also like the idea of page thumbnails. In Chrome’s new tab page, I don’t read the page titles. I just see the image and click it.
I like the "delete entry" feature in address bar and would like to be expanded on new tabs page as well.
For example, if a suggested site or closed tab has stuck to my new tab page, I would like if there is an option to delete that suggestion if I don’t like that site.
I also support harmony7, for the idea of hiding function description and showing them only on mouse hover on green task headers.
It would be nice if the "Recently Closed Tabs" sub menu is appended at the bottom of the "Tab List" (Ctrl+Shift+Q).
"In IE7, there was a checkbox on the new tab page that said “don’t show me this again,” … We removed this in IE8 …
Finally, because we believe that this new tab page has a lot more value than its IE7 counterpart, we’ve made this new page the default new tab page for all IE8 users, even if you had changed this option in IE7 …"
I hate those checkboxs (in all apps) that give you no clue how to change them back. But even better IMO would be a link that opens the Tabbed Browsing settings dialog, especially for those IE7 users who changed the option in IE7 and want to stay with their change. Otherwise users may think IE8 removed the ability to choose the option for their previously selected preference.
I think that these options should be added for…
IE Tabs:
i. Open New Tab Before The Current Tab
ii. Open New Tab After Last Tab
Hey IE team, I would like to know why there’s nothing new wrt XML in the IE8 release. It’s almost as if the XML team is completely aloof from the browser team and focussing only on developer products (VS and DOTNET). Where is support for XML namespaces, XML Events and so on? According to Wikipedia, IE seems to be pretty deficient in terms of XML support too? See
I meant
You must also be aware of
I agree with the comments about removing the descriptive text under the section headings. Maybe hide them behind a small (?) button next to the green text.
I don’t have the beta installed on this computer, so forgive me if this is already an option, but I would like to have the ability to set the "new tab" page as my homepage (which I have done in Chrome and found very useful).
• The middle mouse button to open links on new tabs__
New tabs are openned in the end of the groups. Can you add an option to open them aside/next the current tab.
• Reopen last Browsing session__
Its too easy to loose the last session, Please can you add more visual feedback to notice there is a session saved, or add an option to reopen the last session "automatically"
save more than one session, for security.
ie. Sometimes a popup/ opens when you are closing IE8, etc and the session is lost because of that
show me whats inside the session I’m about to recover
I bet you can improve this all, much better
• closing The last Tab(webpage), impossible__
allow me to close all the tabs, ie. when you have only one tab(webpage) open, I’d like to close it too with the middle mouse button
• shortcut to open "ungrouped tabs"__
the title says it all. when opening a link or typing something in the adrees bar
• wipeout everything chromatic__
gradients and bright UI colors are bad. please care my eyes
and thanks for reading,
[pls. post something about IE8 Feeds and IE8 Cache]
Why is it such a pain to provide feedback? Is there some place other than actually having to register an account?
Anyways, can you guys do something about IE losing the ClearType effect after text is affected by opacity changes?
IE8 is looking pretty good.
Unfortunately, the links (especially "Start InPage Browsing") have no visual affordance. You have to hover over them to get a hand cursor to know that it’s a link you can click. Many users will think that the links are just headings.
Helpful read:
my suggestion regarding the IE new tab page:
learn again from your masters: Firefox and Chrome
don’t bloat things
You could further increase my privacy by adding a link that lets me clear the list of closed tabs.
That way I don’t have to restart the browser if I want to clear it.
@Brad -> You don’t consider Chrome’s tab bloated? I wonder how much extra code in involved in creating thumbnails of the webpage and storing them just to show them on a new-tab page + all the code and data tracking to give you your top 9 with no additional access to your home pages…
@defan,
IE8 now has the concept of about pages. New tab page is called about:Tabs. You can just set it as your homepage. (Enter about:Tabs in the list of homepages available in Tools -> Options).
Thanks for the tip on dissabling the text, it seemed to take a while to load on my pc and could keep up with the speed of my clicking, I perfer just a blank page 🙂
The vista presentation mode looks better than B2, however to mee they just seem like advertising your new features.
ii. Open New Tab After Last Tab++
ii. Open New Tab After Last Tab++
ii. Open New Tab After Last Tab++
ii. Open New Tab After Last Tab++
ii. Open New Tab After Last Tab++
The lack of this option is why I hate IE’s tabs.
@Brad -> You don’t consider Chrome’s tab bloated? I
i’m refering to UI bloating, not coding bloating
My Tab feature Wishlist (maybe OT):
Detach Tab Group (to New Window)
WYSIWYG Print (background colors too)
Export Tab (Group) History to Excel
Export URL History by Date/Time
Export Download History by Date/Time
Deck Tabs (stack tab groups into coloured "decks" like playing cards)
Cheers,
A.
Overall the new tabs are looking really good.
I’d like an "open home page" option on the ‘new tab page’ (maybe under ‘reopen last browser session). Yeah, I know I can set this to happen -ALL- the time for new tabs, but that denies me access to the new features.
Interesting stuff, thanks!
I would like to see, when using the Search bart (ctrl-e, or at the top corner of the browser) put search results in a new tab. Can this be done?
(And off-topic, can the favorites be navigated via scroll wheel instead of the little up/down buttons?)
Request: Everything that can be configured in the options area, should also be configurable in Group Policy. IE7 made enormous strides in this area, but some tab behavior controls were lacking in GP. Please make certain that this makes it into the IE8 dev cycle.
@D. P. Cole: You can hit ALT+ENTER in the search box to open the results in a new tab.
@Scott Klassen
Which tab behaviour settings are you referring to exactly? We have added over 100 new GPs in IE8 and may consider further enhancements.
When I’m surfing, I used to use the CTRL-N to open the same page in a new window. This allowed me to continue surfing to a new page, while having the old page up for comparison. This allows multiple branches in browsing history, and the best ose of the back and forward buttons.
I’d like to be able to do this same thing with tabs, but I don’t have the option to have new tabs open to the same page I’m on, and no shortcut I’ve been able to find re-produces that behavior. I’m stuck either doing the CTRL-N thing again, or having to copy/paste the URL.
How about adding the option to "open new tab to current page" as an option in the Tabbed Browsing Settings?
Bozford, isn’t CTRL+K (the "Duplicate Tab" feature in IE8) what you’re looking for?
I’d like a most visited pages list.
and maybe three placeholders for customizable addresses (like Speed Dial).
First off, I have not installed this beta so forgive me if my comments are implemented in some fashion.
This all looks fine and dandy but for me I almost always open a new tab because I right clicked on a link and selsected "open in new tab".
A couple of suggestions:
– Allow me to right click on an existing TAB and select "Open this page in new window" (you get the idea).
– Allow me to drag the tab down and start a new (or add to an existing) Horizontal or Vertical tab group. Of course, I will need to be able to resize the group if needed.
Thanks for listening!
I’m finding that new tabs sometimes open rather slowly. They features are all great and work great but sometimes they need to be just a little speedier.
I am going to have to agree with the previous response by Soum. Including the list of most visited pages is a very useful feature and helps accelerate common web browsing tasks, the facebooks a feed readers of the world. I have been experimenting with the new Google Chrome and found my self greatly enjoying this in their new tab page.
Another feature I like regarding the Google Chrome new tab page is the ability to see my recently bookmarked pages. This combined with my recently viewed pages covers that majority of what I want to do with my web browser.
I like the feature of using the middle click button to close a tab, but this occurs immediatly. This is annoying, because you would expect that this took action when the button is actually released. | https://blogs.msdn.microsoft.com/ie/2008/09/10/introducing-the-ie8-new-tab-page/ | CC-MAIN-2017-09 | refinedweb | 3,626 | 78.79 |
This chapter provides conceptual information about disk sets. For information about performing related tasks, see Chapter 21, Disk Sets (Tasks).
This chapter includes the following information:
Introduction to Disk Sets
Solaris Volume Manager Disk Set Administration
Guidelines for Working With Disk Sets
Asynchronous Shared Storage in Disk Sets use a physical slice. You can use the volume to create and to mount a file system and to store data.
Disk sets are supported on both SPARC and x86 based platforms.
This section discusses the different types of disk sets available in Solaris Volume Manager.
Each host has a local disk set. The local disk set consists of.
Shared disk sets are intended, in part, for use with Sun Cluster or another supported High Availability (HA) framework. Solaris Volume Manager by itself does not provide all the functionality necessary to implement a failover configuration. enabled, you can set a disk set to be automatically taken by a host at boot time. hosts. A disk set that is shared cannot be set to use the autotake feature. If the autotake feature is enabled on a shared disk set, the metaset -A command fails. However, after other hosts are removed from the disk set, the autotake feature can be enabled on the single-host disk set. Similarly, an autotake disk set cannot have other hosts added to it. However, description in metaset(1M).
Starting with the Solaris 9 9/04 release, Solaris Volume Manager can manage storage in a Sun Cluster environment using multi-owner disk sets. Multi-owner disk sets allow multiple nodes in a cluster to share the ownership of disk sets and to simultaneously access the shared disks. All disks and volumes in a multi-owner disk set can be directly accessed by all the nodes in a cluster. Each multi-owner disk set contains a list of the nodes that have been added to the disk set. Consequently, each multi-owner disk set within a cluster configuration can have a different (and sometimes overlapping) set of nodes. sets work with Sun Cluster and with applications such as Oracle9i Real Application Clusters. For information about compatible releases of Sun Cluster, see. For more information on Solaris Volume Manager for Sun Cluster, see Chapter 4, Solaris Volume Manager for Sun Cluster (Overview).
Unlike local disk set administration, you do not need to manually create or delete state database replicas for named disk sets. Solaris Volume Manager places one state database replica (on slice 7) on each disk across all disks in the disk set, up to a maximum of 50 replicas in the disk set.
When you add disks to a disk set, Solaris Volume Manager automatically creates the state database replicas on the disk set. When a disk is accepted into a disk set, Solaris Volume Manager might repartition the disk so that the state database replica for the disk set can be placed on the disk (see Automatic Disk Partitioning).
A file system that resides on a volume in a named disk set is not mounted automatically at boot time with the /etc/vfstab file unless the disk set is an autotake enabled disk set. The necessary Solaris Volume Manager RPC daemons (rpc.metad rpc.metamedd, and rpc.metamhd) do not start early enough in the boot process.
Do not disable the Solaris Volume Manager RPC daemons in the /etc/inetd.conf file. They are configured to start by default. These daemons must remain enabled to allow Solaris Volume Manager to use its full functionality.
Additionally, when a system is rebooted, the ownership of a named disk set is lost unless the disk set is an autotake enabled disk set.For more information on the autotake feature, see Autotake Disk Sets.
Although disk sets are supported in single-host configurations, they are often not appropriate for “local” (not dual-connected) use. Two common exceptions are the use of disk sets to provide a more manageable namespace for logical volumes, and to more easily manage storage on a Storage Area Network (SAN) fabric (see Scenario—Disk Sets).
Disk sets can be created and configured by using the Solaris Volume Manager command-line interface (the metaset command) or the Enhanced Storage tool within the Solaris Management Console.
After disks are added to a shared disk set, the disk set can be reserved (or taken) and released by hosts in the disk set. When a disk set is reserved by a host, the other hosts in the disk set can read but cannot write data on the disks in the disk set. To perform maintenance on a disk set, a host must be the owner of the disk set or have reserved the disk set. A host takes implicit ownership of the disk set by putting the first disk into the set.
Disk sets, including disk sets created on a different system, can be imported into existing Solaris Volume Manager configurations using the metaimport command.
Before a host can use the disks in a disk set, the host must reserve the disk set. There are two methods of reserving a disk set:
Safely - Before another host can reserve a disk set safely, the host that currently has the disk set reserved must release the disk set. If a host attempts to take the disk set before the other host attempts to release the disk set, the release (and therefore the reservation) fails.
Forcibly - When you forcibly reserve a disk set, Solaris Volume Manager reserves the disk set whether or not another host currently has the set reserved. This method is generally used when a host in the disk set is down or not communicating. All disks within the disk set are taken over. The state database is read in on the host performing the reservation and the shared volumes configured in the disk set become accessible. If the other host had the disk set reserved at this point, it would panic due to reservation loss.
Normally, two hosts in a disk set cooperate with each other to ensure that the disks in a disk set are reserved by only one host at a time. A normal situation is defined as both hosts being up and communicating with each other.
If a disk has been determined unexpectedly not to be reserved (perhaps because another host using the disk set forcibly took the disk), the host will panic. This behavior helps to minimize data loss which would occur if two hosts were to simultaneously access the same disk.
For more information about taking or reserving a disk set, see How to Take a Disk Set.
Releasing a disk set can be useful when you perform maintenance on the physical disks in the disk set. When a disk set is released, it cannot be accessed by the host. If both hosts in a disk set release the set, neither host in the disk set can access the disks in the disk set.
For more information about releasing a disk set, see How to Release a Disk Set.
The metaimport command enables you to import disk sets into existing Solaris Volume Manager configurations that have device ID support in the disk set. You can also use the metaimport command to report on disk sets that are available for import.
The metaimport command does not import a disk in a disk set if the disk does not contain a volume or a state database replica. When you import a disk set to another system, you might find that a disk is missing from the disk set. This scenario occurs if a volume or a state database replica have not been added to the disk or have been deleted from the disk. For example, maximum of 50 state database replicas are allowed per Solaris Volume Manager disk set. If you have 60 disks in a disk set, the 10 disks that do not contain a state database replica must contain a volume in order to be imported with the disk set.
For tasks associated with importing a disk set, see Importing Disk Sets.
When you add a new disk to a disk set, Solaris Volume Manager checks the disk format and, if necessary, repartitions the disk to ensure that the disk has an appropriately configured slice 7 with adequate space for a state database replica. The precise size of slice 7 depends on the disk geometry, but it will be no less than 4 Mbytes, and probably closer to 6 Mbytes (depending on where the cylinder boundaries lie).
The minimal size for slice seven will likely change in the future, based on a variety of factors, including the size of the state database replica and information to be stored in the state database replica.
For use in disk sets, disks must have a slice seven that meets these criteria:
Starts at sector 0
Includes enough space for disk label and state database replicas
Cannot be mounted
Does not overlap with any other slices, including slice two
If the existing partition table does not meet these criteria, Solaris Volume Manager will repartition the disk. A small portion of each disk is reserved in slice 7 for use by Solaris Volume Manager. The remainder of the space on each disk is placed into slice 0. Any existing data on the disks is lost by repartitioning.
After you add a disk to a disk set, you may repartition it as necessary, with the exception that slice 7 is not altered in any way.
The minimum size for slice seven is variable, based on disk geometry, but is always equal to or greater than 4MB.
The following output from the prtvtoc command shows a disk before it is added to a disk set.
If you have disk sets that you upgraded from Solstice DiskSuite software, the default state database replica size on those sets will be 1034 blocks, not the 8192 block size from Solaris Volume Manager. Also, slice 7 on the disks that were added under Solstice DiskSuite will be correspondingly smaller than slice 7 on disks that were added under Solaris Volume Manager.
After you add the disk to a disk set, the output of prtvtoc looks like the following:
If disks you add to a disk set have acceptable slice 7s (that start at cylinder 0 and that have sufficient space for the state database replica), they will not be reformatted.
Disk set component names are similar to other Solaris Volume Manager component names, but the disk set name is part of the name.
Volume path names include the disk set name after /dev/md/ and before the actual volume name in the path.
The following table shows some example disk set volume names.
Similarly, hot spare pools have the disk set name as part of the hot spare name.
Figure 20–1 shows an example configuration that uses two disk sets.
In this configuration, Host A and Host B share disk sets red and blue. They each have their own local disk set, which is not shared. If Host A fails, Host B can take over control of Host A's shared disk set (Disk set red). Likewise, if Host B fails, Host A can take control of Host B's shared disk set (Disk set blue).
When working with disk sets, consider the following guidelines:
Solaris Volume Manager must be configured on each host that will be connected to the disk set.
Each host must have its local state database set up before you can create disk sets.
To create and work with a disk set in a clustering environment, root must be a member of Group 14 on all hosts, or the /.rhosts file on each host must contain an entry for the other host names associated with the disk set.
To perform maintenance on a disk set, a host must be the owner of the disk set or have reserved the disk set. A host takes implicit ownership of the disk set by putting the first disk into the disk set.
You cannot add a disk to a disk set that is in use for a file system, database or any other application. Before you add a disk, make sure that it is not currently being used.
Do not add to a disk set a disk containing existing data that you want to preserve. The process of adding the disk to the disk set repartitions the disk and destroys existing data.
The default total number of disk sets permitted on a system is 4. You can increase this value up to 32 by editing the /kernel/drv/md.conf file, as described in How to Increase the Number of Default Disk Sets. The number of shared disk sets is always one less than the md_nsets value, because the local disk set is included in md_nsets.
Unlike local volume administration, it is not necessary to manually create or delete state database replicas on the disk set. Solaris Volume Manager tries to balance a reasonable number of state database replicas across all disks in a disk set.
When disks are added to a disk set, Solaris Volume Manager rebalances the state database replicas across the remaining disks. Later, if necessary, you can change the replica layout with the metadb command..
The). | http://docs.oracle.com/cd/E19683-01/817-5776/6ml784a71/index.html | CC-MAIN-2013-48 | refinedweb | 2,214 | 68.2 |
This is, because the speech synthesis already known from Windows can also be used in the .Net framework, the needed component for that is the class System.Speech.Synthesis. To be able to use it, it first has to be included: Project - Add Reference - (Tab .Net) System.Speech. To simplify the code, we include it also via using:
using System.Speech.Synthesis;
For the speech output we need an instance of the class SpeechSynthesizer, which will output spoken text when the function Speak() is called.
But, of course there are also further properties, with which can be experimented.
A little overview:
- Rate: Speed of the text output, ranges from -10 to 10
- Volume: Volume, ranges from 0 - 100
- Voice: Voice
The last property is read-only, but it can be changed with some functions.
Useful is the function SelectVoiceByHints(), which expects "hints" according to which a voice is selected. Such criteria are for example gender, age and even culture group.
The following command looks for a voice, which sounds female and grown up and sets this voice as new outout voice:
Speaker.SelectVoiceByHints(VoiceGender.Female, VoiceAge.Adult);
However, on most Windows computers there will only be one preinstalled voice (at my computer for example only "Microsoft Anna"). Then the voice search does not have any influence.
A list of installed voices can be obtained via Speaker.GetInstalledVoices(). The single voices are represented as objects of the type Voice, the following example reads out all installed voices and adds them to the list InstalledVoices:
List<string> InstalledVoices = new List<string>();
foreach (InstalledVoice voice in Speaker.GetInstalledVoices())
{
InstalledVoices.Add(voice.VoiceInfo.Name);
} | http://csharp-tricks-en.blogspot.com/2011/03/speech-output-with-c.html | CC-MAIN-2017-51 | refinedweb | 270 | 53.21 |
On 11/24/2014 04:23 PM, John Ferlan wrote: > > On 11/24/2014 12:48 PM, Laine Stump wrote: >> + if (nlmsg_append(nl_msg, &ndm, sizeof(ndm), NLMSG_ALIGNTO) < 0) >> + goto buffer_too_small; >> + if (nla_put(nl_msg, NDA_LLADDR, VIR_MAC_BUFLEN, mac) < 0) >> + goto buffer_too_small; > So if someone adds the same thing twice, what happens? Does it matter? > IOW: Is there any need for us to check a return status here that > indicates "duplicate entry" and ignore? Or is there any need for a > *FDBGet() function to determine whether what we're about to add already > exists? If you try to add something that's already there, or remove something that isn't there, the kernel returns an error. We're only adding these entries immediately after creating a tap device, so the entry will never exist, and we actually never delete anything - it's automatically deleted when the tap device is deleted. In the future when we allow changing MAC address, then we'll need to think about those things, but for now we're safe. The "FDBGet()" type function is considerably more complicated (you basically get a dump of the entire db into a list), and not needed for what I'm implementing now. > > Similar argument of course for delete, but removing something that > doesn't exist - what happens? > > Then course there's the "FDBModify()" type function. We have something, > but want to change a setting. That really never happens. You add them and take them away. > >> + return -1; >> +} >> + >> + >> +#endif >> + >> +int >> +virNetDevBridgeFDBAdd(const virMacAddr *mac, const char *ifname, >> + unsigned int flags) >> +{ >> + return virNetDevBridgeFDBAddDel(mac, ifname, flags, true); >> +} >> + >> + > Thinking out loud for the future patches. The 'flags' here - must they > match how they were added? I think they do matter, as it's possible to have two entries with the same MAC address and interface name, but different flags. | https://listman.redhat.com/archives/libvir-list/2014-November/msg00943.html | CC-MAIN-2021-25 | refinedweb | 304 | 62.07 |
lp:~robert-ancell/gnome-mahjongg/update-3.14
- Get this branch:
- bzr branch lp:~robert-ancell/gnome-mahjongg/update-3.14
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- Robert Ancell
- Status:
- Development
Recent revisions
- 5. By Robert Ancell on 2014-12-04
* New upstream release (LP: #1399050)
* debian/control:
- Bump build depends on libglib2.0-dev, libgtk-3-dev
* debian/
patches/ no-headerbar. patch:
- Applied upstream
- 4. By Robert Ancell on 2014-07-03
* debian/control:
- Bump build-depends
- 3. By Robert Ancell on 2014-07-03
* New upstream release
* debian/
patches/ hide-headerbar. patch:
- Don't show HeaderBar in shells that don't support it well
- 2. By Robert Ancell on 2014-07-02
*
- 1. By Robert Ancell on 2013-12-16
import packaging
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later) | https://code.launchpad.net/~robert-ancell/gnome-mahjongg/update-3.14 | CC-MAIN-2020-10 | refinedweb | 146 | 52.87 |
Some schema schema objects and schema object location qualifiers
Guidelines for naming schema objects and qualifiers
Every database object has a name. In a SQL database object, with one exception: database links must be named with nonquoted identifiers. In addition, Oracle strongly recommends that you not use quotation marks to make usernames and passwords case sensitive. Please refer to CREATE USER for additional rules for naming users and passwords.
The following list of rules applies to both quoted and nonquoted identifiers unless otherwise indicated:
Names must be from 1 to 30 bytes long with these exceptions:
Names of databases are limited to 8 bytes.
Names of database links can be as long as 128 bytes., function names, the dummy system table.
You should use ASCII characters in database names, global database names, and database link.
Within a namespace, no two objects can have the same name.
The following schema objects share one namespace:
Tables
Views
Sequences
Private synonyms
Stand-alone procedures
Stand-alone stored functions
Packages
Materialized views
User-defined types
Each of the following schema objects has its own namespace:
Indexes
Constraints
Clusters
Database triggers
Private database links
Dimensions:
User roles
Public synonyms
Public database links
Tablespaces
Profiles
Parameter files (
PFILEs) and server parameter files (
SPFILEs)
Because the objects in these namespaces are not contained in schemas, these namespaces span the entire database.
Nonquoted identifiers are not case sensitive. Oracle interprets them as uppercase. Quoted identifiers are case sensitive."
Columns in the same table or view cannot have the same name. However, columns in different tables or views can have the same name.
Procedures or functions contained in the same package can have the same name, if their arguments are not of the same number and datatypes. Creating multiple procedures or functions with the same name in the same package with different arguments is called overloading the procedure or function.:
Use full, descriptive, pronounceable names (or well-known abbreviations).
Use consistent naming rules.
Use the same name to describe the same entity or attribute across tables.
department_id. | http://docs.oracle.com/cd/B13789_01/server.101/b10759/sql_elements008.htm | CC-MAIN-2015-32 | refinedweb | 338 | 54.52 |
Geeks With Blogs
Welcome to
Geeks with Blogs
Will Smith
94 Posts
| 212 Comments
My Other Recent Posts
Installing SP1… again
Blog Tagging
Live Writer – What took me so long?
"Hotfix" for Extraneous nbsp
Transitioning from TDD to BDD
ReSharper 30 Day Trial: Conclusion
ASP.Net CheckBoxList, Linq, and jQuery
My Intro to jQuery
ReSharper 30 Day Trial: Part 3
TestDriven.Net NUnit Issue Resolved
Post Categories
VS2013
Prism Windows 8.1 MVVM
SQL2012
MVC
ReSharper
Archives
December 2010 (1)
November 2009 (1)
October 2009 (1)
September 2009 (2)
June 2009 (2)
May 2009 (2)
April 2009 (2)
March 2009 (5)
February 2009 (4)
January 2009 (8)
December 2008 (3)
November 2008 (8)
October 2008 (2)
September 2008 (1)
August 2008 (6)
July 2008 (6)
June 2008 (7)
May 2008 (5)
April 2008 (3)
March 2008 (11)
February 2008 (8)
January 2008 (5)
Will Smith
The Blog in Black
<< Custom GridView Guidelines - Preface
|
|
Oracle Update with Join >>
SlickEdit Tools for Visual Studio - Review
|
Introduction
In all honesty, this review may be a bit premature. However, I wanted to record my thoughts and see if anyone listening shares them or has a difference of opinion.
I've been evaluating the
SlickEdit Tools
and have come up with a few top picks. Also, I have identified some that make little difference to me.
First of all, the Tools are segregated into two products: Editing Toolbox and Versioning Toolbox. I'll review them separately.
Editing Toolbox
Top Picks:
Acronyms
- This one is quite nice. I've created acronyms for namespaces that I use regularly. In fact, I even wrapped the entire c#
using
statement in the acronym. So, something like "
umbc
" can translate to something like "
using My.Business.Controllers;"
I am now looking for patterns in my code that may benefit from Acronyms. However, as I like to refactor to reduce code duplication, I don't know if I will find too many more uses for this feature. Regardless, It does save time when starting up a new file. Another positive that Acronyms provides is the ease by which to create new ones.
Code Annotations
- I am not big on commenting code. I prefer to let the code speak for itself. If the code has trouble communicating, then I refactor it, such as renaming a method to clearly communicate the intent. However, as a code reviewer, I can see great benefits to annotating the code for that purpose. In the past, I have been in code reviews where the note taker's handwriting is suspect. Also, the notes are on a hard copy that either gets filed away somewhere or scanned in. I can see the benefit of using Code Annotations with code reviews on a projection monitor. Assuming all participants review the code ahead of time, the meeting can run very smoothly. And then you just send the annotation file to the team that owns the code.
Quick Profiling
- I don't know how many times I wish I had a profiler at hand and was out of luck. Clearly, adding a bunch of lines of code to record timestamps is not generally the best use of a developer's time. Quick Profiling basically uses a trace handler and allows you to just write to the trace and you get profiling. Of course, you are still adding lines of code to write to the trace, but it is a lot less than the other cheap alternative. A true profiler is out of my budget as I suspect is the case for many of you. I can see Quick Profiling saving me time guessing at where a bottleneck is. I'm looking forward to testing it out on my next performance related issue.
Honorable Mention:
Load Files
- This one looks pretty promising too. If it weren't for the "Object reference" error I keep getting when I try to use it, it would be in the Top Picks. This gives me a good opportunity to investigate customer support.
I have used the MVC pattern quite a bit in my applications. As you can imagine, there is some segregation of code between the Model, View, and Controller (not to mention the data context). Though these components may be in different libraries and subfolders, they have similar names. Because of this, using Load Files makes it convenient to load all of the related files at once. Though I could certainly F12 a reference to open the related class, it's nice to be able to open the files in anticipation that I will need them.
Another advantage, and possibly the biggest, is that you don't have to navigate solution explorer to the correct library and subfolder to find the file you want to open.
Indifference:
Auto Code Doc Viewer
- As I said before, I am not big on commenting code. Perhaps if I start publishing an API, this feature might come in handy.
C++ Code Navigation
- The last time I saw C++ code was about 15 years ago.
Code Navigation
- I'm not sure what this gives me that I don't already have with my C# editor (Ctrl+- and Ctrl+Shift+-)
Comment Wrapping
- I have to admit, that when I do write comments, they are generally quite lengthy. A sure sign that I probably need to spend more time thinking about the code and refactoring. Regardless, this doesn't happen often enough for me to get much benefit out of this feature.
Directory Aliases
- I don't browse directories very often; and when I do, the directories are relatively close to each other. I don't see much of a time saver here.
Icon Extractor
- I don't work with icons (nor Windows Forms) much.
Quick Launch
- As far as I can tell, only beneficial if you also have the SlickEdit Application installed.
Regex Evaluator
- This might be great if I didn't already write my own (and if there weren't already a bunch of online tools available).
Word Completions
- The help is confusing on this one. Perhaps I am just "slow". Regardless, I use Visual Studios word completion all the time and I don't know how this can add much value.
Versioning Toolbox
Top Picks:
SVN Integration
- I've been using AnkhSvn for almost a year now. There are certainly some drawbacks to Ankh. SlickEdit's integration seems to be more stable, which is why this is in my Top Picks. However, Ankh does provide a bit more (for example, automatically performing an SVN Add when you create a new file). Also, Ankh is open source (good and bad).
Find Version
- This is a promising tool. I know that I have had to dig through revisions to track down something that is causing problems. This nifty little tool might have saved me a few hours searching. On the flip side, I don't have this problem very frequently.
Who Wrote This?
- Uses the history of your source file to show you who was the last person to touch each line of code. This can come in handy. I can quickly identify who I need to confer with if I have a question about the code.
Indifference:
Comparing Directories, Diff with Disk File, DIFFzilla
- As I use SVN and Tortoise, I already have these types of tools. In my opinion, the Tortoise tools are top notch. I have used the diff/merge tool many times to compare two files on disk. I just don't see the added benefit with SlickEdit's tools.
Version Graphs, Version History, Version Visualizations
- These just don't seem like features I would need or use.
Backup History, How Old Is This?, When Was This Written?
- I think good Agile / XP practices should eliminate the need for these tools. For example, knowing which lines of code are more "stable" can be determined by looking at a history of failing tests.
Summary
As I said from the very beginning, this evaluation may be a bit premature. Although the purpose behind these tools is to save development time, I don't want to take too much time learning how to use a new tool that won't give me a lot of benefit. Don't get me wrong. SlickEdit Tools for Visual Studio has a lot to offer. At around $50 for each piece (Editing and Versioning), that's just a drop in the bucket when you consider the time you can save. Even if just one feature provides a performance boost, it would be worth it. I certainly plan to get my money's worth.
I'm convinced that if more people use the products that SlickEdit provides, they will only get better. It's really up to the community to tell them what we want (perhaps refactoring tools, perhaps templates). Imagine what they can gain from our experience.
SlickEdit
,
Visual Studio
Posted on Wednesday, June 18, 2008 2:23 AM
.Net
,
Tools
,
Productivity
|
Back to top
Related Posts on Geeks With Blogs
Matching Categories
Query Application Insights REST API To Create Cust...
.NET
If I Made a New Web App over the Last 12 Years
Tools
Show off your API with a little Swagger...
.net
The TDLR; on easily managing .NET concurrent async...
.net
How Does Fit out Impact the Productivity and How t...
Productivity
Comments on this post: SlickEdit Tools for Visual Studio - Review | http://geekswithblogs.net/WillSmith/archive/2008/06/18/slickedit-tools-for-visual-studio---review.aspx | CC-MAIN-2017-34 | refinedweb | 1,560 | 72.36 |
I'm trying to overload the [] operator, so I made a simple vector class. I was looking through some of the directx headers and how they overloaded the operator. I made a completely const version that returns the value at a certain pos, and a non-const one that returns a pointer to a pos in the array (I think...) Anyway, it sorta works. On my machine it prints V[1] as 15 and V[0] as 34, but the program always crashes and I'm not sure why. Any help would be great, thanks!
Here's the code:
Code:#include <iostream.h> class CVect { private: int *dat; int len; public: CVect (); CVect (int); CVect (CVect&); ~CVect () {}; const int operator [] (int) const; int get_len (void) const; int& operator [] (int); void set_len (int); }; const int CVect::operator [] (int a) const { return dat [a]; } int& CVect::operator [] (int a) { return *(dat + (a * sizeof (int))); } inline int CVect::get_len (void) const { return len; } inline void CVect::set_len (int l) { len = l; } CVect::CVect () : len (1) { dat = new int (len); } CVect::CVect (int l) : len (l) { dat = new int (l); } int main (void) { CVect V (10); V[1] = 16; V[0] = 34; for (int i = 0; i < V.get_len() - 1; ++i) cout << V[i] << endl; return (0); } | https://cboard.cprogramming.com/cplusplus-programming/22825-overloading-%5B%5D-operator-trouble.html | CC-MAIN-2017-51 | refinedweb | 213 | 74.83 |
Advent of Code 2019 Day 405.
Day 4: Part 1
This challenge is relatively short so I will include the whole thing below:
—in
122345).
- Going from left to right, the digits never decrease; they only ever increase or stay the same (like
111123or
135679).
Other than the range rule, the following are true:
-
111111meets these criteria (double
11, never decreases).
-
223450does not meet these criteria (decreasing pair of digits
50).
-
123789does not meet these criteria (no double).
How many different passwords within the range given in your puzzle input meet these criteria?
Your puzzle input is
136760-595730.
So we need to crack that password! Or at least work out how many combinations there are.
This is a nice and simple thing to do in Scala:
val min = 136760 val max = 595730 val fullRange = min to max
First we define the minimum and maximum and create a range between them.
Next I want to extract each digit inside each item in the range into a single number. I actually use a bit of a short-cut to do this:
def charToInt(char: Char): Int = char.toInt - '0'
This method will take a character and assuming it is a number character will convert it into a matching integer. Combined with a string version of a candidate password this lets me produce an array of digits with ease like so:
fullRange .map(n => n.toString) .map(string => string.map(char => charToInt(char)))
Now all we need to do is filter down this big collection of digits to match the criteria described:
First lets find all the combinations with repeating digits:
def hasRepeatedDigit(number: IndexedSeq[Int]): Boolean = { for (index <- 0 until number.size - 1) { val digit = number(index) val nextDigit = number(index + 1) if (digit == nextDigit) { return true } } false }
That’s pretty simple and easy.
Next let us filter to just those digits with incrementing or remaining the same digits:
def isIncrementingOrSame(number: IndexedSeq[Int]): Boolean = { var index: Int = 0 while (index < number.size - 1) { val digit = number(index) for (i <- index + 1 until number.size) { val testDigit = number(i) if (testDigit < digit) { return false } } index += 1 } true }
A little more complex but not hard.
Putting these together like so:
val validPasswords = fullRange .map(n => n.toString) .map(string => string.map(char => charToInt(char))) .filter(hasRepeatedDigit) .filter(isIncrementingOrSame) println(validPasswords.size)
Will print out the amount of valid values asked for in part 1!
Day 4: Part 2
Now part 2 modifies one of the conditions slightly:
— Part Two —
An Elf just remembered one more important detail: the two adjacent matching digits are not part of a larger group of matching digits.
Given this additional criterion, but still ignoring the range rule, the following are now true:
-
112233meets these criteria because the digits never decrease and all repeated digits are exactly two digits long.
-
123444no longer meets the criteria (the repeated
44is part of a larger group of
444).
-
111122meets the criteria (even though
1is repeated more than twice, it still contains a double
22).
How many different passwords within the range given in your puzzle input meet all of the criteria?
Now we can add an extra filter to cover this.
There’s a few ways of writing this filter, one slightly hacky way is to convert the digits back to a string and use a Regular Expression to find all the repeating digits:
val pattern = Pattern.compile("(?<=(.))(?!\\1)") def repeatDigitsNotPartOfLargerGroup(number: IndexedSeq[Int]): Boolean = { val asString = number.map(digit => digit.toString).mkString val repeatedDigits = pattern.split(asString).toSeq repeatedDigits.exists(repeat => repeat.length == 2) }
The pattern does a positive lookbehind and negative lookahead. Kind of hard to
understand unless you use Regular Expressions a lot.
You could do a similar thing with a Java
Scanner too.
But if we wanted to do this properly without converting to a string we really only need 2 nested loops to perform the same logic on the digits:
def repeatDigitsNotPartOfLargerGroup(number: IndexedSeq[Int]): Boolean = { val groupCounts = mutable.Buffer[(Int, Int)]() var start = 0 while (start < number.length - 1) { val digit = number(start) var count = 1 var i = start + 1 var changed = false while (i < number.length && !changed) { val nextDigit = number(i) if (digit != nextDigit) { changed = true } else { count += 1 i += 1 } } val group = (digit, count) groupCounts += group start += count } groupCounts.exists(group => group._2 == 2) }
And with either of these filters added we get our result for part 2! | https://lyndon.codes/ | CC-MAIN-2019-51 | refinedweb | 735 | 62.78 |
By Milind Paradkar.
Given the growing popularity and ease to learn, the Executive Programme in Algorithmic Trading (EPAT™) offers a dedicated module which covers Quantitative Trading Strategies taught using Python.You can know more about the EPAT™ program here.
The objective of this post is to illustrate how easy it is to learn Python and apply it to formulate and analyze trading strategies. If you are new to programming this blog might just help you overcome your fear of programming. Also, don’t forget to check out some nice links provided at the end of this blog to learn some exciting trading strategies which have been posted on our blog.
Let us run through some Basic Operations that can be performed on a stock data using Python. We start by reading the stock data from a CSV file. The CSV file contains the Open-High-Low-Close (OHLC) and Volume numbers for the stock.
import pandas as pd # Load data from csv file data = pd.DataFrame.from_csv('UBL.csv') print(data.head())
The ‘TIME’ column seen here specifies the closing time of the day’s trading session. To delete the column we can simply use the ‘del’ command.
# Deleting the "TIME" column del data['TIME']
Now, let us use the type function to check whether the object is a pandas datetime index.
type(data.index)
I would like to know the number of trading days (the number of rows) in the given data set. It can be done using the count method.
# Number of rows in the data set print(data['CLOSE'].count())
What if I want to know the maximum close price that was reached in the given period? This is made possible by using the max method.
max_price = data['CLOSE'].max() print(max_price)
Is it also possible to know the date on which this maximum price was reached? To find the respective date we apply the index property as shown below.
data.CLOSE[data.CLOSE == max_price].index
Let us compute the daily percentage change in closing price. We add a new column of ‘Percentage_Change’ to our existing data set. In the next line of code, we have filtered the percent change column for all the values greater than 1.0. The result has been presented below.
# Compute the percentage change data['Percent_Change'] = data['CLOSE'].pct_change()*100 # Filter the percent change column for all values greater than 1.0 dt = (data[data.Percent_Change > 1.0]) print(data.head()) print(dt.head())
Finally, let us add a couple of indicators. We compute the 20-day simple moving average and the 5-day average volume. We can add more indicators to our data frame and then analyze the stock trend to see whether it is bullish or bearish. You can learn more on how to create various technical indicators in Python here.
# Closing near the 20-day SMA ndays = 20 SMA = pd.Series((data['CLOSE']).rolling(window=ndays).mean(),name = 'SMA') data = data.join(SMA) # Higher trade Quantity Avg_vol = pd.Series((data['VOLUME']).rolling(window=5).mean(),name = '5day_AvgVol') data = data.join(Avg_vol) print(data.tail(7))
In his short post, we covered some simple ways to analyze the data set and build more understanding of the stock data. Can you think of building a trading strategy using similar basic operations and simple indicators? Simple trading strategies can be profitable and many successful traders will vouch for that. As mentioned at the start of the blog, here are the links to some trading strategies in Python that can be explored for your own trading needs.
Trading Using Machine Learning In Python – SVM (Support Vector Machine)
Strategy using Trend-following Indicators: MACD, ST and ADX
Sentiment Analysis on News Articles using Python
Python Trading Strategy in Quantiacs Platform
In our upcoming posts, we will provide more ways and methods that can be used for trading using Python. Keep following our posts.!
The post Basic Operations on Stock data using Python appeared first on .
This post first appeared on Best Algo Trading Platforms Used In Indian Market, please read the originial post: here | https://www.blogarama.com/blogging-blogs/1287370-best-algo-trading-platforms-used-indian-market-blog/21857524-basic-operations-stock-data-using-python | CC-MAIN-2019-13 | refinedweb | 680 | 65.42 |
Part 8: i18n
Reaction Commerce has been localized from the very start, and allows plugins to also add internationalization (heretofore call i18n) using the library.
The process is simple and just involves these steps:
- Determine the strings that need to be localized
- Create keys for these strings and the default text
- Add some bootstrap code to load these translations into the db
For the purposes of our tutorial we haven't really added any functionality we can translate, so let's add a little code to our "debug" HTML we added to the core layout that gives us some info about the current layout and workflow.
So to our
core.html let's all these lines right after the
main section
<div class="rui beesknees"> <!--- So you wouldn't want to actually have this here but it's good for some examples --> <div class="bkdebug"><em>Using Bee's Knees layout</em></div> <div class="bkdebug"><em>layoutHeader template: </em>{{layoutHeader}}</div> <div class="bkdebug"><em>layoutFooter template: </em>{{layoutFooter}}</div> <div class="bkdebug"><em>Main Layout: </em>{{template}}</div> </div>
So let's say we determined that we wanted to localize the term "Main Layout". First let's add a
i18n directory to our
server folder. And in that folder let's create an
en.json file and add this to it.
[{ "language": "English", "i18n": "en", "ns": "beesknees", "translation": { "beesknees": { "navbar": { "beeskneesLayout": "Translated Bee's Knees Layout", "mainTemplate": "Translated Main Template" } } } }]
What we've done there is added our "beesknees" namespace (that's what
ns stands for) and then put our keys in there, with
a navbar key so that as we expand our plugin we have room to grow.
Now we need to load that file into the translations. So let's create another file in the
i18n directory called
index.js
and add this code.
import { loadTranslations } from "/server/startup/i18n"; import en from "./en.json"; loadTranslations([en]);
You will then want to add an import for
i18n to the server-level
index.js. This function will then get run on
server startup. (You will need to stop and restart for the translation to be loaded)
Then we want to back to our HTML and add the the helpers. So for our two translated lines we add this attribute
data-i18n="beesknees.navbar.beeskneesLayout"
For the "Bee's Knees Layout" and
data-i18n="beesknees.navbar.mainTemplate"
for the "Main Layout"
When you stop and restart the translations will be loaded. Now of course it won't look any different because we only
have English loaded and even if you switch languages you still only have English, but if you added another translation there
those strings would show the translated version. This allows you to add new language support by just adding additional JSON
files and calling
loadTranslations on them.
Next: Final Thoughts | https://docs.reactioncommerce.com/docs/next/plugin-i18n-9 | CC-MAIN-2018-43 | refinedweb | 475 | 60.45 |
.
First of all, why should you even bother learning how to web scrape? If your job doesn’t require you to learn it, then let me give you some motivation. What if you want to create a website which curates cheapest products from Amazon, Walmart and a couple of other online stores? A lot of these online stores don’t provide you with an easy way to access their information using an API. In the absence of an API, your only choice is to create a web scraper which can extract information from these websites automatically and provide you with that information in an easy to use way.
Here is an example of a typical API response in JSON. This is the response from Reddit:
There are a lot of Python libraries out there which can help you with web scraping. There is lxml, BeautifulSoup and a full-fledged framework called Scrapy. Most of the tutorials discuss BeautifulSoup and Scrapy, so I decided to go with lxml in this post. I will teach you the basics of XPaths and how you can use them to extract data from an HTML document. I will take you through a couple of different examples so that you can quickly get up-to-speed with lxml and XPaths.
If you are a gamer, you will already know of (and likely love) this website. We will be trying to extract data from Steam. More specifically, we will be selecting from the “popular new releases” information. I am converting this into a two-part series. In this part, we will be creating a Python script which can extract the names of the games, the prices of the games, the different tags associated with each game and the target platforms. In the second part, we will turn this script into a Flask based API and then host it on Heroku.
Step 1: Exploring Steam
First of all, open up the “popular new releases” page on Steam and scroll down until you see the Popular New Releases tab. At this point, I usually open up Chrome developer tools and see which HTML tags contain the required data. I extensively use the element inspector tool (The button in the top left of the developer tools). It allows you to see the HTML markup behind a specific element on the page with just one click. As a high-level overview, everything on a web page is encapsulated in an HTML tag and tags are usually nested. You need to figure out which tags you need to extract the data from and you are good to go. In our case, if we take a look, we can see that every separate list item is encapsulated in an anchor (
a) tag.
The anchor tags themselves are encapsulated in the
div with an id of
tab_newreleases_content. I am mentioning the id because there are two tabs on this page. The second tab is the standard “New Releases” tab, and we don’t want to extract information from that tab. Hence, we will first extract the “Popular New Releases” tab, and then we will extract the required information from this tag.
Step 2: Start writing a Python script
This is a perfect time to create a new Python file and start writing down our script. I am going to create a
scrape.py file. Now let’s go ahead and import the required libraries. The first one is the
requests library and the second one is the
lxml.html library.
import requests import lxml.html
If you don’t have
requests installed, you can easily install it by running this command in the terminal:
$ pip install requests
The requests library is going to help us open the web page in Python. We could have used
lxml to open the HTML page as well but it doesn’t work well with all web pages so to be on the safe side I am going to use
requests.
Now let’s open up the web page using requests and pass that response to
lxml.html.fromstring.
html = requests.get('') doc = lxml.html.fromstring(html.content)
This provides us with an object of
HtmlElement type. This object has the
xpath method which we can use to query the HTML document. This provides us with a structured way to extract information from an HTML document.
Step 3: Fire up the Python Interpreter
Now save this file and open up a terminal. Copy the code from the
scrape.py file and paste it in a Python interpreter session.
We are doing this so that we can quickly test our XPaths without continuously editing, saving and executing our
scrape.py file.
Let’s try writing an XPath for extracting the div which contains the ‘Popular New Releases’ tab. I will explain the code as we go along:
new_releases = doc.xpath('//div[@id="tab_newreleases_content"]')[0]
This statement will return a list of all the
divs in the HTML page which have an id of
tab_newreleases_content. Now because we know that only one div on the page has this id we can take out the first element from the list (
[0]) and that would be our required
div. Let’s break down the
xpath and try to understand it:
//these double forward slashes tell
lxmlthat we want to search for all tags in the HTML document which match our requirements/filters. Another option was to use
/(a single forward slash). The single forward slash returns only the immediate child tags/nodes which match our requirements/filters
divtells
lxmlthat we are searching for
divsin the HTML page
[@id="tab_newreleases_content"]tells
lxmlthat we are only interested in those
divswhich have an id of
tab_newreleases_content
Cool! We have got the required
div. Now let’s go back to chrome and check which tag contains the titles of the releases.
Step 4: Extract the titles & prices
The title is contained in a div with a class of
tab_item_name. Now that we have the “Popular New Releases” tab extracted we can run further XPath queries on that tab. Write down the following code in the same Python console which we previously ran our code in:
titles = new_releases.xpath('.//div[@class="tab_item_name"]/text()')
This gives us with the titles of all of the games in the “Popular New Releases” tab. Here is the expected output:
Let’s break down this XPath a little bit because it is a bit different from the last one.
.tells lxml that we are only interested in the tags which are the children of the
new_releasestag
[@class="tab_item_name"]is pretty similar to how we were filtering
divsbased on
id. The only difference is that here we are filtering based on the class name
/text()tells lxml that we want the text contained within the tag we just extracted. In this case, it returns the title contained in the div with the
tab_item_nameclass name
Now we need to extract the prices for the games. We can easily do that by running the following code:
prices = new_releases.xpath('.//div[@class="discount_final_price"]/text()')
I don’t think I need to explain this code as it is pretty similar to the title extraction code. The only change we made is the change in the class name.
Step 5: Extracting tags
Now we need to extract the tags associated with the titles. Here is the HTML markup:
Write down the following code in the Python terminal to extract the tags:
tags = new_releases.xpath('.//div[@class="tab_item_top_tags"]') total_tags = [] for tag in tags: total_tags.append(tag.text_content())
So what we are doing here is that we are extracting the
divs containing the tags for the games. Then we loop over the list of extracted tags and then extract the text from those tags using the
text_content() method.
text_content() returns the text contained within an HTML tag without the HTML markup.
Note: We could have also made use of a list comprehension to make that code shorter. I wrote it down in this way so that even those who don’t know about list comprehensions can understand the code. Eitherways, this is the alternate code:
tags = [tag.text_content() for tag in new_releases.xpath('.//div[@class="tab_item_top_tags"]')]
Lets separate the tags in a list as well so that each tag is a separate element:
tags = [tag.split(', ') for tag in tags]
Step 6: Extracting the platforms
Now the only thing remaining is to extract the platforms associated with each title. Here is the HTML markup:
The major difference here is that the platforms are not contained as texts within a specific tag. They are listed as the class name. Some titles only have one platform associated with them like this:
<span class="platform_img win"></span>
While some titles have 5 platforms associated with them>
As we can see these
spans contain the platform type as the class name. The only common thing between these
spans is that all of them contain the
platform_img class. First of all, we will extract the
divs with the
tab_item_details class, then we will extract the
spans containing the
platform_img class and finally we will extract the second class name from those
spans. Here is the)
In line 1 we start with extracting the
tab_item_details
div. The XPath in line 5 is a bit different. Here we have
[contains(@class, "platform_img")] instead of simply having
[@class="platform_img"]. The reason is that
[@class="platform_img"] returns those
spans which only have the
platform_img class associated with them. If the
spans have an additional class, they won’t be returned. Whereas
[contains(@class, "platform_img")] filters all the
spans which have the
platform_img class. It doesn’t matter whether it is the only class or if there are more classes associated with that tag.
In line 6 we are making use of a list comprehension to reduce the code size. The
.get() method allows us to extract an attribute of a tag. Here we are using it to extract the
class attribute of a
span. We get a string back from the
.get() method. In case of the first game, the string being returned is
platform_img win so we split that string based on the comma and the whitespace, and then we store the last part (which is the actual platform name) of the split string in the list.
In lines 7-8 we are removing the
hmd_separator from the list if it exists. This is because
hmd_separator is not a platform. It is just a vertical separator bar used to separate actual platforms from VR/AR hardware.
Step 7: Conclusion
This is the code we have so far:
import requests import lxml.html html = requests.get('') doc = lxml.html.fromstring(html.content) new_releases = doc.xpath('//div[@id="tab_newreleases_content"]')[0] titles = new_releases.xpath('.//div[@class="tab_item_name"]/text()') prices = new_releases.xpath('.//div[@class="discount_final_price"]/text()') tags = [tag.text_content() for tag in new_releases.xpath('.//div[@class="tab_item_top_tags"]')] tags = [tag.split(', ') for tag in tags] a Flask based API. Here is the code:
output = [] for info in zip(titles,prices, tags, total_platforms): resp = {} resp['title'] = info[0] resp['price'] = info[1] resp['tags'] = info[2] resp['platforms'] = info[3] output.append(resp)
This code is self-explanatory. We are using the
zip function to loop over all of those lists in parallel. Then we create a dictionary for each game and assign the title, price, tags, and platforms as a separate key in that dictionary. Lastly, we append that dictionary to the output list.
In a future post, we will take a look at how we can convert this into a Flask based API and host it on Heroku.
Have a great day!
Note: This article first appeared on Timber.io | https://pythontips.com/2018/06/20/an-intro-to-web-scraping-with-lxml-and-python/ | CC-MAIN-2018-39 | refinedweb | 1,947 | 72.05 |
Having Fun With CSS3: Spinning Newspapers
Martin Angelov.
The Idea
Using JavaScript, we will load a sliced up version of the newspaper (slices are independently encoded as PNG or JPG for smaller filesize), and combine them in a single canvas element. We will also load a custom font from Google WebFonts, which we use to write the article title to the canvas.
We also define a simple CSS3 keyframe animation, which uses transformations such as
scale() and
rotate() to animate the canvas elements. Appending the canvas to the page triggers the animation, which means we don’t need to write a single line of JavaScript for the effect itself.
Currently, CSS3 keyframe animations are supported by Firefox, Safari and Chrome, so if you are using a recent version of one of these browsers, you will be able to enjoy the demo.
Here are some minor considerations that drove the decisions above:
- The image of the newspaper, encoded as PNG, weighs at over 250kb. Slicing it into independently encoded slices saves 200kb, as the center part does need transparency and is encoded as JPEG;
- Rotating a bunch of DOM elements is slower than a single canvas element. Also, rotated text in the browser generally does not look very good, as letters may lose their anti-aliasing (see a simple experiment here; it is most pronounced in Firefox). Painting the text and the newspaper background to a
canvaselement solves both of these problems;
The HTML
The markup of the page is minimal – everything is done using jQuery, so we only need to include our JS source files and stylesheets.
index.html
<!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>Spinning Newspaper Effect | Tutorialzine Demo</title> <!-- Our CSS stylesheet file --> <link rel="stylesheet" href="assets/css/styles.css" /> <!-- Embedding the Anton font from Google Webfonts --> <link rel="stylesheet" href="" /> <!--[if lt IE 9]> <script src=""></script> <![endif]--> </head> <body> <h3 id="fin">That is all</h3> <!-- This div uses the "Anton" font, preloading it for the canvas element --> <div id="fontPreload">.</div> <script src=""></script> <script src="assets/js/script.js"></script> </body> </html>
In the head section, we include our styles.css file (discussed below) and a stylsheet, which embeds the Anton font from Google WebFonts. Near the end of the file, we include version 1.6.2 of the jQuery library and our script.js (discussed in detail later on).
The most important piece of markup in the code is also the most unassuming. The
#fontPreload div is crucial for this example. What it does is use the embedded Anton web font. This is required so that browsers properly initializes the font before it is used in the canvas. Without it we would be staring at a blank newspaper cover.
The jQuery
As we are using a custom web font, we need to be sure that the font is loaded before we use it to generate the newspaper titles. This is why we are binding a callback to the
$(window).load() event, which is called once everything is loaded:
assets/js/script.js
$(window).load(function(){ var imgPath = "assets/img/"; // Define 6 paper covers: var papers = [ { line1:"The financial", line2:"chrisis is a hoax!", subtitle:"Economist admits practical joke" }, { line1:"Deeply fried now", line2:"considered healthy", subtitle:"Scientists change the definition of \"Healthy\"" }, { line1:"Apple announces", line2:"the new iphone 9", subtitle:"5, 6, 7 and 8 deemed \"not hip enough\"" }, { line1:"The world did end", line2:"on may 21st!", subtitle:"Priest argues we are actually dead" }, { line1:"France imposes an", line2:"internet kitten tax", subtitle:"Every cat picture on the internet will cost 3 €" }, { line1:"Thank you &", line2:"goodbye", subtitle:"The Zine Weekly takes its farewell" } ]; // Check whether canvas and CSS3 animations are supported: if(!$.support.canvas){ $('#fin').html('Sorry, your browser does not<br />support <canvas>').show(); return; } if(!$.support.css3Animation){ $('#fin').html('Sorry, your browser does not<br />support CSS3 Animations').show(); return; } // Use jQuery.Deferred to bind a callback when all // the images that comprise the paper are loaded: $.when( loadImage(imgPath+"paper_top.png"), loadImage(imgPath+"paper_left.png"), loadImage(imgPath+"paper_center.jpg"), loadImage(imgPath+"paper_right.png"), loadImage(imgPath+"paper_bottom.png") ).then(function( imgTop, imgLeft, imgCenter, imgRight, imgBottom ){ // Loop through the paper covers and // create a new canvas for each one: $.each(papers,function(i){ var canvas = document.createElement("canvas"), c = canvas.getContext("2d"); canvas.width = 717; canvas.height = 526; // Drawing the paper background slices: c.drawImage( imgTop, 0, 0 ); c.drawImage( imgLeft, 0, 12 ); c.drawImage( imgCenter, 14, 12 ); c.drawImage( imgRight, 711, 12 ); c.drawImage( imgBottom, 0, 516 ); // Drawing the text using our helper // function (see at the bottom): drawText( this.line1, this.line2, this.subtitle, c, 358, 250 ); // Appending the element to the page. // This triggers the CSS3 animation. setTimeout(function(){ $("body").append(canvas); },i*5800); }); // "This is all" $('#fin').delay(papers.length*5800).fadeIn(); }); /*------------------------ Helper functions ------------------------*/ // Load an image by URL and resolve a jQuery.Deferred: function loadImage(src){ var def = new $.Deferred(), img = new Image(); img.onload = function(){ // Resolve the deferred. The img parameter // will be available in the then function: def.resolve(img); } // Always set the src attribute // after the onload callback: img.src = src; return def.promise(); } // Draw two lines of text and a subtitle // on the canvas (passed as the c param): function drawText( line1, line2, subtitle, c, x, y ){ c.font = "65px Anton,Calibri"; c.textAlign = "center"; c.fillStyle = "#3e3e3e"; c.fillText(line1.toUpperCase(),x,y); c.fillText(line2.toUpperCase(),x,y+80); c.font = "italic 20px Georgia,serif"; c.fillStyle = "#737373"; c.fillText(subtitle,x,y+120); } }); (function(){ // Adding custom checks for canvas and css3 // animations support, to the jQuery.support object: $.support.canvas = 'getContext' in document.createElement('canvas'); $.support.css3Animation = (function(){ var sp = $('<span>'); return ( sp.css("-webkit-animation") !== undefined || sp.css("-moz-animation") !== undefined || sp.css("animation") !== undefined ); })(); })();
To generate the newspapers, we need to first load the five slices that comprise the image. This sounds like the perfect place to use jQuery’s Deferred object, introduced in version 1.5. What it does is to notify us when a number of asynchronous events are completed. As you can see in the code above, we are using it in the
loadImage() function. The
then() method on line 58 is called only when all five images are loaded.
Using jQuery.Deferred is a convenient way to organize our code better. It is also used by jQuery’s internal AJAX and animation methods. To get a better idea of what you can do with it, read through the deferred object documentation.
Inside the
$.each loop, we create a canvas element for each of the paper covers, and add them to the page after a delay introduced by the
setTimeout() call.
Once we have the canvas on the page, we can continue with animating it.
The CSS
Canvas elements are treated as any other element. This means that you can safely style and transform them the same way as you would a regular image.
Once the canvas is added to the page, it will assume the styling you see below:
canvas{ position:fixed; width:717px; height:526px; top:50%; left:50%; margin:-263px 0 0 -358px; opacity:0; /* Configure the animation for Firefox */ -moz-animation-duration:6s; -moz-animation-name:spin; -moz-animation-timing-function:linear; /* Configure it for Chrome and Safari */ -webkit-animation-duration:6s; -webkit-animation-name:spin; -webkit-animation-timing-function:linear; }
Nothing out of the ordinary here. We are centering the canvas in the page, and defining the different aspects of the animation like duration, name, and a timing function. “Linear” would make our animation run at a constant speed, instead of getting accelerated as is the case with “ease”, which is used by default.
After this, we have to use the @keyframes declaration to specify how our element would look at different key points during the animation:
@-moz-keyframes spin{ 0%{ opacity:0.2; -moz-transform:scale(0.2) rotate(0deg); } 15%{ opacity:1; margin:-263px 0 0 -358px; -moz-transform:scale(1) rotate(1090deg); } 90%{ opacity:1; top:50%; -moz-transform:scale(1) rotate(1090deg); } 100%{ top:500%; opacity:1; -moz-transform:scale(1) rotate(1090deg); } }
When the canvas element is added to the page, we start off from the 0% position above. The element’s opacity is set to 0.2, and it is made 5 times smaller using a
scale() transformation. It is quickly animated to its full size (
scale(1) ) in and from 15% to 90% of the animation (or about four and a half seconds) it stays fixed on the screen, after which it quickly falls outside the bottom border of the window (top is increased to 500%).
It is important to specify the properties that you want to persist in every percentage point of the animation. One example is the
-moz-transform:scale(1) rotate(1090deg) declaration, which is duplicated three times. Without it, Chrome and Safari (but not Firefox) will revert to the default rotation of 0 degrees mid animation.
And, as this is still considered an experimental feature by browser vendors, we need to write the same code for webkit:
@-webkit-keyframes spin{ 0%{ opacity:0.2; -webkit-transform:scale(0.2) rotate(0deg); } 15%{ opacity:1; margin:-263px 0 0 -358px; -webkit-transform:scale(1) rotate(1090deg); } 90%{ opacity:1; top:50%; -webkit-transform:scale(1) rotate(1090deg); } 100%{ top:500%; opacity:1; -webkit-transform:scale(1) rotate(1090deg); } }
With this our spinning newspaper effect is complete!
Conclusion
As with any cop drama from the 1930s, and the 1930s themselves for that matter, this tutorial has to come to an end. Hope you folks had as much fun following the tutorial as I had writing it. If you have any thoughts or suggestion be sure to share in the comment section. You can also download a PSD with the newspaper template, so you can make your own, below.
25 Comments
hi
it seem's cool but it doesn't wok on fire fox 3.6.18
The article said if you have a newer browser it will work. if you are running 3.6 you are WAAY behind lol.
@aniss, install FF 5.0 :)
Amazing (:
I love It
It's like olds movies....
Or the day of the tentacle
really cool. I like it.
whaaat, dude!
This rocks! :)
That's a great tip. Well done for the well presented tutorial and also for demonstrating this example with canvas and not just a simple div.
Definitely helped me have a slightly better understanding of canvas.
Great effect, I'm going to have to try and find some use for this in a future project =)
Nice one!
What about making it take the headlines from a RSS?
I would love to have it as a "Feed reader" for my newspaper!
Let me know!
Mariano
Superbe, great work as ever. Can't wait for the next tut. Excellent
Amazing effect and great tutorial!
LOL
I love the headlines in the demo! hilarious :)
And as usual,, elegant job!
Hello,
This is nice idea to promote special offers on website, can you even release jquery version as well for non css3 supported browsers. thanks.
Really nice, as always. I have a question. Is this the same technique which Vimeo uses on it's login page for moving a cloud image across the screen. the cloud is not any animation as it is just a simple image and is moving across the screen.
""
I am struggling my brain out to find out the technique as I want to move a balloon on my website's front page and tried doing that with marquee with scrolling up and at the scroll speed of 1.. but unsuccessful as the balloon image disappeared...
I'll definitely would like to see that as a tutorial if possible...
Do you have some link about different in rotated object? Why in FF its so bad?
I test it on Opera, Chrome and FF.
Firefox looses, Chrome was wery blurry (but ok), and Opera is betwet. Maybe Opera win this comepition :)
In firefox rotated text looks terrible! In Chrome they are "no" difference between canvas and normal html text. In Opera you can see different, but it is acceptable.
Why Firefox (v 5!) is so terrible in this area? Maybe they have default off some options?
Im sory. I test it on FF 4. In FF 5 text was blurred and it was ok.
Great tutorial as always!
The French headline made me laugh my ass off!!
Keep up the good work!
Rally neat idea! Love it! Not sure how practical it would be, but I still love it!
That is awesome..will put to use somehow. Thank you!
Amazing effect and great tutorial!
Incredible! I love animation now I can try my self.
thanks
I love this! Sorry if I'm missing something, but where can I download the PSD with the newspaper template (mentioned in the Conclusion of the tutorial) so I can customize this even more? Thanks again!
Awesome tutorial. Loved it and the steps were easy to follow
This is so cool I will be trying this out on some of my next few projects. thanks | http://tutorialzine.com/2011/07/spinning-newspaper-effect-css3/ | CC-MAIN-2014-10 | refinedweb | 2,203 | 58.18 |
On Tue, Mar 13, 2018 at 05:07:22PM +0200, Liran Alon wrote: > Before this commit, dev_forward_skb() always cleared packet's > per-network-namespace info. Even if the packet doesn't cross > network namespaces. > > The comment above dev_forward_skb() describes that this is done > because the receiving device may be in another network namespace. > However, this case can easily be tested for and therefore we can > scrub packet's per-network-namespace info only when receiving device > is indeed in another network namespace. > > Therefore, this commit changes ____dev_forward_skb() to tell > skb_scrub_packet() that skb has crossed network-namespace only in case > transmitting device (skb->dev) network namespace is different then > receiving device (dev) network namespace. > > An example of a netdev that use skb_forward_skb() is veth. > Thus, before this commit a packet transmitted from one veth peer to > another when both veth peers are on same network namespace will lose > it's skb->mark. The bug could easily be demonstrated by the following: > > ip netns add test > ip netns exec test bash > ip link add veth-a type veth peer name veth-b > ip link set veth-a up > ip link set veth-b up > ip addr add dev veth-a 12.0.0.1/24 > tc qdisc add dev veth-a root handle 1 prio > tc qdisc add dev veth-b ingress > tc filter add dev veth-a parent 1: u32 match u32 0 0 action skbedit mark 1337 > tc filter add dev veth-b parent ffff: basic match 'meta(nf_mark eq 1337)' > action simple "skb->mark 1337!" > dmesg -C > ping 12.0.0.2 > dmesg > > Before this change, the above will print nothing to dmesg. > After this change, "skb->mark 1337!" will be printed as necessary.
Hi Liran, > > Signed-off-by: Liran Alon <liran.a...@oracle.com> > Reviewed-by: Yuval Shaia <yuval.sh...@oracle.com> > Signed-off-by: Yuval Shaia <yuval.sh...@oracle.com> I did not earned the credits for SOB, only r-b. Yuval > --- > include/linux/netdevice.h | 2 +- > net/core/dev.c | 6 +++--- > 2 files changed, 4 insertions(+), 4 deletions(-) > > diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h > index 5eef6c8e2741..5908f1e31ee2 100644 > --- a/include/linux/netdevice.h > +++ b/include/linux/netdevice.h > @@ -3371,7 +3371,7 @@ static __always_inline int ____dev_forward_skb(struct > net_device *dev, > return NET_RX_DROP; > } > > - skb_scrub_packet(skb, true); > + skb_scrub_packet(skb, !net_eq(dev_net(dev), dev_net(skb->dev))); > skb->priority = 0; > return 0; > } > diff --git a/net/core/dev.c b/net/core/dev.c > index 2cedf520cb28..087787dd0a50 100644 > --- a/net/core/dev.c > +++ b/net/core/dev.c > @@ -1877,9 +1877,9 @@ int __dev_forward_skb(struct net_device *dev, struct > sk_buff *skb) > * start_xmit function of one device into the receive queue > * of another device. > * > - * The receiving device may be in another namespace, so > - * we have to clear all information in the skb that could > - * impact namespace isolation. > + * The receiving device may be in another namespace. > + * In that case, we have to clear all information in the > + * skb that could impact namespace isolation. > */ > int dev_forward_skb(struct net_device *dev, struct sk_buff *skb) > { > -- > 1.9.1 > | https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1635655.html | CC-MAIN-2018-43 | refinedweb | 506 | 57.87 |
Hello,
I began a ruby application recently and I try to integrate Ajax touchs
but I got a strange problem here.
In my view app/views/messages/index.rthml, I’ve got a simple link to a
remote function and a container
[code]<%= link_to_remote “Refresh”, :url => { :controller => “messages”,
:action => “get_messages”, :method => “get”, :page =>
@messages.current_page }, :before => “Element.show(‘wait’)”, :complete
=> “Element.hide(‘wait’)” %>
In my controller app/controllers/messages_controller.rb, I’ve got the
associated method :
def get_messages @messages = Message.paginate :page => params[:page], :per_page => Message.per_page, :limit => Message.per_page, :order => 'created_at DESC' end
and I finally have a RJS file name app/views/messages/get_messages.rjs
with the following code:
page.replace_html ('messages_container', :partial => "shared/messages/messages", :object => @messages)
The problem is that the RJS file is never used/called by RoR and I do
not understand why.
I’m sure the get_messages method in the controller is called and
executed because when I put the JS instructions inthere, that’s work :
def get_messages @messages = Message.paginate :page => params[:page], :per_page => Message.per_page, :limit => Message.per_page, :order => 'created_at DESC' render(:update) { |page| page.replace_html 'messages_container', :partial => 'shared/messages/messages', :object => @messages } end
I should be able to put the JS code in the RJS file, but it is ignored.
When I tried to add an :update id in my link_to_remote call, the RJS is
read but the container is replaced by the generated javascript code (not
executed so).
Thx in advance if somebody has an idea ! | https://www.ruby-forum.com/t/rjs-not-executed/132783 | CC-MAIN-2021-31 | refinedweb | 247 | 51.75 |
The process of handling text data is a little different compared to other problems. This is because the data is usually in text form. You therefore have to figure out how to represent the data in a numeric form that can be understood by a machine learning model. In this article, we'll take a look at how you can do just that. Finally, you will build a deep learning model using TensorFlow to classify the given text.
Let's get started. Note that you can run all of the code in this tutorial on a free GPU from a Gradient Community Notebook.
Bring this project to life
The first step is to download and load the data. The data we'll use is a sentiment analysis dataset. It has two columns; one with the sentiment and another with its label. Let's download and load it.
!wget --no-check-certificate \ -O /tmp/sentiment.csv \ -O /tmp/sentiment.csv import pandas as pd df = pd.read_csv('/tmp/sentiment.csv')
Here is a sample of the data.
Let's now select the features and the target, then split the data into a training and test set.
X = df['text'] y = df['sentiment'] from sklearn.model_selection import train_test_split X_train, X_test , y_train, y_test = train_test_split(X, y , test_size = 0.20)
Data preprocessing
Since this is text data, there are several things you have to to clean it. This includes:
- Converting all sentences to lowercase
- Removing all quotation marks
- Representing all words in some numerical form
- Removing special characters such as
@and
%
All the above can be achieved in TensorFlow using
Tokenizer. The class expects a couple of parameters:
num_words: the maximum number of words you want to be included in the word index
oov_token: the token to be used to represent words that won't be found in the word dictionary. This usually happens when processing the training data. The number 1 is usually used to represent the "out of vocabulary" token ("oov" token)
The
fit_on_texts function is used to fit the
Tokenizer on the training set once it has been instantiated with the preferred parameters.
from keras.preprocessing.text import Tokenizer vocab_size = 10000 oov_token = "<OOV>" tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_token) tokenizer.fit_on_texts(X_train)
The
word_index can be used to show the mapping of the words to numbers.
word_index = tokenizer.word_index
Converting text to sequences
The next step is to represent each sentiment as a sequence of numbers. This can be done using the
texts_to_sequences function.
X_train_sequences = tokenizer.texts_to_sequences(X_train)
Here is how these sequences look.
Let's do the same for the test set. When you check a sample of the sequence you can see that words that are not in the vocabulary are represented by
1.
X_test_sequences = tokenizer.texts_to_sequences(X_test)
Padding the sequences
At the moment, the sequences have different lengths. Usually, you will pass a sequence of the same length to a machine learning model. You therefore have to ensure that all sequences are of the same length. This is done by padding the sequences. Longer sequences will be truncated while shorter ones will be padded with zeros. You will therefore have to declare the truncation and padding type.
Let's start by defining the maximum length of each sequence, the padding type, and the truncation type. A padding and truncation type of "post" means that these operations will take place at the end of the sequence.
max_length = 100 padding_type='post' truncation_type='post'
With those in place, let's start by padding the
X_test_sequences. This is done using the
pad_sequences function while passing the parameters defined above.
from keras.preprocessing.sequence import pad_sequences X_test_padded = pad_sequences(X_test_sequences,maxlen=max_length, padding=padding_type, truncating=truncation_type)
The same should be done for the
X_train_sequences.
X_train_padded = pad_sequences(X_train_sequences,maxlen=max_length, padding=padding_type, truncating=truncation_type)
Printing the final results shows that zeros have been added at the end of the sequences to make them of the same length.
Using GloVe word embeddings
TensorFlow enables you to train word embeddings. However, this process not only requires a lot of data but can also be time and resource-intensive. To tackle these challenges you can use pre-trained word embeddings. Let's illustrate how to do this using GloVe (Global Vectors) word embeddings by Stanford. These embeddings are obtained from representing words that are similar in the same vector space. This is to say that words that are negative would be clustered close to each other and so will positive ones.
The first step is to obtain the word embedding and append them to a dictionary. After that, you'll need to create an embedding matrix for each word in the training set. Let's start by downloading the GloVe word embeddings.
!wget --no-check-certificate \ \ -O /tmp/glove.6B.zip
The next step is to extract them into a temporary folder.
import os import zipfile with zipfile.ZipFile('/tmp/glove.6B.zip', 'r') as zip_ref: zip_ref.extractall('/tmp/glove')
Next, create that dictionary with those embeddings. Let's work with the
glove.6B.100d.tx embeddings. The
100 in the name is the same as the maximum length chosen for the sequences.))
The next step is to create a word embedding matrix for each word in the word index that you obtained earlier. If a word doesn't have an embedding in GloVe it will be presented with a zero matrix.
Here is what the word embedding for the word "shop" looks like.
Creating the Keras embedding layer
The next step is to use the embedding you obtained above as the weights to a Keras embedding layer. You also have to set the
trainable parameter of this layer to
False so that is not trained. If training happens again the weights will be re-initialized. This will be similar to training a word embedding from scratch. There are also a couple of other things to note:
- The Embedding layer takes the first argument as the size of the vocabulary.
1is added because
0is usually reserved for padding
- The
input_lengthis the length of the input sequences
- The
output_dimis the dimension of the dense embedding
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional embedding_layer = Embedding(input_dim=len(word_index) + 1, output_dim=max_length, weights=[embedding_matrix], input_length=max_length, trainable=False)
Creating the TensorFlow model
The next step is to use the embedding layer in a Keras model. Let's define the model as follows:
- The embedding layer as the first layer
- Two Bidirectional LSTM layers to ensure that information flows in both directions
- The fully connected layer, and
- A final layer responsible for the final output
from tensorflow.keras.models import Sequential model = Sequential([ embedding_layer, Bidirectional(LSTM(150, return_sequences=True)), Bidirectional(LSTM(150)), Dense(128, activation='relu'), Dense(1, activation='sigmoid') ])
Training the model
The next step is to compile and train the model.
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
As the model is training, you can set an
EarlyStopping callback to stop the training process once the mode stops improving. You can also set a TensorBoard callback to quickly see the model's performance later.
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard %load_ext tensorboard rm -rf logs log_folder = 'logs' callbacks = [ EarlyStopping(patience = 10), TensorBoard(log_dir=log_folder) ] num_epochs = 600 history = model.fit(X_train_padded, y_train, epochs=num_epochs, validation_data=(X_test_padded, y_test),callbacks=callbacks)
You can use the
evaluate method to quickly check the performance of the model.
loss, accuracy = model.evaluate(X_test_padded,y_test) print('Test accuracy :', accuracy)
Visualization
The performance of the model can be seen by launching TensorBoard from the log directory.
%tensorboard --logdir logs
You can also use the Graph's section of TensorBoard to visualize the model in-depth.
Final Thoughts
In this article, you have walked through an example of how to use pre-trained word embedding in natural language processing problems. You can try and improve this model by:
- Changing the size of the vocabulary
- Using different pre-trained word embeddings
- Using different model architectures
You can also try your hands on this example by running it on a free GPU in a Gradient Community Notebook.
Add speed and simplicity to your Machine Learning workflow today | https://blog.paperspace.com/pre-trained-word-embeddings-natural-language-processing/ | CC-MAIN-2022-27 | refinedweb | 1,350 | 55.84 |
It’s just data
Hm, the test fails:
[blalor@beaker ...ps/Phonebook/data]> python lazydom.py
Traceback (most recent call last):
File "lazydom.py", line 102, in ?
for name in feed[atom.entry][atom.author][atom.name]:
File "lazydom.py", line 83, in _iter_
return result._iter_()
AttributeError: 'list' object has no attribute '_iter_'
What version of python are you using? Iterator types are new in version 2.2.
On some versions of Redhat, this is installed as python2.Posted by Sam Ruby at
I got the same error and I'm using Python 2.2.2 on Red Hat. I fixed it by changing line 83 to "return iter(result)"Posted by Simon Willison at
Here's my feedback on the clever lazydom.py. You can use yield to implement __iter__: from __future__ import generators # must be first line of python code ... def __iter__(self): for element in self.list: yield self.new(element) You can use the...Excerpt from gary burd at
Thanks, all, for the feedback. Particularly Gary.
Here's a new version. Tested on Python 2.2 and Python 2.3.
One note: I am definitely trying to maintain the XPath-like abstraction whereby attributes are simply specially named elements. I did, however, add some support to the namespace class to make namespace qualified attributes easier to access.Posted by Sam Ruby at
Look a look at the SF project pywebsvcs; Brian Overhof is doing a great job of adding schema<->python code generation.Posted by Rich Salz at
Another option for this kind of thing is Aaron's xmltramp: by Nelson at
This is fun. Python's '/' operator can be used to implement path syntax. If you add this line of code to the lazydom class:
__div__ = __getitem__
then you can write this:
# iteration and traversal
for name in feed/atom.entry/atom.author/atom.name:
print name
print
# attribute access and comparison
print feed/atom.generator/'@name'
print feed/atom.generator/'@name' == 'Blogger'
Egads, Gary, you're evil! And here I thought overloading '+' to mean both numeric addition and string concatenation was bad! :-)Posted by Ken MacLeod at
Gary: deliciously evil. Playing with it a bit, my one problem is that the precedence order is wrong.
feed/atom.entry[0] will be evaluated as:
My (initial) conclusion: perhaps div should be considered instead of index, but having both is dangerous.Posted by Sam Ruby at
More evil. If you add "_floordiv_ = find", then you can write:
for issued in feed.find//atom.issued:
print issued
The more I think about this, and read about it, e.g. here, or here, or think about it in this context, the more firmly I believe that Java, or any strongly typed programming language, is a good vehicle for building...
[more]
Trackback from Random Stuff
What's with that trackback? I assumed it's autogenerated, but it's
missing a critical "not". Here's from the link target:
"""
The more I think about this, and read about it, e.g. here, or here, or think about it in this context, the more firmly I believe that Java, or any strongly typed programming language, is not a good vehicle for building document-oriented Web services.
"""
Maybe the 1st version of the linked entry was wrong?Posted by Jean Jordaan at
Here's the code. Key new features vs the original: element[:] returns all children of all elements in the list element[f] will apply function f to all elements in the list, and act either as a filter or as a map or even...
[more]
Trackback from Sam Ruby | http://www.intertwingly.net/blog/1580.html | crawl-002 | refinedweb | 600 | 68.16 |
- Gauge chart control - without store?
- Prevent button menu from expanding
- Unable to access same Domain Web Service through emulator
- Apply php function in extjs
- Grid.store.getRange() returns the current page records instead of all ?
- Howto create a div in a panel?
- counting and grouping
- AbstractComponent.tpl not rendered before afterrender fired?
- duplicating fields
- storeIng.load({url: "blabla.jsno"});
- msgTarget: 'under' ignored
- Event on Grid Panel Data rerender
- How to change the color of a POINT based on it's value
- Calling WSDL from Ext Js
- Re-use store in different window instances?
- How to add childEl dynamically after render?
- Where can I get the 4.1.2 version
- Window Header Gradient Internet Explorer 8
- Grid Group Row itemmouseenter
- Accessing itemcontextmenu params from menu
- what's the type of 'records' in callback(records,option,success) in json store.load
- Custom Layout: Rows, with multiple Columns, that Expand
- get the tooltip value of each grouped column in extjs 4 chart
- Extjs4.0 Class diagram
- Ext.tab.Bar, changing UI
- Redraw chart
- If I add a row expander into a grid, the grid's appearence then is fixed.
- What is the best approach to Migrating a webproject from Ext 2.2 to 4.1
- dragging a leaf from tree to a particular cell in grid
- Is there anyway to supress the autoEl of a component?
- Is Extjs usable to create a website?
- animate callback problem
- ExtJS 4.1 removes ARIA role attribute in components
- Transitions in Extjs 4.2.1
- Tooltips and textfield
- Window closing and not removing config
- Multi validations VType email
- Tree Panel with renderer
- Hide form
- Grig changing position after closing another grid
- Making ExtJS and Touch bot friendly - SEO
- Going to next page after submitting the form
- Save active tab in storage provider?
- [4.1.1] Combobox filtering using anyMatch
- Sencha Architech
- problem in setValue of textfield.
- Wait for current function response.
- How to load .js file
- store.sync() gives error
- Extjs 4.1 Sprite surface post zoom mouse over issue in IE7
- CheckTree reloads blank image on every check/uncheck
- Using Jquery Carousel in Extjs 4.0 MVC
- Ext.app.Application and Ext.app.Controller reengineering concept.
- Scrolling event on custom gridpanel
- content type
- Setting root URLs for loading custom ExtJs components?
- Updating grid with store sync via PHP back-end
- Row selection issue for grid.panel in Mozilla Firefox
- Horizontally paging grid?
- Adding drop down (combo box) in panel header
- Load Mask on Dataview or on Panel
- Image Slideshow in ExtJs without Using Carousel
- How to handle an event after dragging the draw component
- Can a Grid Cell span Multiple Rows (Rowspan for Grid Cell)?
- Tree Grid: Dynamic columns
- Combobox in a grid which update value in another cell
- Drag From DataView and Drop in Panel EXTJS 4
- Fixed form field width
- Layout issue
- Extjs 4.0.7 Combobox in EventStore, record desappear
- Using JSDuck for documenting PHP classes
- grid event itemclick not fire
- On click of a link in a Grid Column should open a new panel
- Dynamic Tree grid in Extjs 4.1
- Expand collapsed panel upon clicking the collapsed bar ..
- Set Font Size/Font size dropdown in Ext.form.HtmlEditor
- Selection of records over multiple pages in a grid
- Bug in Ext.form.FieldContainer?
- Pickerfield with readOnly property is not displaying the trigger button
- Fixed summary row in Grid
- Store Filter not finding exact matches
- Extra long word in Grid cell forces column to be larger than header
- Extjs calendar example works ??
- ExtJs 4.0 Tutorial issues.
- Unable to display root nodes in a tree panel using TreeStore
- Desktop Packager alternates
- [4.1.1] How to hide grid's header
- Ext JS Calendar sample does not work at sencha site
- Ext JS 4.1.1a - Linechart - Marker only at last Point of line
- Changing layout accoring to viewport dimensions (responsive design?)
- Tooltip gets truncated, wraps to next line.
- Create store from user input
- Unable to change the Grid Header to a image
- Can ExtJs4 modules co-exist with ExtJs3?
- grid column header in the left side
- Ext Js 4.1 Right Click Menu
- Ext.MessageBox promt message issue with IE 7
- Ext.grid.CheckboxSelectionModel :: Example needed.
- updating pie chart
- onNodeDrop tick mark is not released automaticaly until we click outside the node ff
- How to call the PHP class function in ExtJs button handler
- Sencha Desktop Packager
- Help with controller actions
- Getting node from path in a TreePanel?
- ExtJS 4.x ChangeLog / Upgrade Guide
- Grid with CheckboxModel and CellModel.
- How to add icon to a panel header
- ComboBox looses focus (iPad)
- How To Add ,Delete Rows in grid
- labels in the area chart are not displayed ?
- Drag and Drop
- Group Gird Add/Remove Row
- Select combobox on keypress
- Dont allow users to submit data not include in combobox dropdown
- Element lost focus after blankText is shown
- form.submit with file upload requires to download the server response
- How can I use the MVCS model to get data from activex control using json?
- Embed .Net Window Form Control In Form Panel
- Why does store load twice?
- datefield in format "m.Y"
- read contents of .json file and edit the json
- Unable to use form data in php
- Why three level linkage combo can not success in FireFox
- Extending Dataproxy and Reader
- Extending in ext 4
- Ext.widget set iconCls problem
- [4.1.1] How to set style for grid cell content
- [Solved] Encoding with ExtJS 4 Proxy
- Html Editor and InitFields
- HasOne association from nested json?
- MVC and sliding pager
- Error migrating Ext 2.2 project to Ext 4.1:ExtTypeError: namespace is undefined
- Desktop Task Bar on Top Issue
- [4.1.1] How to use "external" variables in Ext.XTemplate
- Problem after upgrading from ExtJS 4.0.7 to ExtJS 4.1.1a
- Show window outside iframe
- SASS $tab-base-color-over does not seem to be applied
- is it possible to restore a removed record from store before doing sync()
- Unexpected string on getController
- Textbox Validations shows only one error message
- repeated get references, should I save the reference?
- Ext.draw.engine.Svg examples for graph display
- Set text in textarea (simple solution?)
- Number Field -- Render to '0.00'
- Dynamic tab generation
- How to get a Checkbox action to open accordion panel
- sencha slice theme... I've got to be doing something wrong, but I don't know what
- Set value of a textfield from localstorage
- display decimals in a numberfield
- Validate The Grid
- Cannot encode params as JSON in POST read request
- Datefield format
- How to get all the names of the nodes of a tree
- Refreshing Combo box in extjs 4
- grid - adding new menu items to individual column header
- Notification in EXTJS
- IE 8 and behind error
- Sending JSON request - EXTJS4 MVC Infinite grid with remote filtering.
- Websocket Proxy
- Form validation: How to get something more understated than the underline/outline?
- How to prevent extjs grid from scrolling when clicking on a cell ?
- Setting height of Panel to accomodate Grids
- What is best way to design my code
- Dynamic rendering of the steps or 'concentric circles' in radar chart
- Compatibility Pack Issues with 4.1
- Change color of a certain labels in a panel
- change the title of chart image
- Automation testing for EXT JS
- EXT JS 4 - How can I get Excel type drag and drop
- ExtJS JsonStore
- Example of grid with multiple sort columns and ASP.NET MVC?
- hiding grid column locking and selction features in ExtJS 4.1
- Combo box - reset problem
- Selection model returns wrong record when Grouping feature is enabled
- Message Conversation History- How to show using EXTJS component
- Auto resize grid columns in ext 4.1.1
- How to drag and drop into tree's unvisible root element?
- A floated panel has two headers
- Bug TreeStore Ajax XML in IE9?
- Month picker is not showing ok and cancel button
- Click event for datefield
- Saving form updates in local JSON file
- Upgrading Grid Component
- About Ext.data.StreeStore. It can not autowired model in MVC.
- Extjs 4 create dynamic grid
- Removing row from a grid
- inheritance mechanism lacks
- Tree grid is not scrolling during dragging
- illegal token in Ext.create
- Theme image not found, when button has no rounded-borders
- ColumnModel of Grid
- auto height and width of draw.Component
- Ext.Map Zoom trouble
- Extjs 4 Popup window vertical scroll-bar not moving in Google chrome
- How To build Dynamic nodes in a Tree panel
- Approach for Rule Builder
- Prevent Cycling on ComboBox
- filefield Upload localhost domain
- Sharing components between MVC apps
- Changing the tpl dynamically?
- Form Submit example which load the next page
- the old way vs the new way -- compile extjs4
- Sizing and positioning related text boxes above an Item Selector
- how to pass value to an extjs 4 application from an url
- I want my selected grid rows to remain selected. Even if I'm editing a cell. But no..
- How to extend a library model and pass parameters to the preconfigured proxy?
- Problem overriding functions of Ext.ux.form.ItemSelector in derived class
- Dynamic ButtonGroup Not Loading Displaying In I.E.
- CSS have some wrong in new Chrome.
- How to use Ext.data.writer.Json to create a model object with nameProperty applied
- Fit application to browser window
- Tree panels differences
- Need to change the color of cell in grid or Tree Panel at runtime
- Problem loading data with store.loadData()
- Hard time extending Ext.grid.header.Container, missing config option
- onLaunch in controller not working
- Dynamic Grid WIth Direct
- Problem overriding functions of Ext.ux.form.ItemSelector in derived class
- change theme
- textfield fill data
- How to initially sort columns when using Ext.ux.grid.TransformGrid
- Detecting scrollbar on a grid panel.
- How to manipulate images on a page?
- Dynamic Menu and Submenu
- handling a paste event in an editable grid.
- Automated ExtJS testing using Jasmine and PhantomJS
- Editable Grid panel giving type error b is not defined
- searchfield and tbar
- Component query to get the picket from datefield
- Neptune theme number field issue
- Click event for DatePicker is only working for calender icon but not for textbox
- right-to-left vs left-to-right direction (not alignment) on htmleditor
- store filter - like operatior
- Problems with Tree Store and Tree Panel
- grip panel not displaying elements
- On click of "Enter" Key, the page gets refreshed.. This happens only in IE 9 ...
- How to store the selected tab on tab selection? (Ext.tab.Panel) + State Handling
- Using Anchor for layout but can't get images to go to right place with x & y
- Grouping in Combobox
- Expand node in Ext.tree.panel with children above the parent
- how to add window to another window
- Collapse all panels in the Accordion
- background image for button not working in IE
- Best way to load Accordion item the first time it expands (render,expand events)
- Upgrade extjs 4.1.2 to 4.1.3 problem
- ERORR:"Cannot read property 'data' of undefined " when removing data from grid(Store)
- RadioGroup with additional fields
- Pop - up (window) height from properties
- Editor cannot disappear after closing Window | https://www.sencha.com/forum/archive/index.php/f-87-p-40.html?s=96fc93f1ba0feea0fd2759aee58e4e0a | CC-MAIN-2019-30 | refinedweb | 1,841 | 55.03 |
Your answer is one click away!
I'm learning flask and Python along with HTML and CSS. I've got a Flask template to render a dropdown.
What I need to do is this: When you select a value from the dropdown it opens a new HTML page corresponding to the value chosen(different page for different options chosen).
I tried searching the web but couldn't get much resources. When I submit the dropdown option by submit button the page gives a error message saying:
Method Not Allowed
The method is not allowed for the requested URL.
Please guide me to the best possible solution.
Below is my code.
pro.html
<form name="startpage" method="POST" action=""> <div class="form-group"> <div class="input-group"> <select name = "method_cipher" id = "method_cipher"> <option value="Encrypt">Encrypt</option> <option value="Decrypt">Decrypt</option> </select> <select name = "cipher_type" id = "cipher_type"> <option value="Caesar">Caesar</option> <option value="Transposition">Transposition</option> <option value="Reverse">Reverse</option> </select> </div> <button type="submit" name="submit" value="success" class="btn btn-default">Submit</button> </div> </form>
test.py
import flask APP = flask.Flask(__name__) @APP.route('/') def index(): return flask.render_template('pro.html') @APP.route("/test" , methods=['GET', 'POST']) def test(): if flask.request.method == 'POST': select = flask.request.form.get('method_cipher') if(select == 'Encrypt'): return flask.render_template('lastpage.html') if __name__ == '__m
Your form in
pro.html sends the data to the url specified by the
action attribute. If you leave that out or set it to an empty string like you did the requests goes to the same page which is
/.
But you already set up the form handling in a different view function. You probably want to post to that one.
So change your form to post to the correct url
<form ... action="/test" ...>
That will work but it is not a good idea to hardcode the target url in the template. I assume you are using
jinja so you can let it insert the url for you
<form ... action="{{ url_for("test") }}" ...>
Take a look at the
url_for function if you haven't already. | http://www.devsplanet.com/question/35264455 | CC-MAIN-2017-22 | refinedweb | 350 | 58.58 |
Python for .NET Programmers
An Introduction to IronPython
Contents
- Introduction
- Dynamic Languages
- Learning Python
- Basic Syntax
- Python Datatypes
- Basic constructs
- Functions
- Classes
- Python magic methods
- Properties & decorators
- Modules and Packages
- Other language features
- Programming Paradigms
- What Next?
Note
This is a short introduction to the Python programming language for .NET programmers interested in IronPython. If you're completely new to IronPython you should first read An introduction to IronPython.
This article is the second in a series of articles on developing with IronPython. The other articles are:
- Introduction to IronPython
- Tools and IDEs for IronPython
- Dark Corners of IronPython (working with .NET)
A much more complete introduction to Python and IronPython is my book: IronPython in Action.
Introduction
IronPython is an implementation of the popular open source programming language Python for the .NET framework. IronPython is built on top of the Dynamic Language Runtime. This article is a quick guide to the Python programming language. It is aimed at .NET programmers but should be understandable by anyone with previous experience in an imperative programming language like Java.
Python has much in common with C#. The core object models are similar, as is much of the syntax; C# and VB.NET programmers should find it easy to learn the basics of Python. There are plenty of differences though, including some fundamental ones.
In this article we'll be looking at the following aspects of programming with Python:
- why dynamic languages?
- built in types
- basic flow-control and exceptions
- functions
- classes
- modules and importing
- scripting, functional programming and metaprogramming
This article isn't a replacement for the full Python documentation or a comprehensive tutorial. See what next? for useful online references on learning Python.
Note
For learning Python a great reference is the Python tutorial. I've built an online, interactive version of the tutorial with IronPython and Silverlight: Try Python.
Dynamic Languages
Unlike the traditional programming languages for .NET Python is dynamically typed. This means that you don't have to declare the types of your objects as they are determined at runtime.
This is much more flexible, you use objects based on their behavior (called duck typing) rather than just their type. The cost is that much less can be determined at compile time so you lose type safety. It is also harder for tools to deterministically tell you the type of objects and where they are used. Despite this there is an awful lot that can be determined or inferred by static analysis of Python code.
Note
For a list of IDEs that support IronPython and their features, along with some standard Python development tools for things like code quality and refactoring, read the article: IronPython Tools and IDEs.
Before we look at the benefits of dynamic languages let's quickly look at the cost of static typing.
What does type safety buy?
Type safety does eliminate particular classes of errors.
For example, the compiler can assure you that when using the return value from your integer addition method, 1+1 always returns an integer.
But this is not sufficient to confirm that an application actually works.
In order to have confidence about this, the very best method known to todays computer science is automated tests. Unit tests and acceptance tests.
Unit tests are always needed, and they are much more detailed about run-time behaviour than static type checking can ever be.
With tests in place confirming correct values, then checks of correct type are now redundant, and can safely be removed.
—paraphrased from Jay Fields, Ruby luminary, card shark.
In 5 years, we'll view compilation as the weakest form of unit testing.
—Stuart Halloway
In practice, the benefits of type safety turn out, unexpectedly, to be fairly minimal. Often overlooked, the costs of maintaining type safety turn out to be extremely high.
Dynamic Languages tend to be
- Short on ceremony:
- Good for education
- Expose to power users for scripting
- Small code base
- Powerful and expressive
- Can express powerful ideas simply and cleanly
- Flexible - remain simple for disparate types of problem.
For example, the simplest "Hello World" in C#:
class Program { static void Main(string [] args) { Console.WriteLine("Hello, world"); } }
In Python:
print 'Hello, World'
Some of the ways that dynamic languages tend to be different from statically typed languages:
- No need for explicit type declarations
- First class and higher order functions instead of delegates
- No need for generics, flexible container types instead
- Protocols and duck-typing instead of compiler enforced interfaces
- First class types, functions and namespaces and the ability to modify objects at runtime means that patterns like dependency injection and inversion of control aren't necessary just to make code testable (although they can be useful in their own right)
- Easy introspection without the pain of reflection
- Problems like covariance and contravariance and casting just disappear
Programming in dynamic languages, although superficially the same as programming in a statically typed language, involves thinking about programs in very different ways. An idiomatic C# programmer will tend to reason about programming in terms of types, which makes dynamic languages feel loose and unreliable. Programming in a dynamically typed language involves reasoning in terms of object behavior instead, which is why programmers used to dynamic languages find statically typed languages so restrictive. Of course these statements are broad generalisations but they go some way to explaining the sharp divide between those who strongly prefer one or the other.
Multiple Programming Paradigms
Python (and dynamic languages in general) support multiple programming paradigms:
- Interactive
- Procedural
- Functional (closures are very important)
- Object Oriented
- Metaprogramming data in, transform it, push it back out and then walk away.
First class functions and types, along with closures (lexical scoping) make the functional style of programming possible. Although Python isn't a pure-functional language higher order functions and function factories are very common in Python code.
Facilities for metaprogramming include metaclasses, code generation, decorators and so on.
Learning Python
The core object model of Python is similar to imperative languages such as C#, Java, and VB.NET. If you’ve used any of these languages before, even if aspects of the syntax are unfamiliar, you’ll find learning Python easy. Python does differ from these languages in how it delimits blocks of code. Instead of using curly braces, Python uses indentation to mark blocks of code. Here’s a simple example using an if block:
if condition == True: do_something()
Python is case-sensitive and the comment symbol is #.
Python is fully object oriented, everything is an object, but you can use it to write procedural or functional style programs. There is no need to write a full object-oriented application if you are just creating scripts.
Basic Syntax
Python
Python basic syntax for functions, classes and modules:
import module from othermodule import something MODULE_LEVEL_CONSTANT = 3 def function(arg, arg2=None): while arg: arg -= 1 return arg2 class ClassName(BaseClass): def __init__(self, arg): # constructor self.value = arg instance = ClassName('foo')
In Python namespaces (modules) are created by individual files - the physical container (the file) is the same as the logical container (the namespace). Files can be organised as packages (groups of files) to group namespaces.
The naming of Python variables (constants, classes, members) follow conventions (module level constants are usually all caps for example) but these are not enforced. We'll look more closely at some of the details of the syntax shortly.
Python Datatypes
Strings
a = 'single quoted' b = "double quoted" c = 'Normal\nEscaping \t rules' d = """Triple quoted spanning multiple lines""" e = r"Raw string where backslashes \ are treated literally" string = str(some_object)
CPython has two string types: str and unicode. IronPython only has Unicode strings, which is probably the biggest difference between IronPython and the standard C Python implementation. IronPython does lots of magic to let you treat strings as bytestrings for compatibility with CPython. I've seen few problems in practise because of this difference (and where there are problems it's a bug).
In Python 3 the C implementation also moves to Unicode only strings, which is a great improvement.
Numbers
On the .NET framework the Python integer is System.Int32 and the float is System.Double. Python auto-promotes overflowing integers to long integers. This means that you don't need to know the result of a calculation before performing it... Python also has a built-in complex number type.
a = 32 b = int('12') c = 0.2 d = 12e32 e = float('13.6') f = 10L g = 1000 ** 1000 h = long(10) i = 3 + 2j j = complex(3, 2)
For decimals you have the choice of using System.Decimal (fast) or the Python decimal module (compatible with code written for CPython).
Containers
The Python built-in container types are heterogenous (can contain elements of different types). You can compose the different container types to create complex data-structures without needing to define custom types:
>>> points = {} >>> points[(4, 8)] = {'height': 56.0, 'name': 'The middle'}
The built-in container types are:
the dictionary (hash table) can use any hashable (i.e. immutable) object as keys and store any object as values:
>>> a = {} >>> a['key'] = 'value' >>> a {'key': 'value'} >>> print a['key'] value >>> del a['key'] >>> a {} >>> # Creating a dictionary from a list of key -> value pairs (as tuples) >>> b = dict([('key', 'value'), ('key2', 'value2')]) >>> b {'key': 'value', 'key2': 'value2'} >>> b['key2'] 'value2'
the list is a mutable ordered sequence of members:
>>> a = [1, 2, 3, 4] >>> a[0] = 0 >>> print a[0] # first member 0 >>> print a[-1] # last member 4 >>> a [0, 2, 3, 4] >>> del a[0] >>> a [2, 3, 4]
Lists have many useful methods for working with them:
>>> a.remove(2) >>> a [3, 4] >>> a.append(3) >>> a [3, 4, 3] >>> a.insert(0, None) [None, 3, 4, 3] >>> a.sort() # in place sort >>> a.reverse() # in place reverse >>> a [4, 3, 3, None]
Lists support slicing for fetching, setting and deleting members:
>>> print a[2:] # from third member to the end [3, None] >>> print a[:2] # from the start up to (but not including) the third [4, 3] >>> print a[1:3] # second and third members [3, 3] >>> print a[::2] # extended slicing - from start to end, step 2 (skip alternate members) [4, 3]
the tuple is an immutable ordered sequence (they can be used as dictionary keys):
>>> a = (1, 2, 3) >>> a (1, 2, 3) >>> b = tuple([3, 2, 1]) >>> b (3, 2, 1)
Tuples can be indexed and sliced in the same way as lists, but you can't add or remove members. Tuples and list overload the add operator. Adding tuples produces a new tuple:
>>> a[0] 1 >>> b[-1] 1 >>> a + b (1, 2, 3, 3, 2, 1)
the set is an unordered collection of members (there is no syntax for creating sets until Python 3):
>>> a = set() >>> a.add(1) >>> a set([1]) >>> b = set([1, 2, 3, 4]) >>> b set([1, 2, 3, 4]) >>> b.remove(3) >>> b set([1, 2, 4]) >>> b.pop() # remove and return an arbitrary member 1 >>> b set([2, 4])
Like dictionaries, sets can only be used to store hashable (immutable) objects.
Sets, lists and dictionaries all have useful methods. The Python documentation and the interactive interpreter will be your friend when working with the built-in container types. In addition there is Python syntax that translates under the hood into method calls. These are the Python 'magic methods' (protocol methods) we'll be looking at shortly:
>>> a = [1, 2, 3] >>> 1 in a True >>> a == [1, 2, 3] True >>> len(a) 3 >>> bool(a) True >>> a + [4] [1, 2, 3, 4]
The Python standard library provides a great deal of 'non built-in' Python data-structures like arrays, named tuples and double ended queues. Of course IronPython can use all the .NET data structures as well.
Booleans, None and truth testing
Python has three useful built-in objects: None, True and False. True and False are the booleans and None is the .NET null (except in Python it is, like everything else, a first class object).
In Python None, 0 (both int and float), the empty string (''), and empty containers all evaluate to False. By default everything else evaluates to True. Classes are able to customize this, if you implement your own containers or data-structures it is normal for them to evaluate to False when empty and True otherwise.
Truth testing is done explicitly by calling bool (the Python boolean type) on an object or the result of an expression, or implicitly in if and while statements:
>>> bool(None) False >>> bool(object()) True >>> bool([None]) # a non-empty list True >>> bool([[]]) # a list containing an empty list is not empty... True
.NET methods on Python types
In IronPython strings are normal .NET strings. However, by default they have all the 'usual' Python methods but not the .NET methods you might expect:
>>>>> a.title() 'Some String' >>> a.ToUpper() Traceback (most recent call last): ... AttributeError: 'str' object has no attribute 'ToUpper'
This is to keep on the right side of the Python community who might object to the built-in Python types gaining a host of extra methods and properties. To please the .NET community who use IronPython but want to use the methods they are familiar with you can 'switch on' the .NET methods in a namespace by executing import clr. This signals to IronPython that the code in this namespace is interoperating with .NET and makes .NET methods visible:
>>> import clr >>> 'some string'.ToUpper() 'SOME STRING'
You can see this at work by doing dir('a string') at the interactive interpreter both before and after importing the clr module.
Basic constructs
Conditionals:
if 1 > 2: print 'not possible' else: print "that's better"
Iteration (looping):
for a in range(100): if a % 2: # % is the modulo operator continue print a else: # only entered if the loop is # exited normally (without a break) pass
The while loop:
a = [1, 2, 3, 4] while a: b = a.pop() if b > 3: break else: # only entered if the loop doesn't break pass
Exception handling:
try: raise Exception('boom') except: print 'an exception was raised' try: raise Exception except Exception, e: print 'Exception', e try: raise KeyError('ouch!') except (IOError, KeyError), e: # a bare raise re-raises the last exception raise else: # entered if no exception is raised pass try: pass finally: print 'a finally block'
Functions
We've already seen the basic syntax for defining functions. Arbitrary number of arguments can be collected with *args syntax (equivalent to the .NET params and collected as a tuple) and an arbitrary number of keyword arguments can be collected as a dictionary with **kwargs syntax.:
def function(*args, **kwargs): assert isinstance(args, tuple), 'args is always a tuple' assert isinstance(kwargs, dict), 'kwargs is always a dictionary'
assert is a statement - it can be used for runtime design by contract. isinstance is one of Python's built-in functions. If there is no explicit return statement then a function returns None.
*args and **kwargs can be used to call functions with multiple arguments / keyword arguments from a tuple or dictionary:
a = (1, 2, 3) b = {1: 'one', 2: 'two'} result = function(*args, **kwargs)
Python also has anonymous functions; lambdas. Lambda functions can take arguments but the body can only be an expression. When the lambda function is called the expression is evaluated and the result returned:
>>> anonymous = lambda arg: arg * 2 >>> anonymous(3) 6
The above lambda function is exactly equivalent to:
def anonymous(arg): return arg * 2
Classes
We've also seen the basic syntax for classes. Methods are created using the same syntax as normal function definitions (which is what they are) - but instance methods explicitly take self as the first parameter. self is the equivalent of this in C#, it is the current instance in use, and is passed in automatically as the first parameter:
class ClassName(object): def print_self(self): print self
>>> first = ClassName() >>> first.print_self() <__main__.ClassName object at 0x780d0> >>> second = ClassName() >>> second.print_self() <__main__.ClassName object at 0x780f0>
Python doesn't have explicit access modifiers (no public, protected, etc). You'll be surprised by how little you miss them...
Unlike C# Python doesn't have method overloading. If you need this you can collect arguments with the * and ** syntax and do dynamic dispatch on the type or number of arguments. There are external libraries that use decorators (explained shortly) to implement generic functions, a more general system of overloading.
Class bodies can contain assignment statements. These create class attributes that are shared between all instances. Updating a class attribute will make the change visible to all instances:
class ClassName(object): attribute = 3
>>> first = ClassName() >>> first.attribute 3 >>> second = ClassName() >>> second.attribute 3 >>> ClassName.attribute = 6 >>> first.attribute 6 >>> second.attribute 6
You can even put arbitrary code in the body of the class. This can be useful for providing different implementations of methods on different platforms, but isn't a very common technique in practise:
import sys class ClassName(object): if sys.platform == 'cli': def method(self): # implementation for IronPython else: def method(self): # implementation for other platforms
Inheritance works straightforwardly in Python, until you start using multiple inheritance that is:
class BaseClass(object): def method(self): print 'method on base' def other_method(self): print 'other_method on base' class SomeClass(BaseClass): def other_method(self): print 'other_method on some class' BaseClass.other_method(self)
>>> something = SomeClass() >>> something.method() method on base >>> something.other_method() other_method on some class other_method on base
The explicit self parameter makes it very easy for inherited methods to call up to the methods they override on a base class.
Multiple inheritance is perfectly valid in Python, but should not be overused. It is most often used to provide mixin functionality.
Python magic methods
We've seen that the constructor for Python classes is the oddly named __init__ method. Methods that start and end with double-underscores (often shortened to 'dunder-method name' for convenience) are special methods, called the 'magic methods'. These implement Python protocols, roughly the equivalent of interfaces in C#, and are called for you by the interpreter rather than being explicitly called by the programmer (usually anyway).
There are lots of different protocol, you can find a good reference to all the Python magic methods in IronPython in Action or online on the book website. If you come across a protocol method that you don't recognise this is the place to turn.
Interfaces are used in C# to specify behavior of objects. For example, if a class implements the IDisposable interface, then you can provide a Dispose method to release resources used by your objects. .NET has a whole host of interfaces, and you can create new ones. If a class implements an interface, it provides a static target for the compiler to call whenever an operation provided by that interface is used in your code.
In Python, you don’t need to provide static targets for the compiler, and you can use the principle of duck typing. Many operations are provided through a kind-of-soft interface mechanism called protocols. This isn’t to say that formal interface specification is decried in Python—how could you use an API if you didn’t know what interface it exposed?—but, again, Python chooses not to enforce this in the language design.
The mapping and sequence protocols use the __getitem__ and __setitem__ methods:
class DataStore(object): def __init__(self): self._store = {} def __getitem__(self, name): return self._store[name] def __setitem__(self, name, value): return self._store[name] = value
>>> store = DataStore() >>> store['foo'] = 'bar' >>> store['foo'] 'bar'
The consequence of this is that Python programmers are much more interested in the behavior of objects than the type. It is common to see the specification of a function or a method that it takes a mapping type or a sequence type - meaning any object that implements these methods. This is the essence of duck-typing, if you know what methods / properties of objects are used you can provide an alternative implementation. So long as the object quacks like a duck and walks like a duck Python will treat it like a duck...
There are lots of other standard protocol methods for containers, here are a few of them:
class DataStore(object): def __init__(self): self._store = {} def __getitem__(self, name): return self._store[name] def __setitem__(self, name, value): return self._store[name] = value def __len__(self): # number of elements return len(self._store) def __nonzero__(self): # boolean value return bool(self._store) def __iter__(self): # iteration return iter(self._store) def __contains__(self, name): # membership test return name in self._store
Other protocols include the rich comparison methods (__eq__, __lt__ and friends) the numeric methods (__add__, __sub__ and friends) plus a whole host more. Python supports operator overloading, and implementing protocol methods is how you do it.
There are also some special magic methods we can use to customize attribute access, particularly useful for creating fluent interfaces. The three methods are __getattr__ for fetching attributes, __setattr__ for setting them and __delattr__ for deleting them. Here's an example using __getattr__ to build up messages:
class Fluid(object): def __init__(self): self._message = [] def __getattr__(self, name): self._message.append(name) return self def __str__(self): # using the join method on string return ' '.join(self._message).strip()
>>> f = Fluid() >>> f.hello.everyone.welcome.to.Python <__main__.Fluid object at 0x782b0> >>> str(f) 'Hello everyone welcome to Python'
You've probably already used an API built in a similar way in Javascript, where you an traverse the DOM as attributes on document. Creating APIs like this is very easy in dynamic languages. __getattr__ and friends have some complexity, so it is worth reading up on them if you want to use them. They're covered in IronPython in Action.
Although these methods allow us to implement custom behavior for attribute access, they aren't a replacement for properties which we'll look at next.
Properties & decorators
We haven't yet looked at properties in Python. Instead of having first class syntax for properties Python uses the 'descriptor protocol', along with normal Python syntax, to provide class methods, static methods and properties. The descriptor protocol is considered to be fairly deep Python 'magic'. It's actually fairly easy to understand but beyond the scope of this article. Let's look at how we use the built-in classmethod, staticmethod and property descriptors.
The easiest way to use these descriptors are as decorators. Decorators are a way of transforming functions and methods and are nominally similar to .NET attributes or Java annotations. They work due to the way that functions are first class objects in Python and are examples of higher order functions (functions that receive functions as arguments or return functions).
The syntax to create a static method in Python is:
class Static(object): @staticmethod def static_one(): return 1 @staticmethod def static_two(): return 2
>>> Static.static_one() 1 >>> Static.static_two() 2
A class method is a method that receives the class as the first argument instead of the instance. They are often used to create alternative constructors. There isn't much need for static methods in Python.
staticmethod behaves like a function (it is actually a type), it wraps the methods. The @ syntax is pure syntactic sugar. The following two snippets of code are identical:
@decorator def function(): pass def function(): pass function = decorator(function)
The decorator is called with the function it wraps as an argument. The function name is bound to whatever the decorator returns. Here's a decorator that checks arguments for null values (None):
def checkarguments(function): def decorated(*args): if None in args: raise TypeError("Invalid Argument") return function(*args) return decorated class MyClass(object): @checkarguments def method(self, arg1, arg2): return arg1 + arg2
>>> instance = MyClass() >>> instance.method(1, 2) 3 >>> instance.method(2, None) Traceback (most recent call last): ... TypeError: Invalid Argument
The checkarguments decorator takes a function as the argument. It creates a new inner function, which it returns. When this function is called it checks all the arguments and then calls the original function, which it still has a reference to through the closure. It uses the *args syntax to collect all the arguments the method is called with and then call the original method with the same arguments.
Python 2.6 introduces class decorators in addition to function / method decorators. They also wrap functions and can be used for many of the same purposes as metaclasses (for checking or transforming classes).
We can use property as a decorator to create get only properties:
class SomeClass(object): @property def three(self): print 'Three fetched' return 3
>>> something = SomeClass() >>> something.three Three fetched 3
The old way of creating get and set properties (in Python you can also use properties to customize deletion but it is there for symmetry and not used very often) is less attractive. This is one area where the C# syntax is nicer than Python:
class SomeClass(object): def get_three(self): print 'Three fetched' return 3 def set_three(self, value): if value != 3: raise ValueError('Three has to be equal to 3!') three = property(get_three, set_three)
>>> something = SomeClass() >>> something.three Three fetched 3 >>> something.three = 4 Traceback (most recent call last): ... ValueError: Three has to be equal to 3!
Python 2.6 introduces a new technique that is slightly better looking:
class SomeClass(object): @property def three(self): return 3 @three.setter def three(self, value): if value != 3: raise ValueError('Three has to be equal to 3!')
Still not as nice as C#, but better...
Modules and Packages
The last thing you want when programming is to have all your code contained in a single monolithic file. This makes it almost impossible to find anything. Ideally, you want to break your program down into small files containing only closely related classes or functionality. In Python, these are called modules.
Note
A module is a Python source file (a text file) whose name ends with .py. Objects (names) defined in a module can be imported and used elsewhere. They’re very different from .NET modules, which are partitions of assemblies.
The import statement has several different forms.
import module from module import name1, name2 from module import name as anotherName from module import *
Importing a module executes the code it contains and creates a module object. The names you’ve specified are then available from where you imported them.
If you use the first form, you receive a reference to the module object. Needless to say, these are first-class objects that you can pass around and access attributes on (including setting and deleting attributes). If a module defines a class SomeClass, then you can access it using module.SomeClass.
If you need access to only a few objects from the module, you can use the second form. It imports only the names you’ve specified from the module.
If a name you wish to import would clash with a name in your current namespace, you can use the third form. This imports the object you specify, but binds it to an alternative name.
The fourth form is the closest to the C# using directive. It imports all the names (except ones that start with an underscore) from the module into your namespace. In Python, this is generally frowned on. You may import names that clash with other names you’re using without realizing it; when reading your code, it’s not possible to see where names are defined.
Python allows you to group related modules together as a package. The structure of a Python package, with subpackages, is shown in the image below.
Note
A package is a directory containing Python files and a file called __init__.py. A package can contain sub- packages (directories), which also have an __init__.py. Directories and subdirectories must have names that are valid Python identifiers.
A package is a directory on the Python search path. Importing anything from the package will execute __init__.py and insert the resulting module into sys.modules under the package name. You can use __init__.py to customize what importing the package does, but it’s also common to leave it as an empty file and expose the package functionality via the modules in the package.
You import a module from a package using dot syntax.
import package.module from package import module
Packages themselves may contain packages; these are subpackages. To access subpackages, you just need to use a few more dots.
import package.subpackage.module from package.subpackage import module
Python also contains several built-in modules. You still need to import these to have access to them, but no code is executed when you do the import. We mention these because one of them is very important to understanding imports. This is the sys module.
When you import a module, the first thing that Python does is look inside sys.modules to see if the module has already been imported. sys.modules is a dictionary, keyed by module name, containing the module objects. If the module is already in sys.modules, then it will be fetched from there rather than re-executed. Importing a module (or name) from different places will always give you a reference to the same object.
If the module hasn’t been imported yet, Python searches its path to look for a file named module.py. If it finds a Python file corresponding to the import, Python executes the file and creates the module object. If the module isn’t found, then an ImportError is raised.
As well as searching for a corresponding Python file, IronPython looks for a package directory, a built-in module, or .NET classes. You can even add import hooks to further customize the way imports work.
The list of paths that Python searches is stored in sys.path. This is a list of strings that always includes the directory of the main script that’s running. You can add (or remove) paths from this list if you want.
Some Python files can be used both as libraries, to be imported from, and as scripts that provide functionality when they’re executed directly. For example, consider a library that provides routines for converting files from one format to another. Programs may wish to import these functions and classes for use within an application, but the library itself might be capable of acting as a command-line utility for converting files.
In this case, the code needs to know whether it’s running as the main script or has been imported from somewhere else. You can do this by checking the value of the variable __name__. This is normally set to the current module name unless the script is running as the main script, in which case its name will be __main__.
def main(): "docstring" # code to execute functionality # when run as a script if __name__ == '__main__': main()
This segment of code will only call the function main if run as the main script and not if imported.
Other language features
Python has lots of other language features that make it a pleasure to work with. These features include:
tuple unpacking
a, b = (1, 2) a, b = get_tuple() for a, b, (c, d) in some_iterator: pass
list comprehensions and generator expressions. These allow you to combine a loop and a filter in a single expression (similar to LINQ over objects).
>>> # list comprehensions are eager >>> a = [value ** 2 for value in some_iterator if value > minimum] >>> # generator expressions are lazy >>> a = (value ** 2 for value in some_iterator if value > minimum) >>> a <generator object at 0x77be8>
iterators and generators (iterators are implemented with the __iter__ and next protocol methods whilst Python's yield is similar to C#'s Yield Return but with added capabilities).
the with statement (similar to the C# using statement but able to detect and optionally handle exceptional exits)
ternary expressions (unlike other languages the expression in the middle is evaluated first. If this evaluates to True then the left hand expression is evaluated and returned otherwise the right hand expression is evaluated and returned.)
a = 1 if x > 3 else None
Programming Paradigms
Scripting
Python is sometimes categorized as a 'scripting language', implying it is only suitable for scripting tasks. Whilst that certainly isn't true Python does make an excellent scripting language.
If you want to write a script to automate a regular task, you aren’t forced to write an object-oriented application; you aren’t even forced to write functions if the task at hand doesn’t call for them. This next listing is a script for a typical admin task of clearing out the temp folder of files that haven’t been modified for more than seven days.
import os, stat from datetime import datetime, timedelta tempdir = os.environ["TEMP"] max_age = datetime.now() - timedelta(7) for filename in os.listdir(tempdir): path = os.path.join(tempdir, filename) if os.path.isdir(path): continue date_stamp = os.stat(path).st_mtime mtime = datetime.fromtimestamp(date_stamp) if mtime < max_age: mode = os.stat(path).st_mode os.chmod(path, mode | stat.S_IWRITE) os.remove(path)
Python has a rich tradition of being used for shell scripting, particularly on the Linux platform.
Procedural
The code above works fine, but it runs whenever the script is executed and so isn't reusable. We can make it more useful by refactoring it into functions. If we have an if __name__ == '__main__' block then the Python file retains the same behavior when executed as a script but also behaves as a module that can be imported.
import os, stat from datetime import datetime, timedelta tempdir = os.environ["TEMP"] max_age = datetime.now() - timedelta(7) def delete_old_files_in_directory(directory): for filename in os.listdir(directory): path = os.path.join(directory, filename) if os.path.isdir(path): continue delete_old_file(path) def delete_old_file(path, max_age=max_age): date_stamp = os.stat(path).st_mtime mtime = datetime.fromtimestamp(date_stamp) if mtime < max_age: mode = os.stat(path).st_mode os.chmod(path, mode | stat.S_IWRITE) os.remove(path) if __name__ == '__main__': delete_old_files_in_directory(tempdir)
Functional
Functions are first class objects. First class functions, in combination with closures, make higher-order functions (functions that take or return functions) common in Python.
Closures are a fundamental concept to functional programming. A closure is basically a scope. Functions have access to all the variables in their enclosing scope. When you create a function it is said to 'close over' all the variables in its scope that it uses. Here's a simple closure:
>>> def f(): ... a = 1 ... def inner(): ... print a ... return inner ... >>> function = f() >>> function() 1
The inner function has access to ('closes over') the variable a defined in its enclosing scope. When f is called it returns the inner function. When the inner function is called it prints the value of the variable a from the scope it was defined in.
Parameters that are passed into a function become local variables within the scope of the function. We can use this to create function factories based on the parameters we pass in. Every time a function is called a new scope is created, so our factories can be called multiple times with different values.
def makeAdder(x): def adder(y): return x + y return adder
>>> add3 = makeAdder(3) >>> add3(5) 8 >>> add2 = makeAdder(2) >>> add2(2) 4
In makeAdder we bind the inner function to the argument (x) passed in. makeAdder returns a new function that takes a single argument (y). When this new function is called it returns the result of adding the new argument to the original value of x when it was created.
The generalisation of this is called partial application. In the next example the function partial takes a function that takes two arguments (as its first argument - func) and the first argument (x). partial returns a new function with its first argument bound to it.
The returned function (called inner) takes one argument, and when called it calls the original function with its first argument and the new one. This sounds more complicated than it is. We can rewrite makeAdder to use it:
def add(x, y): return x + y def partial(func, x): def inner(y): return func(x, y) return inner
>>> add2 = partial(add, 2) >>> add2(3) 5 >>> add1 = partial(add, 1) >>> add1(1) 2
These are examples of the common pattern, the 'factory function'; functions that return a function specialised on the input parameters passed to them.
An extension of this is the 'class factory', which is a function that returns a class specialized on its input. We saw an example of this from the Python standard library in the Introduction to IronPython article - in the form of namedtuple.
Metaprogramming
The most basic form of metaprogramming is code-generation and execution at runtime, which in Python means exec and eval. eval is for evaluating expressions and returning a result:
>>> a = eval("1 + 2") >>> a 3
For executing statements we can use the exec statement. We can use a dictionary as the namespace the code is executed in. If the code creates variables, functions or classes then they will be accessible in the dictionary after execing:
>>> namespace = {} >>>>> >>> exec code in namespace >>> namespace['a'] 3 >>> function = namespace['function'] >>> function() w00t! >>>
Code generation is a bit of a blunt instrument when it comes to metaprogramming. A more common way of metaprogramming in Python is with metaclasses.
Metaclasses are a particularly interesting feature of Python. Classes are first class objects in Python, and like all objects they have a type. Classes are instances of their metaclass, which defines some of the ways they behave. Metaclasses are seen as advanced Python but although the usecases for implementing them yourself are rare the mechanisms involved are easy to understand. For a good introduction to metaclasses read: Metaclasses in five minutes.
What Next?
Python is a full programming language and although it is very easy to learn the basics it can take time to become an idiomatic Python programmer. Fortunately there are many free online resources to help. Here are a few of the best ones:
- The Official Python Tutorial
- Try Python: Online Interactive Python Tutorial with Silverlight
- Dive into Python
- A Byte of Python
The next article in this series is about choosing an IDE or editor for developing with IronPython and some of the standard tools (like debuggers, code quality checkers and so on) that are available to you:
For tutorials and examples of working with IronPython, try these resources:.
Note
The early part of this tutorial draws from Pumping Iron, a presentation on IronPython originally by Harry Pierson and expanded on by Jonathan Hartley.
For buying techie books, science fiction, computer hardware or the latest gadgets: visit The Voidspace Amazon Store.
Last edited Fri Nov 27 18:32:35 2009.
Counter... | http://www.voidspace.org.uk/ironpython/python-for-programmers.shtml | CC-MAIN-2016-22 | refinedweb | 6,463 | 54.73 |
I think others are having the same problem and multiple people have reported this. Hide() show() elements are not working in chrome but in other browsers. It is working in site editor preview mode in chrome which I've tested.
This has happened 4 times over the last month (at times we checked..maybe more when it wasnt) and we reported it here/directly to wix earlier. If this chronic issue isnt fixed by wix soon i'm moving my 3 sites out of Wix and filing for a refund of my plan. The whole point of using wix was to make things easier, but that doesn't matter if basic things keep breaking down again and again.
Chrome Live Site - No elements appearing below upon clicking the button
Other browsers - Elements appear fine and things are working
This is happening to me!
Waiting for solutions...
I view my website in Google Chrome and all my elements that are set as show and hide all work perfectly fine as they should do.
Maybe for yourself it isn't a fact that there is an error with Wix, maybe it is just down to something you have got installed in your Chrome browser like a third party adblocker or popup blocker etc.
Simply make sure that you have white-listed your website and Wix Editor etc in any of these add-ons that you have in Chrome.
I had similar issue last year and found out that it was one of my installed add-ons that was causing the issue and so white-listed my site in that add-on fixed it and everything was shown as it should.
Nop.
Even in "incognito mode" my header dont hide... It is my code:
import wixWindow from 'wix-window'; let formFactor = wixWindow.formFactor; export function headerFirst_viewportEnter(event) { if (formFactor !== "Mobile") { $w("#headerDepois").hide("fade"); $w("#headerAntes").show("fade"); } } export function headerFirst_viewportLeave(event) { if (formFactor !== "Mobile") { $w("#headerDepois").show("fade"); $w("#headerAntes").hide("fade"); } }
Its simple... Only in "preview" works.
I suggest that you use the correct code for form factor.
Plus, where is your onReady page function and are you placing it in your site tab for overall site control or in the page tab which will only work on the one page that the code is on. | https://www.wix.com/corvid/forum/community-discussion/fix-the-live-site-issues-in-chrome-asap-please-recurring-issue-and-not-acceptable | CC-MAIN-2019-51 | refinedweb | 385 | 71.14 |
Some of us come to Java from scripting languages like Python, Ruby or Perl.
Almost everything in Java is fascinating but small things make us irritated. Java is a verbose programming language and writing programs similar in functionality to scripts is cumbersome. Dealing with stuff like XML, json, even Strings is not so handy. Things which should be straightforward are awkward. Thank God there are plenty of third-party libraries like Apache Commons, Guava, Gson which makes our life feasible. Try to rewrite a program from Perl to Java. Imagine a source code with a lot of regular expressions, which are first class components in Perl. Then it comes to you that you have to escape double quotes and syntax is a little different and you can’t just copy, paste. You can imagine that most of the regexes would require changes and thus the whole process leads to debugging regular expressions and this is like a nightmare. So Java developers in such cases would say I wish I would write in Python … I would write it quicker and source code would be up to 50% shorter, but I have to write it unfortunately in Java …
Another case, Scala new modern JVM language has recently been becoming more and more popular. It’s target is to be Java replacement. Frequently I have heard when developers say: have you seen what it can do with collections? this is much more concise language than Java, and it supports Actors model which simplifies dealing with concurrency. This is awesome! But there are some disadvantages Scala is quite demanding, difficult to learn, the learning curve is pretty steep. Moreover there are problems with dependencies, you have to stick to a certain version. The language is so difficult that also frequently I hear from Java developers, Scala yes, I tried it, but I gave up it was too difficult.
Yet another case some developers before implementing a big project build a kind of prototype to check if their assumptions while designing work. And they don’t have enough time to implement the prototype in verbose Java. They could of course implement it in Python but it would be nice to have the possibility to reuse an already done job and invested time …
Now comes Groovy modern dynamic JVM language which was not designed like Scala to be a Java replacement but to be Java Companion. That makes a world of difference. Groovy also is not a Scala competitor. It fulfils places where Java lacks and makes possible easy scripting, and rapid development feasible. The learning curve is not very steep, at the beginning just change java file extension to groovy and that is all.
Compared to Java Groovy the amount of boilerplate code has been minimised as possible. You don’t need to write a semicolon at the end of every line. In Java you have to write getters/setters. Of course modern IDE could generate it for you, but it clutters the code. In Groovy those are generated at bytecode level, you don’t have to write them manually.
One of its strong points is the ability to create so-called Domain Specific Language – DSL. The most important and well-known DSL is Gradle, a build automation tool. it is created in Groovy. This is a perfect solution because in stark contrast to Java where the base of DSL is xml. DSL in xml is cumbersome to maintain, develop and debug. When DSL is written in Groovy adding new functionality it is just writing normal code. This is far more natural and easy. It may surprise you but Gradle contains plug-ins for Scala and JavaScript.
Groovy contain many things which we miss in Java, like easy building and parsing xml, json. A feature which already contains Scala and will be the most important change in Java 8, lambdas called another way closures. Closures are something like a pointer to method and they are first class Groovy citizens. Closure can be defined as method put between braces and then for instance passed as method argument. They can be called lazily. Closures makes working with collection very straightforward.
def list = ['a','b','c','d'] def upperCase = list.collect { it.toUpperCase() } assert list instanceof ArrayList assert upperCase == ["A", "B", "C", "D"]
This small code snippet shows the power of Groovy. The first list is Java ArrayList but created and initialized with the smaller amount of code. Then method collect which iterates over whole collection calling closure on every item. Closure is between braces, it means the current collection element so this code iterates over ArrayList and calls method toUpperCase() on every element. Simple, powerful, elegant.
Groovy provides means like scripting language to deal easily with files: creating, transforming, traversing directory structure. It makes templating a very easy task. Even running and chaining external processes are natural to code.
If you need to have scripting language inside your Java application, Groovy provides a means for it. Because it uses among others means of reflection, you can use Groovy as an advanced debugging tool in a very sophisticated environment. These capabilities make Groovy an ideal language for writing Unit Tests for Java classes, Groovy can run your class private methods …
And so on and so on. I prepared a project on GitHub to present some interesting Groovy capabilities:
Groovy is dynamic. It means that types are inferred from context. Thanks to def keyword, flow typing is possible it means that you can change a type of variable just by assigning other type. Dynamic also means that you could any behaviour of class in runtime. Due to fact that methods and variables are called indirectly through so-called Meta Object Protocol you can add, change methods and properties to class, even in the runtime. For instance Groovy 2.1.3 adds 250 methods to well-known Java String class, and you could add them more and they can do whatever you can imagine. Of course nothing is for free so this dynamic behaviour has two drawbacks. First is that errors are seen at runtime, second is performance penalty.
First one doesn’t create such a huge problem because modern IDE like my favourite IntelliJ Idea, helps a lot just by syntax colouring. If it is not enough, special annotations could change Groovy behaviour. Namely @TypeCheck and @CompileStatic. The former forces Groovy to check variable, method names and returned types so it eliminates typos, and forces code to be consistently written. But there is always a cost, this time you we lose power of meta programming i.e. features like adding methods in runtime. Code generated this way although statically checked will be the same as dynamically checked. Only compiler became grumpy. Program performance will be the same as in pure dynamic mode. If you need performance similar to Java and you use older than Java 7, second annotation apart from statically type checking causes Groovy to produce static code. Code generated this way will be similar to produced by Java. Although you still benefit from more concise source code. For newer Java there is one more alternative.
Java 7 introduced new bytecode instruction called invokeDynamic. This changes rule of game, because up to Java 7, code was called or by reflection or proxy classes. Even using so-called cache-site which accelerates performance, results are substantially worse (slower) than Java. Nowadays thanks to method handles and invokeDynamic which calls special method stub which in turn calls destination method whole process is much faster. This functionality will be heavily used in Java 8. Yes, Java 8 will be much more dynamic than predecessors.
Dynamic nature of Groovy gives us completely new opportunities to create software, new excellent tools, and gives us something what is so-called “Groovy magic”. I strongly encourage you to get acquainted with that magic.
Where Groovy is used? There is number of well-known frameworks done in Groovy, lets enumerate a few of them:
- Grails – rapid website development framework
- Gradle – mentioned above build automation tool, currently it is used even by Android project
- Spock – powerful testing framework, similar to Mockito
- GPars – concurrency library which uses Actor model
And many others, Groovy became second language for Spring framework. | https://www.javacodegeeks.com/2013/06/groovy-java-companion-introduction.html | CC-MAIN-2016-40 | refinedweb | 1,371 | 64.41 |
MTD (Memory Technology Devices) are NAND/NOR-based flash memory chips used for storing non-volatile data like boot images and configurations. Readers are cautioned not to get confused with USB sticks, SD cards, etc., which are also called flash devices, but are not MTD devices. The latter are generally found on development boards, used to store boot loaders, an OS, etc.
Even though MTD devices are for data storage, they differ from hard disks and RAM in several aspects. The biggest difference is that while hard disk sectors are rewritable, MTD device sectors must be erased before rewriting — which is why they are more commonly called erase-blocks. Second, hard disk sectors can be rewritten several times without wearing out the hardware, but MTD device sectors have a limited life and are not usable after about 10^3-10^5 erase operations. The worn out erase-blocks are called bad blocks and the software must take care not to use such blocks.
Like hard disks, MTD devices can be partitioned and can therefore act as independent devices. On a system with one or more MTD devices, device and partition information can be obtained from the
/proc/mtd file. A typical
/proc/mtd file is as follows:
cat /proc/mtd dev: size erasesize name mtd0: 000a0000 00020000 "misc" mtd1: 00420000 00020000 "recovery" mtd2: 002c0000 00020000 "boot" mtd3: 0fa00000 00020000 "system" mtd4: 02800000 00020000 "cache" mtd5: 0af20000 00020000 "userdata"
A partitioned MTD device can be depicted as in Figure 1, which shows the relation between an MTD device, a partition and a sector.
Figure 1: An MTD device
As already said, MTD write operations are different from usual storage devices. Therefore, before we move further, let’s understand how write operations take place on MTD devices. Figure 2 shows a typical write case.
Figure 2: An MTD write operation
The left-most part shows a sector that has some data at the end. The rest of the sector has not been written since the last erase. A user wants to write “new data 1″ to this sector at offset 0. Since this part of the sector has already been erased, it is ready to be written and so “new data 1″ can be directly written to the sector. Later, the user may want to write “new data 2″, again at offset 0. To do this, the sector must be erased. Since the sector needs to be erased in entirety, the “old data” must be backed up in a temporary buffer. After erasing the complete sector, the “new data 2″ and “old data” must be written at appropriate offsets.
This procedure is the reason there are specific file systems for MTD devices, like JFFS2 and YAFFFS, and flash translation layers (FTL) like NFTL, INFTL, etc. These FTLs and file systems take special care of MTD device properties to hide complexity from the user.
In the first section that follows, we will look at how to access, read/write and erase MTD devices from Linux applications. The second section describes the same things in kernel space, so that this article can be useful to both application as well as kernel developers.
Accessing MTDs from applications
The user must know the device partition to work upon, which can be found from
/proc/mtd as shown earlier. Assuming users want to work on the “userdata” partition, they must use the
/dev/mtd5 device.
The first thing to do is to get information about the MTD device. Use the
MEMGETINFO ioctl command, as follows:
#include <stdio.h> #include <fcntl.h> #include <sys/ioctl.h> #include <mtd/mtd-user.h> int main() { mtd_info_t mtd_info; int fd = open("/dev/mtd5", O_RDWR); ioctl(fd, MEMGETINFO, &mtd_info); printf("MTD type: %u\n", mtd_info.type); printf("MTD total size : %u bytes\n", mtd_info.size); printf("MTD erase size : %u bytes\n", mtd_info.erasesize); return 0; }
Error handling has been omitted for brevity. The
mtd_info_t structure is used with the
MEMGETINFO command. The MTD type can be
MTD_ABSENT,
MTD_RAM,
MTD_ROM,
MTD_NAND,
MTD_NOR, etc., which are defined in the
mtd/mtd-abi.h header file. The
mtd_info.size indicates the size of the whole device (i.e., the partition, in this case). Finally,
mtd_info.erasesize indicates the sector size. During an erase operation, this is the minimum size that can be erased, as we’ll see later.
Reading MTD devices is similar to ordinary devices:
/* read something from last sector */ unsigned char buf[64]; lseek(fd, -mtd_info.erasesize, SEEK_END); read(fd, buf, sizeof(buf));
A write operation can be performed in the same way, provided the sector has been erased previously. Finally, we come to the erase operation. Here is an example of erasing a partition, sector by sector:
void erase_partition(mtd_info_t *mtd_info, int fd) { erase_info_t ei; ei.length = mtd_info->erasesize; for(ei.start = 0; ei.start < mtd_info->size; ei.start += mtd_info->erasesize) { ioctl(fd, MEMUNLOCK, &ei); ioctl(fd, MEMERASE, &ei); } }
All sectors of the device are writeable after this erase operation. Notice the use of
MEMUNLOCK before
MEMERASE, which is essential to allow the erase operation.
Accessing MTDs from kernel space
This section will repeat the functions explained in the previous section, but in kernel space. This needs a separate section since the erase operation is more complex here — the erase operation may sleep and therefore the kernel programmer has to wait until the operation is completed. This is the case for applications too, but the sleep is transparently taken care of by the scheduler.
As explained earlier, the first MTD information is the
mtd_info structure. This is retrieved by iterating through all registered MTD devices:
#include <linux/kernel.h> #include <linux/mtd/mtd.h> #include <linux/err.h> static struct mtd_info *mtd_info = NULL; int init_module(void) { int num; for(num = 0; num < 64; num++) { mtd_info = get_mtd_device(NULL, num); if(IS_ERR(mtd_info)) { printk("No device for num %d\n", num); continue; } if(mtd_info->type == MTD_ABSENT) { put_mtd_device(mtd_info); continue; } if(strcmp(mtd_info->name, "userdata")) { put_mtd_device(mtd_info); continue; } printk("MTD type: %u\n", mtd_info->type); printk("MTD total size : %u bytes\n", mtd_info->size); printk("MTD erase size : %u bytes\n", mtd_info->erasesize); return 0; } mtd_info = NULL; return 0; } void cleanup_module(void) { if(mtd_info) put_mtd_device(mtd_info); }
The above kernel module searches for the “userdata” partition. The function
get_mtd_device(), when invoked with the first argument
NULL, returns the MTD device associated with the minor number specified in the second argument. On a successful search, it increments the reference count of the device. That’s why, before exiting, a call to
put_mtd_device() must be made to release (decrement) the reference count.
Additionally, the module uses the
flag MTD_ABSENT (which is available to applications too). This check is required to function correctly with some probing device drivers used to allocate placeholder MTD devices on systems that have socketed or removable media.
Having retrieved the
mtd_info structure, reading is relatively simple:
/* read something from last sector */ u_char buf[64]; mtd_info->read(mtd_info, mtd_info.size-mtd_info.erasesize, sizeof(buf), buf);
The second argument of the read function specifies the read offset, and the third the length to read. Note that the read operation too may sleep and, therefore, it must not be performed in an interrupt context. The write operation can be performed as follows (assuming the sector has been previously erased):
/* write something to last sector */ mtd_info->write(mtd_info, mtd_info.size-mtd_info.erasesize, sizeof(buf), buf);
As mentioned before, the read, write and erase operations may sleep. Therefore, kernel code must wait for the operation to finish. Here is an example of erasing the partition and waiting to finish the operation:
#include <linux/sched.h> void erase_partition(struct mtd_info *mtd_info) { unsigned int start; for(start = 0; start < mtd_info->size; start += mtd_info->erasesize) erase_sector(mtd_info, start, mtd_info->erasesize); } void erase_sector(struct mtd_info *mtd_info, unsigned int start, unsigned int len) { int ret; struct erase_info ei = {0}; wait_queue_head_t waitq; DECLARE_WAITQUEUE(wait, current); init_waitqueue_head(&waitq); ei.addr = start; ei.len = mtd_info->erasesize; ei.mtd = mtd_info; ei.callback = erase_callback; ei.priv = (unsigned long)&waitq; ret = mtd_info->erase(mtd_info, &ei); if(!ret) { set_current_state(TASK_UNINTERRUPTIBLE); add_wait_queue(&waitq, &wait); if (ei.state != MTD_ERASE_DONE && ei.state != MTD_ERASE_FAILED) schedule(); remove_wait_queue(&waitq, &wait); set_current_state(TASK_RUNNING); ret = (ei.state == MTD_ERASE_FAILED)?-EIO:0; } } void erase_callback (struct erase_info *instr) { wake_up((wait_queue_head_t *)instr->priv); }
The
erase_partition() function iterates over all sectors, and erases them with
erase_sector(). At the core of
erase_sector() is the
mtd_info->erase call, which (as mentioned previously) may sleep. Therefore,
erase_sector() prepares a wait queue and a wait queue head.
After a call to
mtd_info->erase, the function prepares itself to relinquish the CPU (presuming that
mtd_info->erase will sleep) by changing task state to
TASK_UNINTERRUPTIBLE and adding itself to the wait queue head. Before relinquishing the CPU, it checks if erase is done, through the
ei.state flag. If erase is done successfully, this flag will be set to
MTD_ERASE_DONE.
If the erase operation is not complete, the task relinquishes the CPU by calling
schedule(). Later, when the erase operation is complete, the driver calls the callback function provided in
ei.callback. Here the task wakes up to itself, then removes itself from the wait queue, changes the task state to
TASK_RUNNING and finally, the
erase_sector() function returns.
MTD devices have many more features that can be used by application programmers. ECC (error correction codes) and OOB (out of band) data are some of them. The MTD framework is integrated into the Linux kernel — therefore it makes working with MTD devices very simple, as we have seen in this article. | http://www.opensourceforu.com/2012/01/working-with-mtd-devices/ | CC-MAIN-2015-22 | refinedweb | 1,581 | 55.95 |
I had the same problem a while ago and for the same reasons as you mention
we decided to use fingerprints (MD5 hash of the schema), however there are
some catches here.
First I believe that the normalisation of the schema is incomplete so you
might end up with different hashes of the same schema.
Second, using a 128 bit integer prepended to both key and values takes more
space than using 32 bit. Not a big issue for values but for keys this
doubles our size.
Third, we already started to use confluent's registry as well because of
the already existing integration with other pieces of infrastructure.
(camus, bottledwater etc.)
What should be useful given this perspective is a byte or two prepending
the schema id - defining the registry namespace.
I've added the fingerprint schema registry as a example in the c++ kafka
library at
We run a couple of those in a mesos cluster and use HAproxy find them.
/svante
2015-07-09 10:36 GMT+02:00 Daniel Schierbeck <daniel.schierbeck@gmail.com>:
> I'm working on a system that will store Avro-encoded messages in Kafka.
> The system will have both producers and consumers in different languages,
> including Ruby (not JRuby) and Java.
>
> At the moment I'm encoding each message as a data file, which means that
> the full schema is included in each encoded message. This is obviously
> suboptimal, but it doesn't seem like there's a standardized format for
> single-message Avro encodings.
>
> I've reviewed Confluent's schema-registry offering, but that seems to be
> overkill for my needs, and would require me to run and maintain yet another
> piece of infrastructure. Ideally, I wouldn't have to use anything besides
> Kafka.
>
> Is this something that other people have experience with?
>
> I've come up with a scheme that would seem to work well independently of
> what kind of infrastructure you're using: whenever a writer process is
> asked to encode a message m with schema s for the first time, it broadcasts
> (s', s) to a schema registry, where s' is the fingerprint of s. The schema
> registry in this case can be pluggable, and can be any mechanism that
> allows different processes to access the schemas. The writer then encodes
> the message as (s', m), i.e. only includes the schema fingerprint. A
> reader, when first encountering a message with a schema fingerprint s',
> looks up s from the schema registry and uses s to decode the message.
>
> Here, the concept of a schema registry has been abstracted away and is not
> tied to the concept of "schema ids" and versions. Furthermore, there are
> some desirable traits:
>
> 1. Schemas are identified by their fingerprints, so there's no need for an
> external system to issue schema ids.
> 2. Writing (s', s) pairs is idempotent, so there's no need to coordinate
> that task. If you've got a system with many writers, you can let all of
> them broadcast their schemas when they boot or when they need to encode
> data using the schemas.
> 3. It would work using a range of different backends for the schema
> registry. Simple key-value stores would obviously work, but for my case I'd
> probably want to use Kafka itself. If the schemas are writting to a topic
> with key-based compaction, where s' is the message key and s is the message
> value, then Kafka would automatically clean up duplicates over time. This
> would save me from having to add more pieces to my infrastructure.
>
> Has this problem been solved already? If not, would it make sense to
> define a common "message format" that defined the structure of (s', m)
> pairs?
>
> Cheers,
> Daniel Schierbeck
> | http://mail-archives.apache.org/mod_mbox/avro-user/201507.mbox/%3CCAJ8Yxq0xs=kDk8UH6Lgc9K2W8XXiqU6_qxSSPFQni_w2WaF6hA@mail.gmail.com%3E | CC-MAIN-2017-51 | refinedweb | 625 | 70.02 |
As the new Java versions are released, the coding becomes easier and Java becomes more robust language to practice, For example JDK 1.5 introduced a new for loop with which iteration of elements of arrays and data structures becomes easier. Start reading this tutorial on "Arrays Enhanced for loop".
Java for loop is modified (enhanced) to suit arrays and collections (data structures). Elsewhere for general loop requirements, this does not work. This modified for loop, known as enhanced for loop, is similar to the foreach loop of many object-based languages like JavaScript, vb.net, PHP, Delphi etc. For this, many call the enhanced for loop as foreach loop of Java. This is introduced with JDK 1.5.
Now let us see how it is simple to write. Illustrations are given with respect to the following.
1. Arrays – Arrays Enhanced for loop (foreach)
2. Collections – Collections Enhanced for loop (foreach)
1. Enhanced for loop with arrays
Observe the code on Arrays Enhanced for loop before going into the details.
public class ForEachArrays { public static void main(String args[]) { // illustration with int array int marks[] = { 40, 50, 60, 70, 80 }; System.out.print("Printing int array with taditional for loop: "); for(int i = 0 ; i < marks.length; i++) { System.out.print(marks[i] + " "); } System.out.print("\nPrinting int array with enhanced for loop: "); for(int k : marks) { System.out.print(k + " " ); } // illustration with string array String names[] = { "S N Rao", "Sumathi", "Sridhar", "Jyothi", "Jyostna" }; System.out.print("\n\nPrinting string array with taditional for loop: "); for(int i = 0 ; i < names.length; i++) { System.out.print(names[i] + " "); } System.out.print("Printing string array with enhanced for loop: "); for(String str : names) { System.out.print(str + " " ); } } }
Output screenshot on Arrays Enhanced for loop
for(int k : marks) { System.out.print(k + " " ); }
The new for loop does not have initialization, incrementing and condition checking of a traditional for loop. Then how it works? In each iteration, one element of the array is copied into the k variable. The k variable is printed in the body. Condition checking etc. are taken care of implicitly. This for loop is an instruction to the JVM just to print the elements of the array. Observe, the type of the array and variable in the for loop are of the same data type, int.
For a similar for loop with collections is available at Collections Enhanced for loop (foreach).
2 thoughts on “Arrays Enhanced for loop”
Collections Enhanced for loop (foreach) redirect to concurrentException page. Looks it redirect to a not expected link
Now corrected. Thank you. | https://way2java.com/collections/arrays-enhanced-for-loop-foreach/ | CC-MAIN-2022-33 | refinedweb | 431 | 59.9 |
Member
12 Points
Participant
1740 Points
Oct 19, 2012 01:48 AM|anil.india|LINK
Have you taken reference of yor service in your project? Plz check
Member
421 Points
Oct 19, 2012 01:51 AM|narendrajarad|LINK
Member
12 Points
Participant
1740 Points
Oct 21, 2012 01:38 AM|anil.india|LINK
If references are available then you may missing to include that namespace into your code.
using myNameSpaceGetsQualifyHere;
If this is also there, then check accessibility of that
Contributor
4926 Points
Oct 21, 2012 11:28 PM|Steven Cheng - MSFT|LINK
Hi sfiros2003,
Are you using Web Application or web site project template for your we app? For Web site project, you need to save all items after you've added the webservice reference in your project.
Also, I'd suggest you first creating a test Console project and add the webservice reference (either ServiceReference or WebReference) against the target service and make sure it works. Then, we can make sure the service is ok and further concentrate on the web application.
#How to: Add, Update, or Remove a Service Reference
#Add Web Reference in Visual Studio 2010
5 replies
Last post Oct 21, 2012 11:28 PM by Steven Cheng - MSFT | https://forums.asp.net/t/1852481.aspx?Cannot+Use+Web+Service+in+code | CC-MAIN-2018-13 | refinedweb | 204 | 58.82 |
This is to let everyone know that there's a new LTT release. This
should be the final LTT candidate before I make an official 0.9.6
release, so please test this out and report any problems. This
release includes patches for vanilla 2.6.9, so testing should be
rather straight-forward.
I would like to thank Tim Bird of the CE Linux Forum for going
through the LTT-dev mailing list archives and picking up some of
the contributions that were posted there, and integrating it all.
For this release, I've followed Andrew Morton's advice, and have
proceeded with a namespace cleanup of the kernel code. So,
instead of trace statements being like this:
TRACE_SCHEDCHANGE(prev, next);
they have the more mainstream form of:
ltt_ev_schedchange(prev, next);
Here are the highlights of the most important additions since the
opening of the 0.9.6preX branch:
* Lockless logging (relayfs)
* Per-CPU buffering
* TSC timestamping
* Use of relayfs for buffering and user-space interfacing
* Code cleanup for adhering to kernel coding standards
* ARM port
Enjoy,
Karim Yaghmour
--
Author, Speaker, Developer, Consultant
Pushing Embedded and Real-Time Linux Systems Beyond the Limits || karim <at> opersys.com || 1-866-677-4546 | http://article.gmane.org/gmane.linux.kernel.tracing/524 | crawl-002 | refinedweb | 202 | 59.43 |
Java - Quick Reference Guide
What is Java?
Java is:
- Object Oriented
- Platform independent:
- Simple
- Secure
- Architectural- neutral
- Portable
- Robust
- Multi-threaded
- Interpreted
- High Performance
- Distributed
- Dynamic
Java Environment Setup:
Java SE is freely available from the link Download Java. So you download a version based on your operating system.
You can refer to installation guide for a complete detail.
Java Basic Syntax:. : default, public , protected, private
Non-access Modifiers : final, abstract, strictfp
We will be looking into more details about modifiers in the next section.
Java Variables:
We would see following type of variables in Java:
- the); } }
Note:"); } }
Data Types in Java
- short
- int
- long
- float
- double
- boolean
- char'
String literals in Java are specified like they are in most other languages by enclosing a sequence of characters between a pair of double quotes. Examples of string literals are:
"Hello World" "two\nlines" "\"This is in quotes\""
Java language supports few special escape sequences for String and char literals as well. They are:
Java).
Java Basic Operators:
Java provides a rich set of operators to manipulate variables. We can divide all the Java operators into the following groups:
The Arithmetic Operators:
The Relational Operators:
The Bitwise Operators:
The Logical Operators:
The Assignment Operators:
Misc :
variable x = (expression) ? value if true : value if false
instanceOf Operator:
This operator is used only for object reference variables. The operator checks whether the object is of a particular type(class type or interface type). instanceOf operator is wriiten as:
( Object reference variable ) instanceOf (class/interface type)
Precedence of Java Operators:
The while Loop:
A while loop is a control structure that allows you to repeat a task a certain number of times.
Syntax:
The syntax of a while loop is:
while(Boolean_expression) { //Statements }
The do...while Loop:
A do...while loop is similar to a while loop, except that a do...while loop is guaranteed to execute at least one time.
Syntax:
The syntax of a do...while loop is:
do { //Statements }while(Boolean_expression);:
for(initialization; Boolean_expression; update) { //Statements }
Enhanced for loop in Java:
As of java 5 the enhanced for loop was introduced. This is mainly used for Arrays.
Syntax:
The syntax of enhanced for loop is:
for(declaration : expression) { //Statements }
The break Keyword:
The break keyword is used to stop the entire loop. The break keyword must be used inside any loop or a switch statement.
The break keyword will stop the execution of the innermost loop and start executing the next line of code after the block.:
continue;
The if Statement:
An if statement consists of a Boolean expression followed by one or more statements.
Syntax:
The syntax of an if statement is:
if(Boolean_expression) { //Statements will execute if the Boolean expression is true }
The if...else Statement:
An if statement can be followed by an optional else statement, which executes when the Boolean expression is false.
Syntax:
The syntax of a if...else is:
if(Boolean_expression){ //Executes when the Boolean expression is true }else{ //Executes when the Boolean expression is false }
The if...else if...else Statement:
An if statement can be followed by an optional else if...else statement, which is very usefull to test various conditions using single if...else if statement.. }
Nested if...else Statement:
It is always legal to nest if-else statements. he remaining else if's or else's will be tested.
Syntax:
The syntax for a nested if...else is as follows:
if(Boolean_expression 1){ //Executes when the Boolean expression 1 is true if(Boolean_expression 2){ //Executes when the Boolean expression 2 is true } }
The switch Statement:
A switch statement allows a variable to be tested for equality against a list of values. Each value is called a case, and the variable being switched on is checked for each case.
Syntax:
The syntax of enhanced for loop is:
switch(expression){ case value : //Statements break; //optional case value : //Statements break; //optional //You can have any number of case statements. default : //Optional //Statements }
Java Methods:
A Java method is a collection of statements that are grouped together to perform an operation. When you call the System.out.println method, for example, the system actually executes several statements in order to display a message on the console.
In general, a method has the following syntax:
modifier returnValueType methodName(list of parameters) { // Method body; }.
Java Classes & Objects:.
A sample of a class is given below:
public class Dog{ String breed; int age; String color; void barking(){ } void hungry(){ } void sleeping(){ } } instantiated when the class is loaded. Instance variables can be accessed from inside any method, constructor or blocks of that particular class.
Class variables . Class variables are variables declared with in a class, outside any method, with the static keyword.
Exceptions Handling:. }
For a complete detail of the Java Programming language, it is recommended to go through our simple Java Tutorial. | http://www.tutorialspoint.com/java/java_quick_guide.htm | CC-MAIN-2015-14 | refinedweb | 804 | 54.83 |
rproduce:
1.Launch MonoDevelop
2.Create MonoTouch Single view application.
3.Open Main.cs file.
4.Type any keyword.
5.Auto complete popup appears.
6.Double click any function/variable.
7.Type . after function/variable.
8.No auto complete popup display.
Actual result:
When user type . after selected function/variable,no auto complete popup appears.
Expected result:
Auto complete popup shoud be appears after selecting function/variable.
Enviornment:
MonoDevelop 2.9.0
MonoTouch 5.1
Test data:
Please refer attached screenshot.
Created attachment 863 [details]
Function
This is working fine for MonoforAndroid but not for MonoTouch
-> monodevelop
From the screenshot, and a little of my testing, it seems this is only happening outside the context of a method. Within the body of a method, the autocomplete box seems to work fine.
It's highly possible that in this case the code completion has simply gotten more accurate. There are no valid things to display in this kind of scenario unless the attribute has an inner class like:
public class Foo
{
public class Bar
{
}
}
In that case it'd be expected and valid to see 'Bar' listed when you press dot. Otherwise it should display nothing.
This also seems to affect declaring new namespaces. You type "namespace " and it gives you a list of completions, but if you choose one and type '.' there is no further completion list presented.
The initial screenshot looks like not a bug and the namespace issue has been resolved in the newresolver branch (where the regression existed), so closing this.
We have tested this issue,this is working fine.
Hence closing | https://xamarin.github.io/bugzilla-archives/19/1980/bug.html | CC-MAIN-2019-43 | refinedweb | 265 | 60.21 |
Using ADO.NET's DataReader and DataAdapter
For me, the most enjoyable part of designing a software application is designing the database and the application’s data access layer. ADO.NET is the best tool I’ve found for the job. It excels at accessing data from almost any source, manipulating that data, and then sending it to just about any platform in XML form. Better yet, since ADO.NET is built on top of the Microsoft .NET framework, it’s robust and scalable right out of the box.
Most developers in the Microsoft arena are familiar with the legacy set of data access objects known as ADO. The ADO.NET object model was designed to be as similar as possible to ADO without sacrificing functionality. Therefore, in writing "Sams Teach Yourself ADO.NET in 24 Hours," I tried to point out the many similarities and differences between ADO and ADO.NET in order to make the transition for Visual Basic 6 developers as easy as possible. For this same reason the examples in this book were all created in Visual Basic .NET (with several of them provided in C#, as well). However, this book is also perfect for developers with some Microsoft .NET experience and no knowledge of legacy Microsoft development platforms whatsoever.
The four chapters you’ll be reading here showcase some of the new features of ADO.NET. Chapters 8 and 9 show you how to access data and bind them to a Windows form using Visual Studio .NET. Chapter 11 demonstrates how to use the same data access code to display data on Web forms using ASP.NET’s built-in List controls. Lastly, chapter 18 shows how to create and manage database transactions using ADO.NET.
— Jason Lefebvre
In Hour 6, "Retrieving Data from the Data Source," you saw how to use ADO.NET to connect to a data source, fetch some records, place them into a DataSet using the DataAdapter, and display them on a Web form. In this hour, you'll see an alternativeand in many instances more efficientmethod of retrieving data. Specifically, in this hour, you'll learn how to
Bind the DataReader object to Web controls
Step through the results of a DataReader object
Determine when to use a DataAdapter versus a DataReader
DataReader Versus DataAdapter
In Hour 6, you saw how to use the Command object in conjunction with the DataAdapter object to retrieve records from the database and place them into a DataSet. The DataSet was then bound to a Web control such as the DataGrid and displayed in a Web form. The code in Listing 8.1 is a review of the ADO.NET code required to perform these tasks.
Listing 8.1 Retrieving Records with the DataAdapter
<script language="VB" runat="server" > Sub Page_Load(Source as Object, E as EventArgs) Dim conn as New SqlConnection("Initial " + _ Catalog=Northwind;Server=(local);UID=sa;PWD=;") Dim cmd as New SqlCommand("SELECT * FROM Employees", conn) Dim adapt as New SqlDataAdapter(cmd) Dim dsEmployees as New DataSet() conn.Open() adapt.Fill(dsEmployees, "Employees") conn.Close() employees.DataSource = dsEmployees employees.DataBind() End Sub </script>
NOTE
If you are placing the code from Listing 8.1 into a Web form, do not forget to import the System.Data and System.Data.SqlClient namespaces at the top of your Web form.
To use the DataAdapter, you must create a DataSet, as shown in line 7 of Listing 8.1. The DataSet is then passed to the DataAdapter in line 10, where it is filled with records from the database. In lines 13 and 14, the DataSet is then bound to a Web control in order to display the data as shown in Figure 8.1.
There is one problem with this method of retrieving data: The DataSet object exists in memory and contains all rows returned by your query. Suppose that you are retrieving a large number of records from the data source. For the brief amount of time it takes to bind the data to your form and send it to the user, you have a potentially large amount of memory consumed by the DataSet.
If you are only retrieving small DataSets on a low-traffic site, this probably won't be of much concern. However, as the number of concurrent users of your application increases, the more important this issue becomes.
Figure 8.1 The appearance of a Web form with bound Web controls.
Like the DataAdapter, the DataReader object is designed to retrieve records from the data source. However, unlike the DataAdapter, the DataReader never has more than a single database record in memory at any given time. It does this by opening a forward-only, read-only stream of data from your data source.
If you think of your data source as a water reservoir, you could envision the DataAdapter method of returning records as a man running back and forth between the source and destination with buckets (albeit very quickly). The DataReader is more like a firehose. | http://www.informit.com/articles/article.aspx?p=29586&seqNum=3 | CC-MAIN-2018-22 | refinedweb | 839 | 63.59 |
The Django project is a custom-built framework that originated with an online newspaper Website and was released as open source in July 2005. The core components of the Django framework are:
- Object-relational mapping for creating models
- Polished administrator interface designed for end users
- Elegant URL design
- Designer-friendly template language
- Caching system
This is the first article in a two-part series on Python Web frameworks. The secord article will introduce you to the TurboGears framework.
To use and understand the
code in this article, you need to have Python installed and know how to use it at a beginner level. Check to see if you have Python, and what version, by typing
python -V. Django requires, at a minimum, version 2.3.5, which is available at the Python Web site (see the Resources section later in this article for a link). You should also be at least passingly familiar with the MVC architecture.
This article uses the development version of Django, to take advantage of the recent improvements to the Django framework. I recommend that you use this version until the 0.95 release. Check the Django Web site for the latest release (again, see Resources for a link).
Download and install Django as follows:
Listing 1. Downloading and installing Django
After installing Django, you should have the admin tool, django-admin.py, available on your path. Listing 2 shows some of the commands available to the admin tool:
Listing 2. Using the Django administration tool
Django projects and applications
To begin a Django project, use the django-admin
startproject command, like so:
Listing 3. Starting a project
The above command creates a directory called djproject that contains the basic configuration files needed to run a Django project:
Listing 4. Contents of the djproject directory
For this project, you will build a job-board application called "jobs." To create an application, use the
manage.py script, which is a project-specific django-admin.py script where the settings.py file is automatically supplied:
Listing 5. Using manage.py startapp
This creates a barebone application with one Python module for your models and another for your views. The jobs directory will contain the following files:
Listing 6. Contents of the jobs application directory
The location of the application inside the project is purely a convention created for new Django developers, not a requirement. Once you start mixing and matching applications across several projects, you can put applications in their own module namespace and tie them together using settings and master URL files. For now, follow the steps as shown.
To make Django aware of a new application, you'll need to add an entry to the INSTALLED_APPS field in the settings.py file. For this job board
application, the string
djproject.jobs must be added:
Listing 7. Adding an entry to settings.py
Django comes with its own object-relational mapper (ORM) library that supports dynamic database access through a Python object interface. The Python interface is very usable and powerful, but you are also free to drop down and use SQL directly, if needed.
The ORM currently provides support for PostgreSQL, MySQL, SQLite, and Microsoft® SQL databases.
This example uses SQLite as the database backend. SQLite is
a lightweight database that requires no configuration and resides on
disk as a simple file. To use SQLite, simply install the pysqlite
library using setuptools (see Resources for more information on setuptools and the
easy_install tool in particular, which you need to install separately):
easy_install pysqlite
Before working on the model, configure the database in the settings file. SQLite requires only that the database engine and name be specified.
Listing 8. Configuring the database in settings.py
This job board application will have two types of objects, Locations and Jobs. A Location contains city, state (optional), and country fields. A Job has a location, title, description, and publish date.
Listing 9. The jobs/models.py module
The
__str__ method is a special class method in Python that
returns the string representation of an object. Django uses
this method extensively when displaying objects in the Admin
tool.
To see the database schema for the model, run manage.py's
sql
command. The schema won't be enacted yet.
Listing 10. Viewing the database schema using the manage.py sql command
To initialize and install the model, run the synchronize database
command,
syncdb:
~/dev/djproject$ python manage.py syncdb
Note that the
syncdb command asks you to create a superuser account.
This is because the django.contrib.auth application, which provides
basic user authentication functionality, is supplied by default in
your INSTALLED_APPS settings. The superuser name and password will be
used for logging into the admin tool described in the next section.
Remember that this is the Django superuser, not your system's.
Django models access the database through the default Manager class
called
objects. For example, to print a list of all Jobs, you would
use the
all method of the
objects manager:
Listing 11. Printing all jobs
The Manager class also has filtering methods called
filter and
exclude. Filtering gets all the objects that meet a condition, while
excluding gives all the objects that do not. The queries below should give
the same results ("gte"
means "greater than or equal," and "lt" means "less than").
Listing 12. Excluding and filtering jobs
The
filter and
exclude methods return QuerySet objects that can be
chained together and can even perform joins. The
q4 query below will
find jobs posted since January 1st, 2006, in Cleveland, Ohio:
Listing 13. More excluding and filtering jobs
It's very nice that QuerySets are lazy. This means that they do not execute against the database until they are evaluated, and thus run much faster than immediate queries.
This laziness is handy with Python's slicing functionality. Rather than request all the records and then slice the records needed, the code below uses an OFFSET of 5 and a LIMIT of 10 in the actual SQL query, greatly improving performance.
Listing 14. Python slice
Note: Use the
count method to find out how many records are in a
QuerySet. The Python
len method does a full evaluation and then
counts the rows returned as records, while the
count method does an
actual SQL COUNT, which is much faster. Your database administrator
will thank you.
Listing 15. Counting records
For more information, see the Resources section for a link to the Django "Database API reference."
One of the biggest selling points of Django is its well-polished admin interface. This tool was created with end users in mind. It gives your projects a great data entry tool.
The admin tool is an application that comes with Django. It must be
installed, like the
jobs application, before you can use it. The
first step is to add the application's module (
django.contrib.admin)
to the INSTALLED_APPS setting:
Listing 16. Modifying settings.py
To make the admin tool available from the /admin URL, simply uncomment the line provided in your project's urls.py file. The next section goes into URL configuration in greater detail.
Listing 17. Making the admin tool available via urls.py
The admin application has its own database model and needs to be
installed. Use the
syncdb command again to
accomplish this:
python manage.py syncdb
To view the admin tool, you can use the test server that comes with Django.
Listing 18. Using the test server to view the admin tool
You can now navigate to the admin tool at and log in using the superuser account you created before. You will notice that none of your models are available for use.
To make a class accessible through the admin tool, create an Admin subclass to it. You can then customize how each class can be administered by adding class attributes to this subclass. Listing 19 shows how to add the Location class to the admin tool.
Listing 19. Adding the Location class using the admin tool
You can now create, update, and delete the Location records through the admin interface.
Figure 1. Editing locations with the admin tool
You can list and sort Records by city, state,
and country as specified by the
list_display class attribute.
Figure 2. Listing locations with the admin tool
The admin tool has numerous options for managing each type of model class. Listing 20 shows several examples applied to the Job class:
Listing 20. Options for managing model classes
According to the above settings, a job's title, location, and published data will be used when listing job records. The jobs will be ordered by when they were published, starting with the most recent (a minus sign indicates descending order). Users can find jobs by title and description, and administrators can filter records based on location.
Figure 3. Listing jobs with the admin tool
Designing your URL scheme
The Django URL dispatch system uses regular-expression configuration modules that map URL string patterns to Python methods called views. This system allows URLs to be completely decoupled from the underlying code, allowing for maximum control and flexibility.
A urls.py module is created and defined as the default starting
point for URL configuration (via the ROOT_URLCONF value in the
settings.py module). The only requirement for a URL configuration file
is that it must contain an object that defines the patterns called
urlpatterns.
The job board application will start with an index and detail view that are accessed through these URL mappings:
/jobsindex view: Displays the latest 10 jobs
/jobs/1detail view: Displays jobs with an ID of 1
Both views (index and detail) will be implemented in a module called views.py in the jobs application. Implementing this configuration in the project's urls.py file would look like this:
Listing 21. Implementing the configuration of views in djproject/urls.py
Note the
<job_id> piece. It's important later.
Best practice is to pull out application-specific URL patterns and place them in the application itself. This decouples the application from the project and allows for greater reuse. An application-level URL config file for jobs would look like this:
Listing 22. Application-level URL configuration file, urls.py
Since the view methods now all come from the same module, the first
argument can be used to specify djproject.jobs.views as the module's
root name, and Django will use it to look for the methods
index and
detail:
Listing 23. jobs/urls.py: Looking for index and detail
Tying the above jobs URLs back into the project as a whole is done
using the
include function. The application level URLs are tied back
below the
/jobs section:
Listing 24. djproject/urls.py: Tying URLs back into the project
If you try to access the index page () at this point using your test server, you will get an error, since the view being called (djproject.jobs.views.index) does not exist yet.
A view is a simple Python method that accepts a request object and is responsible for:
- Any business logic (directly or indirectly)
- A context dictionary with data for the template
- Rendering the template with a context
- The response object that passes the rendered results back to the framework
In Django, the Python method called when a URL is requested is called a view, and the page loaded and rendered by the view is called a template. Because of this, the Django team refers to Django as an MVT (model-view-template) framework. TurboGears, on the other hand, calls its methods controllers and their rendered templates views so that they can fit squarely into the MVC acronym. The difference is largely semantic, as they accomplish the same things.
The simplest possible view returns an HttpResponse object initialized
with a string. Create the following method and make a
/jobs HTTP
request to ensure your urls.py and views.py files are set up
correctly.
Listing 25. jobs/views.py (v1)
The following code gets the latest 10 jobs, renders them through a template, and returns a response. It will not work without the template file from the next section.
Listing 26. jobs/views.py (v2)
In the above code, the template is named by the
jobs/job_list.html
string. The template is rendered with a context of the job list named
object_list. The rendered template string is then passed into an
HTTPResponse constructor, which is sent back to the request client via
the framework.
The steps of loading a template, creating a context, and returning a
new response object are replaced below with the convenience method
named
render_to_response. Also new is the detail view method that
uses a convenience method called
get_object_or_404 to retrieve a Job
object using the arguments supplied. If the object is not found, a 404
exception is thrown. These two methods remove a lot of boilerplate
code in most Web applications.
Listing 27. jobs/views.py (v3)
Note that
detail takes
object_id as an argument. This is the
number mentioned earlier after the
/jobs/ URL path in the jobs urls.py
file. It is passed further to the
get_object_or_404 method as the
primary key (pk).
The above views will still fail because the templates that they load and render (jobs/job_list.html and jobs/job_detail.html) do not exist yet.
Django provides a simple templating language designed for fast rendering
and ease of use. Django templates are created with plain text embedded
with
{{ variables }} and
{% tags %}. Variables are evaluated and
replaced with the value they represent. Tags are used for basic control
logic. Templates can be used to generate any text-based format including
HTML, XML, CSV, and plain text.
The first step is to define where the templates are located. For simplicity's sake, create a templates directory under djproject and add its path to the TEMPLATE_DIRS settings.py entry:
Listing 28. Creating a templates directory in settings.py
Django templates support a concept called template inheritance,
which allows site designers to create a uniform look and feel without
repeating content in every template. You can use inheritance by
defining a skeleton, or base, document with block tags. These block
tags are filled by page templates with content. This example shows an
HTML skeleton with blocks called
title,
extrahead, and
content:
Listing 29. Skeleton document, templates/base.html
To keep the application decoupled from the project, use an intermediary base file as the base for all the Job application page files. For this example, put the application CSS in the base file for simplicity. In a real application, with a properly configured Web server, extract this CSS and put it in a static file served by the Web server.
Listing 30. Intermediary base file, templates/jobs/base.html
By default, the Django test server does not serve static files because that is the Web server's job. If, during development, you would like Django to serve images, style sheets, etc., then see the link in Resources on how to turn that feature on.
Now, create two page templates to be loaded and rendered by the
views. The
jobs/job_list.html template simply iterates through the
object_list it gets through the context by the index view, and
displays a link to each record's detail page.
Listing 31. The templates/jobs/job_list.html template
The
jobs/job_detail.html page shows one record, called job:
Listing 32. The templates/jobs/job_detail.html page
The Django template language has been designed with limited functional capabilities. This limitation keeps templates simple for non-programmers and keeps programmers from putting business logic where it doesn't belong, the presentation layer. See the link to the template language documentation in Resources.
Django comes with four sets of generic views that let developers create applications that follow typical patterns:
- List/detail pages (like the above example)
- Date-based breakdown of records (useful for news or blog sites)
- Creation, update, and deletion (CRUD) of objects
- Simple direct template rendering or simple HTTP redirect
Instead of creating boilerplate view methods, all of the business logic is in the urls.py file and is handled by the generic view methods supplied by Django.
Listing 33. Generic views in jobs/urls.py
Three major changes to this urls.py file are:
- An
info_dictmap object passes along a query set for the Jobs to be accessed.
- It uses
django.views.generic.list_detailinstead of
djproject.jobs.views.
- The actual views called are
object_listand
object_detail.
This project follows some requirements to make the transition to generic views work automatically:
- The generic detail view expects an argument named
object_id.
- The templates follow the naming pattern: app_label/model_name_list.html (
jobs/job_list.html) app_label/model_name_detail.html (
jobs/job_detail.html)
- The list template handles a list named
object_list.
- The detail template handles an object named
object.
More options can be passed through the
info_dict,
including a
paginate_by value that specifies the number of objects
per page.
The next article in this series will examine TurboGears, another Python Web framework, and compare it with Django.
Learn
- Read an overview of the MVC architecture on Wikipedia.
- "Fast-track your Web apps with Ruby on Rails" (developerWorks, June 2005) shows how Ruby on Rails creates Web-based applications.
- Python.org is the home of the Python programming language, where you can find links for downloading the Python interpreter and standard libraries.
- The Python tutorial will get you started with Python.
- DjangoProject.com is the home page for the Django framework. The documentation includes:
- The How to install Django, which shows how to set up Django on a development machine
- The Database API reference, a guide to using the Django ORM library
- The Django template language, a simple guide for template authors
- How to serve static files, an illustration of how to set up Django to serve static files during development (do not do this in production)
- How to use Django with mod_python, a guide to combining Django with Apache using the mod_python module
- Generic views, which shows how to implement common Web application patterns even more quickly with Django's generic views
- Building and Distributing Packages with setuptools shows how to install setuptools, along with
easy_install(part of the Python Eggs package).
- Django performance tips shows how to handle lots of traffic with Django.
-_3<<
Ian Maurer is a senior consultant for Brulant, Inc., where he specializes in developing integrated e-commerce solutions using open source and IBM WebSphere technologies for various industries including consumer goods and retail. Ian resides in northeastern Ohio and is a member of the Cleveland Area Python Interest Group. | http://www.ibm.com/developerworks/linux/library/l-django/ | crawl-002 | refinedweb | 3,092 | 55.64 |
(Magical land of React Suspense, Concurrent React and React.lazy API)
Dan Abramov in his talk “Beyond React 16” at JSConf Iceland 2018 said:
We’ve built a generic way to ensure that high-priority updates like user input don’t get blocked by rendering low-priority updates.
Let’s understand what this means and also get introduced to some of the new features coming to React, some of which have been released as part of the latest stable release and some of them are still in unstable mode and it’s quite possible that the implementation of the api might change over time.
Things we got introduced to in the talk:
- Suspense
- react-cacheDefer set state with scheduler
- Concurrent React ModeCode
- splitting with React.lazy
Suspense
React.Suspense in simple words means we can suspend the rendering of our component or components until some condition is met( for example data from an endpoint or a resource is loaded) and until then show a fallback(for-example a spinner)
Why do we need this?
If suspense is all about just showing a spinner till data loads, can we not do that today as well?
I mean we have been doing the same thing since a long time by keeping a loading state as true and till the data is not fetched we show a spinner and when data fetch is complete we set the loading state to false.
An example of how we have been currently doing it:
state = { loading: true, data: null }
So the question is if it can be done even today then what is it that suspense is bringing into our codebase?
The answer to that is yes it’s still possible to use loading state and play around with it to show/hide the spinner but as the application grows complex this becomes tedious to manage.
For example:-
<RestaurantDetail> <RestaurantInfo /> <RestaurantAlbums /> <RestaurantReviews> <RestaurantReviewDetail> <RestaurantReviewDetail> <RestaurantReviewDetail> {...} </RestaurantReviews> </RestaurantDetail>
In the above example we can have 4 api calls:
1) <RestaurantInfo /> component making one api call for getting basic information about a Restaurant
1) <RestaurantInfo /> component making one api call for getting basic information about a Restaurant
2) <RestaurantAlbums /> api to fetch all the images of that Restaurant
3) <RestaurantReviews /> api to fetch all reviews
4) <RestaurantReviewDetail /> api to fetch some details around those individual reviews like comments, likes etc.
3) <RestaurantReviews /> api to fetch all reviews
4) <RestaurantReviewDetail /> api to fetch some details around those individual reviews like comments, likes etc.
The problem with the above code structure is that we need to somehow manage the loading state and data fetching states for all those api calls that are happening above.
So what is the solution?
For the above problem we have multiple solutions which can be as follows:
- Delegate all api calling logic into the parent container and let all of them wait until all data fetching is complete and pass data to child components as props. The problem with this approach is now the parent needs to be aware of all api calls which are needed by child components and also maintain a complex state for all these api responses.
- Make all the child components smart/stateful components and let each of them manage their own loading and data states. This is complex since converting a stateless component to a stateful component is not something we would want to do.
- The third solution is using Suspense
With Suspense it works differently. How?
With suspense and react-cache, we can use our same functional component and still fetch data from it.
The difference here being instead of fetching data from a lifecycle method like
we will fetch this data from inside ofwe will fetch this data from inside of
componentDidMount
..
render
How is this even possible?
This becomes possible using react-cache and suspense
Now a word of caution,
is still unstable and it’s implementation or api might change over time.is still unstable and it’s implementation or api might change over time.
react-cache
An example of how to use react-cache to create a restaurant list fetching resource:-
import { unstable_createResource, } from "react-cache"; const restaurantListResource = unstable_createResource(() => { return new Promise((resolve, reject) => { fetch("") .then(res => res.json()) .then(response => { const { restaurants } = response; resolve(restaurants); }); }); });
react-cache
In the above code snippet we are using unstable_createResource and as the name suggests this is still
unstable.
unstable_createResource takes a function as one of its arguments and a hash function as the second argument which is used to create the key for the hash map which caches the data.
The whole suspense and suspended rendering magic comes into play with this unstable_createResource because this function throws a promise.
(Dan’s tweet about cache throwing a promise and react catching it)
A layman implementation of what it might look like could be:
const unstable_createResource = (method) => { let resolved = new Map(); return { read(key) => { if (!resolved.has(key)) { throw method(...args) .then(val => resolved.set(key, val)); } return resolved.get(key); } }; }
If you notice, it returns an object with a read function which takes key as a param which is usually the hash key of the hash map and if the data is not fetched yet then this read function will throw a promise as we see on line no 6.
The actual implementation of this function can be found here:-
Who will catch this promise?
Error Boundaries will catch this promise like how they used to catch the error thrown by React render.
More information on Error Boundaries can be found on reactjs.org:
More information on Error Boundaries can be found on reactjs.org:
So now with unstable_createResource our RestaurantList component would look something like this:
const RestaurantList = () => { // if no restaurants are found in the cache, the suspense will throw a promise const restaurants = restaurantListResource.read(); // this line will have to wait until that promise resolves return ( <article> <h2 className="f3 fw4 pa3 mv0">Restaurant List</h2> <div className="cf pa2">{renderRestaurants(restaurants)}</div> </article> ); };
Who will implement componentDidCatch?
This is where Suspense from react comes into play. React.Suspense has a componentDidCatch sort of mechanism which will catch this promise and show a fallback until the promise is resolved.
import React, { Suspense } from 'react'; const RestaurantListContainer = props => { return ( <Suspense fallback={'Loading...'}> <RestaurantList /> </Suspense> ); };
This concludes the topic of using React.Suspense to suspend rendering until data fetching is complete and until then show a fallback. We learnt about react-cache in this topic and how it can be used to throw a promise and suspend rendering.
Concurrent React
To take advantage of the asynchronous capabilities of concurrent React, We change the way we render our root
element.element.
<App />
Where we do this in standard React:
ReactDOM.render(<App />, document.getElementById('root'));
We do this for concurrent React:
ReactDOM.createRoot(document.getElementById('root')).render(<App />);
That is all that needs to be changed to enable Concurrent React.
This brings us with a new magical capability which is:-.
maxDuration
React.StrictMode
If you are developing in React 16.6, what has been recommended is to wrap <React.StrictMode> around <App /> so any unsupported features you may integrate will be prompted as warnings in your development console.
Wrap strict mode around your app like so:
ReactDOM.render( <React.StrictMode> <App /> </React.StrictMode>, document.getElementById('root'));
Defer Mode in Concurrent React
We can defer certain setState calls and let it wait until some other important operations like data fetching finishes for example clicking on a restaurant card and not rendering the detail page until all the data for the detail page has been loaded.
This is the kind of stuff it allows you to do:
import { unstable_scheduleCallback as defer } from "<a href="" target="_blank">scheduler</a>";
To defer a setState call we can do like the function below where I am delaying showing the detail page until the data for that page is loaded.
import { unstable_scheduleCallback as defer } from "scheduler"; ... toggleDetailPage = id => { if (id) { this.setState({ id: id }); defer(() => { this.setState({ showDetail: true }); }); } else { this.setState({ id: id, showDetail: false }); } };
Defer is also able to somewhat catch the thrown promises, and only apply the state changes after all of the child async operations are completed.
Complete example with concurrent react, createResource, Suspense and defer mode can be found here
Here is what Dan had to say about this whole set state defer:
Code Splitting with React.lazy and React.Suspense
The React.lazy function lets you render a dynamic import as a regular component.
Before:
import React from 'react'; import RestaurantListComponent from './RestaurantList'; function App() { return ( <div> <RestaurantListComponent /> </div> ); }
Note: In the above implementation the RestaurantListComponent will be part of your main bundle and will not be lazy-loaded.
After:
import React, { Suspense, lazy } from 'react'; const RestaurantListComponent = lazy(() => import('./RestaurantList'));// code-splitted, on demand loaded component function App() { return ( <Suspense fallback="Loading..."> <RestaurantListComponent /> </Suspense> ); }
Note: With this approach, RestaurantListComponent will be a code splitted component loaded on demand.
Example of lazily loaded component in a chunk “1.chunk.js”
How to use these API’s?
React.Suspense with React.lazy is a stable API and can be installed from npm as the latest stable version of react and react-dom
npm install react react-dom
react-cache and scheduler are unstable. Here is how you can still use them:
npm install react@16.7.0-alpha.2 - next npm install react-dom@16.7.0-alpha.2 - next npm install react-cache@2.0.0-alpha.1 npm install scheduler@0.11.2
Conclusion
To conclude, I am really excited about all these api’s to be stable soon.
What are your thoughts on these features? Let me know in the comments section! 😊
What are your thoughts on these features? Let me know in the comments section! 😊
Here is the roadmap from the React team for the upcoming releases:
Resources
- React lazy, Suspense and Concurrent React Breakdown with Examples
- Beyond React 16: Time Slicing and Suspense API
- Dan’s JSCONF talk
- React Suspense: Async Rendering in React
- What’s new in React — Presentation
- Fresh Concurrent React
- Twitter thread around this topic
Special thanks to Sara Vieira for reviewing this 😊 | https://hackernoon.com/magic-of-react-suspense-with-concurrent-react-and-react-lazy-api-e32dc5f30ed1 | CC-MAIN-2019-39 | refinedweb | 1,691 | 52.7 |
--- David Smith <dns4@cornell.edu> schrieb:
> No, I don't mean that. It should be able to retrieve
> any type of file.
> What you can do with it from within a jsp might be
> somewhat limited thought.
Ok,
> What exactly do you want to do with the file
> contents within the jsp?
I will save them to a Content Server by the JSPs.
I will use some beans and taglib for the logic
implementation.
>
> BTW, I highly recommend you read the documentation
> for the jstl taglibs
> and do some googling. I'm sure some research would
> help you a lot.
>
> --David
>
> henry human wrote:
> > Hi David,
> > most of these files are PDF, XLS and not only TXT
> > format.
> > You are meaning that with a JSP definitvly one can
> > reads only TXT files?
> >
> > i understood with help of
> > --- David Fisher <dfisher@jmlafferty.com> schrieb:
> >
> >
> >> Henry doesn't say if these are text files or
> binary
> >> files.
> >>
> >> If these are binary files like PDF, PPT and XLS
> >> files then a servlet
> >> will be needed - not a jsp.
> >>
> >> We use variations like the following in both
> Tomcat
> >> 4.1.31 and Tomcat
> >> 5.5.26
> >>
> >> public class OpenFileServlet extends HttpServlet{
> >>
> >> public void doGet (HttpServletRequest
> request,
> >>
> >> HttpServletResponse response) throws
> >> ServletException, IOException {
> >>
> >> // You probably want to look up the url
> -
> >> which is really a
> >> path.
> >> String url =
> request.getParameter("url");
> >> if(url == null) return;
> >>
> >> // You'll know your mime types for your
> >> content.
> >> String ext =
> request.getParameter("ext");
> >> String content_type;
> >>
> >> if (".ppt".equals(ext)) {content_type =
> >> "application/vnd.ms-
> >> powerpoint"; }
> >> else if (".xls".equals(ext))
> {content_type
> >> = "application/
> >> vnd.ms-excel"; }
> >> else {content_type = "application/pdf";}
> >>
> >> // we don't like to inline Office
> >> documents.
> >> boolean is_inline =
> >> "application/pdf".equals(content_type);
> >>
> >> File f = new File(url);
> >>
> >> if ( f.exists() && f.length() > 0) {
> >> response.setContentType(
> content_type);
> >> // The following works way better in
> >> Windows IE than ext=
> >>
> >> response.setHeader("Content-disposition",
> >> (is_inline?"inline":"attachment")+";filename=" +
> >> f.getName());
> >> int lng = (int)f.length();
> >> response.setContentLength( lng );
> >> FileInputStream fis = new
> >> FileInputStream(f);
> >> byte[] chunk = new byte[16184];
> >> int count;
> >> while ((count = fis.read(chunk)) >=0
> )
> >> {
> >>
> >> response.getOutputStream().write(chunk,0,count);
> >> }
> >> fis.close();
> >> } else {
> >> log("File not found: " + url);
> >> }
> >> }
> >> }
> >>
> >>
> >>
> >> FYI - this approach really became necessary about
> >> when 4.1.29 came out
> >> - at that time Tomcat got pretty strict with
> >> non-Text being served via
> >> JSP. All of our PDF and PPT content broke in
> Windows
> >> IE. And we had to
> >> back out a whole release.
> >>
> >> Regards,
> >> Dave
> >>
> >> On Apr 29, 2008, at 1:39 PM, David Smith wrote:
> >>
> >>
> >>> So... the "remote file" is available to the
> local
> >>>
> >> system on a
> >>
> >>> network drive. That's a fun one. There are a
> >>>
> >> couple of different
> >>
> >>> ways to do this.
> >>>
> >>> 1. Using Windows fileshares
> >>>
> >>> Let me preface this by saying *I've* never done
> >>>
> >> this. The few times
> >>
> >>> I've had a tomcat server on a Windows machine,
> it
> >>>
> >> only ever accessed
> >>
> >>> local files. There are people on the list with
> way
> >>>
> >> more experience
> >>
> >>> than I have.
> >>>
> >>> As I understand it, as long as tomcat is running
> >>>
> >> under a user
> >>
> >>> account that has privileges to read the remote
> >>>
> >> file, you could use a
> >>
> >>> UNC path with java standard file access classes
> >>>
> >> and methods to read
> >>
> >>> the file. The mapped drive letter wouldn't work
> >>>
> >> unless tomcat was
> >>
> >>> only running while you are logged in. In a jsp,
> >>>
> >> this could be done
> >>
> >>> with a scriptlet:
> >>>
> >>> <!-- import your classes at the top of the
> jsp....
> >>>
> >> -->
> >>
> >>> <jsp:scriptlet>
> >>> try {
> >>> FileInputStream remoteFileReader = new
> >>>
> >> FileInputStream( "\\\
> >>
> >>> \remoteServer\\archive\\files\\myFile.txt" ) ;
> >>> // do something with the file
> >>> } catch ( Exception e ) {
> >>> // do something if the access fails
> >>> } finally {
> >>> try {
> >>> remoteFileReader.close() ;
> >>> } catch ( Exception e ) {}
> >>> }
> >>> </jsp:scriptlet>
> >>>
> >>> It should be mentioned the system account most
> >>>
> >> services run under by
> >>
> >>> default does not have any privilege to access
> >>>
>
=== message truncated ===
__________________________________________________________
Gesendet von Yahoo! Mail.
Mehr Möglichkeiten, in Kontakt zu bleiben.
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org | http://mail-archives.apache.org/mod_mbox/tomcat-users/200804.mbox/%3C326061.98610.qm@web25008.mail.ukl.yahoo.com%3E | CC-MAIN-2016-30 | refinedweb | 666 | 68.97 |
Hi,
I was doing the Is_Int section on Practice Makes Perfect, and from what I read from the instructions, I submitted:
def is_int(x):
if x - int(x) == 0:
return True
else:
return False
Later on I found out that the more appropriate code probably is:
def is_int(x):
if x - round(x) == 0:
return True
else:
return False
Now the question I have is, can anyone help explain to me why the first set of code would still be consider correct by Codeacademy(both are when I submit them)?
I checked and the int() function is suppose to convert strings and variables to integers. Whereas the round function actually rounds the numbers in the function, which would be more useful in this situation.
Any help would be appreciated. | https://discuss.codecademy.com/t/why-does-this-code-work-for-the-is-int-section/47379 | CC-MAIN-2018-26 | refinedweb | 129 | 56.93 |
This action might not be possible to undo. Are you sure you want to continue?
“ENGLAND”
The PEST analysis is a framework that strategy consultants use to scan the external macro-environment in which a firm operates. PEST is an acronym for the following factors: • Political • Economic • Social • Technological
Political Factors(incl. Legal)
• Environmental regulations and protection:
As the menace of coal smoke receded the society changed its name (to the National Society for Clean Air) and its focus, and in the 1970s began to campaign vigorously on air pollution from industry and, increasingly, transport. During this period membership was mainly (although far from exclusively) drawn from local authorities, with some industrial membership. Perhaps the main achievement of the Society after the Clean Air Acts was the development of the concept of Local Air Quality Management and the incorporation of this in the Environment Act 1995. The original Environment Bill was intended to deal with issues such as the establishment of the Environment Agency, contaminated land, National Parks and waste topics.
• Tax policies
Britain is becoming a less attractive place to invest and work in because of government tax plans, trade and investment minister Digby Jones said on Friday, the latest non-political appointee to question policy. The ruling Labor party has come under pressure to match opposition plans to raise more tax from wealthy foreigners living and working in Britain. It has proposed to end tax breaks which mean rich residents who are non-domiciled for fiscal purposes pay no UK tax.
• International trade regulations and restrictions
a.
Import Tariffs Customs duty is assessed on the fair market value of imported goods at the time they are landed in the UK. Import prices for products entering the UK from nonEU states generally consist of: Cost, Insurance, Freight and Duty, with VAT of 15% levied on the aggregate value. This sum is the exporter’s “landed cost, duty paid.”The commercial invoice value is usually accepted as the normal price, but if a preferential arrangement has been established between the overseas supplier and the importer, or an unrealistic value has been declared, HM Revenue and Customs (HMRC) reserves the right to assess a fair market value for duty purposes. The duty is payable at the time the goods are imported, but established importers can defer payment for an average of 30 days. In addition to customs duties on imported goods, an excise tax is levied on in-country sales of alcohol, tobacco, and road vehicles, and on sales of oil and petroleum products.
1
b.
Trade Barriers The UK has no significant trade or investment barriers and no restrictions on the transfer of capital or repatriation of profits. The very few barriers that exist are almost all attributable to UK implementation of EU Directives and regulations. Import Requirements and Documentation A limited range of goods requires import licenses, which are issued by the UK Department for Business, Enterprise and Regulatory Reform’s Import Licensing Branch. These include firearms and explosives, nuclear materials, controlled drugs and certain items of military equipment. U government defense programs. At the time of writing, the Treaty is under review by the U.S. Senate Foreign Relations Committee. In addition to International Trafficking in Arms Regulations (ITAR), reexports from the UK and the activities of UK-based subsidiaries, are subject to UK export controls. These are managed by the Export Control Organization (ECO), an office of the UK Department for Business, Enterprise and Regulatory Reform (BERR). U.S. companies supplying certain restricted items appearing on the UK Military List, including missile and long-range UAV technology, are encouraged to consult guidance available from the ECO website on the trafficking and brokering provisions contained in the UK Export Control Act 2002. Temporary Entry Raw materials, temporarily imported for incorporation into products for export, may be admitted without payment of duties and taxes. The importer must provide a bank or insurance company guarantee or indemnity for the applicable duties and taxes. Goods intended for unaltered re-export may also be imported free of duty for a period of up to six months by prior arrangement with Her Majesty’s Revenue & Custom (HMRC). Temporary entries and goods imported for technical examination and testing are subject to a VAT deposit scheme with VAT refunded following the re-export of the goods. Products imported for repair, calibration, or incorporation are admitted with conditional relief from duty and VAT pending correct disposal of goods, usually re-export from the European Community. Professional and demonstration equipment may be temporarily imported into the UK free of duty and tax under the Customs Convention on the Temporary Importation of Professional Equipment. Additionally, these goods may also be imported under the above-mentioned VAT deposit scheme for temporary entries. Labeling and Marking Requirements In the UK, origin, weight and dimension, chemical composition and appropriate hazard warnings are required for consumer protection purposes on any product offered for retail sale. If the product cannot be labeled or marked, the data may be included on any packaging, accompanying printed material, or product literature. European and British clothing and shoe sizes are differently marked, and special provision may have to be made for apparel retail labeling. Dual labeling is strongly supported by the UK, which uses the practice as a cost-saving measure in its exports to North America.
c.
d.
e.
f.
2
switchblade knives. namely those which do and don't give rise to a legal duty.S. this is simply a way of distinguishing between two types of promise. law in export controls. • Contract enforcement law A contract is a promise or a set of promises for the breach of which the law gives a remedy. certain types of pornography and hormone-treated beef.g. the Australia Group (AG) for the control of chemical and biological weapons. when required. for what is it used. whilst the person to whom the statement is made (the promisee) is under no obligation to run in the first place.S. or meeting of minds. special certificates of origin. b. A clear description of goods should satisfy three basic questions as to what the product is. packing list. Prohibited and Restricted Imports Prohibited imports include AM citizens band radios. meaning that a clear description of the goods is essential and should be worded in such a way as to describe the goods to an individual who may not necessarily have an understanding of a particular industry or article. This may be seen as giving three interconnected elements: a.S. Although sensitive to the extraterritorial application of U. It is worth noting that imprecise descriptions are a common reason for goods being held without customs clearance. A promise In the context of English law. whilst the buyer has an obligation to pay the price. sanitation. devices that project toxic. but all of the details needed to establish the true value of the goods should be given. or the performance of which the law in some way recognizes as a duty. A copy of the commercial invoice should accompany the shipment to avoid delays in customs clearance. a promise to meet one's other half for dinner at 7pm gives rise to no legal obligation . noxious or harmful substances (e. 3 . A bilateral contract gives rise to obligations on both sides.-origin to unauthorized destinations. and of what it is made. bill of lading or airway bill. Consular documents are not required for shipments to the UK.it is a "mere" promise . by contrast. Thus. A unilateral contract. No special form of invoice is required. insurance documents. A legal duty arising from that promise Here. ownership. when the enforcement action is based on multilateral controls. tear gas). In fact. h. the seller has an obligation to transfer title in the thing sold to the buyer. The UK participates in the Wassenaar Arrangement for the control of dual-use exports.whereas a promise to sell someone a car for £5000 gives rise to legal obligation. and the Nuclear Suppliers’ Group (NSG) for nuclear-related goods.g. Thus "I will give you £100 if you run a marathon" gives rise to a legal duty on the maker of the statement (the promisor) to pay the money if the race is run. It is often (rightly) stated that English law will not give effect to a mere promise and that an agreement.. and. is required. preventing the export of restricted goods and technology to countries of proliferation concern. At least two additional copies of the invoice should be sent to the consignees to facilitate customs clearance. Customs Regulations and Contact Information The documents required for shipments include the commercial invoice. a reference to a promise here may be seen as misleading. etc. counterfeit coins and currency. English contract doctrine distinguishes between bilateral and unilateral contracts. Thus in a contract of sale. The UK also supports United Nations’ sanctions restricting exports to certain other destinations. in preventing the re-export of sensitive goods and technology of U. gives rise to obligations on one side only. the UK authorities cooperate with the U.
was called "the common law". how their grievances are handled. The Office of Fair Trading [3] also acts as the UK's official consumer and competition watchdog. Complaints need to be made to Consumer Direct who will provide legal advice to complainants. One. contract. with a remit to make markets work well for consumers. with the influence of EU law. Many of the consumer protection laws e. although they remain separate sets of doctrine. A remedy for breach of that duty In considering the development of remedies. or re-direct the individual complaint to Trading Standards for investigation. Since the Judicature Acts of the nineteenth century the two systems have been administered by the same courts. In very rare cases. restitution or even criminal law. e. time off for 4 . the matter judicially treated as tort.c. In many circumstances. the other. impose an injunction or take the matter to litigation. although other equitable rules which have application to contracts will be discussed as they arise. Consumer complaints against companies are not published. consumers cannot directly complain to the OFT.via a super complaint. f. however. • Employment laws The law has given employees – and in many cases other workers who might not count as employees – rights and entitlements in relation to how they are disciplined and dismissed. where domestic law is in question. wages. administered by the courts of common pleas and King’s Bench. The OFT rarely prosecute companies. municipal level by Trading Standards departments. Distance Selling Regulations 2000 or Unfair Terms in Consumer Contracts Act 1997 are actually UK implementations of EU directives.g. However. Most important for our current purposes is that the two systems developed different sets of remedies for breach of contract. preferring a light touch regulation approach. undertakings and enforcements are located at [2]. individual complainants are unable to be told whether their case is being investigated or not. each of which had different rules. g. and at a local. Domestic (UK) laws originated within the ambit of contract and tort but. absence from work and sickness. General consumer advice can be obtained from Consumer Direct or via a local branch of the Citizen's Advice Bureau. h. • Consumer protection The United Kingdom. The Office of Fair Trading [3]will then investigate. Consumer Direct may direct a very large number of complaints to the OFT to be considered as a systemic complaint.g. it is emerging as an independent area of law. The OFT is one of the bodies responsible for enforcing these rules. For much of its history. as member state of the European Union. Consumer Protection issues are dealt with when complaints are made to the Director-General of Fair Trade. This leads to a problem in that these examples of legislation are clearly designed to deal with individual complaints but the OFT will only deal with systemic complaints and will ignore individual complainants redirecting them back to Consumer Direct. Due to restrictions within the Enterprise Act 2002. but investigation work. The OFT can also be engaged by consumer groups e.Consumer Focus . England had two separate systems of law working side by side. a fundamental distinction in English law between common law (often just abbreviated to law) and equity must be understood. is bound by the consumer protection directives of the EU. The Consumers Association or the statutory consumer protection body . work breaks and working hours. holidays. Proof of promise: Objective intention Form of promise: Offer & acceptance Form of promise: Certainty Validity of promise: Intention to create legal relations Validity of promise: Consideration d. presided over by the Lord Chancellor in the court of chancery was "equity".
In unfair dismissal cases employers can be ordered to pay compensation of more than £76. This is despite their own failure to secure a majority of seats in the new House of Commons. or approved subject to "remedies" such as an obligation to divest part of the merged business or to offer licenses’ or access to facilities to enable other businesses to continue competing. and in fact he said.000. Banning abusive behavior by a firm dominating a market. Like all competition law. There was a year on year increase in Employment Tribunal claims of 56 per cent last year (2009-10) bringing the number of claims received to their highest level ever at 236. or anti-competitive practices that tend to lead to such a dominant position. Staff who feel they have been denied their rights have redress by taking their employers to an Employment Tribunal. including some joint ventures. • Political Stability Some form of cross party collaboration is essential if market confidence in sterling and the UK governments triple A credit rating is to be retained. • Government organization / attitude I believe USA and UK knew the truth that no one tried communist system by its strict definition…. Transactions that are considered to threaten the competitive process can be prohibited altogether. redundancy and retirement. Supervising the mergers and acquisitions of large corporations. All workers have the right not to be discriminated against in relation to their gender or orientation. British Prime Minister Gordon Brown remains in office until he resigns (as Prime Minister). c. and nuclear holocaust is indeed a real threat on society built on greed and mistrust… These two nations are definitely contemplating on such system… • Competition regulation The section 59 of the Competition Act 1998 provides that UK rules are to be applied in line with European jurisprudence.100 claims (source: Tribunals Service). China is a State Capitalist while Russia is a totalitarian government that is breaking up into small capitalist states… Karl Marx’ vision has never been achieved yet. disabilities. Nick Robinson. it will be a last form of government system globally and it will require complete transformation….family emergencies. maternity and paternity leave. This includes in particular the repression of cartels. Practices controlled in this way may include predatory pricing. tying. The leadership of the Labor Party is a separate issue. race. just like those religious icons.And it is beginning to become evident as climate change is beginning to be so evident as well. b. age. now that we know that no political party in the new UK Parliament has a majority of seats. Prohibiting agreements or practices that restrict free trading and competition between business entities. That is true on his 5 . The chances of this happening have increased three-fold for employers in the past decade or so. "Been rejected by the electorate" are the words used by the BBC's political editor. price gouging. that in the UK has three main tasks. refusal to deal and many others. Their spin continues to emphasis their view that Brown has lost a mandate to govern. or religion and beliefs. the right to apply for flexible working. In discrimination cases compensation awards are theoretically unlimited and six-figure payouts are not uncommon. a. Conservative commentators continue to insist that their electoral result gives them the right to govern.
particularly banking. as Leader of the Labour Party. The Health and Safety Executive is responsible for enforcing the Act and a number of other Acts and Statutory Instruments relevant to the working environment. Services.50 percent in March of 1974. From 1955 until 2010 The United Kingdom's average quarterly GDP Growth was 0.own. there is the Greek lesson. escape and rescue from mines. Looking a little further out into the future. account by far for the largest proportion of GDP while industry continues to decline in importance. ionizing radiation and working at height. Brown to my mind has won the right to see that work through. two favoured a quarter-point rise. Having taken the UK economy through the worst recession in 80 years. These cover a wide range of subjects. • Safety regulations Health and Safety at Work etc Act 1974: The Health and Safety at Work etc Act 1974 also referred to as HASAW or HSW is the primary piece of legislation covering occupational health and safety in the United Kingdom.59 percent reaching an historical high of 5. The most critical issue facing the country is the state of the economy. Evidence that a double-dip recession has been managed away will come in the next twelve months. told an audience of business people: "The risk of delaying interest rate rises too long is that this gradual approach may cease to be an option in the future. Economical Factors • Economic growth The Gross Domestic Product (GDP) in the United Kingdom expanded 0. or a more considered reflection of the constitutional realities. • Interest rates & monetary policies Sentance. 6 . and economic and social necessities to give the UK a new government by early next week. Public unrest in Greece has been a salutary and tragically lethal reminder of the consequences of imposed austerity. But it doesn't mean that he doesn't have a mandate to take a leading role in a coalition. and Labor and the Lib-Dems do not. from control of asbestos at work. insurance. So as we await the latest dispositions from Tory leader David Cameron and Liberal Democrat leader Nick Clegg. What we know is that Tories plan cuts this year. Statutory instruments are the secondary types of legislation made under specific Acts of Parliament. Part of the price of any political understandings will be how their interests are going to be addressed in public spending plans for the future are a consideration.30 percent in March of 1973 and a record low of -2."Minutes of the MPC this week revealed a four-way split. diving. of which a referendum on electoral reform is Labor Party policy. who voted for a half-point increase in rates at the last Bank of England monetary policy committee (MPC) meeting. The United Kingdom is among the world's most developed economies. and business services. the question arises are we going to have more assertions that Brown has lost any right to govern. Five members were content to keep rates on hold. and in person. Over the past two decades the government has greatly reduced public ownership and contained the growth of social welfare programs. We have heard from the leaders of the nationalist parties about their concerns concerning funding from the UK Exchequer. The other part of a government programme that addresses widespread public concerns is political reform. The election campaign was littered with uncertainty about the detail of necessary public expenditure cuts. Sentance opted for double that and one member argued for further monetary easing.50 percent in the first quarter of 2011 over the previous quarter.
which should be allowed to rise.Higher VAT bills. Less spare capacity than some economists estimate.estimated outturn in HM Treasury 2010 budget e .20 b g g g g Year 2010 2011 2012 2013 2014 GDP 1474 1539 1620 1710 1803 Legend: b . Stronger underlying UK demand and growth than many economists believe. While the CBI finding of rising retail prices appeared to support Sentance's view. A confidence scorecard rose from -29 to -28. h. Artificially low interest rates. e.co.90 681. This lackluster performance by retailers and dire consumer confidence figures are expected to fuel concerns that a rate rise will tip the economy into recession."guesstimated" projection by ukpublicspending.estimate in HM Treasury 2010 budget g . g.00 724. Its monthly snapshot of the retail sector shows a balance of 6% of shops reported higher sales this month. Consumer confidence remained at historically low levels in February. Retailers expect sales to grind to a halt in March. but the time had come to increase them.42 701.Sentence said 10 reasons for an interest rate rise were: a. which is fuelling inflation. rising inflation and an uncertain economic outlook were blamed for the failure to reverse the survey's worst-ever poll readings in January. j. compared with 37% in January. according to a survey by GfK NOP. Global demand. c. An artificially depressed currency.uk • Unemployment policy policies to reduce unemployment 7 . b. Sentance said the decision to cut rates in the recession was "right and proper". Global inflationary pressures. revealing that consumers continued to be worried about spending their earnings. i. Consumer accounts for about 65% of the economy and is a key indicator of the likely growth in GDP this year. Gradual tightening needs to start now to be effective. • Government spending Total Spending Fiscal Years 2010 to 2014 Total Spending -total £ billion 660. A loss of credibility with the markets if inflation persists.80 713. d. which is double the 2% Bank of England target. the employer’s organization also pointed to the weakest level of high street sales growth since last June. f. UK inflation. Companies exploiting high inflation by raising prices further.
this requires that aggregate demand is sufficiently high for businesses to be looking to expand their workforces Improving skills and reducing occupational immobility Policies should provide the unemployed with the skills they need to find re-employment and improve the incentives to find work.so that the unemployed have the right skills to take up the available job opportunities.improvements in education and training will increase the human capital of these workers. Structural unemployment is the result of workers being occupationally immobile . Because of the increase in demand for output.leading to an increase in total employment. the demand for labour at each wage rate will grow . Policies should focus on improving the occupational mobility of labour An improvement in the incentives for people to search and then accept paid work . Reflating Aggregate Demand The government can also use macro-economic policies to increase the level of aggregate demand. In the diagram below we see an increase in aggregate demand leading to an expansion of aggregate supply. It might also encourage foreign investment into the economy from foreign multinational companies. These policies might involve lower interest rates or lower direct taxes.In the long term. 8 . effective policies to reduce the total level of unemployment need to encourage An improvement in the employability of the labour supply . and therefore give them a better chance of taking the new jobs that become available in the economy.this may require some reforms of the tax and benefits system A sustained period of economic growth so that new jobs are being created .
Benefit and Tax Reforms Reducing the real value of unemployment benefits might increase the incentive to take a job . businesses may decide to increase production by making greater use of capital inputs (machinery and technology). the number of people in work has fallen 9 . Economic Growth and Unemployment A growing economy creates jobs for people entering the labor market for the first time.particularly if the real worth of unemployment benefits is well below the national minimum wage rate. And. Targeted measures are designed to help the long-term unemployed find re-employment (including the Government's "Welfare to Work Schemes" . And. Employment subsidies may also be available for overseas firms locating in the UK. Each year we expect to see a rise in labor productivity (more output per worker employed). it provides employment opportunities for people currently unemployed and looking for work The chart above shows the level of real national output (GDP) and total employment in the economy since 1980.Not every increase in demand and production has to be met by using more labor. In both of the last two recessions (1980-81 and 1990-92).see New Deal Employment Subsidies Government subsidies for those firms that take on the long-term unemployed will create an incentive for firms to increase the size of their workforce.
13 Pounds to South African Rands 9 5.sharply.9 per cent of GDP approximately £600 billion (using 2008 nominal GDP measured in dollars. This has helped reduce the official measures of unemployment to a level not seen for over twenty-five years. National Insurance contributions.5 Pounds to Yen 7 11.522 Pounds to Australian Dollars 1 1. employment in the British economy was at record levels. with net taxes and National Insurance contributions standing at 36. Local government revenues come primarily from grants from central government funds. value added tax. business rates in England and Wales. Council Tax and increasingly from fees and charges such as those from on-street parking. • Taxation Taxation in the United Kingdom may involve payments to a minimum of two different levels of government: The central government (HM Revenue and Customs) and local government.569 Pounds to Canadian Dollars 5 132. • Exchange rates GBP EUR GBP USD GBP NZD GBP AUD GBP CAD GBP JPY GBP ZAR GBP AED GBP INR 1.134 Pounds to Euros 1.67 Pounds to Rupees 9 10 . corporation tax and fuel duty.623 Pounds to Dollars 4 2. Central government revenues come primarily from income tax. Indeed by the summer of 2001. total government revenue was 39.957 Pounds to Dirhams 7 72. But a period of sustained economic growth (as experienced by the UK from 1993-2001) has led to a significant increase in employment levels. In the fiscal year 2007-08.051 Pounds to New Zealand Dollars 5 1. and converting using 2009 conversion rate).2 per cent of GDP.
varying in frequency. The main downward pressures to annual inflation came from petrol and diesel. enjoyed rapid growth in emerging markets.5 percent in April of 2011. researchers said. the average inflation rate in United Kingdom was 2.72 percent reaching an historical high of 8. The timing of Easter 2011 had a significant impact on these data. managing director at GfK NOP Social Research. it is affected by the changes in the business cycle from boom to recession and back again. up from 4. confidence has only slumped this much on six occasions. people have cut back on holidays. so the need for hotel linen services has reduced. Air transport. Just like any other business. business cycles were thought to be extremely regular. For example. The cycle involves shifts over time between periods of relatively rapid growth of output (recovery and prosperity). alternating with periods of relative stagnation or decline (contraction or recession).50 percent in May of 2000. recession (contraction). will make talk of a double-dip recession unavoidable. the last being in the midst of the 1992 recession. Annual inflation as recorded by the retail prices index (RPI) stands at 5. magnitude and duration Activity within the economy fluctuates over time. miscellaneous goods and services. Like hotels.50 percent in April of 1991 and a record low of 0.5% IN APRIL • Inflation rates United Kingdom CPI annual inflation stands at 4. From 1989 until 2010. which measures inflation in the whole of the domestic economy. clothing and footwear and communication. the magazine publishing industry was expecting a decline in subscribers during recent difficult economic times because this is a 'nice to have' item rather than a necessity.5 per cent. The Davis Service Group provides textile maintenance. The recurring and fluctuating levels of economic activity that an economy experiences over a long period of time. and the GDP deflator. when the UK was mired deep in the last recession. The business cycle or economic cycle refers to the periodic fluctuations of economic activity about its long term growth trend. This is called the business cycle.2 per cent in April. trough and recovery.UNITED KINGDOM INFLATION RATE The inflation rate in United Kingdom was last reported at 4." The eight-point plunge in optimism took the barometer's headline reading to -29. such as the Czech Republic. • Stages of the business cycle The most closely-watched barometer of consumer confidence revealed an "astonishing collapse" in January as the VAT rise took effect. peak. But today business cycles are widely known to be irregular .0 per cent in March. Throughout the business cycle it has to respond to the economic challenges it faces. UK INFLATION RATE RAISES TO 4. when combined with the bleak economic forecast.3 per cent in March. Davis Service Group has been able to invest in the business where necessary to be ready for recovery. hotel laundry and washroom services in the UK and Europe. At one time. according to market research group GfK NOP. • Consumer confidence 11 . the lowest since March 2009. alcohol and tobacco and gas were the most significant drivers behind the increase in annual inflation between March and April. The most well known measures of Inflation are the CPI which measures consumer prices. By balancing resources to meet customers' needs. The five stages of the business cycle are growth (expansion). "Today's figures. Davis has however. down from 5. "In the 35 years since the index began. Inflation rate refers to a general rise in prices measured against a standard level of purchasing power. with predictable durations." said Nick Moon. The first taste of the fiscal tightening to have a widespread impact on consumers appeared to have hit sentiment hard. which are at different stages of the business cycle. even before the full impact of the public spending cuts is felt.
This report tabulates sources of income as a percentage of total income. Other Other Employme Disabili Social Self Investme Workin State Occupatio Incom nt ty Securit Region Employ nt g tax Pensio nal e (Salaries Benefit y ed Income credit ns Pensions Source & Wages) s Benefit s s UK 64% 11% 2% 1% 6% 7% 2% 5% 2% Norther n 60% Ireland 11% 1% 2% 7% 5% 4% 7% 3% Scotlan 66% d 7% 2% 2% 7% 7% 3% 5% 2% Wales 60% 8% 2% 2% 8% 8% 4% 6% 1% England 64% 11% 2% 1% 6% 7% 2% 5% 2% North East 64% England 5% 2% 2% 8% 6% 4% 7% 2% North West 59% England 13% 2% 2% 7% 7% 3% 6% 2% 12 . This details income amongst a representative sample of the British population.Their findings will prompt more questions as to whether the Coalition risks tipping the economy back into recession through its programme of tax rises and spending cuts to reduce the budget deficit Social Factors • Income distribution Sources of income: The Family Resources Survey is a document produced by the Department for Work and Pensions. The 2005-2006 report can be found here.
Yorkshir 64% e 7% 2% 2% 7% 7% 2% 5% 3% East Midland 65% s 9% 2% 1% 7% 6% 2% 5% 3% West Midland 62% s 8% 3% 2% 8% 6% 2% 5% 3% Eastern 56% England 22% 2% 1% 5% 7% 1% 3% 2% London 71% 10% 2% 1% 4% 4% 1% 5% 3% South East 66% 9% 4% 1% 7% 8% 1% 4% 2% South West 60% England 9% 4% 1% 7% 10% 2% 4% 2% • Demographics.97[1] (higher than Germany .144 Female: 25.138.216.1. estimates of the population of England for dates prior to the first census in 1801 vary considerably. and is marked by centuries of population growth and urbanization.831 Male: 23.922. Due to the lack of authoritative contemporary sources.687 Total: 100 Total Fertility Rate: 1. Population The population at the time of the 2001 census was 49. Age distribution The demography of England has since 1801 been measured by the decennial national census.02) Age 13 . Population growth rates. lower than France 2.1.38 & Italy .40.
751.32 15 623.[12] In 2001.693 18.25 18 .19 1.17 1.26 60 .27 16 .231.127.861 2.767 1.266 2.60 0.20 25 .29 3.24 2. and the median age was 37.571 2.841 8.238 5.830 4.279.701 1.60.89 637.57 1.20 20 .65 1.660 6. Ages attained (years) Population % of total population % per year of age band 0-4 2.268.229.96 1.61 1.88 1.51 45 .511 22.00.40 1.391.323 0.177.27 1.56 85 .14 3.84 2.97 65 .65 1.64 - 14 .74 1.59 9.135 5.33 30 .952.283.31 10 .838.719 6.102.35 0.64 2.87 0.19 5-7 1.The data below is based on the 2001 census.047 6.26 90+ 316.01 1.74 4.926.51 1.44 11.84 75 .30 0.668 3.25 8-9 1. the mean age of England's population was 38.
Despite this formal opportunity for social mobility. a larger farming sector. upward mobility is practically nonexistent. there can also be relative downward mobility. by controlling that inclination. Work satisfaction is high. the individual's family background is more predictive of social position today than it was in 1850 • Lifestyle changes (Reuters) . and economic mechanisms that permit them to fortify their advantages. for the enslaved individuals. Second most popular activity was switching off electrical appliances rather than leaving them on standby. However. said Russ Lid stone of Euro RSCG London. but it is a two-sided phenomenon . there was social mobility of different degrees existing between the two countries during different historical periods. In the United States in the mid-19th century inequality was low and social mobility was high.a booming source of climate changing carbon emissions. In the late 19th century. as well as higher geographic mobility in the United States. the difference between the social nobilities of the two countries has narrowed. followed by switching over to low energy light bulbs and turning the central heating thermostat down slightly and using less water. Only small proportions (usually less than 10 percent) tell 15 . Highlights of recently conducted polls show: • Work/career/leisure attitudes & entrepreneurial spirit a.or local councils prompting them into action. But the reasons given were mostly connected with council schemes and punishments rather than altruism. it can also change over time.Britons are starting to change their lifestyles in response to global warming. Top of the list of environmental activities is recycling.as has been the case in Western Europe. Social mobility is normally discussed as "upward only". "While people from all walks of life now see climate change as one of the key challenges that they face. educational. But when it comes to the tougher lifestyle choices. These authors state that "the idea of the US as ‘the land of opportunity’ persists. with 90 percent of the people surveyed saying they were doing it more than a year ago. and clearly seems misplaced. In other words. according to a new survey. it is possible in a growing economy for there to be greater upward mobility than downward . the U.Not only does social mobility vary across types of countries. according to the survey by advertising firm Euro RSCG. Official or legally recognized class designations do not exist in modern western democracies and it is considered possible for individuals to move from poverty to wealth or political prominence within one generation.• Labor / social mobility Societies which use slavery are an example of low social mobility because. recent research suggests that Britain and particularly the United States have less social mobility than the Nordic countries and Canada. and some people can manage a relative upward shift in their social status. due to the common school movement and open public school system. then some people can also move downward relative to others. and for their owners. Only 33 percent said they were driving their cars less than a year ago and half that number had decided to take fewer international flights -. Comparing the United States to the United Kingdom. action was far less popular. had much higher social mobility than in the UK. during the latter half of the 20th and early 21st centuries. but particularly in the United States. but few are making the tough choices and in many cases the motivation is fear of punishment. legal. However. as social inequality has grown in both countries. If merit and fortune play a larger role in life chances than the luck of birth.where there is upward mobility. for many the motivation to reduce their environmental impact is directly related to cost savings .S. This is the risk that motivates people in power to increasingly devise and commission political. downward mobility is practically outlawed.
(Gallup/Harris/Roper/National Opinion Research Center) Solid majorities of workers tell pollsters they would take the same job again "without hesitation. separately. About 90 percent are not worried that their company will move jobs overseas (Gallup. hypes 16 . self-confident. In Gallup's latest question. a quarter of workers say their job "never" requires them to be in an office.aired on Indus Vision between the 16 to 19 of May 2011 at 6:00 p. As for today's economy. ruff and aggressive people. Some people say when they hear the word Jamaican they think of loud. A UK postgraduate education will give you an edge over the competition. around 20 percent of workers fear being laid off "in the near future. of Michigan/Harris) Huge majorities (85 percent/Gallup. places to visit – the list is endless. e. wages. most workers tell the pollsters they are satisfied. This has changed little in the last decade. Today. Twenty-six percent say their employer has laid off someone in the past six months. (Gallup/Roper/NBC. • Education UK life is an incredible mix of international cultures and contemporary thinking. More than 80 percent of workers are not worried that their hours or. Stress is over-stressed.WSJ) They are least happy with the amount of money they earn.m. held together by a strong sense of identity and tradition. celebrations. When you see them all up in the video light is not to make up noise no its to big up their hard work and what they have achieve in life and giving praise to the man upstairs for life and family. Smaller." (Univ. When Jamaicans go out they don’t just go out they dress to impress from head to toe. Get a head start in your career with a UK postgraduate degree. and 63 percent a bad time (Gallup.m. Workers are happiest with their coworkers. 79 percent are satisfied with the amount they have. There has been little change in these responses over the past quarter century. g. in most polls. 2004) say they have a strong sense of loyalty to the company they work for. 40 percent say it "always" does. 2004).music. Twenty-seven percent of workers are completely satisfied with "the amount of on the job stress." and 37 percent are somewhat satisfied (2004). i. The UK is a world-leader in innovation and many university facilities are state-of-the-art. At a young age Jamaicans are taught to have pride in their appearance no matter where they are going. people to meet. pollsters they are very dissatisfied with their jobs. and 17 percent dissatisfied (2004). but still robust majorities say the company they work for has a strong sense of loyalty to them (67 percent). Many UK education institutions work with local agents/representatives in Pakistan. to 6: 30 p. college or school enters into an agreement with an agent to represent their institution to students in Pakistan who are considering an overseas education. 2004) A third say it's a good time to find a quality job. c. As for the work load. As for vacation time. Yes at times it may me a little much for some or SHOCKING but hey at the end of the day the outfit caught your eye right. h. 2004).b. Education UK. The university. accents. British Council Pakistan has produced a TV programme which is a series of four talk shows that will be re . d." Around 80 percent do not. f. Study in the UK and you’ll find an array of exciting experiences to discover . 86 percent of workers are satisfied with the amount of work required of them and 19 percent are dissatisfied. With a UK degree you can specialize in the subject that interests you. in a country renowned for the quality of its research and the reputation of its academics. Want to see for yourself you say not a problem watch and enjoy the JAMAICAN flavor in the UK… • Fashion. As the UK is such a cosmopolitan society you’ll find that many UK customs are already familiar to you – so you’ll settle in faster than you think. hard-working and over comers. will be cut (Gallup. We are all obligated to our opinions but rest ah sure Jamaicans are always a couple of things which is fashionable.
” (Damon. 17 . compensatory damages. Currency The unit of currency in Britain is the pound sterling (?). A 1998 NIOSH study concluded that the role of training in developing and maintaining effective hazard control activities is a proven and successful method of intervention. and? 1 coin. United Kingdom. many organizations would not act upon their implied moral obligations. 2008. Police If you have to register with the police you must register within seven days of your arrival in the UK. hands-on. pg. Banknotes are issued in? 5. divided into 100 pence (p). and? 50 denominations. It is important that new employees be properly trained and embrace the importance of workplace safety as it is easy for seasoned workers to negatively influence the new hires. workers' compensation claims. fines. illnesses. lost goodwill from the workforce. and give recommendations on avoiding accidents to management and employees in an organization. 50p. 2p. Nadia. Safety training classes help establish a safety culture in which employees themselves help promote proper safety procedures while on the job. Legal . International students who are in the UK and intend to study for more than twelve months are obliged to obtain a UK license by taking a driving test. from customers and from the wider community). set suitable safety controls in place. A safety training program can also help a trainer keep the required OSHA-mandated safety training courses organized and up-to-date. When you move to a new address or you return to your country. The vehicle you drive must be registered. • Living conditions Embassy You have to register at your country's embassy. Australia and the USA. feelings on safety The event of an incident at work (such as legal fees. But it could involve much more than you think. 20p. lost production. “Like it or not. you are required to inform the embassy.Occupational requirements may be reinforced in civil law and/or criminal law. They recognize hazards and measure health and safety risks. organizations have a duty to provide health and safety training. This paper looks at the main tasks undertaken by OHS practitioners in Europe. To register you will need your passport and two passport-size photos of yourself. 20. it is accepted that without the extra "encouragement" of potential regulatory action or litigation. Driving You must hold a valid driving license. 10p. insured. There are 1p. The application forms for tax and vehicle registration can be obtained from a post office. legal liability. giving your address. ‘Reducing The Risks’.OH!!! • Health consciousness & welfare. innovative effective safety training which will ultimately lead to an effective safety culture. property damage. 10. Check in your passport to find out if this is necessary. taxed and if it is over three years old it must have an MOT certificate.14)An effective training program can reduce the number of injuries and deaths. and the main knowledge and skills that are required of them. investigation time. Occupational health and safety officers promote health and safety procedures in an organization. and missed time from work. can be purged with the establishment of new. 5p. Training and Coaching Today. That negative influence however.
gardening. All British universities have a wide range of clubs and societies like cinema. opened in 1994. During the autumn. September to November. Other popular activities are reading.you are expected to pay the price marked on the goods. Transport People travel on average about 200 km (125 miles) a week. and going out for meals. supermarkets and street markets are the cheapest places to buy food. There has also been a shift towards ready-made meals. Bargaining is not customary . The temperature rises through the spring months of march to may. as a student in Britain you benefit from subsidized entry to many venues like theatres and cinemas as well as having the opportunity to benefit from reduced price travel. And. Climate Britain is warmed by the Gulf Stream. Travel by car has grown a lot. poultry. Pubs serve both alcoholic and non-alcoholic drinks. There are many international associations for you to join. A huge variety of cuisine's are available in Britain and there is usually a type of restaurant to suit everybody's taste and budget. One of the most popular socializing activities in Britain is eating out or visiting a local public house . There are about 80 seaports. and yoga. To buy alcohol in Britain you must be over eighteen years of age. large supermarkets. Shops and shopping Shops are usually open from 0900 until 1800 except local shops which may have their own more specific opening hours. skimmed milk and fast foods are the popular courses for many people. The Channel Tunnel. Most shops are closed on Sundays except those in big cities. Food Fresh fruit. The winter months from December to February are the coldest. has improved links to the European mainland. There are libraries. and red meat. and other convenience made products.600 mm (over 60 inches) in the highland areas of the west and north. sports facilities and cafes as well as plenty of entertainment. frozen foods. The British have cut back on butter. cycling. potatoes. 18 . fish. Entertainment The main leisure activity in Britain is watching television. but less than 800 mm (30 inches) over the more fertile lowlands of the south and east. Also British Airways is one of the world’s leading airlines. Some of the most popular are swimming. The average annual rainfall is more than 1.known as a 'Pub'. eggs. careers advice centers. or the cinema. Unless you want to pay a large fine or spend some time in prison. Heathrow airport is the world's busiest airport for international passengers. In general. music. do not do it. with the shortest hours of daylight. an ocean current that flows from the Gulf of Mexico to Europe. Over half of the population takes part in some kind of sport. and is highest throughout the summer months from June to August.Drunken driving laws are very strict and rigorously enforced. you are also expected to stand in line for services if a shop is busy. street markets are particularly good for fresh fruit and vegetables. home improvements. temperatures gradually fall again. drama and more. The climate is changeable through the seasonal cycle. Small traditional shops have had to close their doors because people are now buying from new.
You will also have to share a bathroom with the other people living in the same house/block. in which you will have to live and sleep. but some institutions may provide one in each room. Halls of residence usually provide breakfast and an evening meal. Bathrooms and other facilities are usually shared. · Bedsits: This is a single room. · Lodgings: This is a rented room in a private house. The house/block will be divided into several bedsits which will be rented by other people who may not be students. Institution Accommodations Accommodation provided by the institution might be the most suitable type if you are coming to the UK for the first time. either alone or sharing with another student. These are the questions you should ask your institution about accommodations: · What types of accommodation do you have? · Will I be offered accommodation each year? · Will I have to leave my accommodation during the vacations? · Will I have to provide my own linen and kitchen utensils? · How much will the accommodation cost? · Will accommodation be available for my family? *Do not travel to Britain with your family unless you have accommodation in advance. Make sure that you follow the college's application procedures. Private Accommodations: · Hostels: This is the best alternative if you are unable to get a place in a hall of residence or you are coming to Britain for the first time. you will have to share them. situated on the premises of the institution. Staying in a hostel can give you the opportunity to become familiar with the area where you are studying and this is an advantage if you want to find private accommodation later. This is usually. but you can buy and prepare your own food. Your booking will only be secure after you have paid a deposit. Bedsits vary in size and quality so you should never take a room without 19 . · Self-catering halls: This accommodation is similar to halls of residence. do your own laundry and provide your own bed-linen and towels. perhaps with their family. Residents live in study bedrooms. · private accommodation. which you will have to find and arrange either by yourself or with the help of advice and suggestions from the accommodation office at the institution. Meals may be provided or you may be able to use the kitchen to cook your own food. Cooking facilities may be in the bedsit or elsewhere in the building. Some hostels provide rooms for both single and married students. Applying for Accommodations In most cases accommodation provided by the institution has to be arranged before you come to Britain. and in particular. There are two main types of accommodation: · accommodation owned. The landlord/landlady will also live in the house. Some institutions have a small number of self-catering flats for students with families. but the food may be basic. Telephones are normally provided on each floor. Halls can be single sex (male or female only) or mixed (taking both male and female students). There are two main types: · Halls of residence: These are large buildings occupied by many students. you will usually be asked if you would like your institution to provide accommodation or to arrange alternative private accommodation for you. *Apply in advance. so start making arrangements as soon as you have been accepted on your course. in which case. that you meet deadlines for booking accommodation. They usually provide some meals or have cooking facilities and allow students to prepare their own food.Accommodations In many areas of the UK there is a shortage of student accommodation. You will have to clean your own room. but not always. managed and provided by your institution. When you are offered a place on your course.
published 26 January 2009 is based on companies' reported performance up to June 2008 . each with 6% share. Technological Factors • Government research spending The UK R&D Scoreboard The UK Government's annual R&D Scoreboard. Because of the Industrial Revolution.7% reduction for unlisted companies. It gives R&D intensity for companies listed by value added • New inventions and development The Industrial Revolution. ripe for change.400 global investors of R&D. and the strength of the economy of Britain.8% compared with a 2. The biggest UK sector by spend remains pharmaceuticals and biotechnology (37%). spending an average of 29% of their sales on R&D. increased their R&D investment by 10. the figures reported by the Scoreboard do not either.400 companies. 20 . and political stance of the country.a rise of 6. reflecting the tendency for smaller firms to be in R&D intensive sectors. the power of the British nation amongst others of the world. and its system of national banks holding tight to its financial security. as well as brought lasting effects in Britain in each of these areas. 79% of this expenditure was by companies based in the USA. social. Japan. and fixed line telecommunications the next biggest. With its fast growing monopoly on ocean trade.6 billion on R&D . In the UK 850. Globally. The remaining UK companies outside the top 88 grew their R&D by just 1.2% . The 1. software. The top 88 UK companies. with larger firms with larger absolute values of R&D spend being in less-intensive sectors. This is significantly more than the larger UK firms. Note that as company accounts do not distinguish between expenditure in the UK and outside the UK. occurred because of the stable economic.400 companies in the world most active in R&D.4% to £274 billion. automobiles and parts. pharmaceuticals and fixed line telecommunications.before the current recession. Germany. Britain was. the five biggest sectors were pharmaceuticals and biotechnology. The latest scoreboard. 338 companies have sales of less than £50M. France and the UK. It was the great historical epoch we call the Industrial Revolution which would forever change city life. reports on the patterns and trends of the 850 largest corporate spenders of R&D in the UK and the 1. with aerospace and defence. and electronics.a faster rate than the rest of the top 1. the 2008 DIUS R&D Scoreboard. at the time of the Industrial Revolution. the production of machinery. based on R&D expenditure reported in company accounts. Listed companies (ie those with shareholders) increased their R&D by 9. The Scoreboard reports that the 850 top-spending UK firms spent £21. automobiles and parts. who also rank in the top 1. its renewed interest in scientific discovery. banking. technology hardware. software.4 per cent on the previous scoreboard.400 companies in the world that spent the most on R&D increased their expenditure by 9. The Government also publishes the 2007 DTI Value Added Scoreboard lists the top 800 UK companies and the top 750 global companies by Value Added .3% . one of the most vital periods of change in Great Britain. The UK sectors that increased their R&D expenditure the most were oil and gas. social class structure.a fall in real terms. which is endorsed by the R&D Society.the amount of wealth created by a company in a year.
The improved cultivation of healthier fruits. Alongside new inventions came exciting discoveries in medicine. vegetables. and stale bread. sugar. adding to Britain’s strong labor force in cities. As city populations grew due to the advancement of industry. electricity was used. as well as people’s lungs. citizens were able to move up the rungs of the social class ladder in British society. Thus. factories. and thus they could afford only to eat rancid meats. Factories subjected men. however. Britain’s national banking system provided it with capital from investments and a surplus of finances for which to use in commerce on the international scale. Along with construction improvements in cities. and the strongest economically. along with the introduction of mechanical reapers and threshers. which aided a greater production of more durable metals.never again would the British have to suffer the results of no changes regarding the inequalities of the working world. an influx of unemployed farm workers was created. and unprotected work around dangerous machinery. the new use of electric engines in cars improved the rate of transportation in British cities. which included a sewer and provided for running water in individuals’ homes. its domination of the seas via a strong military force gave it control of ocean transportation and trade. The improvement of steel production specifically helped cities to construct sturdier buildings which had fewer fire and other safety hazards. This increased the efficiency of streetlight in Britain. The Industrial Revolution was a positive era to have occurred in Britain. In fact. which contributed to the extreme malnutrition and sickness in the cities. promoting the growth and trade of the country. the Industrial Revolution appeared to bring no benefits at all to the country. New inventions of the time included John Kay’s "flying shuttle" weaving device and George Stephenson’s "Rocket" railway train. all largely dealing with the exchange of new and improved industrial machinery. now made cheaper to produce. along with innovations such as Abraham Darby’s thought to use coal instead of charcoal in order to create fuel. providing for better treatments of diseases and thus promoting the overall health of British society. The tremendous use of coal in industrial production polluted the atmosphere. tea. An increase in the number of workers in industry meant that factories could run more efficiently and produce more goods than ever before. so that all British citizens. Similar improvements included the expansion of public services such as a fire department and police force. instead of gas power. thus improving their financial and educational statuses. Britain’s American colonies played an important role in providing the country with such vital raw materials. Britain obtained much capital from its many new international trading ventures with major nations. as well as decreased the pollution level formerly heightened by older engines. Ongoing British trade of tobacco. It was this expanded variety of mechanical tools that would fuel the continuation of the Industrial Revolution. As Britain’s incoming finances grew and increased. steam ships. Great Britain was the leading figure in the Industrial Revolution. Living conditions in cities became unsanitary. These and other devices greatly increased farm production in Britain. Great Britain was also rich in natural resources such as water and coal. which had created many fumes. and other foods grown on British farms using the new inventions bettered the health and growth of the population. However. which enabled farmers to plant seeds in straight rows. as well as lessened the fire hazard gas lamps had proposed. such as stronger steel. women. Great Britain grew to become the most powerful manufacturing nation. to light city street lamps. nor doubt the strength of their country. as well as Henry Bessemer's renovation of steel production. Other new developments included a seed drill. fatty fried foods. Secondly. in all of Europe. Due to its sturdy financial and economic conditions. the positive outcomes of the Industrial Revolution rivaled the damage of its more negative effects. Scientific advancements of the time included new metalproduction techniques. In the beginning. Also. and workers’ conditions in the coalmines were not much better than in factories. Food was expensive for poor factory workers. helping to manufacture a much greater amount of new machinery. harsh punishments. which meant there were more workers to help run industrial factories. The enclosure movement restricted the ownership of public farmlands specifically to the wealthy landowners. improved the city’s sanitary conditions. not only the wealthy and upper class. Better lit cities also contributed to a decreased urban crime rate. yet come to view the word "technology" in a completely new way. and slaves internationally was largely a result of this control. and other devices which increased transportation and also the movement of workers and new industrial ideas as well. each which increased the level of safety in cities. First of all. As a result of this movement. These could provide an ample energy supply for trains. Each of these improvements aided both the production and transportation of products and materials used for trade and in industrial factories. Public education also developed. a new water system. could rid 21 . and even children workers to low wages. as well as cramped and impoverished.
themselves of ignorance and illiteracy. The term ‘flaw’. document this fact. and social state. As a result. Functional Obsolescence Functional obsolescence results from a flaw in the structures. in this context. political. Due to this era in Great Britain. and other businessmen worked to increase Britain’s production of goods. self-improvement in the workplace. because of Britain’s national economic. entrepreneurs. the property may realize a corresponding reduction in value. University to business technology transfer offers specific challenges. remains unchanged today a. yet. beyond those encountered in industry more widely. • Rate of technology transfer • Life cycle and speed of technological obsolescence Traditional mortality studies alone are insufficient to assess the depreciation of utility property that is subject to technological obsolescence. First. instead of the usual twelve to sixteen. a new labor code was made. international trade enhances the speed of technology transfer. Human capital primarily reflects output through private rates of return (captured in our index of labor quality) rather than measured TFP. and proved the benefits of a futuristic way of thinking. lessening worker’s hour requirement to eight hours. The Industrial Revolution caused the people of Britain to turn away from the past. it can be concluded that.. promoting more international commerce for Britain and supporting an increase in profits on Britain’s industrial goods. indeed. This paper examines the issues in university to business technology transfer in the UK and USA and presents the results of a survey of UK and US university technology transfer officers. revolutionary. the consistency of university technology transfer policies and the accessibility of university technologies to business. developed in the first half of this century.e. new inventions and innovations contributed to a more modern outlook on life. the Industrial Revolution was. b. Assessing Technological Obsolescence Obsolescence is a measure of an asset’s loss in value resulting from a reduction in the utility of the asset relative to market expectations. external obsolescence and functional obsolescence. Because of the increased production of machinery which further forwarded industrial advances in technology. The study also looks at perceived barriers to university to business technology transfer and offers suggestions for possible improvements to the process. using a single mortality survivor curve for all vintage for all future years. or design that diminishes the function. and human capital in stimulating each source of productivity growth. While R&D raises rates of innovation. Several writings. the products of city factories became cheaper and more available. factory owners were able to provide their workers with higher wages and better working conditions. grossly understates the true impact of technological obsolescence. in which British factory owners. materials. the country was ready to surmount on its shoulders the immense change of the Industrial Revolution. The development of new machinery in factories added to the safety of working there. Technology transfer is statistically signi8cant and quantitatively important. Thus. if market expectations increase. Such a loss in value is said to be the result of obsolescence. international trade. and decreased the need for so many workers to labor for strenuous hours. the current mortality analysis process. It should be noted that while the absolute usefulness of an asset may remain constant. published in the early 1980's. utility. and instead to look toward improvements in their way of life which would last through upcoming years. As a result. industrial businesses received more income from consumers’ purchases. Thus. technological obsolescence is having a more profound impact on the future economic life of utility property today than it had in the past. Thus. Findings indicate significant differences in the motivations of universities in each country to transfer technology. Second. the theory of capitalism developed. There are two principle reasons for this. the current process. i. and value of an asset. refers to any deficiency in the asset which negatively impact its ability to perform the 22 . In this respect. There are two forms of obsolescence. We examine the roles played by research and development (R&D).
producing muffins and cakes. His energy bills had jumped 70% in the past 18 months.” said Griffin. said energy costs made up 30% of his total overheads. BUSINESSES across the country are heading into annual negotiations with their power suppliers. from £30. it generally overshadows all other causes of obsolescence. The loss in value resulting from this deficiency is a form of functional obsolescence. London and Manchester. facing the prospect of a 100% rise in their annual bills.desired function. Customer expectation is a typical example: New and more powerful generations of personal computers increased customer expectations for personal computing power. Wholesale electricity prices for this winter have jumped to more than £90 per megawatt hour. Flaws are relative to need.000 – the firm has already been hit by soaring wheat costs.000 to £160. Conclusion: Both technological obsolescence and traditional life cycle factors affect the useful life. provide further evidence of the inflationary pressures on the economy. With the rapid pace of technological change. Ignoring technological obsolescence and its unique mortality characteristics will result in a gross overstatement of the life. The only way round it is to take advantage of any opportunities. this is. both should be taken into account. Jon Davies. As such. Even some household names are complaining about soaring power bills. One chain of six small car dealerships in South Yorkshire has seen its electricity bill jump to £60. called Technological Obsolescence. behind closed doors. which are hammering company finances across the length and breadth of the country.000 a year. with his three data centres alone now consuming £1. if the need evolves over time and the asset can no longer meet the need. A specialist baker in Surrey. “Now I worry about how much I can squeeze out of each unit of power. Tesco has seen all the financial advantages of an eight-year programme to halve its power consumption through green initiatives wiped out by soaring prices. In fact.000. and they do so simultaneously.5m worth of electricity a year. James Griffin. managing director of Supplier switch. an internet services company that employs 250 people in Gloucester. Relative to customer expectations (needs) older PCs have a flaw or relative deficiency. In this paper. then the asset’s value is impaired. The soaring costs. “I used to worry about how much revenue I could generate from each of my servers. technological obsolescence is the principle focus of the obsolescence analysis. given the economic conditions.” Laurence Dupree at consultancy Bearing Point said: “In the past three months we have seen a massive increase in companies looking for savings across the supply chain. You can’t just wait until your contract is about to expire before you start shopping round for deals. A car-parts manufacturer in Manchester. About half of UK small and mid-sized firms strike annual energy contracts that expire in October. when technological obsolescence is occurring. and are beginning to enter talks about new deals. has seen its power bill climb from £70. Gas prices have jumped 130% over the same period. Technological obsolescence is one form of functional obsolescence. according to Supplierswitch. meanwhile.” Griffin’s fears reflect the experience of firms across the UK.000 a year ago. While the power of older PCs remain constant. It is no longer about improving brand image.com’s figures – an increase of 110% on last winter. technological obsolescence is the principle cause of functional obsolescence today. has seen its bill jump from £250.000 to £500.” • Energy use and costs 23 . said: “Clearly it’s not going to be easy for companies to pass all of those increased costs on to their customers. consumer needs increase. a director of Star. it is about looking for efficiency savings in the face of rising costs.
This helps businesses to: • Remain up-to-date • Drive business forward • Sustain and survive competition In short. Technology is a very broad concept and is used to refer to several braches of science and study. Today. Some examples are information technology. technology has become one of the significant factors that maximize an organization’s ROI. etc. so do its benefits. understands the new IT issues facing a business. and drives the IT changes from the top down. Let us look at the benefits of technology in some major areas of day-to-day life: Benefits of Technology in Business: The days when the Chief Information Officer (CIO) took implementation decisions and passed the responsibility down the line are passed. biotechnology. This is a clear indicator of the benefits businesses are enjoying through the implementation of technology. Today technology is an integral part of any business right from the purchase of computers and software to the implementation of network and security tools.• (Changes in) Information Technology The term “Technology” is derived from the Greek word “Technologia” and “techne” means “craft” and “logia” means the “study of something”. As the term technology branches into various fields of science and study. Benefits of Technology in Communication: 24 . the CIO is an individual who possesses business as well as technical skills. medical technology.
Greater access: Technological advancements have opened education to learners with learning disabilities and in remote locations. 25 . discussion boards. broadcasting news. The means and the modes of communication are unlimited. This saves cost and time by reducing office visits. making travel arrangements to university admissions. technology has made it faster.and time-saving: Physicians can follow-up. They set their own goals. Communication: The world is a smaller place and technology allows everyone to keep in touch with their families and friends at a more affordable cost. or simply paying bills. manage the process and content of learning. Benefits of Technology in Healthcare: The marriage between medicine and technology has reshaped healthcare and revolutionized the medical profession. Cost. Some of the benefits of technological advancements in the field of communication are: • Speed : time is no longer a constraint in communication • Clarity : With megapixel images and video. banking. and patients with complex birth defects. Immediate response: Most e-learning programs provide immediate feedbacks on learner assessments. Medical devices: Medical aids allow patients to continue recovery at home reducing their hospital stay. and high fidelity audio systems clarity in communication has become a never-before experience • Proximity : technological advancements have made the world a smaller place to live in • Dissemination : whether spreading information. Vulnerable population: Technology aids the very young. easier. and communicate with peers. Self-paced: Learners can chart courses at their own pace. Speed: From sending gifts to making payments everything gets a done with a few clicks. Flexibility: Physicians can answer routine and less critical queries at a convenient time. Some of the benefits are: Convenience: Provides a great deal of convenience in expediting personal and business transactions be it shopping. elderly. e-libraries. etc that allow clarifications at a faster pace than in traditional classrooms. Some of the benefits of technology in this field are: Personalized learning experience: Learners are able to take control and manage their own learning. Some of the major benefits are: Secure environment: Technology allows physicians and patients to interact in a secure and comfortable environment to discuss sensitive issues. and smarter Benefits of Technology in Education: Technological advancements in the field of education are fast evolving. Development: Technology has brought about development in many fields such as medicine. banking.From hand-held computers to touch phones. and disabled children by alleviating their problems so that they can continue living in their homes Benefits of Technology in Society: Today technology pervades almost all aspects of our daily life from shopping. saving time and cost. Today. and re-direct patients to resources on the Internet. This ensures higher levels of motivation both in terms of completing the course as well as in performance. technological advancements in the field of communication are endless. Similarly there are features such as chat. chronic illnesses. Accuracy: Technology has reduced errors in mundane and monotonous chores. provide advice. or sharing knowledge. e-learning is a familiar and popular term.
Examples of mobile IT devices include: • (Changes in) Mobile Technology a. e. business. reductions in inventory. g. • (Changes in) Internet Top concerns for CEO's in today's business environment are: a. education. for internal activities such as knowledge sharing and new product development. the threat posed by competitors. c. e. acquisition of new customers and increased sales. improved efficiency and productivity. Bluetooth . J.technology that is portable.change management. General Electric and many others are reporting benefits from the use of the Internet. people and business processes as an integrated whole and making changes in all these dimensions. c. Mobile technology is exactly what the name implies . Business of all sizes in all sectors are using the Internet in many different ways . laptop and notebook computers palmtop computers or personal digital assistants mobile phones and 'smart phones' global positioning system (GPS) devices wireless debit/credit card payment terminals Mobile devices can be enabled to use a variety of communications technologies such as: a. cost savings. E-business online is itself capable of delivering these benefits. finding new opportunities. Sainsbury. wireless fidelity (Wi-Fi) . organization. b. d. Technology has evolved and transformed our lives and society. information and knowledge sharing. h.to work with partners and suppliers. However the benefits are achieved not by technology (which is an enabler) but by addressing strategy.government. These benefits include: a. c. good implementation practices and clear business objectives are required in order to reap the full benefits. e. f. improved communications. b. Improved customer service. b. Overall. The Internet is just like other information technologies . it has brought about tremendous growth and benefit to mankind. and much more. d. etc. for procurement. d.a type of wireless local area network technology b. technology. i. Better customer focus and service. Companies such as United Technologies. better transfer of best practices. and improving responsiveness. controlling costs. improved speed of response. harmonization and standardization of procedures.connects mobile devices wirelessly 26 .
enabling home working. The growth of cloud computing has also impacted positively on the use of mobile devices. place an order online This leads to great flexibility in working . For example. Benefits Mobile computing can improve the service you offer your customers. networking 'hot spots' are being provided in public areas that allow connection back to the office network or the internet. 27 .c.data networking services for mobile phones d.allowing you to update customer details whilst away from the office. 'third generation' (3G). you could: a. set up a new customer's account b. Increasingly. For example. by using a wireless payment terminal diners can pay for their meal without leaving their table.data networking services using modems and telephone lines e.secure access to a private network It is therefore possible to network the mobile device to a home office or the internet while travelling. Alternatively. More powerful solutions can link you directly into the office network while working off site.for example. you can enable customers to pay for services or goods without having to go to the till. or working while travelling. supporting more flexible working practices by providing services over the internet. global system for mobile communications (GSM) and general packet radio service (GPRS) data services . virtual private networks .over the internet . dial-up services .. for instance to access your database or accounting systems. when meeting with customers you could access your customer relationship management system . For example. check prices and stock availability c.
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue reading from where you left off, or restart the preview. | https://www.scribd.com/doc/143804665/61663095-Pest-Analysis-England-2 | CC-MAIN-2016-40 | refinedweb | 12,703 | 59.8 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Mon, 29 Apr 2019, Lukasz Majewski wrote: > +/* Support for 64 bit version of clock_* Linux syscalls. > + > + Support for following time related (and Y2038 safe) syscalls has been added > + in the 5.1 Linux kernel: > + > + clock_gettime64 (nr. 403) > + clock_settime64 (nr. 404) > + clock_getres_time64 (nr. 406) > + clock_nanosleep_time64 (nr. 407) > + */ > +#if __LINUX_KERNEL_VERSION >= 0x050100 > +# define __ASSUME_64BIT_TIME 1 > +#endif This comment and macro definition are the key thing that need reviewing, probably over several iterations, before the rest of this patch series can be properly reviewed. It is critical that the comment is completely clear and unambiguous about the exact macro semantics on the various relevant classes of architectures. See what I wrote in <> and <>. I don't think the comment meets those requirements at present - that is, if you try to deduce from it what the macro definition should be on all the listed classes of architectures, either the conclusion is not clear or it sometimes conflicts with the actual definition. In particular, for existing 64-bit architectures, my understanding is that the kernel *does not* add new syscall names or numbers for the syscalls you list, and so it would be incorrect to define the macro in that case, but this patch defines it anyway. Note 1: the comment should not reference the above URLs; it should be self-contained. As stated in the second message above, it needs to be clear to people who haven't read any of the mailing list discussions around Y2038 issues. Note 2: if the comment actually needs to define the classes 1a, 1b, 2a, 2b, 3, it's probably using the wrong abstractions. It should be very careful to refer to the abstraction that actually most reliably determines the presence or absence of the new syscalls (which might be the size of "long int" used in the syscall interface - glibc's __syscall_slong_t, which happens always to be the same size as __TIMESIZE for existing glibc ports but might not be for future ports - but make sure of that). Once the relevant abstraction is very clear, the reader can deduce the answer for each class of glibc ports. Note 3: it's wrong to state the syscall numbers in the comment; that is not a relevant part of understanding the interface. Stating the names, however, makes sense, provided you make sure not to use the __ASSUME_ macro for any *other* syscalls without updating the comment, and, at that time, reviewing whether the same definition conditions still work for all those syscalls. (Given that, as previously discussed, there might be *some* new syscalls even for architectures that already have 64-bit time, in order to provide timespec-based versions of syscalls currently using timeval.) -- Joseph S. Myers joseph@codesourcery.com | http://sourceware.org/ml/libc-alpha/2019-04/msg00600.html | CC-MAIN-2020-05 | refinedweb | 469 | 58.01 |
View Complete Post
I am developing a custom task in VB.NET using VS2008.
My Project sets the Assembly Name to MyCompany.Dts.Tasks.MyTask, the Root Namespace is
MyCompany.Dts.Tasks
All my classes define the namespace as MyCompany.Dts.Tasks
When I compile the class library, and view the component in the "Choose Toolbox Items" dialog in VS, the Type Name is:
MyCompany.Dts.Tasks.MyCompany.Dts.Tasks.MyTask
Where is this extra decoration coming from?
Thanks.
Thank you
Paul DiLascia
MSDN Magazine November 2000
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/22698-custom-task-icon-issue.aspx | CC-MAIN-2017-43 | refinedweb | 102 | 51.85 |
(14)
Shivprasad (8)
Mahesh Chand(6)
Matthew Cochran(4)
Nipun Tomar(4)
Puran Mehra(4)
Rajeev Kumar(3)
Microsoft Press(3)
manish Mehta(2)
shrijeet (2)
Filip Bulovic(2)
Ashish Singhal(2)
Michael Livshitz(2)
Dhananjay Kumar (2)
Pramod Singh(1)
TimothyA Vanover(1)
Shivani (1)
Paul Abraham(1)
Manisha Mehta(1)
Baseem George(1)
Dipal Choksi(1)
Jay Smith(1)
Nicholas Christopher(1)
Chris Blake(1)
Satya Akkinepally(1)
Amit Kumar Agrawal(1)
Benjamin Wootton(1)
Mike Gold(1)
Angel Tsvetkov(1)
Surapureddy Sriram(1)
Ravi Rama(1)
Cinnie Patel(1)
Mostafa Belal(1)
Anand Thakur(1)
Munir Shaikh(1)
Justin Finch(1)
Pradeep Tiwari(1)
Amr Monjid(1)
Santhosh Veeraraman(1)
Sudheendra Desai(1)
Mohammad Elsheimy(1)
John Charles Olamendy(1)
Rick Malek(1)
Ashish Shukla(1)
Ravish Sindhwani(1)
Srihari Chinna(1)
Scott Lysle(1)
Manikavelu Velayutham(1)
Bechir Bejaoui(1)
Venkatesan Jayakantham(1)
Destin joy(1)
Vivek Srivastava(1)
Suthish Nair(1)
Ramesh dharam(1)
Matt Perdeck(1)
Vishal Nayan Database Access Class with Overloaded SELECT Statements
Jan 24, 2001.
A useful database access class with overloaded SELECT and other statements..
Defining Custom Entry Points
Apr 30, 2001.
In this article i am going to demonstrate how we can manipulate the IL code to change the behaviour of how the .NET programme executes.
Wireless Model : How Does It Work?
May 09, 2001.
The browser sends an HTTP request to the Web server, which interprets the request and determines which resources to retrieve or execute. If the URL specifies a file, then the server sends it back..
Working with Reflection
Jul 05, 2001.
This article is about the use of reflection package in Csharp. Through Reflection we can get Information about the class.
XML Generator
Aug 17, 2001.
This article is about advance use of reflection package in Csharp..
A Database Component: Inserting Serializable Objects
Apr 23, 2002.
The attached zip file contains two projects - a database component (DLL) project and a Windows forms project called Test. Both projects are written and compiled using Visual Studio .NET version 1.0.
Shell Commands within C#
Jul 10, 2002.
In this article we will examine a few examples for executing shell commands outside of our program using C#.
Using DTS from C#
Sep 16, 2002.
In this article I will concentrate on enumerating, executing and changing properties or global variables of DTS package.
Using SQL in .NET: Part 1
Oct 04, 2002.
In this article, I will show you how to execute SQL queries from your C# applications.
DTS Custom Task in C#
Jan 28, 2003.
During last year I wrote an article about DTS and how to use it in C#. In that article I stated that I didn’t manage to solve the problem related to CustomTask. Today I finally forced myself to tackle that problem again and here is the result. Writing simple CustomTask in C# is the easiest part.
An XML-RPC Package for .NET
Feb 06, 2003.
This package provides a simple XML-RPC client and server for C# applications. It's pure C#, using .Net for networking and XML functionality.
Run Time Compiler
Feb 07, 2003.
In this article, I am going to discuss how to take a string then compile it and then how you execute it.
Data Access Component for SQL Server in C#
Jul 07, 2003.
The attached source code a data access component for SQL Server that contains methods to get a DataSet.
Distributed Computing Using .NET Remoting
Jan 05, 2004.
AppDomain is an isolated environment for executing Managed code. Objects within same AppDomain are considered as local whereas object in a different AppDomain is called Remote object...
Asynchronous Data Access using Callback Model
Aug 17, 2005.
The problem with the ADO.Net 1.x is that one thread has to wait for the other thread to complete. So to overcome this problem Microsoft has introduced Asynchronous data access, through which one can execute multiple threads at a time.
Top 10 New Features in ADO.NET 2.0
Aug 19, 2005.
This article summarizes the new and updated features of ADO.NET 2.0, which ships with .NET Framework 2.0. In my following articles I will discuss these features in more details with sample examples.
RegisterWaitForSingleObject in ThreadPool
Nov 04, 2005.
In this article, we are going to describe how to use Threadpool to execute a thread at specified time interval or instant by using Threading AutoResetEvent.
How to Execute Oracle Stored Procedures Dynamically in C#
Nov 10, 2005.
In this article, I wiil show how we can store schema of stored procedures in an XML file and load and run the stored procedure from UI application using C# and Oracle..
System Date Time Check Format
Oct 18, 2006.
This program executes the system date time format. Is it in dd/mmm/yyyy or other format and the which cultuerInfo is the system running on.
How to Execute an Application in a Remote Application Domain
Jan 18, 2007.
This article explains executing an application in a remote application domain
OleHelper for Microsoft Access (.MDB) in ASP.NET 2.0
Apr 18, 2007.
This article is very much similar to the Microsoft Data Access Application Block, provides static methods in a "sealed class OleHelper" where I have Execute Dataset, ExecuteNonQuery & PrepareCommand to work with Microsoft Access database.
Error Opening SSIS Package
May 30, 2007.
I got a weird error when trying to open an SSIS package in VS2005.
Using the BackgroundWorker Component in .NET 2 Applications
Jul 03, 2007.
In this article I will show (step-by-step) how you can use the BackgroundWorker Component in .NET 2 applications to execute time-consuming operations.
Multi-threaded Asynchronous Programming in C#... Through The Web Page. Part VI
Jul 27, 2007.
Async web pages which are primarily used as a server performance enhancement. There are two main methods we will be looking at that are used to execute operations asynchronously from web pages: Page.AddOnPreRenderCompleteAsync() and Page.AddOnPreRenderCompleteAsync()..
Dynamic Building and Execution of Assembly Using CodeDome & Reflection
Feb 01, 2008.
This article shows how to enter code, compile it , execute it and apart from that access code and data member from the main or other referenced assembly during runtime.
Using the BackgroundWorker component
Mar 12, 2008.
This article discusses the BackgroundWorker component in .NET 2.0, it will show you how to use it to execute tasks in the background of your application
Executing multiple stored procedures within a single transaction context from ADO.Net
Mar 20, 2008.
This article discusses how we can execute multiple stored procedures in ADO.NET with different parameter values in a single Transaction context.
Methods, Delegates, Anonymous Delegates, and Lambda Expressions... Oh My!
Mar 22, 2008.
This article covers different syntax we can use for declaring and executing methods.#.
High Performance Abstraction Layer for Non-Blocking Asynchronous Thread Safe SqlServer Calls
Apr 25, 2008.
One.
Multiple Active Result Sets (MARS)
May 12, 2008.
This article gives you information about the Mars a feature supported by MSSqlServer 2005 and .Net 2.0/3.x, which is a feature to execute more than one query with one single connection to database.
Model View Presenter (MVP)
Dec 31, 2008.
In this article we will understand MVP, execute a sample project with MVP, implement the same using windows UI and then finally we will discuss about the differences between MVP and MVC.
Unified Modeling Language (UML): Part II
Feb 27, 2009.
This is second part of the series of articles on Unified Modeling Language (UML).
Include Files in Application Package in Silverlight 2.0
Mar 13, 2009.
This article describes how to include files in application .xap package and read them.
State Machine Work flow Discussion
Mar 14, 2009...
How to improve your LINQ query performance by 5 X times ?
Jul 17, 2009..
Stored Procedures
Aug 13, 2009.
In this article I will explain you about stored procedure. How to create, execute and alter stored procedure. Why to use stored procedures and advantages of stored procedures.
LINQ Deferred Execution
Aug 24, 2009.
In this article we will see how LINQ defers the execution of the query till you request the results. Plus, learn how to execute the query immediately.
Client execution from server controls
Sep 09, 2009.
In this article I will expalin how to execute client script from server controls.
The Architecture of Visual SourceSafe 2005
Sep 11, 2009.
In this article, I want to take a look at the architecture of Visual SourceSafe 2005; an important package which can be used in software development projects as a source control system.
Working with COBOL Data Groups
Oct 19, 2009.
COBOL programmers have used data groups to package data together for a long time. How though does a non-COBOL user access the data being used by the COBOL programs?
5 simple steps to execute unit testing using NUNIT
Nov 22, 2009.
Unit.
The CommandType Enumeration in ADO.NET
Jan 05, 2010.
In this article I will explain the CommandType Enumeration in ADO.NET.
Simple 6 steps to run your first Azure Worker Role Program
Jan 07, 2010.
In this article we will try to understand 6 basic steps to create worker role project and as we run through the article we will try to understand the various fundamental methods which are executed in worker role projects.
Adding Editor, and Deleting data in Web Forms using ADO.NET
Feb 11, 2010.
In this article I will explain Adding Editor, and Deleting data in Web Forms using ADO.NET.
Executing Asynchronous Web Services
Feb 16, 2010.
In this article I will explain Executing Asynchronous Web Services.
System.Windows.Xps.Packaging Reference Missing
Mar 01, 2010.
If you need to use XpsDocument class in your WPF project, you must add reference to System.Windows.Xps.Packaging namespace.
Executing a Stored Procedure Programmatically
Mar 04, 2010.
In this article I will explain executing a Stored Procedure Programmatically.
Deployment and Packaging in .NET
May 07, 2010.
In this article I will explain you about Deployment and Packaging in .NET.
Multithreading in WPF Part I
May 27, 2010.
Multithreading means executing more than one code block at a time .
How to: Deploy or Retract WSP Package in Sharepoint 2007
Jun 17, 2010.
In this article I will show you how to deploy and retract a WSP in a Sharepoint web application.
Working with Entity Data Model and Executing Stored Procedures
Jun 21, 2010.
In this article we are going to work with the following concepts that EDM support..
BackgroundWorker in C#
Aug 20, 2010.
A BackgroundWorker component executes code in a separate dedicated secondary thread. In this article, I will demonstrate how to use the BackgroundWorker component to execute a time consuming process while main thread is still available to the user interface..
Performance Tuning High Level- SQL SERVER
Aug 26, 2010.
You can view the query execution plan while executing the query in your query analyzer. Through that you can tune your query. It will show you whether the appropriate tables are using the proper index or not.
Connect to an SSIS package and consume its data from .Net application
Sep 01, 2010.
In this article, I will show how to connect to a Data reader destination component within a given SQL Server Integration Services package..
SQL Profiler InputBuffer Events in SQL Server
Sep 10, 2010.
In this article we will see how to find the last execution command of your user in SQL Server.
Scheduling SSIS Package Using SQL Server Agent
Oct 08, 2010.
This article shows how to schedule a SSIS package using SQL Server Agent.
How to Run a Power Shell Script in SharePoint 2010
Oct 27, 2010.
In this article I am showing you how to execute power shell scripts.
Incremental Cube Refresh using SSIS
Nov 03, 2010.
This article shows how to incrementally refresh cube using SSIS.
How to Send Email Using Send Mail Task of SSIS
Nov 03, 2010.
In this article we will see how to send email using Send Mail Task of SSIS
Chapter 4: From 2003 to 2010: Deploying an Application
Nov 25, 2010.
After reading this chapter on deployment techniques, you will be able to Deploy a Web application and an SQL database using Web Deployment Packages. Deploy a Web application using One-Click Publish.
How Create Excel File Through Open XML Package (OOXML)
Dec 14, 2010.
In this article I will explain how we can create Excel File through OOXML.
Deploy and run your Cloud Application project - Create Service Package Only
Dec 25, 2010.
This article demonstrates how to deploy and run your first Cloud Application on Windows Azure.
Deferred Execution and Immediate Execution in LINQ
Dec 31, 2010.
Deferred Execution executes a query only when a loop starts. What I mean here is that, we iterate through the query variable to get the result.
Compiled Queries in LINQ
Jan 07, 2011.
There might be a scenario where we need to execute a particular query many times and repeatedly. LINQ allows us to make this task very easy by enabling us to create a query and make it compiled always. We call this type of query a compiled query.
Web Solution Packages
Jan 11, 2011.
Managed code assemblies can be deployed to a SharePoint server using a new capability called Solution Deployment...
Understanding C# Constructors and it's type
Mar 03, 2011.
In other words a constructor is a class default method that gets automatically executed whenever a class's object is created or whenever the class is initialized.
SQL Server Integration Services (SSIS) - Transforming SQL Data to Excel Sheet
Mar 14, 2011.
In this article we will see how data can be transferred from SQL Server DB to Excel sheet.
Basics of SQL Server Integration Services
Mar 14, 2011.
In this article we will see what a SQL Server Integration Services (SSIS) is; basic on why SSIS is used for, how to create a SSIS Package and how to debug the same.
SQL Server Integration Services (SSIS) -Import Data using Wizard
Mar 15, 2011.
In this article we will see how to import data from SQL Server using the wizard which is provided with SSMS (SQL server Management Studio).
SQL Server Integration Services (SSIS)- Export Data using Wizard
Mar 15, 2011.
In this article we will see how to export data from SQL Server using the wizard which is provided with SSMS (SQL server Management Studio).
JIT (Just-In-Time) Compiler
Mar 16, 2011.
JIT is the just in time compiler, it is quite different with other compilers. It compiles the source code just in time, and it does not compiles hole code, only the code is to be executed is compiled.
SQL Server Integration Services (SSIS) - Breakpoints in SSIS Packages
Mar 17, 2011.
In this article we are going to look into the options to debug the SSIS package, mainly on the Break point options available with SSIS.
SQL Server Integration Services (SSIS) - Scripting in SSIS Packages
Mar 17, 2011.
This article describes the scripting option available with SSIS, which is nothing but the Script Task component in SSIS.
SQL Server Integration Services (SSIS) - Options to Deploy a Package in SSIS
Mar 17, 2011.
In this article we are going to see how to deploy a SSIS package upon completion of developing, building and executing the package.
SQL Server Integration Services (SSIS) - Options to Execute a Package in SSIS
Mar 17, 2011.
In this Article we will see How to Execute a SSIS Package Upon Completion of Developing and Building the Package.
SQL Server Integration Services (SSIS) - Building and Debugging a Package
Mar 17, 2011.
In this article we will look into how to build and debug a package in SSIS.
SQL Server Integration Services (SSIS) – Checkpoints in SSIS Packages
Mar 19, 2011.
In this article we will look into the use of checkpoints in a SSIS package.
SQL Server Integration Services (SSIS) – Send Mail in SSIS Packages
Mar 19, 2011.
In this article we are going to see how to send mail from SSIS using SMTP
UML Diagrams: Part 1
Mar 21, 2011.
UML (Unified Modeling Language) is a standardized general-purpose modeling language in the field of object-oriented software engineering. This article describes various UML diagrams and their usages.
SQL Server Integration Services (SSIS) - Conditional Split Task in SSIS
Mar 24, 2011.
This is part of the series of article on SSIS where we shall see conditional Split Task in SSIS Package.
SQL Server Integration Services (SSIS) - Folder Structure in SSIS
Mar 24, 2011.
In this article I shall describe the folder structure used in deploying the SSIS package and the usage of those folders.
SQL Server Integration Services (SSIS) - Backup Database task in SSIS and Send Mail
Mar 24, 2011.
In this article we are going to see how to use a Backup Database task to create an execution plan in SSIS to take a database backup.
About SSIS-Execute-Package. | http://www.c-sharpcorner.com/tags/SSIS-Execute-Package | CC-MAIN-2016-30 | refinedweb | 2,825 | 58.38 |
1. Let's get started
Let's display some messages on the console and request the user to enter his name. Output a friendly welcome message with the user's name.
Run this application with the debugger and without the debugger. Can you see any differences? Try setting a breakpoint and play around with the debugger.
2. The first countdown
Use a
for loop to create a countdown from 10 to 1. Each number should be displayed after a waiting time of 1 second. Realize that waiting period by using
Thread.Sleep(1000) of the
System.Threading namespace.
3. Currency converter
Write a little currency converter as a console application. The user should be asked to enter a value. Once a value has been entered the user is requested to enter a valid currency exchange rate. Afterwards the converted value should be displayed. Finally, if no error occurred, ask the user if another value should be converted. If yes, then start again by asking the user to enter a value. Otherwise exit the program.
4. A simple calculator
Request the user to enter two numbers and an operation (
+,
- or
*). Use a
switch statement to perform the right computation. Display the result.
5. Computing prime numbers
The user has to enter some number, which should be evaluated by your program. If the program detects a non-positive, or invalid number, the program should exit. Otherwise the evaluation should display if the entered number has been a prime number or not. In case of not being a prime number, the program has to display all the prime factors of the given number.
6. Write a swap method
Write a simple method called
Swap() to exchange to given integers. The method should work in such a way, that the following program works:
int a = 0; int b = 1; //Calling Swap with a and b if(a != 1 || b!= 0) throw new Exception("Swap does not work yet!"); Console.WriteLine("Swap seems to work for this case!");
7. Guess a number
Write a guess-a-number game. The .NET random generator is available by generating a new instance of the
Random class. The game should give the user the opportunity to enter the range of numbers. Based on this range the number of tries should be determined (you can decide how hard the game should be). For each try the user has the ability to guess a number. If the guessed number is the generated number, then the user won the game. If the number of tries has been exceeded, the user lost the game.
8. Simple numerical integration
Approximate the integral of a function by taking the sum of the areas of rectangles, i.e. compute
F=∫abf(x)dx≈ΔxΣif(xi).
Pack everything in a method called
Integrate. This method should take 4 input parameters:
- A
double, which represents the start of the integral a.
- A
double, which represents the end of the integral b.
- An
int, which represents the number of rectangles N.
- A
Func<double, double>, which represents the function f.
Use your method to show that the value of the integral is becoming more precise as N increases.
9. Digit sum
Create a method, which has an
int argument (called number) and an
out of
int argument (called digits). The return type should be an
int as well.
This method would then compute the digit sum of all digits in the given number argument. The digits argument should be set to the number of given digits in number, while the result of the computation would be returned.
10. A useful class
Create a class, which simplifies receiving user input in the console. The class should have a default constructor and another constructor, which requires a
string to be passed. The given
string should be used as the message for the user the prompt. In case of the default constructor a default message should be displayed.
The class should have a method called
ParseInt() and another method called
ParseDouble(). The
ParseInt() method will try to get a valid integer value. If no valid integer value is passed in, it will display an error message and prompt the user to enter a valid integer value. Once a valid integer value is passed in, this loop is broken and the value is returned. The
ParseDouble() method does the same with a floating point value instead of an integer.
11. Extending existing classes
Create a class called
MyRandom, which derives from the .NET class
Random, but returns different values. Implement a new method called
Next() and a possibility to access the
protected method
Sample() from outside.
12. A simple but useful structure
Create a structure called
Point3, which contains 3 (private)
float variables called x, y and z. Create a property for each variable. Changing a variable (setting it) should result in output on the console. The output has to tell us which variable changed and what changed (old value, new value).
Look up the syntax for overloading the addition operator
+ (look it up by viewing the meta data (definition) of e.g. the
decimal structure). Implement it for your structure.
Finally use your structure in a new version of your calculator. This time not integers, but
Point3s will be used.
13. Vehicle, Car, ...
Start by creating an
abstract class called
Vehicle. Create two
abstract methods called
Accelerate() and
Brake(). Additionally you should place an (protected) integer variable called
tires. Add a property called
Tire, which defines a
get block for the variable.
Derive a class called
Car from
Vehicle. Implement the two functions in such a way, that they display the state change like "The car is is driving" or "The car is braking" on the console. Additionally extend the class with an integer property called
Gears (with
get and
set) and set the value of
tires to 4.
Finally derive from
Car and call the new class
Porsche. Override the method
Drive() with some string that might represent your attitude towards driving a Porsche.
14. Let's create a copy
Extend the class
Car of assignment 13 with a copy-constructor, i.e. a constructor which takes an instance of the own class as input parameter, and uses this instance to create a (usually deep) copy of all fields.
Also override the method
Equals() in a way that it is possible to compare one instance of the class
Car with another for equality.
Finally try to cast an instance of your
Car class to the
Porsche class. Why is this not possible?
15. (MP) Write a class for (2D) plotting
Write a project that makes storing plot-data in C# objects possible. The project has to fulfill the following requirements:
- An arbitrary amount of data series should be storable.
- Each series consists of an arbitrary amount of data points.
- Each data point consists of an X and a Y value.
- Each series contains information about the min x and y, as well as max x and y values.
- Each series has a label and a color (just a string) assigned.
The project aims to be very generic, such that the created plotting class construct could be used with various renderes. A renderer is a special class, which takes some input, analyzes it and starts streaming some output in a certain form. In this case you could input an instance of your plotting class into some renderer, which would transform it to e.g. HTML output, displaying something on the console or a bitmap image.
16. Communication between Forms
Write a small Windows Forms application that consists of two windows (forms). The first (main) window should have a
ComboBox with two values, "blocking" and "non-blocking", a
Button and an empty
Label.
The second window should have a
TextBox and two
Button controls, one with the text being "OK" and another one with the text being "Cancel".
Pressing the button in the main window should open the second window. If the value "blocking" has been selected from the
ComboBox control, then the
ShowDialog() method should be picked, otherwise the
Show() method is the way to go.
The second window should be closed on pressing one of the buttons. The difference between the two buttons is, however, that the OK button does also "transmit" the value in textbox to the label of the main window.
17. Getting to know the .NET-Framework
Place a variety of controls on a
Form and display some of the information you can read out using the
Environment class like the current username and the current directory.
18. Reaction speed
Write a Windows Forms application that measures the reaction speed using the classes
Random,
Timer and
Stopwatch. The
Timer can be placed per drag and drop on the
Form as a control.
First you use a random number to determine the value for the timer (when to display a label, that the button can now be pressed). Then, when the timer is firing it's elapsed event, start a new stopwatch measurement. Once the user presses the button the stopwatch should be stopped and the reaction time should be displayed.
19. A set of simple LINQ queries
Start by using the following LINQ query:
var random = new Random(); var list = Enumerable.Range(0, 100).Select(m => random.Next(0, 1000)).ToArray();
Create now a set of new variables based on the variable list. In total create the following variables with LINQ:
- Find the minimum element min in list.
- Find the maximum element max in list.
- Save the ordered array as a
List<int>in ordered.
- Save all odd elements of list in onlyodd.
- Take only the 10 elements after the first 15 elements and save them as an array in elements.
- Take the first element that is bigger than 999 and save it in hulk.
20. A custom user control
Create a custom user control that contains a
Timer control and a
Label control as well as a
Button. The user control should do the following:
- Show the button with the text "Start" and the interval time of the timer.
- When the button is pressed the time string on the label should be updated.
- The button should be named "Stop" now until the timer finishes (tick event).
- The button should now display "Reset". If that is being pressed then the initial state should be shown again.
21. Threads and communication
Implement a multithreaded program. One thread, the worker thread, is doing some time consuming "work" (i.e. counts to infinity) and reports periodically its progress to another thread, the UI thread.
A sample code for the worker thread (without reporting the progress) would be the following:
//Ping is available in System.Net.NetworkInformation Ping ping = new Ping(); //Replace true with a statement if the loop should continue while (true) { var reply = ping.Send("132.199.99.246"); //Use information from reply when reporting progress //Just wait a bit before making sending again Thread.Sleep(1000); }
One last thing: Avoid cross-threading exceptions!
22. (MP) Write a simple math tool
Write a simple math tool that let's users enter numbers in two textboxes. Only integer numbers separated by commas should be allowed. Place all user controls in a
TabControl.
There should be one
ComboBox element and one
Button control. The combobox contains the following strings:
- Intersection
- Union
- Except
- Symmetric Except
- Square
- Square Root
- Sum
- Min
- Max
If the button is pressed a function is called depending on the currently selected item in the combobox. While the first four functions are requiring both sets of numbers (obtained from the two textboxes), the other functions only require one of the sets.
It should be obvious that each function represents a LINQ function. The result of the LINQ operation should be displayed in a new tab (called results) of your form.
23. Async / await with C# 5
Compute the following method in an async task and await the result:
double ComputePi() { var sum = 0.0; var step = 1e-9; for(var i = 0; i < 1000000000; i++) { var x = (i + 0.5) * step; sum = sum + 4.0 / (1.0 + x * x); } return sum * step; }
Start the computation by presing a button. The button click event handler should then disable the button, await the result of the computation, set the result of the computation in a label and enable the button again.
24. The Task Parallel Library
Use the Task Parallel Library to make a parallel version of the method given in exercise 23. Try to avoid race conditions, i.e. synchronize the communication on shared variables. Reduce the required communication where possible and reduce the usage of shared variables to a minimum. Compare executation time and result with the serial implementation.
Hint Use
Parallel.For instead of
for and take the overload that defines a (thread-)local variable as well.
25. A little picture viewer
Use the
PictureBox control, a
ComboBox and a
FolderBrowserDialog to create a simple application, which lets a user open a certain directory. All images of the selected directory (the following types are enough: png, gif, jpg, bmp) should be displayed in the combobox. Once the selected index of the combobox changes, the picture behind the selected item should be displayed in the picturebox.
As a little extra work you can try to implement a zoom and rotate operation.
26. Loading and saving notes
Nowadays really simple note apps seem to be in fashion again. Write a small program that uses your own class called
Note. An instance of
Note contains a creation date, a last update date, a priority enumeration value (think about good values in this enumeration), a title, remark and description, as well as a due date. Dates should be saved as
DateTime.
The application should be able to create new note instances, load existing notes from the file system and save notes to the file system. Use the
XmlSerializer for loading and saving instances of the
Note class. Use the
DateTimePicker as the control for any
DateTime value.
Hint Using the dialoges called
OpenFileDialog and
SaveFileDialog will help a lot.
27. Dynamic programming and webrequests
Create a method to read out XML files that have been placed on some webserver. In the first stage the content (XML) should be received. Afterwards, create a new instance of
XmlDocument as a
dynamic type.
Show that you can actually access the member of the XML document dynamically, i.e. that the following lines of code,
var document = new XmlDocument("..."); var element = document.GetElement("root").GetElement("child");
could be replaced by the following code (in case of an XML document with a root node called root and a child node called child):
dynamic document = new XmlDocument("..."); var element = document.root.child;
Show that
object,
var and
dynamic are three different things.
The URL of a sample XML document is.
28. Events
Extend the custom user control (exercise 20) with some events called
- Started,
- Stopped, passing the passed time and the reason (button pressed or time over) in the event arguments and
- Updated, passing the current time in the event arguments.
Create an event-handler for the
Updated event in the code-behind (no-designer) and update the name of the form with the given information.
29. A simple paint program
Create a Windows Forms application that consists of a
PictureBox control. Once the user presses the left mouse button a line is drawn.
Additionally integrate some buttons that let the user choose a color and thickness for drawing a line. Think about how you could implement drawing helpers like drawing rectangles, ellipses and more.
30. A plotting tool
Use the plotting class from exercise 15 to create a tool that will parse simple math expressions, evaluate them in a given interval and plot the results.
Create a user control that let's you re-use all the work done before. The user control will depend on your plotting class and should perform the drawing of the plot. The parsing should be done in another class that just expects a string to be given.
31. Your first dynamic webpage
Create a webpage with ASP.NET MVC 4. The
HomeController should display a welcome page with an input box and a submit button on the
Index() action. Once a user presses the submit button a page should be shown which displays the content of the input box.
32. A simple Windows Store app
Recreate your reaction time game as a Windows Store app. You can reuse your logic (maybe you can now improve it), but you will have to re-implement the UI. Use the designer for doing the redesigning.
Solutions
| http://florian-rappl.de/Lectures/Page/54/exercises | CC-MAIN-2017-43 | refinedweb | 2,771 | 65.01 |
Enabling Pagination in Blazor with OData
Hassan.
In this article, we are going to talk about navigation from an abstract perspective as a plain API call, then leverage that power in a Blazor application to enable data navigation or pagination.
API Navigation
Clone and run the example project I built for OData with EDM and run the project then try the following API call:
The response to that would be:
As you can see the response contains the count of all the data that this API could provide, we are going to need that later for our pagination process.
Now let’s try to control the quantity of that data using $skip and $top functionality as follows:
The response to that would be:
You will notice that we have full control over navigating through that data.
The skip functionality will allow us to move forward in our list, while the top functionality will enable us to control the amount of data returned with every API call.
however, and since the data isn’t returned in any enforced particular order, using skip and top do not necessarily guarantee returning the same results every time, try to make the call multiple times and see the results change every time.
Therefore, we have to enforce some form of order to assure the returned results are consistent, so we are going to use OrderBy functionality to maintain that order.
Now your API call should look something like this:
The response to that would be always:
Now we have more consistent, reliable order of data.
But an API call with ordered data can only seem amazing to back-end engineers but not necessarily to end users, which means we need a powerful easy to use UI framework to help us put that kind of power into action and displaying it to the world.
Thanks to Daniel Roth and his amazing team we now have Blazor, a modern framework for building interactive client-side web UI using .NET and C#.
Integrating with Blazor
In order for you to start a Blazor project you need to have few prerequisites in place which are:
- Install .NET Core 3.0 on your machine, you can find it here
- In VS2019 go to option -> .NET Core -> Use Previews of the .NET Core SDK
Once that’s done, restart your Visual Studio and start a new project.
If you’re still having problems enabling .NET Core 3.0 on your machine, watch this tutorial.
Now you can start a new project with ASP.NET Core and Blazor (server-side), make sure you select ASP.NET Core 3.0 from the dropdown so you can find that type of project as follows:
Once the project is created, you will notice that Blazor comes with a pre-built web application samples such as counter and fetch data.
Let’s start by creating a new folder, call it Models, then let’s define the data models we need to create an integration between Blazor and OData API.
So what we need here is a student and an API response models.
We need the API response model because once we enabled OData with EDM the response became more than just a list of students, it returns more metadata that we are going to use for our pagination project shortly.
So your Student model will be just identical to the one we built in the API project:
But the API response model, let’s call it StudentsApiResponse will look as follows:
Now that we have built the models, let’s build a service class to call our API and serialize the incoming data into consumable strongly typed values, let’s call it StudentsService.cs.
And because we will need to do some JSON serialization then we need to install Newtonsoft.Json package to simplify the serialization process and the notations on our Models.
Once that’s installed, we type the following code to perform an API integration:
Let’s explain what the code is doing exactly.
We created a method GetStudentsAsync that returns the StudentsApiResponse that we get from our API call, and the method has two parameters to control the navigation through the API leveraging OData functionality to skip forward and control the size of the data coming back.
We are using HttpClient to perform an API GET call, we are enforcing the order by name as we have explained above, passing in the value of top and skip through string interpolation.
Then we deserialized the JSON response into StudentApiResponse.
This method is the point of integration between our OData-enabled API and our Blazor application, everything else from here is mainly focused on the rendering of that data.
In the Pages folder, let’s create a new file, FetchStudents.razor to render our data.
FetchStudents.razor will contain some C# code, HTML and Razor code to handle the service calling, rendering and navigation process.
Let’s start with the C# code, we need to reference at the top of our razor page the route let’s call it /fetchstudents – we also need to reference the namespaces where our models live and where our services live.
Finally we need to inject StudentsService so we can call the GetStudentsAsync method in our razor page.
Then we need to build three functions to handle the navigation of our data, one to be called to initialize the rendering and first page of data, one for navigating forward and one for navigating backward.
Here’s the code for these functions:
You can think of students, skip, top and count as global variables that are shared across all components in this razor page.
Each one of these functions makes an asynchronous call to our StudentsService method to get new data based on skip and top values.
You’ll notice that we control the values returned by increasing and decreasing the values of skip variable, everything else stays the same.
We made the code redundant for the purpose of this demo, otherwise the call to StudentsService can be simplified.
That wraps all most of the C# code we need for this page.
Now comes the HTML & Razor parts as follows:
We need to have a view when our data is still loading, which is when students are still in null state.
Now we need to render a table of students as follows:
Lastly, comes the navigation part, we need a next button, previous button and a label to show the count and how many pages left as follows:
We are showing the total number of pages in addition to the skip value + 1 because it’s zero-based counter.
Then we show and hide Previous and Next buttons based on whether there are any more navigation data in any direction or not.
The full code for the FetchStudents.razor page should look like this:
The next thing we need to do here is to register the StudentsService in the startup.cs file in the ConfigureService as a singleton as follows:
The last thing we need to do here is to add a navigation option to our Blazor app, so we are going to modify NavMenu.razor file in the Shared folder and add another option for fetching students as follows:
Now let’s run our projects, both the OData API project and Blazor project need to be running at the same time, you can easily configure your solution to do that by simple right-clicking on your solution file, go to properties, under common Properties select Startup Project (it should be selected by default) then choose Multiple startup projects as shows in the following screenshot:
Now, let’s run the project, your navigation menu on the left, go to Fetch Students and start experiencing full navigation experience with Blazor and OData
Final Notes
- Blazor and .NET Core 3.0 are still in preview stage at the time of this article, I encourage you to stay up to date with our blogs to learn more about the latest updates with these technologies.
- This is the source code of the project we built, let us know if you have any issues running it.
Bundling multiple powerful technologies such as Blazor and OData with ASP.NET Core might save you a lot of time implementing functionality that is simply a boilerplate, a functionality that doesn’t necessarily make your application any different from any other.
Our mission at Microsoft is to empower you to achieve more, because re-implementing a feature like pagination every time you need to list some data seems like a time-consuming task, instead we build technologies and frameworks like these to make your life easier, and help you get to your end goal faster. and we will continue to build powerful technologies that makes building robust mobile, web and desktop applications even simpler. because we want every developer to turn their ideas into a reality as fast as possible, whether it’s infrastructure, web development, cloud or mobile.
I encourage you to keep up with us on the latest products we offer in the software development world, we still have a lot to offer and we invite everyone to come celebrate innovation and success in our open source projects and our communities. | https://devblogs.microsoft.com/odata/enabling-pagination-in-blazor-with-odata/ | CC-MAIN-2019-26 | refinedweb | 1,543 | 54.97 |
At CrowdRiff, we’re constantly looking for ways to improve our projects. The recent trend towards CSS in JavaScript was hard to ignore, so we decided to implement the Styled Components library in one of our newer codebases.
To be honest, I was skeptical — I like CSS! I understand how it works and I like when JavaScript and CSS have distinct files. I know what files to look at when we have a visual bug, I have a fairly solid handle on how SCSS works, and I understand our codebase’s established patterns of CSS.
But setting skepticism aside, I decided to try it out.
Spoiler: We Didn’t Hate It
Disclaimer: Our stack is React/Redux, so the examples below assume a basic understanding of those frameworks.
For our use case, we didn’t convert our entire project to Styled Components. We chose to keep the majority of our CSS in the more traditional (i.e. separated) file structure and use Styled Components for elements whose styling (colours, images, etc.) is configured by the user.
What We Liked About Styled Components
- It Makes Components Less Bulky
In our use case, there’s a lot of heavy lifting that has to be done through CSS to inject user-specific styling into a component. Many components’ render methods end up containing style objects that clutter them up and split the CSS into two places, making the code harder to parse. Styled Components help keep the concerns of styling and element architecture separated and make components more readable.
Furthermore, when you have components that rely on JavaScript for their style, Styled Components gives control of those states back to CSS instead of using a multitude of conditional class names.
Below is a render method using props to set specific colours. the handleHover is not written below but it would be responsible for setting the ‘isHovered’ piece of state.
// The render method of some component
render() {
const style = {
color: `${this.props.themeColor}`,
}
const hoverStyle = {
color: `${this.props.hoverThemeColor}`
} return (
<button
onMouseEnter={this.handleHover}
onMouseOut={this.handleHover}
style={this.state.isHovered ? hoverStyle : style}
>
Click Me!
</button>
);
}
Here is the same code but using Styled Components:
// Declare your Styled Component, in the same file or a different file than your component
import styled from 'styled-components';const BrandedButton = styled.button`
color: ${props => props.themeColor};
&:hover {
color: ${props => props.themeHoverColor};
}
`
render(){
return (
<BrandedButton themeHoverColor="pink" themeColor="blue" >
Click Me!
</BrandedButton>
);
}
Instead of relying on two separate style objects and having to use React event handlers to setup a simple hover state, we use a styled component. This makes it easy to inject values that only exist in JavaScript into our CSS while still allowing CSS to handle the various UI states.
2. The ThemeProvider
The ThemeProvider is used as a wrapper that injects theme props into all of its child components. If you’re using a state library like Redux, you can avoid having multiple connected components request the same properties from state by using the ThemeProvider to pass these props to all your styled components.
We found the ThemeProvider particularly useful when building a ‘configuration’ page, which is essentially a duplicate page that allows a user to customize certain stylistic elements, like colour. We wrapped our configuration page in a ThemeProvider that was referencing a different piece of state than the non-editable portion of the app. That allowed us to reuse all the styled components while also showing the user feedback as they updated their stylistic elements.
3. The CSS Function
With Styled Components’ css function, you can use props to conditionally render css, which meant that we no longer had to render conditional class names based on props. This reduces clutter in your components as well as maintains a separation of concerns between CSS and JavaScript.
Take this button, for example:
import styled from 'styled-components';const Button = styled.button`
color: ${props => props.isSecondary ? ‘blue’ : ‘white’};
`
What if we wanted to add a third iteration of this button? It would quickly become difficult with the ternary pattern. With the css function from Styled Components, you can easily add as many conditions as you’d like!
Here, we used Styled Components to give a button different colours depending on its props.
// Declaring the styled component
const Button = styled.button`
color: ‘white’;
${props => props.isSecondary && css`
color: ‘blue’;
`}
${props => props.isDisabled && css`
color: ‘grey’;
`}
`
// Using the disabled iteration of the styled component
<Button isDisabled />
The button is styled to have white text by default, but if an ‘isDisabled’ prop is applied, the color property will be overwritten since it appears later in the style declaration, and the button will be given a colour of grey.
4. Testing
We implemented the Jest Styled Components library for our testing in Jest. It makes testing styled components painless by creating consistent class names and allowing you to assert specific CSS rules that might be important to the integrity of your app.
Below is an example of an assertion that Jest Styled Components allow you to make on your component:
expect(button).toHaveStyleRule('color', 'blue');
Pairing these simple assertions with the various states of your app is a powerful pattern to catch visual regressions.
What We Found Slightly Frustrating About Styled Components
There were certain style rules that we found hard to apply to Styled Components; namely, rules that set the placement of an element on a page (e.g. margin or display properties). They were difficult to standardize, so we still ended up leaning heavily on plain ol’ CSS for component placement in the flow of elements.
The syntax of Styled Components also takes some time to get used to, but if you’re familiar with template literals then it shouldn’t be too much of a stretch. Also, Styled Components does a great job with its docs and has been in the ecosystem for long enough that there are many examples around the internet!
The verdict: Use!
Overall, we’d definitely recommend Styled Components. Sure, there were some bumps along the way, but it reduced the complexity of our codebase. Styled Components FTW.
Photo by Marc Steenbeke on Unsplash.
Thank you to the CrowdRiff team for all of the editing! | https://medium.com/building-crowdriff/styled-components-to-use-or-not-to-use-a6bb4a7ffc21 | CC-MAIN-2021-25 | refinedweb | 1,033 | 54.42 |
The state of the Lambda in Ruby 1.9
forloop. The concept has many names in other languages and theory:
- lambda function
- anonymous function
- closure (e.g. the term used for the lamdba functions in Java 7)
This is a somewhat confusing term, because the term closure also refers to the capturing of the scope surrounding the code. A Block doesn't necessarily need to capture the scope - this code
x = lambda {|x,y| x + y}doesn't use any free variables (i.e. variables that are unbound; x and y are declared in formal argument list), and hence doesn't require the creation of a closure
(lambda (arg) "hello world").
Another language influential in Ruby's design, Smalltalk, uses a very concise syntax using brackets:
[arg| ^"hello world"].
Ruby's most convenient and often used syntax for Blocks is as a parameter to a function, which allows to simply append a Block surrounded by either
do/endor braces
{/}. Eg.
5.times {|x| puts x}It's convenient, and also allows idioms such as Builder, which allows to create hierarchical data structures very easily by using a nested Blocks. (Tip: An upcoming article here on InfoQ will explain the details of creating a Builder in Ruby - watch out for it in the 2nd half of January).
However, there was one problem: passing more than one Block to a function or method didn't work as easily. It was possible, but not with this shorthand. Instead, a Block had to be created using either the
Proc.new {}or
lambda {}notations. While not horrible, these options are much more verbose and introduce unwelcome tokens that clutter up the code. (Note:
Proc.new {}and
lambda {}notations have subtle differences as well, but this is not significant in this context).
Workarounds are possible for this in certain situations. For instance, if an API call requires multiple Blocks, helper functions could be mixed into the class to a) help with Blocks and b) have the side effect of looking like named arguments:
find (predicate {|x,y| x < y}, predicate{|x,y| x > 20})The
predicatefunction is nothing more than:
def predicate(&b)I.e. returns the Block. Whether this is appropiate or not depends on the specific use case. In this case, the shown code is - arguably - more expressive then the equivalent:
b
end
find (lambda{|x,y| x < y}, lambda {|x,y| x > 20})
Why? Because
lambdaleaks implementation details about how this is implemented - with one block argument, no extra keyword would be needed. The
predicatesolution annotates the code and generates the
lambda. To be clear: this is a workaround.
Ruby 1.9 now introduces an new, more concise syntax for creating lambda functions:
x = ->{puts "Hello Lambda"}The new syntax is shorter and removes the unfamiliar term lambda. To be clear: this is syntactic sugar. It does, however, help to write APIs that yield very readable code. Some of these APIs might be called "internal DSLs", although the definition for those are quite fuzzy. For these, the new lambda definition helps getting rid of the quite obscure term "lambda" in the middle of otherwise purely domain or problem specific code.
Sidu Ponnappa reports about another syntax change in 1.9:
Explicitly invoking one block from another in Ruby 1.9.0. This method was something I didn't even cover in my previous post, because the parser would simply blow up when parsing |*args, &block|. Here's what it looks like. [..]This code doesn't work in Ruby 1.8.x - it actually fails at the parser stage with:class SandBox
def abc(*args)
yield(*args)
end
define_method :xyz do
|*args, &block|
block.call(*args)
end
end
SandBox.new.abc(1,2,3){|*args| p args} # => [1, 2, 3]
In Ruby 1.9, this works fine.In Ruby 1.9, this works fine.benchmark3.rb:8: syntax error, unexpected ',', expecting '|'
define_method :xyz do |*args, &block|
^
benchmark3.rb:11: syntax error, unexpected kEND, expecting $end
Another change in 1.9 fixes a long standing issue: block arguments are now local. Take this code:
foo = "Outer Scope"In 1.8, the code would print "I'm not local to this block", wheras in 1.9 it prints "Outer Scope". In short, blocks now behave as expected: the block argument shadows the variable of the same name in the outher scope inside the block. (Let's preempt the question "How can I access the variable in the outer scope?". You don't - just choose a different name for the block argument).
[1,2,3].each{|foo|
foo = "I'm not local to this block"
}
puts foo
What do you think about the Ruby 1.9 lambda/block changes? Do they address all existing concerns or are there other problems left?
Tip: see all Ruby 1.9 stories on InfoQ.
I don't know
by
Michael Neale
new syntax comment
by
Roger Pack
z = proc {|x, y = 3| 33 }
SandBox example segfaults Ruby 1.9
by
Paul Harvey
See redmine.ruby-lang.org/issues/show/871 | http://www.infoq.com/news/2008/01/new-lambda-syntax/ | CC-MAIN-2014-35 | refinedweb | 839 | 65.93 |
world, unless the real need for that. That makes practical knowledge on this keyword is minimal for most of the programmers. This article explores with very simple explanations and example to make you understand when and why transient variable will be used in Java. If you are Java programmer and want to receive the weekly updates on Java tips to improve your knowledge, please subscribe to our free newsletter here.
also read:
What is Serialization?
If you want to understand what is transient, please learn about what is Serilization concept in Java if you are not familiar with that. Serialization is the process of making the object’s state is persistent. That means the state of the object is converted into stream of bytes and stored in a file. In the same way we can use the de-serilization concept to bring back the object’s state from bytes. This is one of the important concept in Java programming because this serialization is mostly used in the networking programming. The object’s which are needs to be transmitted through network has to be converted as bytes, for that purpose every class or interface must implement serialization interface. It is a marker interface without any methods.
What is Transient?.
Transient Keyword Example
Look into the following example to understand the purpose of transient keyword:
package javabeat.samples; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.io.Serializable; class NameStore implements Serializable{ private String firstName; private transient String middleName; private String lastName; public NameStore (String fName, String mName, String lName){ this.firstName = fName; this.middleName = mName; this.lastName = lName; } public String toString(){ StringBuffer sb = new StringBuffer(40); sb.append("First Name : "); sb.append(this.firstName); sb.append("Middle Name : "); sb.append(this.middleName); sb.append("Last Name : "); sb.append(this.lastName); return sb.toString(); } } public class TransientExample{ public static void main(String args[]) throws Exception { NameStore nameStore = new NameStore("Steve", "Middle","Jobs"); ObjectOutputStream o = new ObjectOutputStream (new FileOutputStream("nameStore")); // writing to object o.writeObject(nameStore); o.close(); // reading from object ObjectInputStream in =new ObjectInputStream( new FileInputStream("nameStore")); NameStore nameStore1 = (NameStore)in.readObject(); System.out.println(nameStore1); } } // output will be : First Name : Steve Middle Name : null Last Name : Jobs
In the above example, the variable middle Name is declared as transient, so it will not be stored in the persistent storage. You can run the above example and check the results.
your explanation is good pls give program explanation also
Why dont you want to persist the middle name?
Its just an example to explain the meaning of Transient keyword.
Yes. It is just an example to use the Transient keyword. If you have any suggestions, please post it here. Thank you for the comments.
In Swing/AWT i tried to serialize JPanel which is using GroupLayout. Unfortunatelly GroupLayout can not be serialized, so i try to set it's decleration with transient keyword. but this does not change anything? Why is that so, it only works if i remove grouplayout altogether, just declaring transient does not work… What could be the reason??
You can serialize JPanel properties (fields,…), and when deserialize create JPanel with the saved properties .
The use of transient is to tell the serialization mechanism that the field should not be saved along with the rest of that object's state.
i mean GroupLayout
Excellent explanation! I like explanations like this, reveals the complete truth about the transient keyword.
Serilization = Serialization”That means the state of the object is converted into stream of bytes and stored in a file.”
Maybe you meant “in a byte stream”?
Stopped reading on this sentence.
Please correct the horrible grammar in this article. It will make people think that you are sloppy.
Nicely explained but indeed please correct the grammar.
Done some update on the post. It was written without much thought,
A part from the ”Serilizition” , everything seems ok
Thank you for pointing out the problem, I have updated it
perfect thanks.. I have a question though.. new FileOutputStream(“nameStore”), new FileInputStream(“nameStore”).. where is the file extension? should it be.txt? I saw.ser extension in an article once.. I see it here completely ignored.. someone explain please..
Are you getting any error?. There is no need for providing the extension.
nice explanation
nice but just it’s joking
nice but just it’s joking
nice but just it’s joking
nice but just it’s joking
nice but just it’s joking
It likes [XmlIgnoreAttribute] in C# when using XmlSerializer.
A good explanation of transient keyword thanks:)
I learned something else today, thanks…. Never came across the transient keyword so far.
Krishna, Article is good and clears the concept.
However Please try to resolve the grammer issue, no offence but people will not take it seriously until grammer is fixed.
Hello Nirmal,
Thank you for the comments. I have updated the content.
Thanks,
Krishna
surprisingly clear article despite the grammatical issues. thx!
HI,
Thank you for the comments!!
Thanks,
Krishna
what prospers xsd in the webservices ?
09654457708
you can find some more details in the below link,””
final int a=23; // This makes constants
final int array[]={34,3445,5}; // but this snippet wont make constant but array Objects can make constants.
how to make individual array elements constant?
Dude you just made my day
Thank you!!
Thanks Krishna garu,
It would have been good if you specify some real time uses of the “transient” keyword.
Thanks,
Ram
Great post! | http://www.javabeat.net/what-is-transient-keyword-in-java/ | CC-MAIN-2015-06 | refinedweb | 923 | 60.21 |
in reply to
[solved] Help! Overloading operators doesn't work
Have you tried taking Moose and namespace::autoclean out of the picture?
Once I remove them and fix the remaining code, it works:
#!/usr/bin/perl
use 5.012003; # Perl version requirement
#===========================================================
# Percentage CLASS (script below)
#===========================================================
package Percentage;
#use Any::Moose;
#use namespace::autoclean;
sub new {
my $class = shift;
bless { int => @_ } => $class;
};
use overload
q{""} => \&to_string,
'0+' => \&to_num,
'+' => \&add;
sub to_string {
my $self = shift;
print "to_string()\n";
return $self->int();
}
sub to_num {
my $self = shift;
print "to_num()\n";
return $self->int();
}
sub add {
my ($left, $right) = @_;
print "add()\n";
my $result = new Percentage( $left->int() + $right->int() );
return $result;
}
sub int { $_[0]->{int} }
1;
#===========================================================
# SCRIPT
#===========================================================
package Example::Script;
use 5.012003; # Perl version requirement
use strict;
use warnings;
# printed in output :
my $a = new Percentage(2); # build
my $a_num = $a->int(); #
print "$a\n"; # Percentage=Hash(0x11cfe8c)
print "$a, $a_num\n"; # Percentage=Hash(0x11cfe8c), 2
my $b = new Percentage(3); # build
my $c = $b + $a; #
print "$b, $c\n"; # Percentage=Hash(0x116912c), 3693356
__END__
to_string()
2
to_string()
2, 2
add()
to_string()
to_string()
3, 5
[download]
I assume that something somewhere within Moose(::Any) or namespace::autoclean messes up the overloading. Maybe the bug reports or feature limitations tell a clearer story.
Update: Searching Moose for overload points me to Class::MOP::Class, which has an API to introspect overloading. Maybe consult that to find out whether your class actually has overloading, or maybe Moose / MOP want you to set up all your overloading through this mechanism instead of overload directly.
You're right, it's because of namespace::autoclean
There is an example in the Moose Cookbook, of overloading with Moose :
It uses the overloading operator.
According to a namespace::autoclean review, if you use overloading with Moose you should use namespace::sweep instead of namespace::autoclean :
"This pragma was written to address some problems with the excellent namespace::autoclean. In particular, namespace::autoclean will remove special symbols that are installed by overload, so you can't use namespace::autoclean on objects that overload Perl | http://www.perlmonks.org/?node_id=991144 | CC-MAIN-2015-48 | refinedweb | 348 | 54.26 |
Q:
Q:
Q:. The programmer must make sure that arguments match, or must manually insert explicit casts.
In the case of printf-like functions, some compilers (including gcc) and some versions of lint are able to check the actual arguments against the format string, as long as the format string is an immediate string literal.
See also questions 5.2, 11.3, 12.9, and 15.2.
Q:; }(Note that a second call to va_start is needed to re-start the scan when the argument list is processed a second time. Note the calls to va_end: they're important for portability, even if they don't seem to do anything.)
A call to vstrcat looks something like
char *str = vstrcat("Hello, ", "world!", (char *)NULL);Note the cast on the last argument; see questions 5.2 and 15.3. (Also note that the caller must free the returned, malloc'ed storage.)
vstrcat accepts a variable number of arguments, all of type char *. Here is an example which accepts a variable number of arguments of different types; it is a stripped-down version of the familiar printf function. Note that each invocation of va_arg() specifies the type of the argument being retrieved from the argument list.
(The miniprintf function here uses baseconv from question 20.10 to format numbers. It is significantly imperfect in that it will not usually be able to print the smallest integer, INT_MIN, properly.)
#include <stdio.h> #include <stdarg.h> #ifdef MAIN void miniprintf(const char *, ...); main() { miniprintf("Hello, world!\n"); miniprintf("%c %d %s\n", '1', 2, "three"); miniprintf("%o %d %x\n", 10, 10, 10); miniprintf("%u\n", 0xffff); return 0; } #endif extern char *baseconv(unsigned int, int); void miniprintf(const char *fmt, ...) { const char *p; int i; unsigned u; char *s; va_list argp; va_start(argp, fmt); for(p = fmt; *p != '\0'; p++) { if(*p != '%') { putchar(*p); continue; } switch(*++p) { case 'c': i = va_arg(argp, int); /* *not* va_arg(argp, char); see Q 15.10 */ putchar(i); break; case 'd': i = va_arg(argp, int); if(i < 0) { /* XXX won't handle INT_MIN */ i = -i; putchar('-'); } fputs(baseconv(i, 10), stdout); break; case 'o': u = va_arg(argp, unsigned int); fputs(baseconv(u, 8), stdout); break; case 's': s = va_arg(argp, char *); fputs(s, stdout); break; case 'u': u = va_arg(argp, unsigned int); fputs(baseconv(u, 10), stdout); break; case 'x': u = va_arg(argp, unsigned int); fputs(baseconv(u, 16), stdout); break; case '%': putchar('%'); break; } } va_end(argp); }
See also question 15.7.
Q: How can I write a function that takes a format string and a variable number of arguments, like printf, and passes them to printf to do most of the work?
A: Use vprintf, vfprintf, or vsprintf. These routines are like their counterparts printf, fprintf, and sprintf, except that instead of a variable-length argument list, they accept a single va_list pointer.
As an example,"); }
Q: How can I write a function analogous to scanf, i.e. that accepts similar arguments, and calls scanf to do most of the work?
A: C99 (but not any earlier C Standard) supports vscanf, vfscanf, and vsscanf.
References: C9X Secs. 7.3.6.12-14
Q: I have a pre-ANSI compiler, without <stdarg.h>. What can I do?
A: There's an older header, <varargs.h>, which offers about the same functionality.
Here is the vstrcat function from question 15.4, rewritten to use <varargs.h>:
#include <stdio.h> #include <varargs.h> #include <string.h> extern char *malloc(); char *vstrcat(va_alist) va_dcl /* no semicolon */ { int len = 0; char *retbuf; va_list argp; char *p; va_start(argp); while((p = va_arg(argp, char *)) != NULL) /* includes first */ len += strlen(p); va_end(argp); retbuf = malloc(len + 1); /* +1 for trailing \0 */ if(retbuf == NULL) return NULL; /* error */ retbuf[0] = '\0'; va_start(argp); /* restart for second scan */ while((p = va_arg(argp, char *)) != NULL) /* includes first */ strcat(retbuf, p); va_end(argp); return retbuf; }(Note that there is no semicolon after va_dcl. Note that in this case, no special treatment for the first argument is necessary.) You may also have to declare the string functions by hand rather than using <string.h>.
If you can manage to find a system with vfprintf but without <stdarg.h>, here is a version of the error function (from question 15.5) using <varargs.h>:
#include <stdio.h> #include <varargs.h> void error(va_alist) va_dcl /* no semicolon */ { char *fmt; va_list argp; fprintf(stderr, "error: "); va_start(argp); fmt = va_arg(argp, char *); vfprintf(stderr, fmt, argp); va_end(argp); fprintf(stderr, "\n"); }(Note that in contrast to <stdarg.h>, under <varargs.h> all arguments are variable, so the fmt argument must also be picked up via va_arg.)
References:
H&S Sec. 11.4 pp. 296-9
CT&P Sec. A.2 pp. 134-139
PCS Sec. 11 pp. 184-5, Sec. 13 p. 250
Q:
Q:
Q:
Q: I can't get va_arg to pull in an argument of type pointer-to-function.
A: Try using a typedef for the function pointer type.
The type-rewriting games which the va_arg macro typically plays are stymied by overly-complicated types such as pointer-to-function. To illustrate, a simplified implementation of va_arg is
#define va_arg(argp, type) \ (*(type *)(((argp) += sizeof(type)) - sizeof(type)))where argp's type (va_list) is char *. When you attempt to invoke
va_arg(argp, int (*)())the expansion is
(*(int (*)() *)(((argp) += sizeof(int (*)())) - sizeof(int (*)())))which is a syntax error (the first cast (int (*)() *) is meaningless). [footnote]
If you use a typedef for the function pointer type, however, all will be well. Given
typedef int (*funcptr)();the expansion of
va_arg(argp, funcptr)is
(*(funcptr *)(((argp) += sizeof(funcptr)) - sizeof(funcptr)))which works correctly.
See also questions 1.13, 1.17, and 1.21.
References:
ISO Sec. 7.8.1.2
Rationale Sec. 4.8.1.2
Q:.
Q:”
Read sequentially: prev next up
about this FAQ list about eskimo search feedback copyright
Hosted by | http://c-faq.com/~scs/cgi-bin/faqcat.cgi?sec=varargs | CC-MAIN-2017-22 | refinedweb | 979 | 67.55 |
sandbox/Antoonvh/primes.c
A Morton curve. Basilisk employs a diagnoally mirrored version, i.e. (capital) N-order curve to be precise. Image courtesy of Asger Hoedt
Prime numbers along a Z-order space-filling curve
Stanislaw Ulam once dicided that is was a good idea to order the prime numbers (primes) along a spiralling space-filling curve. Doing so, he discovered what is now known as the Ulam’s spiral. It is attractive to use the Basilisk toolbox to study the behaviour of primes along a Z-order space-filling curve. If you are curious what role this curve plays within Baslisk, have a look here.
#include "grid/quadtree.h" //<- For it's 2D Z-order indexing iterator #include "utils.h" //<- For visualization purposes #include "tag.h" //<- For finding connected regions. int i=9;
Find primes
We need to find all primes upto n and store them in an array b of length n. This is done by using Eratosthenes’ sieve, an ancient algorithm that was not developed with computational efficientcy nor paralellization in mind.
void getprimes(int b[],int n){ b[0]=0; //zero is not a prime b[1]=0; //one is not a prime for (int j=2;j<n;j++){ b[j]=j; } int j=2; while(j<=ceil(sqrt(n))){ int num = b[j]; if (b[j]!=0){ int ind = 2*num; while (ind<=n){ b[ind]=0; ind+=num; } } j++; } }
The loop
The Z-order (or (capital)N-order) space-filling curve seems to be most suitable to study grids where is a power of 2 (e.g. 2,4,128 etc.). In order to study the behaviour of the prime-number locations in the Z-order indexed grid we perform our analysis on an increasingly larger grid.
int main(){ char name[100]; FILE * fp2 = fopen("connectedregions","w"); static FILE * fp1 = popen ("ppm2gif --delay 200 > g.gif ", "w");
Loop over increasingly larger grids
for (int maxlevel=1;maxlevel<=i;maxlevel++){ init_grid(1<<maxlevel); int d[1<<(maxlevel*dimension)]; getprimes(d,1<<(maxlevel*dimension)); int m = 1;
Mark cells at prime locations along the space-filling curve:
scalar field[]; foreach(){ field[]=d[m++]; }
We can view the result by using this line of output.
output_ppm (field, fp1,n=pow(2,i),min=0,max=1,map = gray);
Here it is, for all the iterations:
White indicates locations of the prime numbers
Connected regions
We see that there exist connected regions, these could potentially have interesting properties. Hence, we tag the connected regions with a unique tag.
int am = tag(field); int regsize[am]; for (m=0;m<am;m++) regsize[m]=0;
And we store the length (i.e. size) of each region.
foreach(){ if (field[]>0) regsize[(int)field[]]++; } sprintf(name,"prime%d.dat",maxlevel); FILE * fp = fopen (name, "w"); for (m=1;m<am;m++) fprintf(fp,"%d\n",regsize[m]);
For the largest grid (i.e. points), we plot some statistics on the lengths of the connected regions.
A truly remarkable feature is that the frequency () of the region’s lengths () appears to scale with the region’s length, according to;
We also log the number of connected regions for each grid-refinement iteration.
fprintf(fp2,"%d\t%d\n",maxlevel,am+1); }
The result of this procedure are plotted below:
Again we obtain a result that I did not expect, but I am not an expert.
}
The next step
The next step is to increase the dimensionality of our Z-order-indexing curve and do the same analysis in 3 dimensions. The results are presented here. | http://basilisk.fr/sandbox/Antoonvh/primes.c | CC-MAIN-2018-43 | refinedweb | 596 | 62.68 |
No More var_dump – Introducing Symfony VarDumper!
Recently, Symfony went from Zend-like bloat and rigidity to extreme decoupling and modularity. With the new Developer Experience initiative, Symfony has done a Laravel-style 180° and dove right into making its components more end-user friendly, its docs more complete, and its AppBundles unbundled, simplifying entry and further development almost exponentially. Considering user friendliness, it’s a long way from “best pals friendly” but it’s definitely no longer hostile. One factor that contributes to this factor a lot is their continuous pushing out of new components that are incredibly useful outside of Symfony’s context. One such component is the new VarDumper.
Why?
You’re developing a feature. You either don’t feel like writing tests, or what you’re developing needs some variable testing in the middle of a function – something you can’t quite cover with a test. Inevitably, you resort to something like
die(var_dump($var));. Even if you’ve abstracted it into a shorthand method like
vddd($var), it’s still clumsy and unreadable almost as much as the monochromatic output it generates.
There’s little choice in the matter – sometimes we simply need our
vddds. And sure, if you’re an Xdebug user, you’re probably used to a slightly better looking output than the raw PHP prints. Still, few good solutions existed that beautified this output for us enough to make it worth installing a dev dependency. Until VarDumper.
What is VarDumper?
Symfony VarDumper is a component designed to replace your
var_dumps. It performs essentially the same functionality, but provides you with much, much more information in a much prettier format. It’s the
var_dump you’ve always wanted.
As per their.
Installing and Using
Let’s quickly install it into our Homestead Improved instance and run a couple of tests. If you’re not familiar with HI yet, please take 5 minutes to get it over with so you can follow along with the tests.
Installation
As with any decent modern PHP project, installing is as simple as running
composer require symfony/var-dumper
Usage examples
It’s used via the newly exposed
dump function:
$var1 = "test"; dump($var1);
Let’s try something more complex now.
$a = [ 'ak1' => 'av1', 'ak2' => 'av2', 'ak3' => 'av3', 'ak4' => 'av4', 'ak5' => 'av5', ]; $b = [ 'bk1' => 'bv1', 'bk2' => 'bv2', 'bk3' => 'bv3', 'bk4' => 'bv4', 'bk5' => 'bv5', ]; $object = new \stdClass(); $object->prop1 = 10; $object->prop2 = 20; $object->prop3 = 30; $object->prop4 = 40; $c = [ 'a' => &$a, 'b' => $b, $object ]; dump($c);
As you can see, VarDumper wonderfully exports the variables we defined, declaring everything verbosely – all coupled with some practical CSS that not only syntax highlights everything but also allows us to expand and collapse various parts of the data dump. Hmm, but what are those plusses next to the properties of the
stdObject? Public properties? How does it display private ones, then? Does it at all? Let’s see.
class Test { public $prop1 = 10; private $prop2 = 20; protected $prop3 = 30; private $prop4 = 40; public function __construct($value) { $this->undefinedProp = $value; } } $t = new Test(50); dump($t);
So public properties are plusses, private ones are minuses, protected ones are hashes, neat. Not only that, but every dumped property also has a title hint which exposes more information when you hover your mouse over them:
What’s more, runtime added properties are specifically defined in both the hint, and visually – they’re surrounded by quotes.
What about runtime added methods? Let’s see.
class Test { public $m1; protected $m2; public function __construct() { $this->m2 = function() { return "I'm method 2"; }; } public function buildFunction() { $this->m3 = function() { return "I'm method 3"; }; } public function __call($method, $args) { if (isset($this->$method)) { $func = $this->$method; return call_user_func_array($func, $args); } } } $t = new Test(); $m1 = function() { return "I'm method 1"; }; $t->m1 = $m1; $t->buildFunction(); $t->m1(); dump($t);
You can see VarDumper exposes much more information about objects, using reflection, than the typical var_dump – even the lines of code where the methods are defined.
These are just some of the nifty tricks VarDumper has up its sleeve – for the rest, and screenshots of how they look, check the announcement post out.
Conclusion
VarDumper is a great tool for quick and dirty safety and sanity checks, and when used in conjunction with Symfony’s DebugBundle, it becomes even more powerful – in fact, it’s included by default in dev and debug versions of Symfony installations, from version 2.6 onward.
Some might argue that using it with a simple non namespaced
dump() command has the potential to cause conflicts with other utility methods you might have in your project, and it obscures its source on first glance. While a changeable function name to call the utility might allay those worried minds, I believe we’re dealing with very rudimentary and production-insignificant functionality here, and thus this non-issue may be forgiven.
Now that you know about VarDumper, will you be adding it to your own projects for on-the-fly debugging purposes?
Replies
This is a long missing feature. Tried in my Silex application:
use Symfony\VarDumper;
...
dump($var);
Interesting package. I've added it to a laravel app, but a
dump()of a eloquent collection with 150 over items only has the 1st five expandable. The rest were truncated with
....
Any idea how to configure VarDumper to show all details instead of truncating them?
Disclaimer: author of Kint here.
I'd just like to encourage those who have not heard of it, check Kint out:
It's been in the works for years, has tons of useful features like keyboard navigation, plain text alternative, trace output and so on, and most importantly - it's effortless to install and use. Composer support is there, but highly optional - and Kint even supports PHP5.2 when you're stuck with terrible old projects. Needless to say, every later major version is supported and provides more functionality.
Looks good, thanks for chiming in! A "battle of the vardumpers" article might not be a bad idea
Thanks for this !! I'm just switching from kint
+1 for kint too
Also just thought I'll mention it on this thread that VarDumper works awesome on the console with some good color coding !!
Cheers
21 more replies | https://www.sitepoint.com/var_dump-introducing-symfony-vardumper/ | CC-MAIN-2018-51 | refinedweb | 1,044 | 59.84 |
Here is a real example of how to add a column to a list in M3 using a Script for Smart Office. It’s an illustration of my previous post on the same subject, an implementation of what was discussed in the comments of the first post, and a continuation of Karin’s post.
In this example I will add three columns to the list of Stock Location – MMS010/B1 to display the Geographic codes X, Y, and Z, which correspond to the longitude, latitude, and altitude of a Stock Location in a Warehouse. The Geo codes are stored in MMS010/F.
Why it matters
The benefit of this kind of solutions is to avoid an M3 Java modification.
From a technical point of view, this example illustrates how to dynamically add content to an existing M3 panel, how to access the ListView’s new internals in Smart Office 10.x, how to call an M3 API, how to use a background thread, how to indicate activity in the UI thread while the background thread works, and how to use the ScrollViewer to load data at each page scroll.
Desired result
This is the desired result:
Source code
Here is the complete source code:
import System; import System.Collections; import System.Windows.Controls; import System.Windows.Input; import System.Windows.Media; import Lawson.M3.MI; import Mango.UI.Services.Lists; import MForms; /* Displays the Geo codes XYZ in MMS010/B1 in three new columns loaded by calling MMS010MI.ListLocations. Thibaud Lopez Schneider, Infor, 2012-09-27 */ package MForms.JScript { class ShowGeoCodes { /* PENDING - Horizontal align the columns contents to the right - Vertical align the columns headers to top - Auto width the columns */ var controller: Object, content: Object, debug: Object; var listView; // System.Windows.Controls.ListView var rows: System.Windows.Controls.ItemCollection; var columns: System.Windows.Controls.GridViewColumnCollection; var scrollViewer: System.Windows.Controls.ScrollViewer; var oldCount: int = 0, newCount: int = 0; var GeoCodes; // System.Collections.Generic.IList[Lawson.M3.MI.MIRecord] public function Init(element: Object, args: Object, controller : Object, debug : Object) { try { // global variables this.controller = controller; this.content = controller.RenderEngine.Content; this.debug = debug; this.listView = controller.RenderEngine.ListControl.ListView; // == controller.RenderEngine.ListViewControl this.rows = listView.Items; this.columns = listView.View.Columns; // append three new columns to the ListView var newColumns = ['Geo code X', 'Geo code Y', 'Geo code Z']; for (var i in newColumns) { var gvch = new GridViewColumnHeader(); gvch.Content = newColumns[i]; var gvc = new GridViewColumn(); gvc.Header = gvch; gvc.CellTemplateSelector = new ListCellTemplateSelector(columns.Count, controller.RenderEngine.ListControl.Columns); columns.Add(gvc); } // register the ScrollChanged event of the ListView oldCount = newCount = rows.Count; var border = VisualTreeHelper.GetChild(listView, 0); var grid = VisualTreeHelper.GetChild(border, 0); this.scrollViewer = VisualTreeHelper.GetChild(grid, 3); this.scrollViewer.add_ScrollChanged(OnScrollChanged); // load the Geo codes XYZ by calling MMS010MI var CONO = UserContext.CurrentCompany; var WHLO = ScriptUtil.FindChild(content, 'WWWHLO').Text; BeginLoadGeoCodes(CONO, WHLO); // attach event to cleanup controller.add_RequestCompleted(OnRequestCompleted); } catch (ex: Exception) { debug.WriteLine(ex); } } /* Loads the Geo codes XYZ by calling MMS010MI.ListLocations. */ function BeginLoadGeoCodes(CONO: int, WHLO: String) { controller.RenderEngine.ShowMessage('loading Geo codes...'); content.Cursor = Cursors.Wait; var record = new MIRecord(); record['CONO'] = CONO; record['WHLO'] = WHLO; var parameters = new MIParameters(); parameters.OutputFields = ['WHSL', 'GEOX', 'GEOY', 'GEOZ']; parameters.MaxReturnedRecords = 0; MIWorker.Run('MMS010MI', 'ListLocations', record, EndLoadGeoCodes, parameters); } /* Handles the response from MMS010MI.ListLocations. */ function EndLoadGeoCodes(response: MIResponse) { try { controller.RenderEngine.ClearMessage(); content.Cursor = Cursors.Arrow; if (response.HasError) { controller.RenderEngine.ShowMessage(response.ErrorMessage); } else { this.GeoCodes = response.Items; ShowGeoCodesXYZ(0, rows.Count-1); } } catch(ex: Exception) { debug.WriteLine(ex); } } /* Loads more rows on ScrollViewer. */ function OnScrollChanged(sender: Object, e: ScrollChangedEventArgs) { try { if (e.VerticalChange != 0) { oldCount = listView.Items.Count; } else { var newCount = listView.Items.Count; var diff: int = newCount - oldCount; var fromRow = oldCount; var toRow = listView.Items.Count - 1; if (diff > 0) { ShowGeoCodesXYZ(fromRow, toRow); } } } catch (ex: Exception) { debug.WriteLine(ex); } } /* Shows the Geo codes XYZ for the specified rows. */ function ShowGeoCodesXYZ(fromRow: int, toRow: int) { var rows = IList(listView.ItemsSource); for (var i = fromRow; i <= toRow; i++) { var WHSL = rows[i].Item[0]; var codes = GetGeoCode(WHSL); // replace this row by a new row that's incremented by three new columns var row = rows[i]; var oldArray = row.Items; var newArray = new String[oldArray.length + 3]; oldArray.CopyTo(newArray, 0); newArray[newArray.length-3] = codes.GEOX; newArray[newArray.length-2] = codes.GEOY; newArray[newArray.length-1] = codes.GEOZ; row.Items = newArray; rows.RemoveAt(i); rows.Insert(i, row); } } /* Returns the Geo codes XYZ for the specified WHSL. */ function GetGeoCode(WHSL: String) { var i = 0; while (i < this.GeoCodes.Count && this.GeoCodes[i]['WHSL'] != WHSL) i++; // search array if (i < this.GeoCodes.Count) return { 'GEOX': this.GeoCodes[i]['GEOX'], 'GEOY': this.GeoCodes[i]['GEOY'], 'GEOZ': this.GeoCodes[i]['GEOZ'] }; // found WHSL else return { 'GEOX': '', 'GEOY': '', 'GEOZ': '' }; // no hit } function OnRequestCompleted(sender: Object, e: RequestEventArgs) { try { var controller: MForms.InstanceController = sender; if (controller.RenderEngine == null) { // program is closing, cleanup scrollViewer.remove_ScrollChanged(OnScrollChanged); controller.remove_RequestCompleted(OnRequestCompleted); } } catch (ex: Exception) { debug.WriteLine(ex); } } } }
Result
This is the final result:
Future work
This solution loads all records from the API call with MaxReturnedRecords=0. This could be a problem when there are more than several hundred records, or when the server/network response time is bad. I yet have to find a solution to improve this.
Also, the user could scroll the page while the response of the API call hasn’t arrived yet. I yet have to improve the code for that.
Finally, in my next article I will illustrate how to achieve the same result without programming, by using Custom Lists and Mashups.
UPDATE 2012-09-27
I updated the script to detach the event handlers on program close, i.e. cleanup.
Related articles:
- Custom Lists & Mashups, how to add columns to a list with a Custom List and Mashups
- How to add a column to a list, older post on how to add columns to a list with a script
- How to add a column to a list — #Comments
- Adding a new column in a M3 List, by Karin
- BackgroundWorkers in Smart Office Scripts – Part 2 – How to disable/enable the user interface, how to indicate activity, and how to show progress
- Geocoding of Stock Locations in MMS010
33 thoughts on “How to add a column to a list (continued)”
Hi Thibaud,
In all of the examples I have seen, columns are added to the list view but how feasible is to actually insert the new column in between existing ones? I need to insert a new column in OIS101 and this list view in CRS020 is already with the needed columns so I can’t just replace any of them. Could you please provide an example or feedback on this topic?
Thank you,
Gaston
Hi Gaston. Yes, in .NET you can insert an element at a specified position of the Array using the Insert method: . It goes after the oldArray.CopyTo(newArray, 0); Hope it helps. /Thibaud
While trying to copy a B Panel list view
(var newItems = new String[row.Items.length];
row.Items.CopyTo(newItems, 0) ) I got an error because the B Panel I was copying had editable cells. How can I add editable cells as my custom column?
Hi Jean, the object tree is different for editable cells. I could dig some source code from my boxes and I could help you as part of a project. /Thibaud
I have the same Problem with editalbe cells an row.Items.CopyTo(newItems,0); Please help me …
Hej Jörg, The object tree is different for editable cells. You can use a tool like Microsoft Inspect (see my post on Tools for Scripts) and find the hierarchy of objects for an editable cell. I searched my archive for past examples but I don’t have any for copying editable cells. I could help you through an Infor project and we would find the answer. Or try asking karinpb on the Smart Office blog. Hope it helps. Mvh, /Thibaud
Thanks very much for the valuable information. I follow the logic to work on PPS170 to tell users which proposal is for customer order and which one is for safety stock.
The listView needs to be refreshed by calling listView.Items.Refresh() to have updated items displayed.
I add a button for users to manually update it when they browse to next page instead of using ScrollChanged, which does not work. I also give users a feedback using MessageBox to tell them how many rows browsed and how many updated.
Thanks for sharing your solution Warren.
Hi Warren,
I have the same requirement. I have to update an editable cell when a user clicks a button and the value of that cell is blank. In the debug line I see the value being updated. However, in M3 even if I add the refresh the new values still doesn’t reflect. Please see my code below for the onClick method.
P.S. I’ve also tried to move the refresh after the assign and out of the for loop. Still no luck.
I got it to work. Apparently listView.Items[i].Items[7] = defWHSL; is what’s causing the refresh not to work I had to change it to row.Item[7].Text = defWHSL
Glad you found the solution. Thank you for sharing.
UPDATE: The script works correctly from the Script Tool, but I had to dispatch a Delegate in order for the script to run when it’s deployed on the server. The delta is:
…
import System.Windows.Threading;
import Mango.Core.Util;
package MForms.JScript {
class ShowGeoCodes_V2 {
…
public function Init(element : Object, args : Object, controller : Object, debug : Object) {
try {
// global variables
this.controller = controller;
this.content = controller.RenderEngine.Content;
this.debug = debug;
// dispatch delegate
var StartDelegate : VoidDelegate = Start;
content.Dispatcher.BeginInvoke(DispatcherPriority.Background, StartDelegate);
// move the rest of the Init code to the Start function
} catch (ex : Exception) {
…
}
}
function Start() {
// move to here
}
…
}
}
I want to know if there’s a way to get and set the color of text in the browse list through a J Script instead of using personalize.
Bonjour Jean, it’s probably possible but the browse list is difficult to access; Smart Office doesn’t have a public API for it. I think you have to hack into the hierarchy of ancestor windows, and descend to find the popup. I can probably do it after a day or two of investigation. Email me at Ciber for that. Or ask Karin if she knows. /Thibaud
Thanks thibaudatwork ,, for that useful Post ,,, is there a way to remove an existing column or Hide it ? ,,, for example if i want to Hide the first Column ,,i used that Code in Script DLL
IList columns = (IList)listControl.Columns;
columns.Remove(columns[0]);
—–
actually the number of Columns is decreased ,, but on the View the Column STill exists ,,, i found using Google the Property AutoGenerateColumnsProperty ,, but i don’t know where i can find it . i think i need to set it to false before removing the required column.
or i need to refresh the Grid with some way
regards
Hi Zaher, yes you can remove a column. You have to remove it from the view and from the model. It’s been a couple of years since I worked with this, I think the view is ListView and the model is ListControl. Otherwise, the easiest is to simply remove it from the M3 View (PAVR), press F4 twice in the dropdown list for the View. /Thibaud
ok ,,, thanks alot ,, but is there a way to Remove a specific Action ,,, in the Actions Menu … i could successfully remove Options From the Related Option and in the Basic Options (Change- Create ,…) .. but for the Actions menu ( Refresh, Cancel, Setings, Close) i could not edit it 😦
hi i’m getting an error on the following line, with 13.2 this no longer works:
scrollViewer = VisualTreeHelper.GetChild(grid, 3);
do you have a solution for it?
Hi Shiraz, that technique is deprecated. Use ScrollChangedEvent as illustrated by norpe at
Thanks a lot 🙂
Dear thibaudatwork,
I used your code in MMS010. data is not coming in grid as you shown in the final result. grid header is coming but data is not coming. what should i do?
Hi jaju, that code is deprecated with the various updates from the comments and Karin’s and Peter’s blog posts. It’s just the API for the data model that has a simple change. You need a .NET developer that can assist you. Refer to and
Iam getting error like this
System.NullReferenceException: Object reference not set to an instance of an object.
at MForms.JScript.ShowGeoCodes.ShowGeoCodesXYZ(Int32 fromRow, Int32 toRow)
at MForms.JScript.ShowGeoCodes.EndLoadGeoCodes(MIResponse response)
Jaju, you’ll have to debug your code line by line (comment all the lines and uncomment one by one, or use a lot of debug.WriteLine) to identify the variable that’s null.
Also, try this:
var rows: IList;
if (listView.ItemsSource != null) {
rows = IList(listView.ItemsSource);
} else {
rows = listView.Items;
}
Hi, all is clear but the part
var border = VisualTreeHelper.GetChild(listView, 0);
var grid = VisualTreeHelper.GetChild(border, 0);
this.scrollViewer = VisualTreeHelper.GetChild(grid, 3);
The numbers 0, 0, 3 and the structure is magic for me. I tried to do simillar script, in script tool is everything working, in the script put to personalization this part does not work, is there a way, how to find the tree structure of page and how to get to the scrollViewer for which I need to attach event handler?
Hello Jan,
Yes, the constant numbers will break if the visual tree changes in Smart Office.
The best tools to see the visual tree are: the Windows SDK Inspect tool, and Snoop:
Otherwise visit the tree in code:
As for your problem that the code works in Script Tool but does not work when deployed, you may need a StartDelegate:
Hope it helps,
–Thibaud
UPDATE: Carlos Roda says the line:
this.scrollViewer = VisualTreeHelper.getChild(grid, 3)
changed to border:
VisualTreeHelper.getChild(border, 0)
Hi! Did you ever find a way to align the column content (not the header) to the right? Haven’t been able to find a way to do this myself. | https://m3ideas.org/2012/09/21/how-to-add-a-column-to-a-list-continued/?replytocom=582 | CC-MAIN-2022-05 | refinedweb | 2,374 | 58.89 |
Hi Dan,
I may not know Axiom enough to judge this, but I would assume
that the org.xml.sax.ContentHandler that you get from
Session.getImportContentHandler() [1] could be a good start.
... without knowing it in detail I would assume that there is something
that translates Axiom into SAX?
Generally, JCR allows to deserialize arbitrary namespaced
xml into the repository so you should be in good shape. The
Session.exportDocumentView() should provide the respective export.
Does that make sense, or did I misunderstand your question?
regards,
david
---
[1]
[2]
On 1/26/08, Dan Diephouse <dan.diephouse@mulesource.com> wrote:
> Stefan Guggisberg wrote:
> > hi everybody,
> >
> > please allow me to quickly introduce myself. i am a committer in the
> > apache jackrabbit project
> > and also an active member of the jsr-170 and jsr-283 expert groups.
> > not too surprisingly
> > i think that APP and JCR are a natural fit ;-)
> >
> > i noticed that there's already JCR support in abdera (i.e.
> > JcrCollectionProvider). i'd be
> > therefore very interested to learn more about your experiences with
> > JCR/jackrabbit.
> >
> > what are the biggest problems you're facing when using jackrabbit? if
> > there's anything
> > i can help in making it easier for you to integrate jackrabbit as a
> > persistence layer i'd be
> > happy to support your efforts :)
> >
> > cheers
> > stefan
> >
> I think the main thing that it could use is just a review from someone
> who actually understands JCR :-) If you were interested in just checking
> out my JCR code I'd appreciate it.
>
> The other thing we really need to figure out his how to store extension
> elements inside Abdera. i.e. take an Axiom Element and convert it into a
> Node. I suppose we could write a xml storage converter like the one
> built into JCR. Any recommendations?
>
> - Dan
>
> --
> Dan Diephouse
> MuleSource
> |
>
> | http://mail-archives.apache.org/mod_mbox/abdera-dev/200801.mbox/%3C1b0d43d00801280618g704345f1m6f7bae3523263ca5@mail.gmail.com%3E | CC-MAIN-2021-31 | refinedweb | 302 | 64.1 |
When,.
I started with a Perl script that uses
SOAP::Lite to hit a Web service at
XMethods, like so:
#! /usr/bin/perl -w use strict; use SOAP::Lite; my $temp = SOAP::Lite -> service('') ->.
I have to confess I'm still tempted to dismiss this speech stuff as an amusing parlor trick. But it may finally be reaching a tipping point. "Look, Dad's talking to the computer," my kids snickered. When I showed.
The namespace made pre-existing items or your new items -- in order to step around these conflicts.
As you build up vocabularies, it's easy to forget that the recognition engine is speaker-independent, but language dependent. For example, I've been enjoying Brent Simmons' Huevos,clicks fifty Lite. It's already more usefully speakable then most OS X apps I've tried. Along with menu navigation, you can speak the crucial commands "Next Unread," "Mark All as Unread," and "Open in Browser." These are more mnemonic than their keyboard equivalents (Command-G, Command-Shift-K, and Command-B), and especially in the case of Command is an author, information architect, software developer, and new media innovator.
Return to the O'Reilly Network. | http://archive.oreilly.com/lpt/a/2889 | CC-MAIN-2014-52 | refinedweb | 197 | 66.03 |
C Programming Files I/O
In C programming, file is a place on your physical disk where information is stored. that you can easily create using Notepad or any simple text editors.
When you open those files, you'll see all the contents within the file as plain text. You can easily edit or delete the contents.
They take minimum effort to maintain, are easily readable, and provide least security and takes bigger storage space.
2. Binary files
Binary files are mostly the .bin files in your computer.
Instead of storing data in plain text, they store it in the binary form (0's and 1's).
They can hold higher amount of data, are not readable easily and provides a better security than text files.
File Operations
In C, you can perform four major operations on the file, either text or binary:
- Creating a new file
- Opening an existing file
- Closing a file
- Reading from and writing information to a file
Working with files
When working with files, you need to declare a pointer of type file. This declaration is needed for communication between the file and program.
FILE *fptr;
Opening a file - for creation and edit
Opening a file is performed using the library function in the "stdio.h" header file: fopen(). library function fclose().
fclose(fptr); //fptr is the file pointer associated.
Writing to a text file
Example 1: Write to a text file using fprintf()
#include <stdio.h> int main() { int num; FILE *fptr; fptr = fopen("C:\\program.txt","w"); if(fptr == NULL) { printf("Error!"); exit(1); } printf("Enter num: "); scanf("%d",&num); fprintf(fptr,"%d",num); fclose(fptr); return 0; }
This program takes a number from user and stores in the file
program.txt.
After you compile and run this program, you can see a text file program.txt created in C drive of your computer. When you open the file, you can see the integer you entered.
Reading from a text file
Example 2: Read from a text file using fscanf()
#include <stdio succesfully created the file from Example 1, running this program will get you the integer you entered.
Other functions like
fgetchar(),
fputc() etc. can be used in similar way.
Reading and writing to a binary file
Functions
fread() and
fwrite() are used for reading from and writing to a file on the disk respectively in case of binary files.
Writing to a binary file.
fwrite(address_data,size_data,numbers_data,pointer_to_file);
Example 3: Writing to a binary file using fwrite()
#include <stdion; num.n3 = 5n + 1; fwrite(&num, sizeof(struct threeNum), 1, *fptr); } fclose(fptr); return 0; }
In this program,
fwrite() function as above.
fread(address_data,size_data,numbers_data,pointer_to_file);
Example 4: Reading of", num.n1, num.n2, num.n3); } fclose(fptr); return 0; }
This program will start reading the records from the file
program.bin in the reverse order (last to first) and prints it. | https://www.programiz.com/c-programming/c-file-input-output | CC-MAIN-2016-50 | refinedweb | 482 | 57.47 |
A.
In practice, at least on a larger scale (like e.g. Flathub) you don’t want to do all the work on a single machine like this. Instead you have an entire build-system where the repository is the last piece.
Enter flat-manager
To support this I’ve been working on a side project called flat-manager. It is a service written in rust that manages Flatpak repositories. Recently we migrated Flathub to use it, and its seems to work quite well.
At its core, flat-manager serves and maintains a set of repos, and has an API that lets you push updates to it from your build-system. However, the way it is set up is a bit more complex, which allows some interesting features.
Core concept: a build
When updating an app, the first thing you do is create a new build, which just allocates an id that you use in later operations. Then you can upload one or more builds to this id.
This separation of the build creation and the upload is very powerful, because it allows you to upload the app in multiple operations, potentially from multiple sources. For example, in the Flathub build-system each architecture is built on a separate machine. Before flat-manager we had to collect all the separate builds on one machine before uploading to the repo. In the new system each build machine uploads directly to the repo with no middle-man.
Committing or purging
An important idea here is that the new build is not finished until it has been committed. The central build-system waits until all the builders report success before committing the build. If any of the builds fail, we purge the build instead, making it as if the build never happened. This means we never expose partially successful builds to users.
Once a build is committed, flat-manager creates a separate repository containing only the new build. This allows you to use Flatpak to test the build before making it available to users.
This makes builds useful even for builds that never was supposed to be generally available. Flathub uses this for test builds, where if you make a pull request against an app it will automatically build it and add a comment in the pull request with the build results and a link to the repo where you can test it.
Publishing
Once you are satisfied with the new build you can trigger a publish operation, which will import the build into the main repository and do all the required operations, like:
- Sign builds with GPG
- Generate static deltas for efficient updates
- Update the appstream data and screenshots for the repo
- Generate flatpakref files for easy installation of apps
- Update the summary file
- Call out out scripts that let you do local customization
The publish operation is actually split into two steps, first it imports the build result in the repo, and then it queues a separate job to do all the updates needed for the repo. This way if multiple builds are published at the same time the update can be shared. This saves time on the server, but it also means less updates to the metadata which means less churn for users.
You can use whatever policy you want for how and when to publish builds. Flathub lets individual maintainers chose, but by default successful builds are published after 3 hours.
Delta generation
The traditional way to generate static deltas is to run
flatpak build-update-repo --generate-static-deltas. However, this is a very computationally expensive operation that you might not want to do on your main repository server. Its also not very flexible in which deltas it generates.
To minimize the server load flat-manager allows external workers that generate the deltas on different machines. You can run as many of these as you want and the deltas will be automatically distributed to them. This is optional, and if no workers connect the deltas will be generated locally.
flat-manager also has configuration options for which deltas should be generated. This allows you to avoid generating unnecessary deltas and to add extra levels of deltas where needed. For example, Flathub no longer generates deltas for sources and debug refs, but we have instead added multiple levels of deltas for runtimes, allowing you to go efficiently to the current version from either one or two versions ago.
Subsetting tokens
flat-manager uses JSON Web Tokens to authenticate API clients. This means you can assign different permission to different clients. Flathub uses this to give minimal permissions to the build machines. The tokens they get only allow uploads to the specific build they are currently handling.
This also allows you to hand out access to parts of the repository namespace. For instance, the Gnome project has a custom token that allows them to upload anything in the
org.gnome.Platform namespace in Flathub. This way Gnome can control the build of their runtime and upload a new version whenever they want, but they can’t (accidentally or deliberately) modify any other apps.
Rust
I need to mention Rust here too. This is my first real experience with using Rust, and I’m very impressed by it. In particular, the sense of trust I have in the code when I got it past the compiler. The compiler caught a lot of issues, and once things built I saw very few bugs at runtime.
It can sometimes be a lot of work to express the code in a way that Rust accepts, which makes it not an ideal language for sketching out ideas. But for production code it really excels, and I can heartily recommend it!
Future work
Most of the initial list of features for flat-manager are now there, so I don’t expect it to see a lot of work in the near future.
However, there is one more feature that I want to see; the ability to (automatically) create subset versions of the repository. In particular, we want to produce a version of Flathub containing only free software.
I have the initial plans for how this will work, but it is currently blocking on some work inside OSTree itself. I hope this will happen soon though.
4 thoughts on “Introducing flat-manager”
Can deltas please be generated for `org.gnome.Sdk.Debug`? These are insane in size each day…
They should be there, but only for x86-64, are you using some other arch?
Can I use it for OS ostree repositories?
It has some flatpak specific behaviour, but with some minor work it could be useful for other ostree repos too. | https://blogs.gnome.org/alexl/2019/03/19/introducing-flat-manager/ | CC-MAIN-2019-43 | refinedweb | 1,115 | 61.26 |
Learn to test your Clojure code
Are you wondering how to test your code in Clojure?
Are you worried that your tests won't work with your tools?
Have you struggled getting the testing flow you're used to in other languages?
There's a lot to learn about testing in Clojure. It's not hard, it's just ... undocumented. Instead of spending weeks or months searching blog posts and learning by trial and error, and still not feeling confident, isn't there a way to learn it correctly all at once? And couldn't there be a course for systematically training you to understand and apply the Clojure testing library to your code?
Introducing Intro to clojure.test
Intro to clojure.test is a course in Clojure testing fundamentals. It builds up all of the skills you need to start testing your code.
Animations to visually explain the core concepts.
A picture is worth a thousand words, and your visual cortex can process a ton of information. These visuals help you understand ideas faster than you can in text, so much so that it feels obvious!
Annotated screencasts of real-world code.
Seeing code being written is great. It's rarely written in the same order it's finally read in. But often the context is missing. The annotationshelp focus you on reading and understanding the code the way anexpert sees it.
Exercises to practice and reinforce good habits.
Small, focused exercises are proven to make you better at a skill. Everything in the course is broken down into small, learnable units, each with exercises, so that you're able to master each bit before moving onto the next. This is known as deliberate practice and it's what differentiates experts from the rest.
All of the code in multiple formats.
All of the code is in a git repository with tags marked for each point in the course. There are git commands on each page of the course telling you how to check out exactly the code at the point you are reading. There are code previews that show the entire file right there in the page. And there are code snippets whenever there is code to type that you can copy and paste if you need to.
Intro to clojure.test<<
Want a handy reference for writing your tests? This cheatsheet contains everything you need to write tests, make assertions, and set up fixtures. It even shows the commands for runnings tests at the REPL.
Intro to clojure.test
Lessons
Introduction
We briefly discuss why we should learn clojure.test and become acquainted with our characters, Josie and Ignacio, who use a few namespaces to keep track of their fitness routines. We need to help them develop tests.
Testing Conventions
Although clojure.test can be used in any namespace, some conventions are widely used by IDEs and other frameworks. We learn those conventions to help us work well with them.
Creating tests
Learn the basics of creating the required namespaces and their dependencies, then setting up a test.
Multiple Assertions
Learn the multiple assertions pattern to test many things about the same code. | https://purelyfunctional.tv/courses/intro-to-clojure-test/ | CC-MAIN-2018-43 | refinedweb | 526 | 75.81 |
Gallery is a layout widget used to display items in a
horizontally scrolling list and positions the current selection at the center of
the view.
In this tutorial, you'll create a gallery of photos and then display a toast message each time a gallery item is selected.
After the
Main.axml layout is set for the content view, the
Gallery is captured from the layout with
FindViewById(int). The
Adapter property
is then used to set a custom adapter (
ImageAdapter) as the source
for all items to be displayed in the dallery. The
ImageAdapter is
created in the next step.
To do something when an item in the gallery is clicked, an anonymous delegate
is subscribed to the
ItemClick event. It shows a
Toast that displays the index position (zero-based) of the
selected item (in a real world scenario, the position could be used to get the
full sized image for some other task).
First, there are a few member variables, including an array of IDs that
reference the images saved in the drawable resources directory
(
Resources\Drawable\).
Next is the class constructor, where the
Context for an
ImageAdapter instance is defined
and saved to a local field. Next, this implements some required methods
inherited from
BaseAdapter. The constructor and the
Count property are self-explanatory. Normally,
GetItem(int) should return the actual object at the
specified position in the adapter, but it's ignored for this example. Likewise,
GetItemId(int) should return the row id of the item,
but it's not needed here.
The method does the work to apply an image to an
ImageView that will be embedded in the
Gallery. In this method, the member
Context is used to create a new
ImageView. The
ImageView is
prepared by applying an image from the local array of drawable resources,
setting the
Gallery.LayoutParams height and width for the image,
setting the scale to fit the
ImageView
dimensions, and then finally setting the background to use the styleable
attribute acquired in the constructor.
See
ImageView.ScaleType for other image scaling
options.
Walkthrough
- Start a new project named HelloGallery.
- Find some photos you'd like to use, or download these sample images. Add the image files to the project's
Resources/Drawable/directory. In the Properties window, set the Build Action for each to AndroidResource.
- Open the
Resources/Layout/Main.axmlfile and insert the following:
<?xml version="1.0" encoding="utf-8"?> <Gallery xmlns:
- Open
Activity1.csand insert the following code for the
OnCreate()method:
protected override void OnCreate (Bundle bundle) { base.OnCreate (bundle); // Set our view from the "main" layout resource SetContentView (Resource.Layout.Main); Gallery gallery = (Gallery) FindViewById<Gallery>(Resource.Id.gallery); gallery.Adapter = new ImageAdapter (this); gallery.ItemClick += delegate (object sender, Android.Widget.AdapterView.ItemClickEventArgs args) { Toast.MakeText (this, args.Position.ToString (), ToastLength.Short).Show (); }; }
- Create a new class called
ImageAdapterthat subclasses
BaseAdapter:
public class ImageAdapter : BaseAdapter { Context context; public ImageAdapter (Context c) { context = c; } public override int Count { get { return thumbIds.Length; } } public override Java.Lang.Object GetItem (int position) { return null; } public override long GetItemId (int position) { return 0; } // create a new ImageView for each item referenced by the Adapter public override View GetView (int position, View convertView, ViewGroup parent) { ImageView i = new ImageView (context); i.SetImageResource (thumbIds[position]); i.LayoutParameters = new Gallery.LayoutParams (150, 100); i.SetScaleType (ImageView.ScaleType.FitXy); return i; } // references to our images int[] thumbIds = { Resource.Drawable.sample_1, Resource.Drawable.sample_2, Resource.Drawable.sample_3, Resource.Drawable.sample_4, Resource.Drawable.sample_5, Resource.Drawable.sample_6, Resource.Drawable.sample_7 }; }
Run the application. It should look like the screenshot below:
References
Portions of this page are modifications based on work created and shared by the Android Open Source Project and used according to terms described in the Creative Commons 2.5 Attribution License. This tutorial is based on the Android Gallery tutorial. | http://developer.xamarin.com/guides/android/user_interface/gallery/ | CC-MAIN-2014-35 | refinedweb | 640 | 58.28 |
12 April 2007 08:28 [Source: ICIS news]
SINGAPORE (ICIS news)--PetroChina increased its spending on environmental and safety measures last year in the wake of the Songhua river pollution disaster in 2005, statistics from a company report showed late on Wednesday.?xml:namespace>
The China-based major spent yuan (CNY)13.2bn ($1.7bn) in 2006 compared with a total of CNY17.85bn in the five years prior to that.
Out of this amount, CNY2.1bn was spent on building and improving three-level water pollution prevention and control projects, it said in its first corporate social responsibility report.
These facilities included a number of accident treatment pools and buffering pools which could collect and treat sewage water should another accident occur, it added.
In late 2005, subsidiary Jilin Petrochemical spilt benzene into the ?xml:namespace>
The number of fatal accidents and injuries among workers at the Chinese major fell by 25% and 12% respectively in 2006 compared with a year ago, it said.
“However, we regret to see three major accidents happened and 21 lives [were] lost,” it added.
PetroChina reduced the oil-related waste and chemical on demand (COD) content in its waste water by 7.2% and 4.9% to 1,131 tonnes and 22,264 tonnes respectively in 2006.
On the green energy front, the company had started building a pilot biodiesel unit in
($1=CNY | http://www.icis.com/Articles/2007/04/12/9019819/petrochinas-safety-spending-surges-in-2006.html | CC-MAIN-2015-18 | refinedweb | 231 | 64.51 |
From libtclcad:
/** @file tkCanvBezier.c
*
* This module provides Bezier curve support for canvases.
*
* Author
* John Anderson
*/
#include <stdio.h>
#include "tkInt.h"
#include "tkPort.h"
#include "tkCanvas.h"
tkInt.h tkPort.h and tkCanvas.h hold internal tk
structures which should not be refered to externaly.
This is a very bad practice that breaks builds using
external tk.
Becouse of tk version mismatch (tk-8.4.9 on my
gnu/linux/gentoo) and internal from brlcad-7.4.2
libtk-8.4.6 linking process breaks on some structure
sizes mismatch.
Gentoo is from source distro, but after installing tk
there is no way to extract tkInt.h tkPort.h and
tkCanvas.h from the source.
Is there any way to get around using the tkInt.h and
friends?
Sean Morrison
2005-09-29
Logged In: YES
user_id=785737
tkCanvBezier is actually a component of Tk, intended as a new canvas
type. The module actually used to reside inside of our modified sources to
libtk, hence the references to internal files as it's meant to be a part of Tk.
Nobody has had time to press the modification through the (arduous) Tk
TIP process.
As this was design and not by mistake, it's technically not a bug
regardless of the unfortunate impact it has on external Tk builds. So it'll
be moved to feature requests so that someone can take a look at whether
it's easy to decouple the private headers.
Sean Morrison
2005-09-29
Lee Butler
2006-02-07
Sean Morrison
2007-12-17
Logged In: YES
user_id=785737
Originator: NO
This issue was taken care of earlier in the year, and should now be fully resolved. Instead of maintaining a full Tk modification (in libtk or in our libtclcad) for a new Bezier Canvas widget type, the code that used the widget was modified to use the 'raw' bezier curve support available to all Canvas widgets. By using the 'raw' widget with some additional direct spline solving of our own, I was able to replicate the behavior of the TkCanvasBezier entirely in Tcl making it possible to remove tkCanvBezier.c and it's associated dependencies with the Tk internals.
The solution arrived at is considerably slower than our canvas, but much easier to maintain given we don't have to try and merge the change back into the Tk core. Thanks for the report and hopefully this will help with the Gentoo portage integration (as well as with other package management systems).
Sean Morrison
2007-12-17 | http://sourceforge.net/p/brlcad/feature-requests/20/ | CC-MAIN-2014-35 | refinedweb | 424 | 66.54 |
strait 0.5.1
Simple Traits for Python
A simple implementation of traits for Python
Abstract
I provide a simple implementation of traits as units of composable behavior for Python. I argue that traits are better than multiple inheritance. Implementing frameworks based on traits is left as an exercise for the reader.
Motivation
Multiple inheritance is a hotly debated topic. The supporters of multiple inheritance claim that it makes code shorter and easier to read, whereas the opposers claim that is makes code more coupled and more difficult to understand. I have spent some time in the past facing the intricacies of multiple inheritance in Python and I was one of its supporters once; however, since then I have worked with frameworks making large use of multiple inheritance (I mean Zope 2) and nowadays I am in the number of the people who oppose it. Therefore I am interested in alternatives.
In recent years, the approach of traits has gained some traction in a few circles and I have decided to write a library to implement traits in Python, for experimentation purposes. The library is meant for framework builders, people who are thinking about writing a framework based on multiple inheritance - typically via the common mixin approach - but are not convinced that this is the best solution and would like to try an alternative. This library is also for authors of mixin-bases frameworks which are unsatisfied and would like to convert their framework to traits.
Are traits a better solution than multiple inheritance and mixins? In theory I think so, otherwise I would not have written this library, but in practice (as always) things may be different. It may well be that using traits or using mixins does not make a big difference in practice and that the change of paradigm is not worth the effort; or the opposite may be true. The only way to know is to try, to build software based on traits and to see how it scale in the large. In the small, more or less any approach works fine: it is only by programming in the large that you can see the differences.
This is the reason why I am releasing this library with a liberal licence, so that people can try it out and see how it works. The library is meant to play well (when possible) with pre-existing frameworks. As an example, I will show here how you could rewrite Tkinter classes to use traits instead of mixins. Of course, I am not advocating rewriting Tkinter: it would be silly and pointless; but it may have sense (or not) to rewrite your own framework using traits, perhaps a framework which is used in house but has not been released yet.
I am not the only one to have implemented traits for Python; after finishing my implementation I made a little research and discovered a few implementations. Then I have also discovered the Enthought Traits framework, which however seems to use the name to intend something completely different (i.e. a sort of type checking). My implementation has no dependencies, is short and I am committed to keep it short even in the future, according to the principle of less is more.
There is also an hidden agenda behind this module: to popularize some advanced features of the Python object model which are little known. The strait module is actually a tribute to the metaprogramming capabilities of Python: such features are usually associated to languages with a strong academic tradition - Smalltalk, Scheme, Lisp - but actually the Python object model is no less powerful. For instance, changing the object system from a multiple inheritance one to a trait-based one, can be done within the fundamental object system. The reason is that the features that Guido used to implement the object system (special method hooks, descriptors, metaclasses) are there, available to the end user to build her own object system.
Such features are usually little used in the Python community, for many good reasons: most people feel that the object system is good enough and that there is no reason to change it; moreover there is a strong opposition to change the language, because Python programmers believe in uniformity and in using common idioms; finally, it is difficult for an application programmer to find a domain where these features are useful. An exception is the domain of the Object Relation Mappers, whereas the Python language is often stretched to mimic the SQL language, a famous example of this tendency being SQLAlchemy). Still, I have never seen a perversion of the object model as big as the one implemented in the strait module, so I wanted to be the first one to perform that kind of abuse ;)
What are traits?
The word traits has many meanings; I will refer to it in the sense of the paper Traits - Composable Units of Behavior which implements them in Squeak/Smalltalk. The paper appeared in 2003, but most of the ideas underlying traits have been floating around for at least 30 years. There is also a trait implementation for PLT Scheme which is somewhat close in spirit (if not in practice) to what I am advocating here. The library you are reading about is by no means intended as a porting of the Smalltalk library: I am just stealing some of the ideas from that paper to implement a Pythonic alternative to mixins which, for lack of a better name, I have decided to call traits. I feel no obligation whatsoever to be consistent with the Smalltalk library. In doing so, I am following a long tradition, since a lot of languages use the name traits to mean something completely different from the Smalltalk meaning. For instance the languages Fortress and Scala use the name trait but with a different meaning (Scala traits are very close to multiple inheritance). For me a trait is a bunch of methods and attributes with the following properties:
- the methods/attributes in a trait belong logically together;
- if a trait enhances a class, then all subclasses are enhanced too;
- if a trait has methods in common with the class, then the methods defined in the class have the precedence;
- the trait order is not important, i.e. enhancing a class first with trait T1 and then with trait T2 or viceversa is the same;
- if traits T1 and T2 have names in common, enhancing a class both with T1 and T2 raises an error;
- if a trait has methods in common with the base class, then the trait methods have the precedence;
- a class can be seen both as a composition of traits and as an homogeneous entity.
Properties from 4 to 7 are the distinguishing properties of traits with respect to multiple inheritance and mixins. In particular, because of 4 and 5, all the complications with the Method Resolution Order disappear and the overriding is never implicit. Property 6 is mostly unusual: typically in Python the base class has the precedence over mixin classes. Property 7 should be intended in the sense that a trait implementation must provide introspection facilities to make seemless the transition between classes viewed as atomic entities and as composed entities.
A hands-on example
Let me begin by showing how you could rewrite a Tkinter class to use traits instead of mixins. Consider the Tkinter.Widget class, which is derived by the base class BaseWidget and the mixin classes Tkinter.Grid, Tkinter.Pack and Tkinter.Place: I want to rewrite it by using traits. The strait module provides a factory function named include that does the job. It is enough to replace the multiple inheritance syntax:
class Widget(BaseWidget, Grid, Pack, Place): pass
with the following syntax:
class Widget(BaseWidget): __metaclass__ = include(Pack, Place, Grid)
I said that the conversion from mixins to traits was easy: but actually I lied since if you try to execute the code I just wrote you will get an OverridingError:
>>> from Tkinter import * >>> class Widget(BaseWidget): ... __metaclass__ = include(Pack, Place, Grid) Traceback (most recent call last): ... OverridingError: Pack overrides names in Place: {info, config, configure, slaves, forget}
The reason for the error is clear: both Pack and Place provide methods called {info, config, configure, slaves, forget} and the traits implementation cannot figure out which ones to use. This is a feature, since it forces you to be explicit. In this case, if we want to be consistent with multiple inheritance rules, we want the methods coming from the first class (i.e. Pack) to take precedence. That can be implemented by including directly those methods in the class namespace and relying on rule 3:
class TOSWidget(BaseWidget): __metaclass__ = include(Pack, Place, Grid) info = Pack.info.im_func config = Pack.config.im_func configure = Pack.configure.im_func slaves = Pack.slaves.im_func forget = Pack.forget.im_func propagate = Pack.propagate.im_func
Notice that we had to specify the propagate method too, since it is a common method between Pack and Grid.
You can check that the TOSWidget class works, for instance by defining a label widget as follows (remember that TOSWidget inherits its signature from BaseWidget):
>>> label = TOSWidget(master=None, widgetName='label', ... cnf=dict(text="hello"))
You may visualize the widget by calling the .pack method:
>>> label.pack()
This should open a small window with the message "hello" inside it.
A few caveats and warnings
First of all, let me notice that, in spite of apparency, include does not return a metaclass. Insted, it returns a class factory function with signature name, bases, dic:
>>> print include(Pack, Place, Grid) <function include_Pack_Place_Grid at 0x...>
This function will create the class by using a suitable metaclass:
>>> type(TOSWidget) <class 'strait.MetaTOS'>
In simple cases the metaclass will be MetaTOS, the main class of the trait object system, but in general it can be a different one not inheriting from MetaTOS. The exact rules followed by include to determine the right class will be discussed later.
Here I want to remark that according to rule 6 traits take the precedence over the base class attributes. Consider the following example:
>>> class Base(object): ... a = 1>>> class ATrait(object): ... a = 2>>> class Class(Base): ... __metaclass__ = include(ATrait)>>> Class.a 2
In regular multiple inheritance you would do the same by including ATrait before Base, i.e.
>>> type('Class', (ATrait, Base), {}).a 2
You should take care to not mix-up the order, otherwise you will get a different result:
>>> type('Class', (Base, ATrait), {}).a 1
Therefore replacing mixin classes with traits can break your code if you rely on the order. Be careful!
The Trait Object System
The goal of the strait module it to modify the standard Python object model, turning it into a Trait Object System (TOS for short): TOS classes behave differently from regular classes. In particular TOS classes do not support multiple inheritance. If you try to multiple inherit from a TOS class and another class you will get a TypeError:
>>> class M: ... "An empty class" ... >>> class Widget2(TOSWidget, M): ... pass ... Traceback (most recent call last): ... TypeError: Multiple inheritance of bases (<class '__main__.TOSWidget'>, <class __main__.M at 0x...>) is forbidden for TOS classes
This behavior is intentional: with this restriction you can simulate an ideal world in which Python did not support multiple inheritance. Suppose you want to claim that supporting multiple inheritance was a mistake and that Python would have been better off without it (which is the position I tend to have nowadays): how can you prove that claim? Simply by writing code that does not use multiple inheritance and it is clearer and more mantainable that code using multiple inheritance.
I am releasing this trait implementation hoping you will help me to prove (or possibly disprove) the point. You may see traits as a restricted form of multiple inheritance without name clashes, without the complications of the method resolution, and with a limited cooperation between methods. Moreover the present implementation is slightly less dynamic than usual inheritance.
A nice property of inheritance is that if you have a class C inheriting from class M and you change a method in M at runtime, after C has been created and instantiated, automagically all instances of C gets the new version of the method, which is pretty useful for debugging purposes. This feature is lost in the trait implementation provided here. Actually, in a previous version, my trait implementation was fully dynamic and if you changed the mixin the instances would be changed too. However, I never used that feature in practice, and it was complicating the implementation and slowing doing the attribute access, so I removed it.
I think these are acceptable restrictions since they give back in return many advantages in terms of simplicity: for instance, super becomes trivial, since each class has a single superclass, whereas we all know that the current super in Python is very far from trivial.
The magic of include
Since the fundamental properties of TOS classes must be preserved under inheritance (i.e. the son of a TOS class must be a TOS class) the implementation necessarily requires metaclasses. As of now, the only fundamental property of a TOS class is that multiple inheritance is forbidden, so usually (but not always) TOS classes are instances of the metaclass MetaTOS which implements a single inheritance check. If you build your TOS hierarchy starting from pre-existing classes, you should be aware of how include determines the metaclass: if your base class was an old-style class or a plain new style class (i.e. a direct instance of the type metaclass), them include will change it to MetaTOS:
>>> type(TOSWidget) <class 'strait.MetaTOS'>
In general you may need to build your Trait Based Framework on top of pre-existing classes possessing a nontrivial metaclass, for instance Zope classes; in that case include is smart enough to figure out the right metaclass to use. Here is an example:
class AddGreetings(type): "A metaclass adding a 'greetings' attribute for exemplification purposes" def __new__(mcl, name, bases, dic): dic['greetings'] = 'hello!' return super(AddGreetings, mcl).__new__(mcl, name, bases, dic)
class WidgetWithGreetings(BaseWidget, object): __metaclass__ = AddGreetings
class PackWidget(WidgetWithGreetings): __metaclass__ = include(Pack)
include automatically generates the right metaclass as a subclass of AddGreetings:
>>> print type(PackWidget).__mro__ (<class 'strait._TOSAddGreetings'>, <class '__main__.AddGreetings'>, <type 'type'>, <type 'object'>)
Incidentally, since TOS classes are guaranteed to be in a straight hierarchy, include is able to neatly avoid the dreaded metaclass conflict.
The important point is that _TOSAddGreetings provides the same features of MetaTOS, even if it is not a subclass of it; on the other hand, _TOSMetaAddGreetings is a subclass of AddGreetings which calls AddGreetings.__new__, so the features provided by AddGreetings are not lost either; in this example you may check that the greetings attribute is correctly set:
>>> PackWidget.greetings 'hello!'
The name of the generated metaclass is automatically generated from the name of the base metaclass; moreover, a register of the generated metaclasses is kept, so that metaclasses are reused if possible. If you want to understand the details, you are welcome to give a look at the implementation, which is pretty short and simple, compared to the general recipe to remove the metaclass conflict in a true multiple inheritance situation.
Cooperative traits
At first sight, the Trait Object System lacks an important feature of multiple inheritance as implemented in the ordinary Python object system, i.e. cooperative methods. Consider for instance the following classes:
class LogOnInitMI(object): def __init__(self, *args, **kw): print 'Initializing %s' % self super(LogOnInitMI, self).__init__(*args, **kw)
class RegisterOnInitMI(object): register = [] def __init__(self, *args, **kw): print 'Registering %s' % self self.register.append(self) super(RegisterOnInitMI, self).__init__(*args, **kw)
In multiple inheritance LogOnInitMI can be mixed with other classes, giving to the children the ability to log on initialization; the same is true for RegisterOnInitMI, which gives to its children the ability to populate a registry of instances. The important feature of the multiple inheritance system is that LogOnInitMI and RegisterOnInitMI play well together: if you inherits from both of them, you get both features:
class C_MI(LogOnInitMI, RegisterOnInitMI): pass
>>> c = C_MI() Initializing <__main__.C_MI object at 0x...> Registering <__main__.C_MI object at 0x...>
You cannot get the same behaviour if you use the trait object system naively:
>>> class C_MI(object): ... __metaclass__ = include(LogOnInitMI, RegisterOnInitMI) ... Traceback (most recent call last): ... OverridingError: LogOnInitMI overrides names in RegisterOnInitMI: {__init__}
This is a feature, of course, since the trait object system is designed to avoid name clashes. However, the situation is worse than that: even if you try to mixin a single class you will run into trouble
>>> class C_MI(object): ... __metaclass__ = include(LogOnInitMI)
>>> c = C_MI() Traceback (most recent call last): ... TypeError: super(type, obj): obj must be an instance or subtype of type
What's happening here? The situation is clear if you notice that the super call is actually a call of kind super(LogOnInitMI, c) where c is an instance of C, which is not a subclass of LogOnInitMI. That explains the error message, but does not explain how to solve the issue. It seems that method cooperation using super is impossible for TOS classes.
Actually this is not the case: single inheritance cooperation is possible and it is enough as we will show in a minute. But for the moment let me notice that I do not think that cooperative methods are necessarily a good idea. They are fragile and cause all of your classes to be strictly coupled. My usual advice if that you should not use a design based on method cooperation if you can avoid it. Having said that, there are situations (very rare) where you really want method cooperation. The strait module provide support for those situations via the __super attribute.
Let me explain how it works. When you mix-in a trait T into a class C, include adds an attribute _T__super to C, which is a super object that dispatches to the attributes of the superclass of C. The important thing to keep in mind is that there is a well defined superclass, since the trait object system uses single inheritance only. Since the hierarchy is straight, the cooperation mechanism is much simpler to understand than in multiple inheritance. Here is an example. First of all, let me rewrite LogOnInit and RegisterOnInit to use __super instead of super:
class LogOnInit(object): def __init__(self, *args, **kw): print 'Initializing %s' % self self.__super.__init__(*args, **kw)
class RegisterOnInit(object): register = [] def __init__(self, *args, **kw): print 'Registering %s' % self self.register.append(self) self.__super.__init__(*args, **kw)
Now you can include the RegisterOnInit functionality as follows:
class C_Register(object): __metaclass__ = include(RegisterOnInit)
>>> _ = C_Register() Registering <__main__.C_Register object at 0x...>
Everything works because include has added the right attribute:
>>> C_Register._RegisterOnInit__super <super: <class 'C_Register'>, <C_Register object>>
Moreover, you can also include the LogOnInit functionality:
class C_LogAndRegister(C_Register): __metaclass__ = include(LogOnInit)
>>> _ = C_LogAndRegister() Initializing <__main__.C_LogAndRegister object at 0x...> Registering <__main__.C_LogAndRegister object at 0x...>
As you see, the cooperation mechanism works just fine. I will call cooperative trait a class intended for inclusion in other classes and making use of the __super trick. A class using the regular super directly cannot be used as a cooperative trait, since it must satisfy inheritance constraints, nevertherless it is easy enough to convert it to use __super. After all, the strait module is intended for framework writers, so it assumes you can change the source code of your framework if you want. On the other hand, if are trying to re-use a mixin class coming from a third party framework and using super, you will have to rewrite the parts of it. That is unfortunate, but I cannot perform miracles.
You may see __super as a clever hack to use super indirectly. Notice that since the hierarchy is straight, there is room for optimization at the core language level. The __super trick as implemented in pure Python leverages on the name mangling mechanism, and follows closely the famous autosuper recipe, with some improvement. Anyway, if you have two traits with the same name, you will run into trouble. To solve this and to have a nicer syntax, one would need more support from the language, but the __super trick is good enough for a prototype and has the serious advantage of working right now for current Python.
Cooperation at the metaclass level
In my experience, the cases where you need method cooperation in multiple inheritance situations are exceedingly rare, unless you are a language implementor or a designer of very advanced frameworks. In such a realm you have a need for cooperative methods; it is not a pressing need, in the sense that you can always live without them, but they are a nice feature to have if you care about elegance and extensibility. For instance, as P. J. Eby points it out in this thread on python-dev:
A major use case for co-operative super() is in the implementation of metaclasses. The __init__ and __new__ signatures are fixed, multiple inheritance is possible, and co-operativeness is a must (as the base class methods must be called). I'm hard-pressed to think of a metaclass constructor or initializer that I've written in the last half-decade or more where I didn't use super() to make it co-operative. That, IMO, is a compelling use case even if there were not a single other example of the need for super.
I have always felt the same. So, even if I have been unhappy with multiple inheritance for years, I could never dismiss it entirely because of the concern for this use case. It is only after discovering cooperative traits that I felt the approach powerful enough to replace multiple inheritance without losing anything I cared about.
Multiple inheritance at the metaclass level comes out here and again when you are wearing the language implementor hat. For instance, if you try to implement an object system based on traits, you will have to do so at the metaclass level and there method cooperation has its place. In particular, if you look at the source code of the strait module - which is around 100 lines, a tribute to the power of Python - you will see that the MetaTOS metaclass is implemented as a cooperative trait, so that it can be mixed-in with other metaclasses, in the case you are interoperating with a framework with a non-trivial meta object protocol. This is performed internally by include.
Metaclass cooperation is there to make the life of the users easier. Suppose one of you, users of the strait module, wants to enhance the include mechanism using another a metaclass coming for a third party framework and therefore not inheriting from MetaTOS:
class ThirdPartyMeta(type): def __new__(mcl, name, bases, dic): print 'Using ThirdPartyMeta to create %s' % name return super(ThirdPartyMeta, mcl).__new__(mcl, name, bases, dic)
The way to go is simple. First, you should mix-in MetaTOS in the third party class:
class EnhancedMetaTOS(ThirdPartyMeta): __metaclass__ = include(MetaTOS)
Then, you can define your own enhanced include as follows:
def enhanced_include(*traits): return include(MetaTOS=EnhancedMetaTOS, *traits)
In simple cases using directly ThirdPartyMeta may work, but I strongly recommend to replace the call to super with __super even in ThirdPartyMeta to make the cooperation robust.
Discussion of some design decisions and future work
The decision of having TOS classes which are not instances of MetaTOS required some thought. That was my original idea in version 0.1 of strait; however in version 0.2 I wanted to see what would happen if I made all TOS classes instances of MetaTOS. That implied that if your original class had a nontrivial metaclass, then the TOS class had to inherit both from the original metaclass and MetaTOS, i.e. multiple inheritance and cooperation of methods was required at the metaclass level.
I did not like it, since I was arguing that you can do everything without multiple inheritance; moreover using multiple inheritance at the metaclass level meant that one had to solve the metaclass conflict in a general way. I did so, by using my own cookbook recipe, and all my tests passed.
Neverthess, at the end, in version 0.3 I decided to go back to the original design. The metaclass conflict recipe is too complex, and I see it as a code smell - if the implementation is hard to explain, it's a bad idea - just another indication that multiple inheritance is bad. In the original design it is possible to add the features of MetaTOS to the original metaclass by subclassing it with single inheritance and thus avoiding the conflict.
The price to pay is that now a TOS class is no more an instance of MetaTOS, but this is a non-issue: the important thing is that TOS classes perform the dispatch on their traits as MetaTOS would dictate. Moreover, starting from Python 2.6, thanks to Abstract Base Classes, you may satisfy the isinstance(obj, cls) check even if obj is not an instance of cls, by registering a suitable base class (similarly for issubclass). In our situation, that means that it is enough to register MetaTOS as base class of the original metaclass.
Version 0.4 was much more complex that the current version (still short, it was under 300 lines of pure Python), since it had the more ambitious goal of solving the namespace pollution problem. I have discussed the issue elsewhere: if you keep injecting methods into a class (both directly or via inheritance) you may end up having hundreds of methods flattened at the same level.
A picture is worth a thousand words, so have a look at the PloneSite hierarchy if you want to understand the horror I wanted to avoid with traits (the picture shows the number of nonspecial attributes defined per class in square brackets): in the Plone Site hierarchy there are 38 classes, 88 overridden names, 42 special names, 648 non-special attributes and methods. It is a nighmare.
Originally I wanted to prevent this kind of abuse, but that made my implementation more complex, whereas my main goal was to keep the implementation simple. As a consequence this version assume the prosaic attitude that you cannot stop programmers from bad design anyway, so if they want to go the Zope way they can.
In previous versions I did provide some syntactic sugar for include so that it was possible to write something like the following (using a trick discussed here):
class C(Base): include(Trait1, Trait2)
In version 0.5 I decided to remove this feature. Now the plumbing (i.e. the __metaclass__ hook) is exposed to the user, some magic has been removed and it is easier for the user to write her own include factory if she wants to.
Where to go from here? For the moment, I have no clear idea about the future. The Smalltalk implementation of traits provides method renaming out of the box. The Python implementation has no facilities in this sense. In the future I may decide to give some support for renaming, or I may not. At the present you can just rename your methods by hand. Also, in the future I may decide to add some kind of adaptation mechanism or I may not: after all the primary goal of this implementation is semplicity and I don't want to clutter it with too many features.
I am very open to feedback and criticism: I am releasing this module with the hope that it will be used in real life situations to gather experience with the traits concept. Clearly I am not proposing that Python should remove multiple inheritance in favor of traits: considerations of backward compatibily would kill the proposal right from the start. I am just looking for a few adventurous volunteers wanting to experiment with traits; if the experiment goes well, and people start using (multiple) inheritance less than they do now, I will be happy.
Trivia
strait officially stands for Simple Trait object system, however the name is also a pun on the world "straight", since the difference between multiple inheritance hierarchies and TOS hierarchies is that TOS hierarchies are straight. Moreover, nobody will stop you from thinking that the s also stands for Simionato ;)
- Author: Michele Simionato <michele simionato at gmail com>
- License: BSD License
- Platform: any
- Categories
- Package Index Owner: micheles
- DOAP record: strait-0.5.1.xml | http://pypi.python.org/pypi/strait/0.5.1 | crawl-002 | refinedweb | 4,759 | 50.26 |
We
PingBack from
Hi Mike;
I think the videos are great to get an understanding for all the great work your team has done.
If your team is in charge of the Silverlight Client module, I would vote for creating a video that takes you through all the essential steps to create the server side service and then create the client side and have them work with each other.
Another great additions to your videos would be the "Hands-on Lab"
This is very valuable to get the ball rolling.
We are very grateful of your work!
Thanks!
..Ben
Thanks for the suggestion. We’ll work on getting a silverlight video out. I’ll create another blog post when we have something.
I would like to see client-side example consuming some popular public REST-based services such as GData (Google Calendar would be perfect), Digg, etc.
Specifically, how can we use the System.Data.Services.Client namespace to access Flikr, Digg, GData, etc, in WinForms/WPF clients. Including the authorization steps, and, creating client-side objects WITHOUT the benefits of running a tool over the *.svc metadata. To be clear, I am talking about consuming someone else’s data service, not one that I create myself.
Bill: the challenge is that these services are in many cases completely different, so writing a client for all of them would effectively mean writing more or less a client per service and then put some wrapper on top. Some, like GData, are actually more uniform and based on AtomPub, but Flikr, Digg, etc. have their own APIs (and feeds, but those are more for consumption by feed readers than a generic programming interface).
-pablo
I've decided to collect interesting posts on various topics and when reach critical mass publish | https://blogs.msdn.microsoft.com/odatateam/2008/08/15/how-do-i-videos/ | CC-MAIN-2017-13 | refinedweb | 298 | 70.23 |
7.7.
7.7. 7.7.1 The main difference between ResNet (left) and DenseNet (right) in cross-layer connections: use of addition and use of concatenation.¶.
7.7.2. Dense Blocks¶
DenseNet uses the modified “batch normalization, activation, and
convolution” architecture of ResNet (see the exercise in
Section 7.6). First, we implement this architecture in
the
conv_block function.
import sys sys.path.insert(0, '..') import d2) # Concatenate the input and output of each block on the channel # dimension X = nd.concat(X, Y, dim=1).
blk = DenseBlock(2, 10) blk.initialize() X = nd.random.uniform(shape=(4, 3, 8, 8)) Y = blk(X) Y.shape
(4, 23, 8, 8)
7.7.
blk = transition_block(10) blk.initialize() blk(Y).shape
(4, 10, 4, 4)
7.7.4. DenseNet Model¶
Next, we will construct a DenseNet model. DenseNet first uses the same single convolutional layer and maximum pooling layer as ResNet..
# Num_channels: the current number of channels num_channels, growth_rate = 64, 32: num_channels //= 2 net.add(transition_block(num_channels))
Similar to ResNet, a global pooling layer and fully connected layer are connected at the end to produce the output.
net.add(nn.BatchNorm(), nn.Activation('relu'), nn.GlobalAvgPool2D(), nn.Dense(10))
7.7.5. Data Acquisition and Training¶
Since we are using a deeper network here, in this section, we will reduce the input height and width from 224 to 96 to simplify the computation.
lr, num_epochs, batch_size, ctx = 0.1, 5, 256, d2l.try_gpu() net.initialize.5404, train acc 0.809, test acc 0.869, time 14.6 sec epoch 2, loss 0.3104, train acc 0.886, test acc 0.886, time 13.0 sec epoch 3, loss 0.2632, train acc 0.904, test acc 0.895, time 13.0 sec epoch 4, loss 0.2362, train acc 0.914, test acc 0.905, time 13.0 sec epoch 5, loss 0.2110, train acc 0.923, test acc 0.810, time 13.0 sec
7.7.
7.7.7. Exercises.
7.7.8. References¶
[1] Huang, G., Liu, Z., Weinberger, K. Q., & van der Maaten, L. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (Vol. 1, No. 2). | https://www.d2l.ai/chapter_convolutional-modern/densenet.html | CC-MAIN-2019-22 | refinedweb | 369 | 72.83 |
A pseudorandom number generator (PRNG) is a deterministic algorithm capable of generating sequences of numbers that approximate the properties of random numbers. Each sequence is completely determined by the initial state of the PRNG and the algorithm for changing the state. Most PRNGs make it possible to set the initial state, also called the seed state. Setting the initial state is called seeding the PRNG.
Calling a PRNG in the same initial state, either without seeding it explicitly or by seeding it with the same value, results in generating the same sequence of random numbers in different runs of the program. Consider a PRNG function that is seeded with some initial seed value and is consecutively called to produce a sequence of random numbers,
S. If the PRNG is subsequently seeded with the same initial seed value, then it will generate the same sequence
S.
As a result, after the first run of an improperly seeded PRNG, an attacker can predict the sequence of random numbers that will be generated in the future runs. Improperly seeding or failing to seed the PRNG can lead to vulnerabilities, especially in security protocols.
The solution is to ensure that the PRNG is always properly seeded. A properly seeded PRNG will generate a different sequence of random numbers each time it is run.
Not all random number generators can be seeded. True random number generators that rely on hardware to produce completely unpredictable results do not need to be and cannot be seeded. Some high-quality PRNGs, such as the
/dev/random device on some UNIX systems, also cannot be seeded. This rule applies only to algorithmic pseudorandom number generators that can be seeded.
Noncompliant Code Example (POSIX)
This noncompliant code example generates a sequence of 10 pseudorandom numbers using the
random() function. When
random() is not seeded, it behaves like
rand(), producing the same sequence of random numbers each time any program that uses it is run.
#include <stdio.h> #include <stdlib.h> void func(void) { for (unsigned int i = 0; i < 10; ++i) { /* Always generates the same sequence */ printf("%ld, ", random()); } }
The output is as follows:
1st run: 1804289383, 846930886, 1681692777, 1714636915, 1957747793, 424238335, 719885386, 1649760492, 596516649, 1189641421, 2nd run: 1804289383, 846930886, 1681692777, 1714636915, 1957747793, 424238335, 719885386, 1649760492, 596516649, 1189641421, ... nth run: 1804289383, 846930886, 1681692777, 1714636915, 1957747793, 424238335, 719885386, 1649760492, 596516649, 1189641421,
Compliant Solution (POSIX)
Call
srandom() before invoking
random() to seed the random sequence generated by
random(). This compliant solution produces different random number sequences each time the function is called, depending on the resolution of the system clock:
#include <stdio.h> #include <stdlib.h> #include <time.h> void func(void) { struct timespec ts; if (timespec_get(&ts, TIME_UTC) == 0) { /* Handle error */ } else { srandom(ts.tv_nsec ^ ts.tv_sec); for (unsigned int i = 0; i < 10; ++i) { /* Generates different sequences at different runs */ printf("%ld, ", random()); } } }
The output is as follows:
1st run: 198682410, 2076262355, 910374899, 428635843, 2084827500, 1558698420, 4459146, 733695321, 2044378618, 1649046624, 2nd run: 1127071427, 252907983, 1358798372, 2101446505, 1514711759, 229790273, 954268511, 1116446419, 368192457, 1297948050, 3rd run: 2052868434, 1645663878, 731874735, 1624006793, 938447420, 1046134947, 1901136083, 418123888, 836428296, 2017467418,
This may not be sufficiently random for concurrent execution, which may lead to correlated generated series in different threads. Depending on the application and the desired level of security, a programmer may choose alternative ways to seed PRNGs. In general, hardware is more capable than software of generating real random numbers (for example, by sampling the thermal noise of a diode).
Compliant Solution (Windows)
The
BCryptGenRandom() function does not run the risk of not being properly seeded because its arguments serve as seeders:
#include <stdio.h> #include <Windows.h> #include <Bcrypt.h> #include <Ntstatus.h> #include <Wincrypt.h> void func(void) { BCRYPT_ALG_HANDLE hAlgorithm = NULL; long rand_buf; PUCHAR pbBuffer = (PUCHAR) &rand_buf; ULONG cbBuffer = sizeof(rand_buf); ULONG dwFlags = BCRYPT_USE_SYSTEM_PREFERRED_RNG; NTSTATUS status; for (unsigned int i = 0; i < 10; ++i) { status = BCryptGenRandom(hAlgorithm, pbBuffer, cbBuffer, dwFlags); if (status == STATUS_SUCCESS) { printf("%ld, ", rand_buf); } else { /* Handle Error */ } } }
The output is as follows:
1st run: -683378946, 1957231690, 1933176011, -1745403355, -883473417, 882992405, 169629816, 1824800038, 899851668, 1702784647, 2nd run: -58750553, -1921870721, -1973269161, 1512649964, -673518452, 234003619, -1622633366, 1312389688, -2125631172, 2067680022, 3rd run: -189899579, 1220698973, 752205360, -1826365616, 79310867, 1430950090, -283206168, -941773185, 129633665, 543448789,
Risk Assessment
Automated Detection
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
Related Guidelines
Key here (explains table format and definitions)
CERT-CWE Mapping Notes
Key here for mapping notes
CWE-327 and MSC32-C
- Intersection( MSC30-C, MSC32-C) = Ø
- MSC32-C says to properly seed pseudorandom number generators. For example, if you call rand(), make sure to seed it properly by calling srand() first. So far, we haven’t found any calls to rand().
- Failure to seed a PRNG causes it to produce reproducible (hence insecure) series of random numbers.
- CWE-327 = Union( MSC32-C, list) where list =
- Invocation of broken/risky crypto algorithms that are not properly seeded
CWE-330 and MSC32-C
Independent( MSC30-C, MSC32-C, CON33-C)
CWE-330 = Union( MSC30-C, MSC32-C, CON33-C, list) where list = other improper use or creation of random values. (EG the would qualify)
MSC30-C, MSC32-C and CON33-C are independent, they have no intersections. They each specify distinct errors regarding PRNGs.
27 Comments
Robert Seacord
How is this different from MSC30-C. Do not use the rand() function for generating pseudorandom numbers
Elpiniki Tsakalaki
MSC30-C addresses the problem that numbers generated by
rand()have a comparatively short cycle, meaning that the numbers may be predictable. Suppose we have the following code:
After some iterations (I believe at about RAND_MAX/2 - according to birthday paradox), we will start having collisions. Right?
MSC18-C is quite different from MSC30-C in the following sense:
MSC30-C: Calling rand() in a large for loop will sooner or later result in generating the same random numbers.
MSC18-C: Calling rand() to create a sequence of random numbers will always result in the same sequence generated at different runs of the program. For example, suppose we want to create a sequence of 10 random numbers and thus we write the following code:
Running our program will produce let's say: 41, 18467, 6334, 265000, 19169, 15724, 11478, 29358, 26962, 24464 (actually this is a real sequence generated when I run the program)
Running the same program a second time will produce the exact same sequence. More generally, any subsequent runs of the program will generate the same sequence.
If program changes to:
then different runs of the program will produce different sequences of random numbers.
The same thing holds for random() for POSIX, too! And the solution is also the same. Use of srandom() seeds random() to produce different sequences at different runs of the program. I have not written this yet, but I will do so soon. Again:
MSC30-C: Mentions that random() generates random numbers generated with a bigger cycle.
MSC18-C: Mentions that when come to sequences of random numbers generated in different runs of the program, then random() has the same behavior with rand() and should be properly seeded.
I will also check CryptGenRandom() for Windows.
From a security perspective, MSC18-C is different in the following sense:
MSC30-C: A malicious user should have to wait a quite reasonably amount of time before starting predicting (with some probability) patterns of generated random numbers.
MSC18-C: After the first run of the program, a malicious user will know the sequence of random numbers to be generated in any subsequent runs.
If our program is a game where the player has to find hidden treasures, then the next time a user plays, (s)he will know where the treasures are and (s)he can cheat!
In a worse case, suppose that Alice and Bob want to communicate 10 times per day. Suppose also, that they use asymmetric cryptography (Diffie-Hellman, or El-Gamal, or any other protocol where parties pick some random numbers). Suppose Bob and Alice run a program to generate a sequence of 10 random numbers. Each number produced will be used in the corresponding communication in between them, i.e. 1st number will be used for their first communication and then dropped, 2nd number for their second communication and then dropped, etc. The problem is that if Bob and Alice run the same program the next day to produce another sequence of random numbers, then the sequence generated will be the same! On the other hand, if Alice and Bob use srand() to seed rand(), then different sequences will be produced.
Both MSC30-C and MSC18-C refere to the problem of randomness, but from a different point of view and by transferring the time window until collision takes place. Vulnerable time window for MSC18-C will be just the next run!!!
Taking all the above into account, I am starting thinking if this should be a rule instead of just a recommendation. What do you think about it?
David Svoboda
To be honest, this might make a good rule, rather than a recommendation. It is universally applicable (srand() can seed either randomely or deterministically), forgetting to seed will yield a vul, and it is not that difficult to enforce automatically (and it is a snap to check with dynamic analysis).
There may still be a good reason this should be a recommendation, not a rule, but I can't think of it, and IMHO the best way to find it is to make this a rule and find someone who knows more about random numbers than us to criticize it
So go ahead & make it a rule, not a rec.
David Svoboda
This rule is not quite valid, because MSC30-C depreates the use of rand(), recommending random() for POSIX and CryptGenRandom() for Windows. So you shouldn't be advocating proper usage of rand().
However, you can make valid recommendation out of this by generalizing it to something like "ensure your random number generator is properly seeded". MSC30-C doesn't address seeding (though the code examples do seed properly).
That done, there's just a few other things you'll need:
Elpiniki Tsakalaki
It was not my intention to advocate proper usage for deprecated rand(). My objective was to address the problem of generating different sequences of random numbers. The same problem holds for rand(), random() and I believe this will also be the case for CryptGenRandom(). I will check and update MSC18-C soon.
I also think that MSC18-C should be changed in something like "ensure your random number generator is properly seeded". Shall I do this?
I would also like to ask how the Risk Assessment table is going to be filled. In adition, what about Automated Detection and Related Vulnerabilities? Any hints?
David Svoboda
wrt "ensure your RNS is properly seeded", yes do that. Make it a rule, not a rec, as outlined in my earlier comment.
As to your other questions, consult the wiki pages in the Introduction section; they specifically answer your question (as well as explain the differences between rules and recommendations.)
Dhruv Mohindra
Some caveats that you can try to address -
David Svoboda
I'm not sure if seeding a RNG with the current time is theoretically vulnerable to an attacker modifying the time, since, due to the interval between time modification and RNG seeding, an attacker can't use time modification to pass a known value to a RNG seed.
Still, if one is that concerned about randomization, one would not be using rand()/srand().
Other points:
Dhruv Mohindra
Since 'time' is predictable, and changes the sequence only after every second, it might not be hard to supply a stale time so that the same sequence is regenerated. (srand generates the same sequence when the seed is the same. An attacker can account for the expected network delay and set the time so that it matches a previous run of the protocol). But yes, the attacker would need to know the exact time when the protocol will be rerun. (which is possibly not hard, consider smartcards where the attacker can see the person swiping). Also consider, srand(time) will generate the same sequence everyday at that 'time'. I suspect it is fine to use rand() for casual use as you pointed out.
The other issue was that suppose several processes are initiated at the same time. They will all end up using the same 'random' sequence due to the dependence on system time. Perhaps for *nix systems, one should have srand with (current time concatenated with process id) as a parameter to distinguish these?
Elpiniki Tsakalaki
time() returns the current time counted as seconds since 01/01/1970. So, the srand(time(NULL)) will not generate the same sequence everyday at that 'time'.
Moreover, I have changed the rule so that it mentions that time() is only used as an example. Seed to be used depends on the security level each application requires. Also, for real random numbers, hardware is better.
Robert Seacord
I'm not sure how the compliant solution can make use of the
rand()function given: MSC30-C. Do not use the rand() function for generating pseudorandom numbers
Elpiniki Tsakalaki
I have completed the rule. The rule does address the seeding for all three functions, i.e. rand(), random() and CryptGenRandom(), mentioned in MSC30-C. Do not use the rand() function for generating pseudorandom numbers for completeness. I have the feeling that as the rule is written now, it both examines the case for seeding rand() and does not contradict to MSC30-C.
Please take a look and let me know if further modifications are needed.
Geoff Clare
The C Standard doesn't specify the encoding of
time_t, so the comment
/* Create seed based on current time counted as seconds from 01/01/1970 */
in the first compliant solution is wrong. (It is okay in the POSIX solution.)
Elpiniki Tsakalaki
So, do you think it is better to change the comment in the first compliant solution to something like /*
Create seed based on current time */? I can do this but I don't get it. OK, ISO C defines time_t as an arithmetic type, but does not specify any particular type, range, resolution or encoding for it. But if you run time(NULL) and convert the result to years and days, then you will find 'today' counted from 'Unix Epoch'. Do you think that a different C compiler would give another result?
David Svoboda
Niki, I agree with Geoff's recommendation. His point is that C makes no guarantee that time_t represents number of seconds since the epoch. While POSIX may guarantee that, your first compliant example purports to be standard C; thus it should not make any assumptions about the internals of time_t. It is possible, and legal, for a standards-compliant compiler to return a time_t that does not represent seconds from the epoch, and your standard-compliant example must take that possiblity into account.
Geoff Clare
Changing the comment as you suggest would be fine.
ISO C just requires time() to return a value that represents the current time in some way. The value does not have to be a count of some time unit such as seconds, and subtracting two time_t values does not have to produce anything meaningful - that's why the
difftime()function exists.
Elpiniki Tsakalaki
OK, done!
Pavel Vasilyev
I get funny results in Linux with next code
id - is maximal number which me need;
20 - is experemental number
On little id this function get good results! Why? I do not know!
David Svoboda
Your results are probably funny because in POSIX the value returned by time(2) represents the number of seconds since the epoch. So calling time(2) twice during the same second returns the same result...consequently your seeds are the same and you'll therefore get the same random sequence in each loop iteration. As noted in previous comments, C99 only requires that time(2) return an integral type representing time, and it is POSIX that requires time(2)'s return value to be expressed in seconds.
BTW my srand48(3) manpage says:
Robert Seacord (Manager)
This rule sort of violates our guideline of having at least one NCE based on the C Standard. In this case, that example would be rand(), but because we have a rule against using rand(), I decided to omit that example.
Geoff Clare
A statement has been added to the POSIX CS saying "This may not be sufficiently random for ... small embedded systems that have an
unsigned inttype with a width of 16 bits." POSIX specifies a minimum width of 32 bits for
unsigned int, so this statement should either be removed or be extended to point out that such systems do not conform to POSIX.
David Svoboda
Fixed, thanks.
Robert Seacord
I'm still puzzling a bit over why we have this statement. Were their earlier versions of POSIX which allowed 16 bit unsigned int? Certainly C does.
Geoff Clare
Yes, prior to the 2001 edition, POSIX.1 allowed 16 bit unsigned int.
Robert Seacord
Thanks. I added this history to the description. It now makes more sense (to me) why we are discussing 16 bit unsigned int in this context.
versat
The compliant solution for Windows maybe should be updated according to the changes made for MSC30-C (CryptGenRandom() replaced by BCryptGenRandom()).
David Svoboda
Ryan Steele has provided us with a new compliant example using BCryptGenRandom(). | https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152219 | CC-MAIN-2019-22 | refinedweb | 2,912 | 60.75 |
Use Studio to Configure MongoDB Connector 6.2 - the MongoD <connector name> to your Mule project to automatically populate the XML code with the connector’s namespace and schema location and add the required dependencies to the project’s
pom.xml file:
In the Mule Palette view, click (X) Search in Exchange.
In Add Modules to Project, type
mongodbin the search field.
Click MongoDB in Available modules.
Click Add.
Click Finish.
Adding a connector to a Mule project in Studio does not make that connector available to other projects in your Studio workspace.
Configure an Input Source for the Connector
Configure an input source for the connector, such as the Object Listener operation, HTTP Listener, or Scheduler. The Object Listener operation polls the given MongoDB collection for new documents that apply to the filter and invokes your app when it finds one.
For example, to configure
HTTP Listener MongoDB, follow these steps:
In the Mule Palette view, select MongoDB and then select the desired operation.
Drag the operation onto the Studio canvas to the right of the input source. MongoDB system.
To configure the global element for MongoDB Connector, follow these steps:
On the Global Elements tab, click Create.
Expand Connector Configuration and search for
MongoDB config.
Click OK.
Enter a value for the following properties:
Servers (host:port)
Servers where your MongoDB instance is located, as specified by one or more host:port pairs. The default value is
localhost:27017. Follow these steps to add one or more host:port instances:
Select Edit inline.
Click the green plus sign (+).
Add the host:port instance and click Finish.
Repeat steps 2 and 3 for each host:port instance you want to add.
Database
Database that sends and retrieves data from the MongoDB instance
Username
Username to access the MongoDB instance
Corresponding password
Use the MongoDB configuration with Spring property placeholders for the Connection field values when you plan to deploy your application to CloudHub or to Mule runtime engine. You can hard code your connection credentials in the global element properties if you are in the development stage and want to accelerate the development process.
Global Configuration with SSL (Optional)
MongoDB Connector v6.0.0 and later supports the use of SSL, if your MongoDB instance supports it. To set up the global configuration for SSL, select the TLS context field in the Security tab in your MongoDB connection.
Logging
By default, logging is enabled via SLF4J API. The driver uses the following logger names:
org.mongodb.driver, the root logger
cluster, for logs related to monitoring of the MongoDB servers to which the driver connects
connection, for logs related to connections and connection pools
protocol, for logs related to protocol message sent to and received from a MongoDB server
insert, for logs related to insert messages and responses
update, for logs related to update messages and responses
delete, for logs related to delete messages and responses
query, for logs related to query messages and responses
getmore, for logs related to getmore messages and responses
killcursor, for logs related to killcursor messages and responses
command, for logs related to command messages and responses
uri, for logs related to connection string parsing
management, for logs related to JMX | https://docs.mulesoft.com/mongodb-connector/6.2/mongodb-connector-studio | CC-MAIN-2021-49 | refinedweb | 537 | 51.07 |
Up until now, if you started writing a function component and then ran into a situation where you needed to add state, you’d have to convert the component to a class.
Write out
class Thing extends React.Component, copy the function body into the
render() method, fix the indentation, and finally add your state.
Today, you can get that same functionality with a hook, and save yourself the work.
What’s a “hook”? Good question. Learn about Hooks here.
In this article, we’re looking specifically at the
useState hook.
What Does useState Do?
The
useState hook lets you add state to function components. (I’m going to keep calling these “hooks” but they’re actually functions, and they come bundled with React 16.8). By calling
useState inside a function component, you’re creating a single piece of state.
In classes, the state is always an object. You can store properties on that object.
With hooks, the state doesn’t have to be an object. It can be any type you want – an array, a number, a boolean, a string, etc. Each call to
useState creates a single piece of state, holding a single value.
This will probably make more sense with an example.
Example: Show/Hide a Component With useState
This example is a component that displays some text with a “read more” link at the end, and will expand to show the rest of the text when the link is clicked.
Or if you’re more the video type, watch me build a similar component here:
Read through the comments to see what’s happening here:
// First: import useState, which is a named export from 'react' // We could alternatively skip this step, and write React.useState import React, { useState } from 'react'; import ReactDOM from 'react-dom'; // This component expects 2 props: // text - the text to display // maxLength - how many characters to show before "read more" function LessText({ text, maxLength }) { // Create a piece of state, and initialize it to `true` // `hidden` will hold the current value of the state, // and `setHidden` will let us change it const [hidden, setHidden] = useState(true); // If the text is short enough, don't bother with the // buttons if (text.length <= maxLength) { return <span>{text}</span>; } // Render the text (shortened or full-length) followed by // a link to expand/collapse it. // When a link is clicked, update the value of `hidden`, // which will trigger a re-render return ( <span> {hidden ? `${text.substr(0, maxLength).trim()} ...` : text} {hidden ? ( <a onClick={() => setHidden(false)}> read more</a> ) : ( <a onClick={() => setHidden(true)}> read less</a> )} </span> ); } ReactDOM.render( <LessText text={`Focused, hard work is the real key to success. Keep your eyes on the goal, and just keep taking the next step towards completing it.`} maxLength={35} />, document.querySelector('#root') );
Try out the working example in this CodeSandbox!
With just one line of code, we’ve made this function stateful:
const [hidden, setHidden] = useState(true);
Once that’s done, the “read more” / “read less” links just need to call
setHidden when they’re clicked.
useState returns an array with 2 elements, and we’re using ES6 destructuring to assign names to them. The first element is the current value of the state, and the second element is a state setter function – just call it with a new value, and the state will be set and the component will re-render.
const [hidden, setHidden] = useState(true);
But what is this function doing, really? If it gets called every render (and it does!), how can it retain state?
Tricksy Hooks
The “magic” here is that React maintains an object behind the scenes for each component, and in this persistent object, there’s an array of “state cells.” When you call
useState, React stores that state in the next available cell, and increments the pointer (the array index).
Assuming that your hooks are always called in the same order (which they will be, if you’re following the Rules of Hooks), React is able to look up the previous value for that particular
useState call. The first call to
useState is stored in the first array element, the second call in the second element, and so on.
It’s not magic, but it relies on a truth you may not have thought about: React is the one calling your component, so it can set things up beforehand. And moreover, the act of rendering a component is not just a function call. JSX like
<Thing/> gets compiled to
React.createElement(Thing) – so React is clearly in control of how and when it is called.
For a play-by-play of how this “call order” magic works, see my Intro to Hooks post.
Example: Updating state based on previous state
Let’s look at another example: updating the value of state based on the previous value.
We’ll build a, uh, “step tracker.” Very easy to use. Just like a Fitbit. Every time you take a step, simply click the button. At the end of the day, it will tell you how many steps you took. I’m working on securing my first round of funding as you read this.
import React, { useState } from 'react'; function StepTracker() { const [steps, setSteps] = useState(0); function increment() { setSteps(steps => steps + 1); } return ( <div> Today you've taken {steps} steps! <br /> <button onClick={increment}> I took another step </button> </div> ); } ReactDOM.render( <StepTracker />, document.querySelector('#root') );
This example looks a lot like the last one. This time I’ve imported
useState directly from React, so we don’t have to write out
React.useState.
First, we’re creating a new piece of state by calling
useState, initializing it to 0. It returns the current value of
steps (0) and a function for updating it. We have an
increment function to increase the step counter.
You’ll notice we’re using the functional or “updater” form of
setSteps here. We could just call
setSteps(steps + 1) and it would work the same in this example, but I wanted to show you the updater form, because it’ll be useful in case your update is happening in a closure which has closed over the old (stale) value of the state. Using the updater form ensures you are operating on the latest value of state.
Another thing we’ve done here is to extract the
increment function, instead of inlining the arrow function on the button’s
onClick prop. We could have written button this way and it would’ve worked just the same:
<button onClick={() => setSteps(steps => steps + 1)}> I took another step </button>
Example: state as an array
Remember that state can hold any value you want! Here’s an example of a list of random numbers. Clicking the button adds a new random number to the list:
function RandomList() { const [items, setItems] = useState([]); const addItem = () => { setItems([ ...items, { id: items.length, value: Math.random() * 100 } ]); }; return ( <> <button onClick={addItem}>Add a number</button> <ul> {items.map(item => ( <li key={item.id}>{item.value}</li> ))} </ul> </> ); }
Notice we’re initializing the state to an empty array
[], and take a look at the
addItem function.
The state updater function (
setItems, here) doesn’t “merge” new values with old – it overwrites the state with the new value. This is a departure from the way
this.setState worked in classes.
So in order to add an item to the array, we’re using the ES6 spread operator
... to copy the existing items into the new array, and inserting the new item at the end.
Example: state with multiple keys
Let’s look at an example where state is an object. We’ll make a login form with 2 fields: username and password.
I’ll show you how to store multiple values in one state object, and how to update individual values.
function LoginForm() { const [form, setValues] = useState({ username: '', password: '' }); const printValues = e => { e.preventDefault(); console.log(form.username, form.password); }; const updateField = e => { setValues({ ...form, [e.target.name]: e.target.value }); }; return ( <form onSubmit={printValues}> <label> Username: <input value={form.username} name="username" onChange={updateField} /> </label> <br /> <label> Password: <input value={form.password} name="password" type="password" onChange={updateField} /> </label> <br /> <button>Submit</button> </form> ); }
Try it out in this CodeSandbox.
First up, we’re creating a piece of state and initializing it with an object:
const [form, setValues] = useState({ username: '', password: '' });
This looks a lot like how you might initialize state in a class.
Then we have a function to handle the submission, which does a
preventDefault to avoid a page refresh and prints out the form values.
The
updateField function is more interesting. It uses
setValues (which is what we called the state updater) and passes an object, but it must be sure to include the existing state with
...form if it doesn’t want to overwrite it. Try taking out the
...form line and see how the form behaves.
At the bottom we have a pretty standard-looking chunk of JSX to render the form and its inputs. Since we’ve passed a
name prop to the inputs, the
updateField function can use it to update the appropriate state. This way you can avoid having to write a handler function for each field.
An Easier Way with useReducer?
There’s another hook called
useReducer which is more suited to managing state with multiple values. In fact,
useState internally calls
useReducer! Learn about the useReducer hook here (that link also includes a video with examples of useReducer). | https://daveceddia.com/usestate-hook-examples/ | CC-MAIN-2019-47 | refinedweb | 1,579 | 64.3 |
A frequent need in graphics processing is that of adding a "watermark" of some sort to an image. I'm not talking about digital signatures or machine-readable hidden text here, though some image processing applications use the term "watermark" to refer to those. Here, I'm just referring to some sort of visible copyright notice, logo, bug, or text -- basically, any sort of visual device that says "hey, this image belongs to so and so," as a small measure of copyright protection that can be applied to the image itself.
In the days of VB 4, 5 and 6, accomplishing this would have been a fairly involved undertaking. Today, however, the GDI+ components in the .NET framework eliminate majority of the difficulty in a task like this. In fact, the code module for my own watermark helper class contains a whopping 122 lines of code, including white space, unused functions and some leftover test code. That shouldn't be too daunting, now should it? There are already several articles available on popular developer sites describing how to add a watermark to an image, so instead of rehashing what's already out there, I want to describe a specific scenario I needed to deal with, deal with some issues that needed addressing in other approaches I encountered, then describe the approach that worked best for addressing my scenario.
Here are two scenarios that might call for adding a watermark to an image. Suppose someone wants to post their digital pictures to a photo hosting site, but they don't want their images being swiped by others and used without their knowledge, and they also don't want to have to keep reminding Aunt Edna where their family photo albums can be found on the internet. Or suppose your client is a professional photographer who wants a way to offer proofs to clients that show off the full glory of his ultra-high-resolution equipment, yet he doesn't want to risk having those proofs used by the customer without paying for the images. In either scenario, a watermark would solve the problem nicely. Our need is to add some simple identifying information to the image - perhaps a business name, a logo, or the URL of the photo album - which would be difficult to remove or obscure. Most folks don't work with PhotoShop, Paint Shop Pro or the like on a regular basis, and similarly most folks don't want to take the time or effort to acquire and learn these apps. In this case, I thought the scenario justified creating a simple application that would de-skill the process of adding watermarks to images. Let's have a look at the requirements:
The basic design of this app requires three major functions to be addressed, given the requirements above:
Directoryobject.
Bitmapobject from the path and filename;
Bitmapby applying the watermark to the source bitmap.
Bitmapto a file in the output directory.
Before we can process any images, we need some images to process (duh). Since this article isn't about file handling or Windows Forms, I'll recommend using code from another CodeProject article - Alberto Venditti's Image Batch Resizer - to get that part up and running. That project provides a shell that handles files in exactly the way we need - getting an input directory, an output directory, filtering the input files for JPEGs, and then looping though those input files to perform some bitmap processing on them. To get things moving along in a down-and-dirty fashion, just add a text box to
frmMain for getting the desired text of the watermark from the user. As for the image processing itself, we'll need to make some changes to the
btnGo_Click procedure in
frmMain, specifically by calling a function we'll create in the next section in place of the call to the "
Reduce" subroutine in Alberto's code. [Note - have a look at that code while you're in there, since it demonstrates some other very useful techniques, such as using memory streams to handle images, scaling bitmaps with minimal coding, and saving bitmaps and/or streams to files.]
Since this was my first foray into making use of .NET's imaging features, I did what any good developer would do and Googled [VB.NET image watermark]. Jaison John's CodeProject article Watermark Website Images At Runtime was one of my first good finds. In addition to offering some nifty ASP.NET integration tricks, Jaison demonstrates how to accomplish a few things essential to our project. The core ideas I gleaned from Jaison John's main watermarking routine were:
Bitmapobject from a file on disk using one of the overloaded constructors of the .NET framework's
Bitmapobject. The
Bitmapis the main .NET object representing an image, and the overloads we'll use to construct the object accepts a
Stringargument of the path to an image file:
Dim bmp as Bitmap = New Bitmap(strInputFilePath)
Graphicsobject from a
Bitmapusing
Graphics.FromImage(Bitmap). The
Graphicsclass is a utility class that .NET provides for manipulating and modifying the
Bitmapfrom which it was created.
Bitmapusing the
.DrawStringmethod of the
Graphicsclass.
This was a great start. Jaison's technique draws 14-point Verdana text in Beige over a given image, starting at pixel (0,0), like so (code is a slightly-tweaked copy of Jaison's, comments are mine):
'Construct a Bitmap object from a jpg's filename: dim bmp as Bitmap = New Bitmap(strInputFilePath) 'Obtain a Graphics object from & for that Bitmap: dim canvas as Graphics = Graphics.FromImage(bmp) 'Draw the watermark string onto the Bitmap: canvas.DrawString(strWatermark, _ New Font("Verdana", 14, FontStyle.Bold), _ New SolidBrush(Color.Beige), 0, 0) 'Save the watermarked bitmap to a new file: bmp.Save(strOutputFilePath)
Not bad for such a short block of code, eh? Given our requirements, though, we still have some work ahead of us to whip our watermark into shape. [Note: the classes used in this article are found in the
System.Drawing namespace, so save yourself some time and add the
Imports statement for that namespace to your code module if you're singing along and starting from scratch.] This brings us to...
A quick bit of trial and error revealed that successive calls to
.DrawString would by default create text instances stacked one on top of another -- that is, the ZOrder of each bit of rendered text would increment with each successive call to
.DrawString. Going back to the code block in the previous section, we can see that the
.DrawString method of the
Graphics object took five arguments:
Stringto be drawn (
strWatermark),
Fontobject in 14-point Verdana, styled in Bold),
SolidBrushobject containing the color in which to draw it (
Color.Beige),
Bitmap(0,0).
Armed with all that information, we have all that we need to change our code block like so to create our drop-shadow effect:
'Draw the watermark string onto the Bitmap in Black to create 'the "shadow", offset 2 pixels from our original position: canvas.DrawString(strWatermark, _ New Font("Verdana", 14, FontStyle.Bold), _ New SolidBrush(Color.Black),.White), 0, 0)
As you can see in the code comments, all we've done here is drawn the same bit of text - first in black, then in white - on top of itself with a 2-pixel offset. The end result of this is our white text with a 2-pixel black drop-shadow, which brings us to...
There are many ways to skin a cat, and making text transparent in GDI+ is no exception. Just to extend my familiarity with the graphics functions in .NET, I tried three different approaches to this before settling on one. Briefly, these approaches were:
The
DrawString method of the
Graphics class will accept any class derived from
Brush for the third argument, and these include
SolidBrush,
TextureBrush, and
HatchBrush. It didn't take too much fiddling around with these to find that they wouldn't provide a very direct solution to the transparency problem. These brushes can be quite useful for creating other effects, and in a pinch I could employ a hatch brush to achieve a faux-transparent effect, but this wasn't the obvious answer for the functionality I needed here.
An article on Vb-Helper.com presented another approach which involved:
Bitmapwith just the text or watermark on it,
Bitmaptransparent,
Bitmapin a pixel-by-pixel fashion, manipulating the transparency of the pixels by setting the ALPHA component to 128 (50% opacity), and finally
Bitmap's
.DrawImagemethod to draw the modified secondary
Bitmapover the original picture.
This approach did exactly what I wanted. The only problems were that I was seeing some aliasing at the edges of the letters, and the performance of the pixel-by-pixel bitmap operation was clearly going to be unacceptable. However, the technique in that article did reveal some things about .NET's GDI classes that led me to my eventual solution. Let's look at that approach, specifically the portion of the code that manipulates the pixels' ALPHA components:
' Set the watermark's pixels' Alpha components. Const ALPHA As Byte = 128
The workers here are the
Bitmap object's
GetPixel and
SetPixel methods, but the really interesting function is
Color.FromArgb. This bit of code uses
GetPixel to grab the values of each pixel in the image, then reduces the opacity of that same pixel by calling
SetPixel and passing it a color argument constructed from:
ALPHAcomponent of 128 (or 50% opacity),
R,
Gand
Bvalues of the pixel.
That's where the light went on for me; if I'd have known earlier that a
Color object could be constructed in a transparency-aware fashion, then I would have simply done that in the first place. Hindsight being 20/20, I went back to my original code...
Looking at the place where I was calling
.DrawString and needed to pass a
Color object to the
SolidBrush object's constructor, it should be a no-brainer at this point to figure out what to change. Where I had previously created
SolidBrush objects using
Color.White and
Color.Black, could now create semi-transparent brushes by constructing them with
Color.FromArgb:
'Draw the watermark string onto the Bitmap in Black to create 'the "shadow", offset 2 pixels from our original position: canvas.DrawString(strWatermark, _ New Font("Verdana", 14, FontStyle.Bold), _.FromArgb(128, 255, 255, 255)), 0, 0)
There the problem was solved. The only changes needed to my earlier code are bolded. By constructing my
SolidBrush objects using a semi-transparent color in the first place, I could completely avoid all the overhead of instantiating a second bitmap, not to mention all the repeated calls to
GetPixel and
SetPixel.
So now all that's left is to address the matter of scaling our textual watermark to the image it's being applied to. The challenge here is to take a blob of text, which is vector-based and sized in points for a given font, and scale it relative to a bitmap, which is raster-based and sized by pixel count. We previously saw how to scale a bitmap fairly easily, but having just eliminated the need to instantiate a second bitmap, I wasn't too eager to go the route of putting the watermark on its own bitmap and scaling that to the JPG image without investigating some other approaches first. A bit of hunting around in the documentation turned up just what I was looking for; the
Graphics class provides a
MeasureString method that would return a
SizeF structure containing the pixel dimensions of a given string drawn in a given font.
Armed with the
MeasureString method, let's take stock of our variables and see what the most straightforward implementation would be. Our goal is to have our watermark "appropriately sized" for the JPG it's going on, so we have to decide what that means now. I consulted a psychic, an interior designer, my horoscope, and a well-known book on etiquette to determine that the ideal watermark size is 50% of the JPG's width. Your mileage may vary, and you should of course consult your own highly-respected sources. At any rate, we'll add variables for the
SizeF structure, for the
DesiredWidth, and for a
Ratio we'll calculate, plus an object variable for the font so we don't have to keep creating a new
Font object every time we reference it:.
dim StringSizeF as SizeF, _ DesiredWidth as Single, wmFont as Font, RequiredFontSize as Single, Ratio as Single wmFont = New Font("Verdana", 14, FontStyle.Bold)
Now we have everything we need to get some rough but per formant sizing logic up and running. First, we calculate the desired width of the watermark as 50% of the width of the JPG we'll be drawing it on (for simplicity's sake, I'm assuming the height will "just be okay," and in the vast majority of cases it is indeed.):
DesiredWidth = bmp.Width * .5
Next, we find out how big our string is, in pixels:
StringSizeF = canvas.MeasureString(strWatermark, wmFont)
Now that we know the size of our string when drawn, for example, in 14-point Verdana, we can derive a
Ratio of [font size : pixel width] given our
String and typeface. We'll do this by dividing the
width of our
String (in pixels) by the size of our font (in points):
Ratio = StringSizeF.Width / wmFont.SizeInPoints
Given that
Ratio, it's just a simple extrapolation to get a font size in points that will yield a watermark of our desired width, given the same typeface and
String:
RequiredFontSize = DesiredWidth / Ratio
Done! Now we simply reinitialize the
Font, changing the hard-coded "14" to the
RequiredFontSize variable, and we're ready to draw our watermark - transparent, drop-shadowed and sized for the image it's going on:
wmFont = New Font("Verdana", RequiredFontSize, FontStyle.Bold) 'Draw the watermark string onto the Bitmap in Black to create 'the "shadow", offset 2 pixels from our original position: canvas.DrawString(strWatermark, _ wmFont, _, _ wmFont, _ New SolidBrush(Color.FromArgb(128, 255, 255, 255)), 0, 0) bmp.SetResolution(96, 96)
And what exercise would be complete without a Gotcha! After running several folders worth of images through this algorithm, I saw some drastic variations in the size of the watermark that were seemingly unexplained. The debugger wasn't helping much here, so I sifted through the original images looking for anything that might be causing this. Sure enough, there were different DPI resolutions among the JPG files. D'oh! The right thing to do would be to get the DPI of the original JPG and handle the watermark accordingly, but it was getting late and this section of code was starting to become downright tiresome, so I cheated by adding a call to
Bitmap.SetResolution(96,96) before all measuring and sizing logic. This had the intended effect of "normalizing" things so the watermark size was consistently 50% of the input image size again. Since there wasn't a noticeable performance penalty after adding this, I decided I was done. Oh, happy day!
After all our tweaks, if we string the above together we end up with a simple, straightforward and per formant function for watermarking images. Even though there's still plenty of room for customization and enhancement (such as letting the user pick the font, fiddle with the placement, color & transparency, etc.), we've fulfilled our design criteria at this point and can safely break for dinner. In my implementation, I did go back and add code to prepend a � copyright symbol to the watermark text. I also fiddled a bit with the ALPHA values, ultimately deciding that things looked a bit better if the white text was slightly more opaque (ALPHA = 136) than the black drop-shadow (ALPHA = 120).
Looking back on what it took to make this work, it's worth noting that all the functionality we needed was contained in a mere two classes in .NET - the
Bitmap object and the
Graphics object.
Happy watermarking!
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/GDI-plus/TransparentImageWatermark.aspx | crawl-002 | refinedweb | 2,695 | 58.92 |
NAME
pivot_root - change the root file system
SYNOPSIS system mounts a temporary root file system (e.g., an initrd), then mounts the real root file system, file system in any way. In the future, there may be a mechanism for kernel threads to explicitly relinquish any access to the file system, file system as the current root. - put_old must be underneath new_root, that is, adding a nonzero number of /.. to the string pointed to by put_old must yield the same directory as new_root. - No other file system may be mounted on put_old. See also pivot_root(8) for additional usage examples. If the current root is not a mount point (e.g., after chroot(2) or pivot_root(), see also below), not the old root directory, but the mount point of that file system is mounted on put_old. new_root does not have to be a mount point. In this case, /proc/mounts will show the mount point of the file system containing new_root as root (/).
RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS
pivot_root() may return (in errno) any of the errors returned by stat(2). Additionally, it may return: EBUSY new_root or put_old are on the current root file system, or a file system is already mounted on put_old. EINVAL put_old is not underneath new_root. ENOTDIR new_root or put_old is not a directory. EPERM The calling process does not have the CAP_SYS_ADMIN capability.
VERSIONS
pivot_root() was introduced in Linux 2.3.41.
CONFORMING TO
pivot_root() is Linux-specific and hence is not portable.
NOTES
Glibc does not provide a wrapper for this system call; call it using syscall(2).
BUGS
pivot_root() should not have to change root and current working directory of all other processes in the system. Some of the more obscure uses of pivot_root() may quickly lead to insanity.
SEE ALSO
chdir(2), chroot(2), stat(2), initrd(4), pivot_root(8)
COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/precise/man2/pivot_root.2.html | CC-MAIN-2014-35 | refinedweb | 351 | 66.94 |
Making My Site Fluid?wm22 Mar 25, 2011 7:29 AM
I just created my first website using Flash Catalyst and exported it for the web. The problem that I am having is that I made my background 1920px wide, so when I open the html file in a browser it would be large enough for higher res monitors. But, on smaller monitors, my content is cutoff on the right.
So I opened the html file in a text editor, but I cannot figure out what I need to change so that my site stays centered no matter the browser window size or monitor size.
Any help would be very much appreciated!
1. Re: Making My Site Fluid?topelovely Mar 28, 2011 2:21 PM (in response to wm22)
From Flash cs5 professional:
Just create an empty rectangle box, convert to a symbol movieclip and name the instance anything you want I have named mine frame_mc. Place the script in a layer above the movieclip and insert the actionscript below. That's all
import flash.display.Stage;
import flash.events.Event;
var myStage:Stage = this.stage;
myStage.scaleMode = StageScaleMode.NO_BORDER;
myStage.align = StageAlign.TOP_LEFT;
function resizeDisplay(event:Event):void{
var swfWidth:int = myStage.stageWidth;
var swfHeight:int = myStage.stageHeight;
/*var frameYPos:Number = swfHeight - frame.height;
var frameXPos:Number = swfWidth - frame.width;*/
frame_mc.width = swfWidth/2;
frame_mc.height = swfHeight/2;
}
myStage.addEventListener(Event.RESIZE, resizeDisplay);
Then use the flash UILoader component to call your catalyst swf file.
You cannot publish a liquid flash site from flash catalyst, but with flash builder. No one is giving a working solution in the forum yet. My work around is to use the above script in flash professional using UILoader to call the catalyst swf file. But this is also giving me some problem. see my post. If yours work, please let me know.
2. Re: Making My Site Fluid?AdeptDigital Mar 28, 2011 3:12 PM (in response to wm22)1 person found this helpful
wm22,
This seems to be a rather simple fix if you are willing to use a little html and css. First, I would avoid such a large image in the Catalyst and keep the width down to your content. For example if your content is only 1000px wide and you want to fill a 2000px wide space just cut off the left and right 500px of the image and use them as curtains in the HTML and CSS files. Then your SWF file from Catalyst is not quite as large. I have also played, with limited success, at leaving the Catalyst background transparent and shifting the JavaScript to allow for the window:transparent command just like with a regular SWF from Flash.
But lets say for the sake of argument that you need the SWF from Catalyst to be that wide and all you really want to do is center the SWF no matter what. Have you tried a simple text based align:center command in your HTML? I have a 1200 px Catalyst output that is centered here:
electronic-lifestyle.com
I used a wrapper div with the margins set to auto to achieve my centerpoint but for your purposes you could use the same div to simply set the align:center in your CSS. I've seen this work with text fields, perhaps it will work with the flash output. The only bummer is that you may get a horizontal scroll bar on the bottom that you have to take care of with the CSS. Remember that just because we are using this hard core program doesn't mean we can't take advantage of some low tech fixes for our problems.
It may be something worth trying if your feeling stuck and like me loath heavy AS3 coding to solve what should be a simple fix.
Brett
3. Re: Making My Site Fluid?AdeptDigital Mar 28, 2011 3:37 PM (in response to wm22)1 person found this helpful
Hey,
Just found the page I was thinking of when I was writing about the transparent mode for your project. Look here:
If your site can use this then the image you want can be a simple background in HTML and the SWF from FC can function on top of it.
Good Luck,
Brett
4. Re: Making My Site Fluid?wm22 Mar 30, 2011 1:22 PM (in response to wm22)
Thanks guys for the help!!! | https://forums.adobe.com/thread/827763 | CC-MAIN-2017-51 | refinedweb | 739 | 81.33 |
Read from USB (debug QI Charger odd influence on board behaviour)
Hi,
I have my wipy3+pytrack build into a IP67 case and fitted it with QI wireless charger so I don't have to reopen de box every time I need to charge the battery.
Since I have set this up, I noticed random shutdowns and reboots of the board. To be sure, I tested in parallel my code on a few boards with and without QI charger. The boards without show no problems.
So: I'd like monitor what goes on with the USB, what is causing this behaviour? I read USB is UART0 which normally is REPL. Can I disable REPL temporary to be able to read what is going on?
I was think about putting a config file on /sd and depending on the value of it, the REPL is activated, deactivated?
I'm totally in the dark here, any tips or suggestions are welcome.
@fsergeys I'm not familiar with Qi chargers, but isn't there a chance the electromagnetic field generated when you transmit data may "trigger" the charger somehow? Since those chargers are designed to be inserted into phones it shouldn't be an issue, but you never know...
@jcaron
The battery is Lipo - 3.7v 1300mAh
The Qi receiver (connected to the micro USB) is a QProd 1000mha
The transmitter is a CHOETECH PROMU Qi T511 Charger.
I have the boards in machine.deepsleep when charging (I haven't tested the behaviour while charging)
When in operations (not on the charger), this is where the odd behaviour happens. So that is the really funky part, because at that time, the Qi receiver is not supposed to be having any effect.
@fsergeys What's the full setup? Do you have a battery? Is the charger on a charging base?
Have you checked that the supplied power is within specs, both in terms of voltage and available current? You may want to insert a USB power monitoring device between the charger and Pytrack to check.
What's the charger model?
@jcaron
Normally there should be no serial communication if you read the Qi specifications. Just charging the battery.
My hypothesis is indeed that I will not "find" anything. But need to confirm that.
What else could be causing the boards to randomly reboot or shutdown when the Qi chargers are plugged into the USB and not charging on the transmitter. If you have hypotheses I can test, I'd be glad with a few clues...
I'm going to rerun my tests this afternoon with the boards yesterday connect to the Qi charger and today not connect to validate my findings that Qi charger plugged or unplugged does make a difference
@fsergeys What do you mean by "read"? Unless the charger emulates the host side of a serial connection compatible with the Pytrack and communicates on that, I don't think you'll be able to do much, or did I miss something?
If you modify your boot.py to have:
import os os.dupterm(None)
The REPL will no longer run on UART 0, in order to recover from this you will need to use either safe boot or FTP | https://forum.pycom.io/topic/2869/read-from-usb-debug-qi-charger-odd-influence-on-board-behaviour/ | CC-MAIN-2019-22 | refinedweb | 537 | 72.46 |
Library
An Introduction to Tkinter
Alarm handlers and other non-event callbacks :::
Back Next
after(delay_ms, callback, args...). Register an alarm
callback that is called after the given number of milliseconds
(Tkinter only guarantees that the callback will not be called
earlier than that; if the system is busy, the actual delay may be
much longer). The callback is only called once for each call to
after. To keep calling the callback, you need to reregister the
callback inside itself:
class App:
def __init__(self, master):
self.master = master
self.poll() # start polling
def poll(self):
...
self.master.after(100, self.poll)
You can provide one or more arguments which are passed to the
callback. This method returns an alarm id which can be used with
after_cancel to cancel the callback.
after_cancel(id). Cancels the given alarm callback.
after(delay_ms). Wait for the given number of
milliseconds. Note that in the current version, this also blocks
the event loop. In practice, this means that you might as well
do:
time.sleep(delay_ms*0.001)
after_idle(callback, args...). Register an idle
callback which is called when the system is idle (that is, when
there are no more events to process in the mainloop). The callback
is only called once for each call to after_idle. | http://www.pythonware.com/library/tkinter/introduction/x9507-alarm-handlers-and-other.htm | crawl-002 | refinedweb | 213 | 66.64 |
Question:
I'm not sure how to accomplish overloading the << operator for a method. This is how I assumed it would work:
def roles<<(roles) ... end
That however, throws errors. Any suggestions?
Solution:1
You need to do that from within a class. Like this:
class Whatever attr_accessor :roles def initialize @roles = [] end end
You can't really have a
<<roles method. You'd have to have an accessor for
roles that supports the
<< operator.
EDIT: I've updated the code. Now you can see how the
<< operator should be overloaded, but you can also do what the
roles<< part. Here's a small snippet of it's usage:
w = Whatever.new w << "overload for object called" # and overloads for the roles array w.roles << "first role" w.roles << "second role"
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2018/02/tutorial-how-do-you-overload-operator.html | CC-MAIN-2018-09 | refinedweb | 151 | 68.36 |
Daemons and Agents
Daemons and agents, collectively known as background programs, are programs that operate without any graphical user interface. As a developer, you can use background programs to perform actions without user interaction, and also to manage a shared state between multiple other programs.
This technote describes the most common problems encountered when developing daemons and agents on Mac OS X, and offers detailed advice on solving those problems.
You should read this technote if you're developing a background program for Mac OS X. You should also read this technote if you're developing a plug-in that's hosted by a background program (for example, a CUPS filter), because your plug-in must abide by the rules of the host's environment.
Introduction
Many problems can be solved by programs running in the background. These programs have no graphical user interface (GUI), and only interact with the user indirectly. For example:
A web server runs in the background and responds to HTTP requests from clients.
A calendar application installs a background program that manages the calendar database, and relaunches the GUI application when calendar events occur.
This technote discusses the issues associated with writing programs that run in the background. It starts with a formal definition of the types of background programs that you can write (Daemonomicon). It then goes on to discuss the unusual execution contexts encountered by background programs (Execution Contexts) and how Mac OS X uses layered frameworks to manage the issues raised by these contexts (Layered Frameworks). It also offers advice about how to create a background program, starting with some general design recommendations (Design Considerations), followed by advice on launching a background program (Getting Launched) and how to communicate between the various components of the program (Daemon IPC Recommendations). Finally, there are some specific coding recommendations (Coding Recommendations) and some miscellaneous hints and tips (Hints and Tips).
Before reading this technote, you should look at the formal documentation for System Startup Programming and Multiple User Environments.
In cases where the implementation is different on older systems, or is likely to change in the future, the text warns you about these changes.
The examples in this technote are from Mac OS X 10.5.
Daemonomicon
There are a variety of terms used to describe programs that run in the background. In many cases the same term is used by different people to mean different things. This section defines the terms used throughout the rest of the technote.
A background program is a program that runs in the background, without presenting any significant GUI. This category is subdivided into daemons (system wide background programs) and agents (which work on behalf of a specific user). The next two sections describe these subdivisions in detail.
Daemons.
It is important to realize that not all launchd daemons interact with
launchd in the same way. The single unifying feature of all launchd daemons is the property list file. This tells
launchd how to interact with the daemon. For example:
A launchd-aware launchd daemon is typically launched on demand and must explicitly check in with
launchd. For an example of this, see Sample Code 'SampleD'.
If you have a daemon that was previously launched via
inetd(or
xinetd), you can run it from
launchdby creating a property list file with the
inetdCompatibilityproperty set appropriately. This results in an inetd-compatible launchd daemon. For an example of this, check out
/System/Library/LaunchDaemons/finger.plist.
Virtually any program can be run as a launchd daemon by creating a minimal property list file with the
OnDemandproperty set to false.
Keep in mind that, regardless of how it interacts with
launchd, a launchd daemon must not daemonize itself; doing so would undermine
launchd's ability to monitor the daemon.
For more information about launchd daemons, read the launchd documentation.
Agents).
Agents have acquired a variety of different names over the years. These include background-only applications (BOAs), faceless background-only applications (FBAs), and UI elements (implying that the agent displays some GUI, but is not a full blown application with a menu bar). These names are more-or-less irrelevant to this discussion. What is relevant, and what distinguishes different types of agents, is how the agent is launched.
Login Item
A login item is launched when the user logs in using the GUI. A login item can be any openable item, but it is typically an application or an agent.
You can install a login item using the shared file lists interface to Launch Services (see the
LSSharedFileList.h header file in the LaunchServices subframework of the CoreServices framework). This API is available on Mac OS X 10.5 and later. On earlier systems, you can install a login item is by sending Apple events to the
System Events process. Sample Code 'LoginItemsAE' shows one way to do this.
Global Login Item
A global login item is a login item that is launched when any user logs in. Installing a global login item is roughly equivalent to installing that login item for all users on the system: every time a user logs in,
loginwindow launches that user's login items and all global login items.
In Mac OS X 10.5 and later you can install a global login item using the shared file lists interface to Launch Services.
launchd Agent).
launchd agents are further classified by their target session type, as shown in Table 1.
To run your agent in a particular session type, use the session type strings from Table 1 as the value of the
LimitLoadToSessionType property in your agent's property list file. If you want to run in more than one session type, you can set
LimitLoadToSessionType to an array, where each element is a session type string. If you don't specify the
LimitLoadToSessionType property,
launchd assumes a value of
Aqua.
If you set
LimitLoadToSessionType to an array, be aware that each instance of your agent runs independently. For example, if you set up your agent to run in
LoginWindow and
Aqua, the system will first run an instance of your agent in the loginwindow context. When a user logs in, that instance will be terminated and a second instance will launch in the standard GUI context.
Finally, there are some significant gotchas associated with developing a pre-login launchd agent; see Sample Code 'PreLoginAgents' for more information.
Ancient Daemons (and Agents)
For information about older ways to launch daemons and agents, see Deprecated Daemonomicon.
Execution Contexts
Most Mac OS X programmers are familiar with the user ID (UID) associated with a process (commonly referred to as the process's owner). On a traditional BSD system, this UID control the capabilities of that process. You can, more or less, assume that two processes with matching UIDs have the same capabilities.
This is not true on Mac OS X. There are other elements of the process's context that significantly alter its capabilities. So, for example, a daemon whose UID is set to that of the logged in console user is not equivalent to an application that has been launched by that user.
The following sections describe the elements of process's context, and how they affect background programs.
UIDs and GIDs
A process's UIDs (its effective (EUID), real (RUID), and saved (SUID) UIDs) are the most well known elements of the process's context. These UIDs control various capabilities of the process, mostly centered on the BSD portions of the system (file system, networking, BSD process control). For example, a process's ability to open a file is controlled by its EUID, and its ability to signal another process is controlled by its EUID and the EUID and RUID of the target.
Processes also have a set of group IDs (GIDs) that are analogous to the UIDs, plus a list of supplemental group IDs.
For more information about these UIDs and GIDs, and how they affect the capabilities of your process, see Security Overview.
Mach Bootstrap Basics
Many Mac OS X subsystems work by exchanging Mach messages with a central service. For such a subsystem to work, it must be able to find the service. This is typically done using the Mach bootstrap service, which allows a process to look up a service by name. All processes inherit a reference to the bootstrap service from their parent.
To get an idea of how this works, you can run launchctl with the
bslist argument. This lists all of the services registered with the bootstrap service. Listing 1 shows an example of its output.
Listing 1 BootstrapDump from Terminal
As you can see, there are numerous services published via the bootstrap service. Virtually all of them are private. For example, you are not expected to send messages directly to the "com.apple.dock.server" service; rather, you would call a routine exported by a framework (perhaps
SetApplicationDockTileImage) and, behind the scenes, it exchanges messages with the service to do the job you requested.
Bootstrap Namespaces
The previous example raises an interesting point: which Dock does
SetApplicationDockTileImage interact with? If there are multiple users logged in (via fast user switching), then there are multiple instances of the Dock running, so how does
SetApplicationDockTileImage know which is the right one? Moreover, how can all of these Docks register the same service name ("com.apple.dock.server") without suffering a name conflict?
The solution to these problems is that the bootstrap service can create multiple bootstrap namespaces. Each login session has its own bootstrap namespace, to which all processes running in that session inherit a reference. So, when the Dock registers its service, the registration goes into that login session's namespace. Any other process within that session inherits a reference to the same namespace, and can, therefore, see the Dock's service. Processes in other login sessions reference a different namespace, which prevents them from seeing that service. However, they might be able to see a different instance of the Dock that's registered in their namespace.
It's worth noting the distinction between GUI and non-GUI per-session bootstrap namespaces. A GUI per-session bootstrap namespace is instantiated by the GUI infrastructure (
loginwindow and
WindowServer) when a user logs in via the GUI. A non-GUI per-session bootstrap namespace is created when a user logs in via SSH. While there is no fundamental difference between these namespaces, GUI-based services, like the Dock, only register themselves in GUI per-session bootstrap namespaces.
In Mac OS X 10.5 and later these non-GUI namespaces are, after the user has logged in, moved below the per-user bootstrap namespace by the launchd PAM module (
/usr/lib/pam/pam_launchd.so) under the direction of the SSH PAM configuration file (
/etc/pam.d/sshd).
Namespace Hierarchy
Bootstrap namespaces are arranged hierarchically. There is a single global bootstrap namespace. Below that is a per-user bootstrap namespace for each user on the system, and below each of those is a per-session bootstrap namespace for each login session. You can print a map of the namespaces on the system using the
BootstrapDump program (downloadable as Sample Code 'BootstrapDump') with the
MAP argument, as illustrated by Listing 2.
Listing 2 Bootstrap Namespace Hierarchy
If a process running within a namespace registers a service, it is visible in that namespace and any of its descendent namespaces. However, it is not visible in any other namespaces. In practice, this means that:
processes using the global namespace can only see services in the global namespace
processes using a per-user namespace can see services in that per-user namespace and the global namespace
processes using a per-session namespace can see services in that per-session namespace, the parent per-user namespace, and the global namespace
services registered in a per-session namespace can only be seen by processes using that per-session namespace
services registered in a per-user namespace can be seen by any process in any of that user's sessions
services registered in the global namespace can be seen by all processes
You can use
launchctl's
bslist command to view a bootstrap namespace by pointing it at a process using that namespace. For example, Listing 3 shows how to list the services in the global namespace by targeting
kextd.
Listing 3 Dumping the global bootstrap namespace
So the "com.apple.KernelExtensionServer" service is registered in the global bootstrap namespace, and can be seen by all processes on the system. On the other hand, "com.apple.dock.server" service (from Listing 1) is registered in the per-session namespace and can only be seen by processes using that namespace.
So, the rules to remember are:
If you're developing an agent, make sure it inherits a reference to the correct per-user or per-session bootstrap namespace. Table 2 shows the execution context for each type of background program.
If you're developing a daemon, make sure it inherits a reference to the global bootstrap namespace.
If you're developing a daemon, make sure it only uses services that are registered in the global bootstrap namespace. See Layered Frameworks for more information about this.
If you don't follow these rules, things might look like they're working, but you'll run into obscure problems down the line. The next section gives an example of one such problem. This is but one example; there are many more potential problems. The only guaranteed way to avoid such problems is to follow these rules, which are discussed further in Layered Frameworks.
Namespace Exploration
All of this might seem a little theoretical, and you're probably wondering how it affects you. Let's make things crystal clear by looking at a concrete example of how bootstrap namespaces affect application functionality.
Launch Activity Monitor (in
/Applications/Utilities) from the Finder.
If it's not currently showing its main window, choose Activity Monitor from the Window menu to reveal that window.
Configuring it to show a CPU usage meter in its dock icon by choosing Show CPU Usage from the Dock Icon submenu of the View menu. You'll note that the Dock icon now displays a up-to-date representation of the CPU usage, and that the Dock menu works correctly. Figure 1 shows what I'm talking about.
Now quit Activity Monitor.
Next, open a Terminal window and SSH to yourself, as shown in Listing 4.
Listing 4 Connecting to yourself via ssh
When you log in via ssh, the system creates a new per-session bootstrap namespace for your session. So, anything you execute in this Terminal window is running in a different per-session bootstrap namespace from the GUI applications that you're running.
Now run Activity Monitor from this Terminal window, as shown in Listing 5.
Listing 5 Running Activity Monitor from within SSH login session
Figure 2 shows what you get. You'll notice two anomalous things:
The Dock tile CPU meter is not displayed.
If you control click on the Dock tile, the contextual menu contains an "Application Not Responding" item.
These problems occur because the application is running in a different per-session bootstrap namespace, and is unable to look up the services that it needs to operate correctly (for example, "com.apple.dock.server").
Window Server
The window server (on Mac OS X 10.5 this is
/System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Resources/WindowServer) is a single point of contact for all applications. It is central to the implementation of the GUI frameworks (AppKit and HIToolbox) and many other services (for example, Process Manager).
Most of the services provided by the window server are implemented using Mach messages. Thus, to use the window server reliably, you must inherit a reference to a valid GUI per-session bootstrap namespace. This is an expected consequence of the rules given earlier.
What's unexpected, however, is that applications do work (somewhat) if you run them from outside of a GUI login session (that is, if they inherit a reference to the global bootstrap namespace, or some other non-GUI bootstrap namespace). This is because the window server advertises its services in the global bootstrap namespace! This is known as the global window server service.
The reasons for this non-obvious behavior are lost in the depths of history. However, the fact that this works at all is pretty much irrelevant because there are important caveats that prevent it from being truly useful. The following sections describe these caveats in detail.
More Than Window Server
The window server is not the only service that's required for an application to function correctly. As described earlier, the Dock service is also required, and it is only registered in GUI per-session bootstrap namespaces. There are many other services like this.
Permission To Connect
The console user is the user whose GUI login session is using the console. The console device,
/dev/console, is owned by the console user.
A process can only use the global window server service if its EUID is 0 (it's running as root) or matches the UID of the console user. All other users are barred from using it.
For a demonstration of this, you can SSH to your own machine and try to run Activity Monitor from your shell. Listing 6 shows an example of doing this from Terminal. The first attempt to run Activity Monitor command works because it's running as the same user as Terminal. The second attempt fails because the test user (
mrgumby) does not match the console user, and thus cannot access the global window server service.
Listing 6 Accessing the window server from console and non-console users
This limitation makes it very hard to reliably use the global window service because:
Standard security practice is that daemons should not run as root; rather, they should be run by a dedicated user (that is, the
wombatddaemon is run by a dedicated
_wombatuser).
Also, standard security practice dictates that programs running as root should try to reduce their attack surface by limiting the list of frameworks that they use. Thus, in general, programs running as root should not use high-level frameworks that rely on the window server, like AppKit and HIToolbox.
So, solving the problem by running as root is a security no-no.
There is no easy way to solve the problem by running your daemon as the console user because, with fast user switching, the console user can change at any time.
Window Server Lifecycle
The final nail in the coffin of the global window server service relates to the window server lifecycle. Contrary to what you might expect, the window server is not always running. Rather, at certain times (see the note below), the window server quits and is relaunched by
launchd. As with the global window server service, the reasons for this are also lost in the depths of history. However, the consequences are crucial: when the window server quits, any process that's connected to it will terminate.
You can see this in action using the program from Listing 7. This program monitors the foreground application using Carbon Events, which requires it to connect to the window server.
Listing 7 A simple program that connects to the window server
To test this, you have to log in using SSH from a different machine (because Terminal will hold up the GUI logout if there is a process running in one of its windows). Listing 8 shows what this session might look like.
Listing 8 Death by GUI logout
To replicate this yourself, do the following.
Log into the console of the victim machine (
victim.local.in this example).
From another machine, SSH to the victim machine and run the
AppSwitchMonitorprogram. This will run forever, printing a message when the foreground application changes.
Using the GUI, log out.
You should notice a delay in the log out. This is
loginwindowwaiting for
AppSwitchMonitorto respond to the
'quit'Apple event.
AppSwitchMonitordoes get this event, which causes a return from
RunApplicationEventLoop. However, the while loop around
RunApplicationEventLoopkeeps it running.
The
AppSwitchMonitorprogram eventually dies. Printing the exit status reveals that it died because of a
SIGKILLsignal (you can decode this exit status using the macros in
<sys/wait.h>; see the wait man page for details).
This program is killed because the window server keeps track of the processes that are using its services. When you log out, the system (actually
loginwindow) tries to quit these. For each GUI process, it sends a
'quit' Apple event to the process. If any GUI process refuses to quit,
loginwindow halts the logout and displays a message to the user.
The situation for non-GUI processes is slightly different:
loginwindow first tries to quit the process using a
'quit' Apple event; if that fails it terminates the program by sending it a
SIGKILL signal. There is no way to catch or ignore this signal.
The upshot of this is that, if your process connects to the window server, it will not survive a normal logout.
There are other issues related to the window server lifecycle that cause problems for daemons. Even if you don't actually connect to the window server, you can still get into trouble if your daemon registers services in the per-session bootstrap namespace. When the window server quits, any per-session bootstrap namespaces created by that window server are deactivated. For more information about this, see Bootstrap Namespaces: One More Thing.
Pre-Login Trust Issues
If, in Mac OS X 10.5 and later, you see a message like that shown in Listing 9 you might mistakenly think that the solution is to get the system to 'trust' your application, perhaps via code signing.
Listing 9 Pre-Login Trust Message
However, this isn't the case (r. 5544764) . This message is really telling you is that you're trying to connect to the window server from the wrong context. You see this message if you try to connect to the global window server service from outside of the pre-login context before the user has logged in; typically this means that you're trying to use the window server from a daemon.
You should not attempt to fix this by convincing the window server to trust your program; doing so will just cause other problems further down the road. For example, if you do successfully connect to the window server from your daemon, you still have to deal with window server lifecycle issues described previously.
Instead, you should fix this problem by changing your code to run in the correct context. If you need to connect to the window server in a pre-login context, create a pre-login launchd agent. For an example of this, see Sample Code 'PreLoginAgents'.
Security Context
The security context is another piece of execution context associated with a process. The security context is explicitly managed by the Mac OS X security server (
securityd). The security context almost always follows the bootstrap namespace: that is, there is a single global security context (also known as the root security context), per-user security contexts (for per-user launchd agents), and a per-session security context for each login session.
In most cases the security context is not directly relevant to your program; more often than not, the bootstrap namespace is the thing that trips you up.
On the other hand, the security context has one nice attribute: you can get useful information about the context using the
SessionGetInfo routine (from
<Security/AuthSession.h>). This routine return two useful pieces of information.
The first is the session identifier, a 32-bit number that uniquely identifies this session. This can be helpful in a number of places. For example, if you want to create a shared memory object (perhaps using shm_open) whose scope is limited to processes running within a particular login session, you can fold the session identifier into the object's name. Thus, two shared memory objects from different session won't collide in the shared memory object namespace, and client processes within the login session can easily find the correct shared memory object.
Additionally,
SessionGetInfo returns a set of flags that describe the current security context. These are:
sessionIsRoot— set if this is the global security context
sessionHasGraphicAccess— set if programs running in this context can access the window server
sessionHasTTY— set if programs running in this context can access the terminal (
/dev/tty)
sessionIsRemote— set if this context is the being run over the network
Execution Context Summary
Table 2 shows how the execution context of your daemon or agent is affected by the mechanism used to launch it.
Notes:
Configured using the
UserNameproperty in the property list file; defaults to
rootif the attribute is not specified.
Uses the global bootstrap namespace unless the
SessionCreateproperty is specified in the property list file, in which case the daemon runs in its own per-session bootstrap namespace.
Uses the global security context unless the
SessionCreateproperty is specified in the property list file, in which case the daemon runs in its own per-session security context.
Prior to Mac OS X 10.5 there was no way to control the target session type of a launchd agent; launchd agents were executed per-user in an unpredictable context. See Agents for details.
Once the user has logged in, this pre-login bootstrap namespace and security context become the bootstrap namespace and security context of the logged in user's login session.
Execution Context Example
Figure 3 is graphical example of this information. Each box is a process. The text after the process name is either the UID of that process (text in round brackets) or a note that you can look up below (text in square brackets) or both. A directed line represents the parent/child relationship between two processes. Each blue box represents a per-user bootstrap namespace. Each red box represents a per-session bootstrap namespace. Items that aren't in any shaded box are in the global bootstrap namespace. There are two GUI login sessions (user A logged in first and then fast user switched to user B) and two non-GUI login session (user A and user C).
). Currently
ftpdruns in the global security context, but this may change in future systems; it will then will act more like
sshd, and create a per-user security context for each FTP session.
As you look at Figure 3, consider the various different types of process that it shows.
There is a global instance of
launchd(PID 1). There is also an instance of
launchdfor each user.
There is a single instance of the
WindowServerprocess that resides in the global security context. However, it knows about all GUI login sessions and can register its services in each of them.
The first instance of
loginwindow, the one associated with user A's login session, is a child of the global
launchd. The second instance, created when user A fast user switched to user B, is a child of the window server.
Each per-user instance of
launchdmanages the context for all of that user's login sessions.
The pasteboard server (
pboard) is a typical GUI launchd agent.
StdIOAgentis a hypothetical non-GUI launchd agent. It's included to illustrate where this type of agent would appear in the diagram. Mac OS X does not currently install any non-GUI launchd agents by default.
CCacheServeris a per-user launchd agent that maintains the Kerberos credentials cache for a specific user.
The
ReportCrashprogram is a GUI launchd agent with special values in its property list that tell
launchdto run it in response to crashes.
sshdis a launchd daemon with the
SessionCreateproperty set, which means that it runs in its own bootstrap namespace. Once a user logs in, the launchd PAM session module (
/usr/lib/pam/pam_launchd.so, as configured by
/etc/pam.d/sshd) moves the bootstrap namespace to within the appropriate per-user bootstrap namespace.
configdis a run-of-the-mill launchd daemon.
ftpdis a launchd daemon that is launched when someone connects to the TCP port specified in its property list file.
Layered Frameworks
Most Mac OS X functionality is implemented by large system frameworks. Many of these frameworks use Mach-based services that they look up using the bootstrap service. This can cause all sorts of problems if you call them from a program which references the wrong bootstrap namespace.
Apple's solution to this problem is layering: we divide our frameworks into layers, and decide, for each layer, whether that layer supports operations in the global bootstrap namespace. The basic rule is that everything in CoreServices and below (including System, IOKit, System Configuration, Foundation) should work in any bootstrap namespace (these are daemon-safe frameworks), whereas everything above CoreServices (including ApplicationServices, Carbon, and AppKit) requires a GUI per-session bootstrap namespace.
The only fly in this ointment is that some frameworks aren't properly layered. QuickTime is a perfect example of this. Because of its traditional Mac OS heritage, QuickTime isn't clearly layered into 'core' and 'application' frameworks. Rather, there is one big framework, and its not documented which bits work from a daemon. The only way to be one hundred percent safe is to not use frameworks like these, but that isn't an option for many developers. You can, however, minimize the risk by restricting the set of routines that you call. Living Dangerously describes this idea in more detail.
In summary, the concrete recommendations are:
When writing a daemon, only link to daemon-safe frameworks (see Framework Cross Reference).
When writing a GUI agent, you can link with any framework.
If you're writing a daemon and you must link with a framework that's not daemon-safe, consider splitting your code into a daemon component and an agent component. If that's not possible, be aware of the potential issues associated with linking a daemon to unsafe frameworks (as described in the next section).
Living Dangerously
If your daemon uses frameworks that aren't daemon-safe, you can run into a variety of problems.
Some frameworks fail at load time. That is, the framework has an initialization routine that assumes it's running in a per-session context and fails if it's not.
This problem is rare on current systems because most frameworks are initialized lazily.
If the framework doesn't fail at load time, you may still encounter problems as you call various routines from that framework.
A routine might fail benignly. For example, the routine might fail silently, or print a message to
stderr, or perhaps return a meaningful error code.
A routine might fail hostilely. For example, it's quite common for the GUI frameworks to call abort if they're run by a daemon!
A routine might work even though its framework is not officially daemon-safe.
A routine might behave differently depending on its input parameters. For example, an image decompression routine might work for some types of images and fail for others.
The behavior of any given framework, and the routines within that framework, can change from release-to-release.
The upshot of this is that, if your daemon links with a framework that's not daemon-safe, you can't predict how it will behave in general. It might work on your machine, but fail on some other user's machine, or fail on a future system release, or fail for different input data. You are living dangerously!
If you must call a framework that's not daemon-safe from your daemon, you should start by filing a bug describing what you're doing and why. Apple will consider your input as it develops future system software.
Next, you should try to minimize the number of potentially unsafe routines that you call. This will reduce (but not eliminate) the compatible risk.
Finally, you should test your daemon on a wide variety of platforms. And make sure you test with the pre-release builds Mac OS X that are available via Apple's developer software seeding program.
Framework Cross Reference
Table 3 summarizes which frameworks are daemon-safe.
Notes:
This framework has been daemon-safe since at least Mac OS X 10.4. However, a previous version of this technote documented it as not being daemon safe.
This is not a framework that you would usually call yourself.
You should define the Java property
java.awt.headlessas
true, which causes Java to throw an exception if you run code that connects to the window server. See Technical Q&A QA1328, 'Server Processes and the Dock'.
With restrictions; please write to Developer Technical Support (DTS) for the details.
Core Python functionality is daemon safe, but certain modules are not. Thus, depending on the modules you use, your Python program may not be daemon safe.
This table lists the frameworks as of Mac OS X 10.5. If a framework is not listed here, it is best to assume that it is not daemon-safe.
If you're interested in the status of a subframework, and that subframework is not listed here, you should look up the status of the framework's umbrella. For example, you can tell that the OSServices framework is safe because its umbrella framework, CoreServices, is safe.
If a framework is listed as safe, it will continue to be safe in future system releases. If a framework is listed as not being safe, it may become safe in future releases.
Design Considerations
This section lists some important points that you should consider when designing a background program.
Is It Necessary?
When contemplating a background program, the first thing to consider is whether you need it at all. Background programs consume resources all the time (although you can minimize the resources they consume by launching on demand), so if you can avoid using one you'll be doing the user a favor. Also, a background program can make the system less robust because it introduces a common point of failure (in this respect, a daemon is worse than an agent, which is worse than no background program at all). On the other hand, using a background program is often better than using a convoluted mechanism for distributed state management.
If you've decided to implement a background program, you must then determine whether you need an agent or a daemon. The main reason for using a daemon is that you need to share some state between multiple processes in different login sessions. If you don't need this, consider using an agent.
The Perils of the Monolith
A common question we get in DTS is "How can I launch a GUI application from my daemon?" The answer is that you can't. This is a direct consequence of Mac OS X's layered architecture: a daemon can't launch a GUI application because the daemon is running in the wrong context. Even if it could choose which context to run the GUI application in, which would it choose? And what happens if the computer is sitting at the login window, which means there are no valid GUI contexts?
The correct solution to this problem is to split your program into multiple components, each specialized for a particular role. So, you might have a daemon for managing the global state and an agent operating on behalf of each of the logged in users. If the daemon needs the user to answer a question, it signals all the agents and the agents use local knowledge to decide what to do. For example, the agent might behave differently depending on whether it's running in a GUI or non-GUI login session, or whether the GUI session is active (that is, using the console in a fast user switched environment).
The nice thing about this approach is that it simplifies the daemon's state. The daemon just knows that it's expecting one of the agents to provide the answer to the question; it doesn't care how many agents there are, what type of login context they're running in, or how they go about interacting with the user..
Getting Launched
Once you've got the basic design under control, you need to decide how to launch each component. Figure 4 and Figure 5 show how you can evaluate your needs to come up with the best approach.
As this algorithm shows, Apple recommends that, where possible, you implement background programs using launchd. The reasons for this are:
launchdmakes it easy for your background program to launch on demand
launchdmakes it easy to install and manage your background program (using launchctl)
Apple recommends that you use UNIX domain sockets for client/daemon communication, and
launchdmakes this easy
use of the most common alternative, a startup item, is deprecated as of Mac OS X 10.4
Daemon IPC Recommendations
Most daemons use some sort of inter-process communication (IPC) to communicate between the daemon and its clients. If you're writing a daemon, one of your first design decisions should be the IPC mechanism to use. This section describes the pitfalls of using Mach message-based IPC in your daemon, and why you should consider UNIX domain sockets as an alternative.
Mach Considered Harmful
The Mach APIs represent the lowest level interface to the kernel. As such, they are the most likely to change as the system evolves. Apple has consistently recommended that third party developers avoid them. This applies to daemons and agents as it does to everything else.
However, in the case of daemons and agents, Mach APIs are particularly troublesome because you have to worry about bootstrap namespaces. While it's possible—some would even claim easy—to manage bootstrap namespaces correctly, it's better to just avoid the problem altogether by avoiding Mach APIs.
Apple's general recommendation for avoiding the use of Mach APIs includes the advice that it's better to use high-level wrappers. For example, rather than send and receive Mach messages using Mach APIs directly, it's better to use the higher-level CFMessagePort API. This is good advice in general; but, if you're writing a daemon or agent, you will still run into bootstrap namespace problem. Under the covers,
CFMessagePortCreateLocal registers the service with the bootstrap service, and
CFMessagePortCreateRemote uses the bootstrap service to look up a registered service by name. So, even if you avoid using Mach messages directly, if you use an API that's layered on top of Mach messages, you still have to worry about bootstrap namespaces.
The high-level IPC APIs that are most likely to trip you up are:
CFMessagePort — Within this API, all named message ports are implementing using the bootstrap service.
Distributed Objects (DO) — Named DO connections (such as you would register using
-[NSConnection registerName:]), are implemented using the bootstrap service.
Apple events — Apple events are implemented in terms of Mach messages. While it is possible to use Apple events in a daemon (using the routines from
<AE/AEMach.h>, it's still a tricky exercise.
Overall, it is easier to just avoid Mach messages entirely. Mac OS X provides numerous alternative IPC mechanisms. My personal favorite is UNIX domain sockets.
UNIX Domain Sockets Are Your Friend
UNIX domain sockets are somewhat like TCP/IP sockets, except that the communication is always local to the computer. You access UNIX domain sockets using the same BSD sockets API that you'd use for TCP/IP sockets. The primary difference is the address format. For TCP/IP sockets, the address structure (that which you pass to
bind,
connect, and so on) is
(struct sockaddr_in), which contains an IP address and port number. For UNIX domain sockets, the address structure is
(struct sockaddr_un), which contains a path.
When the server binds to a UNIX domain socket, the system creates a file system object that represents the socket. For example, the PPP daemon's UNIX domain socket is
/var/run/pppconfd. When you look at this with
ls -l (see Listing 10), you'll see the first character of the listing is 's', indicating that this object is a socket.
Listing 10 Looking at PPP's UNIX domain socket
Once the server is running, the client can connect to it by simply passing this path to the
connect call. Once the connection is in place, communication proceeds as it would for TCP/IP sockets.
To learn more about UNIX domain sockets, consult any standard UNIX reference. I particularly recommend UNIX Network Programming by Stevens et al.
For an example of using UNIX domain sockets in a client/server environment, see Sample Code 'CFLocalServer'.
Advantages of UNIX Domain Sockets
When implementing a daemon, the UNIX domain sockets API offers a number of advantages over other IPC mechanisms.
Compared to a TCP socket, it guarantees that only processes running on your local machine can connect to your server. You can also get this guarantee with TCP, but it requires extra work.
Compared to Apple events, it works on all versions of Mac OS X (there are problems using Apple events from a daemon prior to Mac OS X 10.2). Also, it's a connection-oriented API, so the server automatically learns about the death of a client and it's easy for the server to asynchronously notify the client.
Compared to Mach messages (and any high-level wrappers, like CFMessagePort), it eliminates any bootstrap namespace concerns. Also, when you use Mach messages directly, you have to do extra work for the server to be notified when a client dies unexpectedly.
It can be easily integrated into any server architecture, including those based on threads, a runloop (using CFSocket), a select loop, or kqueues.
It is a well known POSIX API with numerous sources of good documentation.
Source code based on UNIX domain sockets code is portable to other POSIX platforms.
It can be nicely integrated into a launchd daemon. Specifically, starting with Mac OS X 10.4.6 it is possible for a launchd daemon to be launched on demand when a client connects to a UNIX domain socket.
It supports descriptor passing, which makes it easier for you to factor your program into privileged and non-privileged components.
Best Practice
When using UNIX domain sockets, keep in mind the following points.
A UNIX domain socket appears as an item in the file system. The client and server usually hard code the path to this socket. You should use a path to an appropriate directory (like
/var/tmp) and then give the socket a unique name within that directory. For example, Sample Code 'CFLocalServer' uses a path of
/var/tmp/com.apple.dts.CFLocalServer/Socket.
Use a socket path that's appropriate to the scope of communications. For example, a daemon can use a constant socket path because all clients connect to the same daemon. On the other hand, an agent should embed some scope identifier in the path. For example, if all of the agent's clients are in the same login session, it should include the session identifier in the path. Or if the agent is intended to be used by all processes of a particular user, it could embed the user ID in the path (or, better yet, the user's UUID).
When creating your UNIX domain socket, you should be careful to avoid security problems caused by file system race conditions. Sample Code 'CFLocalServer' shows one approach for this; download the sample and look at the
SafeBindUnixDomainSocketroutine in
Server.c.
Alternatively, if your daemon runs as root, you can put your UNIX domain socket in
/var/run. This directory is only writable by privileged processes, which avoids the above mentioned security problems.
You can confirm the identity of the program at the other end of the socket using the LOCAL_PEERCRED socket option, introduced in Mac OS X 10.4.
Cross Architecture IPC
Regardless of what IPC mechanism you use, you will have to deal with cross-architectural issues. For example, a given daemon may have clients that have a different:
byte order (endianness) — The most common case of this is an Intel daemon with a PowerPC client (being run using Rosetta).
pointer size — For example, a 32-bit daemon with 64-bit clients, or vice versa.
You must design your IPC mechanism to cope with these issues. Common strategy include:
use a text-base protocol — For example, you might use a CFPropertyList that's been flattened to XML.
define a byte order and stick to it — For example, you could require that all communications be in network byte order (big endian) and require little endian clients to swap the byte order.
use native byte order and require compatibility code to swap
include an indication of the byte order with the message (commonly called a byte order mark)
don't pass pointers over the wire (doing so is a little strange anyway)
when passing a size, define it in architecture neutral terms — For example, use
uint32_tor
uint64_trather than
size_t.
Alternatively, require that all clients be of the same architecture.
Coding Recommendations
The following sections list various coding recommendations for background programs.
Launching On Demand
To minimize your use of system resources, it's best to launch your background program (daemon or agent) on demand. That is, rather than always launching at startup (or, for an agent, at login), your program only launches when its services are required.
The basic strategy for launching on demand is:
You install a launchd configuration file that indicates the criteria under which you should be launched.
launchdwill read this file and waits for your launch criteria to be satisfied.
When your launch criteria are satisfied,
launchdstarts your program, providing it some way to get information about the event that triggered the launch.
The launch criteria supported by
launchd are listed in Table 4. See launchd.plist for the details.
Notes:
launchdwill create a Mach port and register it in the appropriate bootstrap namespace. It launches your background program if any messages arrive on that port.
Typically this is either a TCP socket or a UNIX domain socket. Launching on demand based on a connection to a UNIX domain socket was not feasible until Mac OS X 10.4.6. Specifically, the
SockPathModeproperty list key was not supported on PowerPC until Mac OS X 10.4.6 (r. 4252903) .
Typically this is a UDP socket.
By default
launchd assumes that your background program wants to run on demand. This behavior is controlled by the
KeepAlive property on Mac OS X 10.5 and later, and the now-deprecated
OnDemand property on Mac OS X 10.4.x. Table 5 shows the more common behaviors, and the properties that you need to set in order to get that behavior.
As you can see, the
KeepAlive property has a lot more flexibility than the
OnDemand property that it replaces.
Daemons Accessing User State
It is not possible for a daemon to act on behalf of a user with 100% fidelity. While this might seem like a controversial statement, it's actually pretty easy to prove. For example, consider something as simple as accessing a preference file in the user's home directory. It's not possible for a daemon to reliably do this. If the user has an AFP home directory, or their home directory is protected by FileVault, the volume containing the home directory will only be mounted when the user is logged in. Moreover, it is not possible to mount the that volume without the user's security credentials (typically their password). So, if a daemon tries to get a user preference when the user is not logged in, it will fail.
In some cases it is helpful to impersonate the user, at least as far as the permissions checking done by the BSD subsystem of the kernel. A single-threaded daemon can do this using seteuid and setegid. These set the effective user and group ID of the process as a whole. This will cause problems if your daemon is using multiple threads to handle requests from different users. In that case you can set the effective user and group ID of a thread using
pthread_setugid_np. This was introduced in Mac OS X 10.4.
Logging
In most cases you will want your background program to log information about its activity. These logs are useful during initial bring up, and can also help the user troubleshoot problems. Mac OS X supports a number of different logging facilities.
Apple System Log
The Apple System Log (ASL) is a logging facility introduced in Mac OS X 10.4. It allows you to create flexible, structured log entries. It also allows you to write management tools that query these logs.
For information about ASL, see its man page. For an example of its use, see Sample Code 'SampleD'.
Syslog
Syslog is a traditional BSD logging facility. It allows you to log (relatively unstructured) messages, and configure how those messages are processed.
For information about syslog, see the man pages for the syslogd, syslog, and logger programs; for the syslog.conf configuration file; and for the syslog API.
Logging With printf
If you're creating a launchd daemon or launchd agent,
launchd makes it easy to log using
printf. Specifically:
You can supply a specific destination for
stdoutand
stderrby setting the
StandardOutPathand
StandardErrorPathproperties in your program's property list.
If you don't supply these properties, the behavior in system-specific. Prior to Mac OS X 10.5, your program's
stdoutand
stderrwill be connected to
/dev/null. In Mac OS X 10.5 and later,
launchdwill capture any output to these streams and redirect it to ASL.
If you're not a launchd daemon or launchd agent, the situation is more complex.
For daemons that aren't launchd daemons,
stdoutand
stderrare typically connected to
/dev/null. If you want to log using
printf, you will have to redirect them appropriately.
For applications running on Mac OS X 10.5 and later,
launchdwill redirect
stdoutand
stderrto ASL (much as it does for launchd agents).
For applications running on systems prior to Mac OS X 10.5,
stdoutand
stderrgo directly into the console log. For information about the console log, see Technical Note TN2124, 'Mac OS X Debugging Magic'.
Daemon Security Considerations
By definition, a daemon is a system-wide service. Thus, when creating a daemon, you need to worry about security. While a detailed discussion of secure programming is outside the scope of this document, here's a short list of things that you should consider.
If possible, create an agent rather than a daemon. This will limit the scope of any security problems.
If your daemon uses the network, do not trust data you receive from the network. Doing so might allow a remote attacker to subvert your daemon.
When).
Try to avoid running your daemon with elevated privileges (for example, running it as root). If your daemon must run with elevated privileges, do not trust data received from non-privileged processes. Doing so might allow a local user to escalate their privileges.
Watch out for denial of service attacks. If your daemon sends a message to a client, make sure that it works correctly if the client never responds. Don't allow clients to run your daemon out of memory, or file descriptors, or any other resource.
Watch out for other common security problems, particularly buffer overruns. The nature of daemons makes these problems more worrisome.
For more information about writing secure software, consult one of the numerous texts available on the topic. I highly recommend Building Secure Software.
Agents and Fast User Switching
If you're writing an agent, you should be aware of the subtleties associated with fast user switching. For example, your agent might need to know:
whether it's running in a GUI or non-GUI login session
whether its GUI login session is active (that is, using the console)
and so on
For the detailed information about this, see Multiple User Environments.
Process Manager and Launch Services
Historically, neither Process Manager nor Launch Services were daemon safe. This represents a significant gotcha for daemon writers who, quite reasonably, expect that process services should be available to a daemon. However, if you look at the these APIs in their historical context, you can understand why this restriction existed.
Consider the Process Manager routine
GetNextProcess. Numerous programs, from both Apple and third parties, use this routine to display a list of running applications. It wouldn't make sense for this list to show applications running in other GUI login sessions. Thus, this routine uses the bootstrap namespace as an implicit parameter to determine the login session, and hence the list of running applications to return.
Launch Services, which is layered on top of the Process Manager, had similar issues.
Thus, both Process Manager and Launch Services have, historically, been located in the ApplicationServices framework, and were not available to daemons.
Apple recognizes that this restriction is quite inconvenient and is working to improve things. Mac OS X 10.5 represents the first step on that path. Specifically, the underlying implementation of Process Manager has moved from the window server to the CoreServices daemon (
coreservicesd). With this change, it is now possible to create daemon-safe system services that deal with Process Manager processes.
The first beneficiary of this change in Launch Services. In Mac OS X 10.5 Launch Services has been moved down to the Core Services layer (it is now a subframework of the CoreServices framework). Thus, it is now daemon-safe.
This raises the question of what happens when you use Launch Services to launch an application from a daemon. Because the daemon is running in the global bootstrap namespace, Launch Services can't use the bootstrap namespace to derive the login session. Rather, Launch Services uses the EUID of the calling process. The behavior is as follows:
If the EUID of the calling process is zero, the application is launched in the context of the currently active GUI login session. If there is no currently active GUI login session (no one is logged in, or a logged in user has fast user switched to the login window), the behavior is unspecified (r. 5321293) .
If the EUID of the calling process is not zero and it matches that of a user logged in via the GUI, the application is launched in the context of that user's GUI login session.
If the EUID of the calling process is not zero but it does not match that of a user logged in via the GUI, the behavior is unspecified (r. 5321281) .
Unfortunately, while the underlying implementation of Process Manager is now at the CoreServices layer, Process Manager itself remains in the ApplicationServices framework. Thus, it's still not possible to use Process Manager from a daemon.
Hints and Tips
This section describes various hints and tips that you might find useful when developing a background program.
Starting a Daemon
If you're writing a daemon, you may encounter problems when you start the daemon immediately after installing it. There are two essentially conflicting requirements.
You don't want to force the user to reboot.
If you launch your daemon directly from your configuration application (or installer), it inherits the wrong context (see Execution Contexts for details).
There are a number of ways that you can resolve this conundrum, listed in order of most- to least-preferable.
If you have a launchd daemon, you can start it by running launchctl.
launchctlworks by sending a message to
launchd, asking it to launch the daemon on your behalf, so your daemon is a child of
launchdand inherits the correct context.
Otherwise, you can launch the daemon in the global bootstrap namespace using the StartupItemContext tool. This tool is not perfect (r. 4283301) , but it works in most situations.
Finally, if your daemon only uses daemon-safe frameworks and it doesn't use or register Mach messaging services directly or indirectly, you should be able to just launch it. It will be running in the wrong context, but that probably won't cause serious problems.
Debugging Startup
Because background programs are usually run indirectly (that is, they're sublaunched by a system service, such as
launchd), it can be tricky to debug their startup code. There are three tricks you can use here.
You can add a command line option (traditionally it's "-d" for "debug") that causes your program to run as a standard tool (for example, a startup item wouldn't daemonize itself). This allows you to debug it directly from the GDB.
If you're developer a launchd daemon or agent, you can add the
WaitForDebuggerproperty to your property list file. A true value will cause
launchdto start your program in a suspended state, whereupon you can attach with the debugger.
This property is supported in Mac OS X 10.5 and later.
If the preceding options don't help, you can add a pause system call to your startup code, as illustrated by Listing 11. This stops the program until a signal arrives and, as it turns out, attaching to the process with GDB sends a signal to the process and unblocks it.
Listing 11 Pausing at startup
Debugging Rogue Window Server Use
If your daemon dies with a message like that shown in Listing 6, and you can't think of any reason why it should be attaching to the window server, there are a variety of things you can do to debug the problem. The first is to set the
INIT_Processes environment variable. When you see the resulting message in the system log, you can attached with GDB and do a backtrace to see what triggered the connection. For more details about this environment variable, see Technical Note TN2124, 'Mac OS X Debugging Magic'.
Watch Your Logs!
While developing a background program, it's a good idea to habitually monitor its log file. You'd be amazed how often an unexpected log message will let you quickly isolate an obscure problem. This can be as simple as logging using ASL or printf and running the Console application in the background.
If you're writing a daemon that's launched at startup time (before you can launch the Console application or even SSH info the machine), you may be able to see its log messages by holding down command-V at startup (for "verbose boot"). To permanently enable verbose boots, add the "-v" argument to your
boot-args NVRAM variable, as shown in Listing 12.
Listing 12 Enabling verbose boots
Bootstrap Namespaces: One More Thing
The earlier discussion of bootstrap namespaces (see Bootstrap Namespaces) talked extensively about how they are created, but it glossed over the issue of how they are destroyed. It turns out that a bootstrap namespace exists until the last process that references it quits. So, as long as your process is running, you are guaranteed that your bootstrap namespace will be valid.
There is one gotcha, however. When the system creates a bootstrap namespace, it associates it with the process that created it. When that process quits, the namespace is deactivated. For example, if you start a daemon from within an SSH login session and then log out, the process that created the bootstrap namespace (effectively this is
sshd) quits, and the bootstrap namespace that it created, which is the namespace that your daemon inherited, is deactivated.
A deactivated bootstrap namespace allows you to look up services, but it does not allow you to register new services. Any attempt to register a service in a deactivate namespace will fail with an error
BOOTSTRAP_NOT_PRIVILEGED (1100). Similarly, any high-level wrapper that registers a service will fail. For example,
CFMessagePortCreateLocal will print an error message and return
NULL if you call it after your namespace has been deactivated.
Earlier versions of Mac OS X (prior to Mac OS X 10.2) did not deactivate bootstrap namespaces; rather, when the process that created a namespace quits, the bootstrap service would destroy the namespace immediately. Any process still referencing that namespace would find itself without any bootstrap namespace (technically, their bootstrap port would now be a Mach dead name).
Apple started deactivating, rather that destroying, bootstrap namespaces as a stopgap measure to improve compatibility for programs that were incorrectly running in the wrong namespace (typically due to a lack of understand of bootstrap namespaces on behalf of the program's author). Future systems may eliminate the concept of deactivated bootstrap namespaces, and thus revert to the pre-10.2 behavior. As long as your program is running in the correct namespace, this should not cause any compatibility problems.
Careful With That Fork, Eugene
Starting in Mac OS X 10.5
launchd works to garbage collect any child processes of a launchd daemon or agent process when that process quits. Specifically, when a launchd daemon or agent quits,
launchd will send a
SIGTERM to the associated process group.
This can cause problems if you're developing a launchd daemon that creates a child process to run some other program (by the traditional fork and exec combination, or via posix_spawn). The child process will inherit its process group ID from your daemon. If your daemon quits before the child process, the child process will receive a
SIGTERM because it's in the same process group as your daemon.
There are a number of ways to avoid being tripped up by this, listed below in order of most- to least-preferable.
run the 'child' via
launchd— If you make the 'child' a separate launchd job, everything will Just Work™.
create a new session for the child — If you must continue to create the child process manually, have the child process run in a new session (and, consequently, in a new process group) by calling setsid.
use the
AbandonProcessGroupproperty — If you add this property to your program's property list file,
launchdwill not attempt to garbage collect your child processes.
Traditionally you would isolate the child process from the daemon by having the child call daemon. And, if you did this, the child would indeed end up in a separate process group, and would not be garbage collected by
launchd. Keep in mind, however, that the
daemon routine is officially deprecated in Mac OS X 10.5 and later.
Further Reading
System Startup Programming Topics
Multiple User Environments
Kernel Programming Guide (specifically the "Bootstrap Contexts" section)
Technical Note TN2124, 'Mac OS X Debugging Magic'
Sample Code 'LoginItemsAE'
Sample Code 'BootstrapDump'
Sample Code 'CFLocalServer'
Sample Code 'PreLoginAgents'
"Advanced Programming in the UNIX Environment", W Richard Stevens, Stephen A Rago, ISBN 0201433079
"UNIX Network Programming, Volume 1, The Sockets Networking API", W Richard Stevens, Bill Fenner, Andrew M Rudoff, ISBN 0131411551
Building Secure Software: How to Avoid Security Problems the Right Way, John Viega, Gary McGraw, ISBN 020172152X
launchd.plist configuration file man page
xinetd tool man page (not on Mac OS X 10.5 and later)
SystemStarter tool man page
StartupItemContext tool man page
syslog.conf configuration file man page
wait system routine man page
daemon system routine man page
pause system routine man page
Old Systems and Technology
The bulk of this technote reflects the reality of Mac OS X 10.5. This section describes technology from earlier systems that is now deprecated or unsupported, and discusses other topics that do not apply on modern systems.
Deprecated Daemonomicon
A number of mechanisms used to launch background programs are now deprecated or unsupported. This section describes these mechanisms.
Startup Item
A startup item is a daemon launched at startup time by the SystemStarter program (which is launched by
launchd on Mac OS X 10.4 and later, and directly from
/etc/rc.local on earlier systems). A third party startup item should be installed in the
/Library/StartupItems directory. For more information about startup items, see System Startup Programming Topics.
Starting with Mac OS X 10.4, startup items are deprecated in favor of launchd daemons.
mach_init Daemon
A mach_init daemon is launched by the Mach initialization process (
mach_init on Mac OS X 10.3.x and earlier,
launchd on Mac OS X 10.4 and later). A mach_init daemon is installed by placing a property list file in the
/etc/mach_init.d directory. Apple does not support third party development of mach_init daemons.
Starting with Mac OS X 10.5, mach_init daemons are deprecated in favor of launchd daemons; a launchd daemon can now specify a list of Mach service names to register as part of its property list file.
inetd and xinetd Daemon
inetd daemons and xinetd daemons are launched by the Internet super server (originally
inetd and later
xinetd and now
launchd). An inetd daemon is installed by adding a line to
/etc/inetd.conf. An xinetd daemon is installed by adding a configuration file to the
/etc/xinetd.d directory.
xinetd was introduced with Mac OS X 10.2. Where
xinetd is available, you should prefer it over
inetd because it is easier to configure.
Starting with Mac OS X 10.4, inetd and xinetd daemons are deprecated in favor of launchd daemons; a launchd daemon can specify a list of TCP and UDP ports to listen on as part of its property list file.
Starting with Mac OS X 10.5, inetd and xinetd daemons are no longer supported. However, it's easy to run an existing inetd or xinetd daemon by creating and installing a simple launchd property list file (resulting in an inetd-compatible launchd daemon). For an example of this, check out
/System/Library/LaunchDaemons/finger.plist.
For more information about inetd and xinetd daemons, see UNIX Network Programming and xinetd web site.
System Login Item
A system login item is a global login item that is launched in each GUI login session before the user logs in. A system login item is useful if you need to use the window server while the login screen is displayed. An example of this is a screen sharing program that allows a remote user to log in via the login window.
System login items are deprecated on Mac OS X 10.5 and later (r. 5316827) . You should, instead, use a pre-login launchd agent.
If you need to develop a system login item for use on systems prior to Mac OS X 10.5, please contact Developer Technical Support (DTS) for details.
mach_init Agent
A mach_init agent is like a mach_init daemon, except that it runs in the context of a particular user. It is launched (indirectly) by
loginwindow as part of the process of logging in a GUI user. A mach_init agent is installed by placing a property list file in the
/etc/mach_init_per_user.d directory. Apple does not support third party development of mach_init agents.
Starting with Mac OS X 10.5, mach_init agents are deprecated in favor of launchd agents; a launchd agent can now specify a list of Mach service names to register as part of its property list file.
Execution Context Summary for Deprecated Technologies
Table 6 shows how the execution context of a deprecated background program is affected by the mechanism used to launch it.
Notes:
Configured using the fifth column of the daemon's entry in
inet.conf.
Configured using the
userattribute in the configuration file; defaults to
rootif the attribute is not specified.
Uses the global bootstrap namespace unless the
session_createattribute is specified in the configuration file, in which case the daemon runs in its own per-session bootstrap namespace.
Uses the global security context unless the
session_createattribute is specified in the configuration file, in which case the daemon runs in its own per-session security context.
Execution Context Example on Mac OS 10.4
Figure 6 is graphical example of execution contexts on Mac OS X 10.4. Before consulting this figure, you should read the up-to-date information in Execution Context Example. This explains the overall thrust of the diagram. The text in this section concentrates on the differences between this diagram and the equivalent diagram for a modern system.
In this diagram there are three per-session bootstrap namespaces, two for GUI login sessions (users A and B) and one for an SSH login session (user A). These are represented by red boxes. Items that aren't in any red box are in the global bootstrap namespace.
).
As you look at Figure 6, consider the various different types of process that it shows.
There is a single instance of the
WindowServerprocess that resides in the global bootstrap namespace. However, the window server knows about all GUI per-session bootstrap namespaces. In this case, there's a GUI per-session bootstrap namespace for user A's login session, and another for user B's.
All applications are children of window server, which does the work to ensure that they run in the correct bootstrap namespace.
The first instance of
loginwindow, the one associated with user A's login session, is a child of
launchd. The second instance, created when user A fast user switched to user B, is a child of the window server.
Each instance of
loginwindowmanages the context for its associated login session.
loginwindowalso runs
pbs(the pasteboard server) directly.
The
Crash Reporterapplication is actually a mach_init agent. Note how its parent process is
launchd, but it runs in the appropriate user's bootstrap namespace.
sshdis a launchd daemon with the
SessionCreateproperty set, which means that it runs in its own non-GUI login session
lookupdis a mach_init daemon.
mds(the Spotlight daemon) is a startup item.
ftpdis a launchd daemon.
Differences between this and Mac OS X 10.5 include:
In Mac OS X 10.4 you rarely see per-user instances of
launchd.
In Mac OS X 10.4, all applications are children of the window server. In Mac OS X 10.5, that responsibility has been taken over by the per-user instances of launchd.
In Mac OS X 10.4 there are no per-user bootstrap namespaces. Thus, it is not possible to register a service, like the Kerberos credentials cache server (
CCacheServer) on Mac OS X 10.5, that is available to all login sessions of a particular user.
In Mac OS X 10.4 the pasteboard server (
pbs) was launched directly by
loginwindow. In Mac OS X 10.5, the pasteboard server (
pboard) is a standard launchd agent.
In Mac OS X 10.4 launchd agents are launched per-user, with no regard to the user's login session type. This severely limits their usefulness. In fact, Mac OS X 10.4 makes no use of launchd agents. In contrast, on Mac OS X 10.5 a launchd agent can specify which types of login sessions that it supports, and launchd agents are used extensively by the system.
The mechanism by which the crash reporting application (
Crash Reporteron 10.4,
ReportCrashon 10.5) is launched has changed, making the crash reporting process simpler and more reliable.
Getting Launched Prior To Mac OS X 10.5
Prior to Mac OS X 10.5,
launchd was significantly less powerful than it is today. Thus, a number of the techniques shown in the figures in Getting Launched are not available. If you need to support older systems, you should consult Figure 7 and Figure 8 when deciding on the best approach to launching your program.
Obsolete Programming Techniques
This section covers some programming techniques that are only interesting if you're developing a deprecated background program.
Daemonization
If you're writing a daemon, you may need to isolate your process from the environment that it inherits from its parent. This step is known as daemonization. For the nitty-gritty details, see Advanced Programming In The UNIX Environment; also, see Careful With That Fork, Eugene.
Whether daemonization is necessary depends on how you were launched. Table 7 shows this for each type of daemon.
If you need to daemonize, you can do so using the daemon routine.
Daemonization and Bootstrap Namespaces
Historically the
daemon routine did not change the bootstrap namespace of the calling program. This has caused numerous problems, and necessitated the creation of the StartupItemContext tool. For more information about this issue, see Starting a Daemon.
Starting with Mac OS X 10.5 the
daemon routine does switch the calling program into the global bootstrap namespace
(r. 5185932)
. However, this has, in turn, caused some compatibility problems
(r. 5499553)
. To avoid these problems the following rules now apply:
If your program is built with the Mac OS X 10.5 SDK (or later) and you set the deployment target to Mac OS X 10.5 (or later),
daemonwill switch your program to the global bootstrap namespace.
Otherwise,
daemonbehaves as it always has.
Daemonizing Frameworks
Many Mac OS X frameworks do not work reliably if you call fork but do not call exec. The only exception is the System framework and, even there, the POSIX standard places severe constraints on what you can do between a
fork and an
exec.
Under the covers the daemon routine calls fork. Thus, if you call
daemon and use frameworks above the System frameworks, you may run into problems. Critically, frameworks that are otherwise daemon safe, like Core Foundation, are not safe to call from this state.
Listing 13 Core Foundation complaining about fork-without-exec
There are two common solutions to this problem:
adopt launchd — A launchd daemon does not call
daemon, and thus does not suffer from this problem. This is the preferred solution.
exec yourself — If you can't adopt launchd (perhaps your product needs to support Mac OS X 10.3.x), you can solve this problem by exec'ing yourself. Listing 14 shows the basic idea. Keep in mind that this is just a minimal sample—a real daemon would actually handle errors!
Listing 14 Using exec to avoid framework problems after daemonization
Document Revision History
Copyright © 2007 Apple Inc. All Rights Reserved. Terms of Use | Privacy Policy | Updated: 2007-11-05 | https://developer.apple.com/library/content/technotes/tn2083/_index.html | CC-MAIN-2017-39 | refinedweb | 12,359 | 54.42 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Method Arguments4:34 with Jason Seifer
It is possible to pass variables in to methods we write. When a method is given a list of variables to work with, those variables are called arguments.
Concepts
The number of arguments a method takes is called the arity.
Code Samples
Define a method called "add" which takes two arguments, variables a and b:
def add(a, b) puts a + b end
- 0:00
When we create a method in Ruby, we can send variables and other things into it.
- 0:06
When we define a method that can have a variable,
- 0:10
that variable is called an argument.
- 0:13
When you create a method that has arguments,
- 0:16
you can have as many arguments as you want.
- 0:20
The number of arguments that a method takes is called the Arity.
- 0:27
Let's go ahead and add on to the method that we just created,
- 0:31
to have it take arguments.
- 0:34
So here's our work space where we have the add method.
- 0:38
And let's go ahead and just run this again.
- 0:41
So I type in ruby methods.rb and
- 0:45
we can see that it prints out 2 + 2.
- 0:50
When we create a method that takes arguments, we do that by putting
- 0:55
a parenthesis and then the arguments that we want it to have.
- 1:01
And we give those arguments names inside of the parentheses.
- 1:05
So, I'll type a and a closed parenthesis.
- 1:11
Now, I can type put and
- 1:16
then a + 2.
- 1:20
If I were to run this right now, Ruby would give me an error.
- 1:25
And it's saying, I have the wrong number of arguments.
- 1:30
That's because I need to tell Ruby what goes in to this variable.
- 1:37
So now, I'm going to send in the argument to the method.
- 1:43
So, if I put the number 2 in here, when I call this
- 1:48
method Ruby will pretend that a now has the value of 2.
- 1:53
So let me go ahead and run this again.
- 1:56
And just like we thought it prints 4.
- 2:01
Let's go ahead and add another argument to this method.
- 2:05
Because it's not very useful to just add one number to the number 2.
- 2:11
In order to add another argument to a method, we separate it with a comma.
- 2:17
The space is optional, but it's easier to read.
- 2:23
So now, we will put the value of a and b together.
- 2:30
Now if I run this again, we're going to get this wrong number of arguments error.
- 2:36
And that's because we haven't sent in another number.
- 2:41
This time, we'll add together 2 and 3.
- 2:46
And hopefully it returns 5, if my math is correct.
- 2:52
And it does.
- 2:54
Now this is not extremely useful on its own,
- 2:56
to just have a method that we created that only adds two numbers together.
- 3:01
We could have done that using just the simple plus method.
- 3:05
The usefulness of methods comes when we call it multiple times in a row.
- 3:11
We take functionality that would have otherwise
- 3:15
had to have been typed out multiple times and only type it out one time.
- 3:20
Let's go ahead and see that now.
- 3:22
If I wanted to call this again with different numbers,
- 3:25
I could say add, And then give it another couple numbers.
- 3:31
And I could do this as many times as I want to.
- 3:37
Let me clear my screen here and run that again.
- 3:44
And we can see that I've added these numbers again
- 3:48
only calling this one method.
- 3:52
Again this is not very useful.
- 3:55
But what I could do is type, These two things.
- 4:03
Adding this number, and this number.
- 4:08
And then print that out.
- 4:10
Now if I were to run this again,
- 4:13
we can see it calls that same code multiple times.
- 4:20
I could have written this out several times in a row.
- 4:23
However, by putting that behavior inside of a method, I save myself some typing.
- 4:30
And if I ever wanna change it, I only have to do that in one place. | https://teamtreehouse.com/library/ruby-basics/ruby-methods/method-arguments | CC-MAIN-2017-26 | refinedweb | 808 | 91 |
Artifact 10fae2093aefa2b00be13bcac40c06e90924bc83:
- File src/hwtime.h — part of check-in [9883b406] at 2008-05-29 20:22:36 on branch trunk — Consolidated inline assembly versions of "hwtime()" into hwtime.h. Provided MSVC version. Modified code for consistent use of hwtime(). Changed implementation to use sqlite_uint64 and sqlite_int64 instead of unsigned long long int and long long int for MSVC compiler support. (CVS 5178) (user: shane size: 1867)
/* ** 2008 May 27 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ****************************************************************************** ** ** This file contains inline asm code for retrieving "high-performance" ** counters for x86 class CPUs. ** ** $Id: hwtime.h,v 1.1 2008/05/29 20:22:37 shane Exp $ */ #ifndef _HWTIME_H_ #define _HWTIME_H_ /* ** The following routine only works on pentium-class (or newer) processors. ** It uses the RDTSC opcode to read the cycle count value out of the ** processor and returns that value. This can be used for high-res ** profiling. */ #if (defined(__GNUC__) || defined(_MSC_VER)) && \ (defined(i386) || defined(__i386__) || defined(_M_IX86)) #if defined(__GNUC__) __inline__ sqlite_uint64 sqlite3Hwtime(void){ unsigned int lo, hi; /* We cannot use "=A", since this would use %rax on x86_64 */ __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi)); return (sqlite_uint64)hi << 32 | lo; } #elif defined(_MSC_VER) __declspec(naked) __inline sqlite_uint64 __cdecl sqlite3Hwtime(void){ __asm { rdtsc ret ; return value at EDX:EAX } } #endif #else #error Need implementation of sqlite3Hwtime() for your platform. /* ** To compile without implementing sqlite3Hwtime() for your platform, ** you can remove the above #error and use the following ** stub function. You will lose timing support for many ** of the debugging and testing utilities, but it should at ** least compile and run. */ sqlite_uint64 sqlite3Hwtime(void){ return ((sqlite_uint64)0); } #endif #endif /* !defined(_HWTIME_H_) */ | https://sqlite.org/src/artifact/10fae2093aefa2b0 | CC-MAIN-2020-34 | refinedweb | 309 | 63.49 |
Adds C-style preprocessor directive support to JS brunch compilations. This allows you to have the same source files, but multiple config.coffee files to compile for different environments.
Add
"jspreprocess-brunch": "x.y.z" to
package.json of your brunch app.
In each
config.coffee file of your brunch app, add a line to specify your
buildTarget.
It can also be helpful to have different public paths for each config. That way different configurations with different buildTarget's will compile their output to different folders.
ex:
config_prod.coffee
exports.config =buildTarget: 'PRODUCTION'paths:public: 'prod'...
ex:
config_debug.coffee
exports.config =buildTarget: 'DEBUG'paths:public: 'debug'...
Directives for if, else, elif, and endif are available to control what javascript gets compiled. Each must be on its own line and prepended by the double slash comment op.
// #BRUNCH_IF (PRODUCTION)...// #BRUNCH_ELIF (DEBUG)...// #BRUNCH_ELSE...// #BRUNCH_ENDIF
Note the use of parentheses. Unlike C preprocessor directives, those parentheses are required for
#BRUNCH_IF and
#BRUNCH_ELIF statements.
The
#BRUNCH_IF and
#BRUNCH_ELIF directives also support the OR (||) operator.
ex:
// #BRUNCH_IF (PRODUCTION || iOS)
#BRUNCH_IFstatements are not yet supported.
Using the config_prod.coffee and config_debug.coffee examples from before, you can compile each config like so:
brunch build -c config_prod
brunch build -c config_debug | https://www.npmjs.com/package/jspreprocess-brunch | CC-MAIN-2017-22 | refinedweb | 204 | 53.98 |
Attempting to plod through and answer bits in these.
After a few more emails I'll mail a new list of todo items.
Hen
On Sun, 30 Jun 2002, Stephen Colebourne wrote:
> Well .............
>
> 1) rename method in Objects
> - change isNull() to defaultIfNull()
+1.
>
> 2) add hashCode building methods to Objects, or its own class based on best
> hashCode practice
Sounds like a nice idea. Could you start a separate thread to discuss
the best practices and what this could do. A new feature, so could spend
lots of debating time.
>
> 3) add CharUtils and UStringBuffer and refactor all the Strings code to be
> shared
Which do we declare to be most important. If we refactor to share code,
then where do we choose to lose performance? UStringBuffer won't be able
to extend StringBuffer. I'd like to thread this off into another
conversation, similar to 2).
>
> 4) add low level reflection code in reflect subpackage as was discussed a
> few weeks ago. (needs further discussion)
I think this is worthwhile getting in. Another separate topic.
>
> 5) examine the Avalon system utilities. Do they go in Lang or elsewhere
I'd prefer to hold this one off until after a 1.0.
>
> 6) Numbers has a lot of comments about future things. Plus containsDigits()
> has question marks against the null handling
Question marks meaning, should containsDigit consider null to contain
digits or not, or should it not even check.
> 7) Classes clashes badly with the reflection stuff. Either we include
> reflection or we don't.
Hopefully we can integrate them. Classes would appear to have a place in
that it faces java.lang.Class, but maybe there are no real methods to add
here. There are System.err.println's in Classes which aren't too stunning.
Do we follow the poor java.lang convention of Class in lang but Method etc
in lang.reflect? Or just have the Class features in lang.reflect too.
>
> 8) Naming conventions. It seems that every other Commons project is using
> XxxUtils for their utility names (Collections, BeanUtils, IO, Pattern). It
> seems that we should at least consider renaming the classes for consistency.
Need to dredge up the old emails about the pros and cons. The XxxUtils
seems an ugly convention just to say 'this class is made up of static
methods for dealing with Xxx'.
>
> 9) Constant I don't fully understand. If its meant to be the enumerated type
> pattern I would suggest:
> - make it abstract
> - make the constructors protected
> - add equals and hashCode methods
> - add extra documentation as to how the subclass should be written
> What I don't understand is the need for all the different object types. At
> the most, I would consider String and int to be all thats needed. (An int
> would allow it to be Comparable). In fact I would have the Comparable
> version extend the non-Comparable one (two classes).
Constant is not meant to be a full enumerated type. It is driven by the
number of people who do: public static int SKY_MAGIC = 5;
A terrible thing (imo) which means that the API is full of doMagic(int).
The primary idea of Constant is to make it obvious in an API where the
magic numbers are. From an OO purist point of view, maybe I should be
making an object for every magic number, ie) Enum.
The reason for having it as a concrete class is to not force people to
have to type every magic number they have.
There is no equals method because I believe this would be wrong.
Constant c1 = new Constant("c1"); and
Constant c2 = new Constant("c1"); should not be equal. I shouldn't be able
to say:
if(MAGIC_NUMBER.equals(new Constant("c1"))) {
as I quite obviously haven't gone to Magic.MAGIC_NUMBER to get my value.
Only having String and int is no good if I want to do a constant for PI
[please ignore it already existing].
Biggest issue to me on Constant is that I have discovered it has
Serialisation problems, might have to scrap it for this alone as I've not
found a solution without making it an Enum structure.
> 10) Strings - some methods are very specific (too specific) and could be
> removed
> - parseObjectKey
> - removeUnderscores
These are quite old. We might want to enquire as to who is dependant on
them and then kill them.
> - translate, it seems unclear as to what its doing
Unix tr style thingy :)
> - reverseDottedName, reverseDelimitedString will do this so is dot a valid
> special case
Sounds good.
> - interpolate - this seems to be very specific to a particular syntax
I think it is quite a standard syntax, but also not something that needs
to live inside Strings. org.apache.commons.util.BasicInterpolator would be
my vote.
>
> Some methods have misleading names
> - isValid() - should be isNotEmpty(), a not empty string is not necessarily
> valid
+1.
> Some methods could be optimised:
> - capitalise
> - uncapitalise
> - overlayString
Go for it. I don't pretend to be a great algorithm coder.
> Some methods may be wrong:
> - chomp/chompLast hard codes \n rather than uses System.LINE_SEPARATOR
> - wordWrap hard codes \n rather than uses System.LINE_SEPARATOR
Definitely for LINE_SEPARATOR. chomp stuff needs a bit of investigating as
I was following the PHP versions a bit closely.
>
> The class seems too big. Maybe some smaller associated utility classes would
> be appropriate?
> CharSet methods seem to be a group. Could they be in their own class? Also
> why is evaluateSet public when a CharSet cannot be passed in anywhere.
CharSet used to be a public class. It was put into Strings for
organisational purposes while it lived in the Util project and hasn't been
broken out again.
> (PatternStringUtils)
> Maybe the casing methods could be broken out? (CaseStringUtils)
People expect these to be in Strings I think.
> Maybe the random methods could be broken out? (RandomStringUtils)
I think this would be a good thing.
>
> Add
> - left(String, int len), right(String, int len) and mid(String, int from,
> int len) to Strings, no exceptions (as per BASIC)
So what do these do? Didn't have them in BBC BASIC in 1984 :)
> - isAlphanumericSpace/isNumericSpace, for specifically space and not
> whitespace
> - contains(String, String), does the first string contain the second
+1 to both.
--
To unsubscribe, e-mail: <mailto:commons-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:commons-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/commons-dev/200206.mbox/%3CPine.LNX.4.33.0206301516250.16699-100000@umbongo.flamefew.net%3E | CC-MAIN-2016-26 | refinedweb | 1,067 | 66.64 |
Function
Runs the group of tasks in the current group in parallel, if possible.
Syntax
#include <dx/dx.h>
Error DXExecuteTaskGroup()
Functional Details
Begins executing the tasks belonging to the current task group. DXCreateTaskGroup waits for the completion of all tasks in this task group. The tasks are started in decreasing order of the work estimate given in DXAddTask.
Return Value
Returns OK if all tasks in the task group complete without error; otherwise, returns ERROR. Any error code returned is set by the task involved.
See Also
DXAddTask, DXCreateTaskGroup
13.8 , "Parallelism".
[Data Explorer Home Page | Contact Data Explorer | Same document on Data Explorer Home Page ] | http://www.cc.gatech.edu/scivis/dx/pages/progu138.htm#HDRDXETG | crawl-003 | refinedweb | 107 | 51.24 |
Scala Scripting and the 15 Minute Blog Engine[web search]
Scala Scripting and the 15 Minute Blog Engine.
- Serious Business
- Scala Scripting with Ammonite
- Ammonite Goodies
- The 15 Minute Static Blog
- In The Wild
- Limitations
- Conclusion
Serious Business
Traditionally, Scala has always been used for engineering big, complicated systems. Looking at Github's Trending Scala Repositories brings up, at time of writing, things like apache/spark, apache/incubator-predictionio, akka/akka and scalanlp/breeze.
Looking at this list of things, the focus is pretty clear: distributed systems, type-system & compiler research, big-data, machine-learning. Serious Business.
It turns out that Scala is a great language for Serious Business. Some nice properties include:
Relatively high-performance on the high-performance JVM runtime makes your web-scale or big-data Serious Business complete faster.
Type-safety makes sure you don't make careless mistakes when doing your very-important Serious Business.
Good tools for abstractions (inline-functions, implicits, typeclasses) means you can manage the complexity of your Serious Business and keep it from getting out of hand. \
While Scala is great for doing Serious Business, it is less ideal for doing quick, hacky 1-20 liners in the command-line or as scripts. Your options have traditionally been:
The REPL: whether the default or the Ammonite-REPL, this works great for interactive work, but doesn't satisfy the use case where you need to write something down and run it later.
Use the
scalacommand to run script files: this works in theory, but in practice is unmaintained, and has enough rough edges (the error messages are atrocious!) and missing features (you can't import other files, or easily use third-part libraries) that it doesn't actually work in practice.
Use a standalone SBT project. This means you pay the 3-4s startup time to run your project from the command line, not to mention all the boilerplate associated with it:
- A
project/build.propertiesfile
- A
build.sbtfile
- A
src/folder with
src/main/scala/pkg/inside
- And finally your single
Main.scalafile inside...
- Wrapped in boilerplate
package pkg; object Main{def main(args: Array[String]): Unit = {}}...
- ... which then contains your real, useful code.
Use Conscript, which isn't well documented and shares much of the same problems of the
scalacommand (can't import files) and SBT (slow start times)
Clearly, none of these satisfy the use case of "write code in file, run it". That is where Scala Scripting comes in.
Scala Scripting with Ammonite
Scala Scripting avoids many of the limitations of the status-quo Scala world. A new feature of the Ammonite project, Scala Scripting lets you write tiny, self-contained Scala scripts that you can run quickly-and-conveniently from the command line using the
amm script-runner.
To begin with, you can download the Ammonite script-runner onto any machine with a JVM (Unix/Bash is shown here, Windows works with some fiddling):
> sudo curl -L -o /usr/local/bin/amm > chmod +x /usr/local/bin/amm
Now, you can create and run a
.sc file using the
amm executable on your path:
> cat Foo.sc println("Hello World") > amm Foo.sc Hello World
A
.sc file is simply a file full of Scala code, except top-level statements can be anything, and not just
classes
traits and
objects.
amm runs the file, compiling it, caching the compiled code, and executing it. You can also add a
#!/usr/bin/env amm shebang line to let you execute the file directly
> cat Foo.sc #!/usr/bin/env amm println("Hello World") > ./Foo.sc Hello World
Ammonite Goodies
So far, this isn't anything you couldn't do with the
scala command, which also lets you run these scripts:
> scala Foo.sc Hello World
So what does Scala Scripting with Ammonite's
amm runner give you over just running scripts with the
scala command? Quite a lot, it turns out
Importing Files
The most basic feature that Scala Scripting with
amm gives you over the
scala command is that it lets you import scripts from each other, using an
import $file.Foo syntax. For example:
> cat Bar.sc val cat Foo.sc import $file.Bar println(Bar.myMessage) > amm Foo.sc Hello World!
Here, we have two scripts,
Foo.sc and
Bar.sc.
Foo.sc imports a value from
Bar.sc, and uses it when you run your script. This is something that is impossible using the standard
scala command, and is fundamental to any scripting language: Bash, Python, Ruby, all let you import scripts from other scripts. And why not? If Python forced you to put all your code in a single file, there's no way it would be as popular as it is today! Scala Scripting gives you this capability for Scala scripts.
This example is obviously trivial - you wouldn't split a 2-line script into 2 files just because you could - but when your scripts grow to 100 lines or more, the ability to split them becomes valuable.
In general, this import system works the same as you'd expect coming from Python:
import $file.Foo imports the
Foo.sc file in the same folder,
import $file.sub.Foo would import the
sub/Foo.sc file. You then get access to the
Foo object with all the definitions in that script file available as fields or methods. Unlike in Ruby, imports don't dump all their contents into local scope by default: use
import $file.Foo, Foo._ if you want to do that explicitly.
You can do all the standard things you'd expect with Scala imports: rename
Foo => ... as part of the import, import multiple files at once, or import a file and immediately import some value from it:
> cat Bar.sc val cat Foo.sc import $file.Bar, Bar.myMessage println(myMessage) > amm Foo.sc Hello World!
The only limitation is that you have to
import $file.Bar separately from importing things from
Bar, e.g. you can't
import $file.Bar.myMessage all at once. Instead, you need to
import $file.Bar, Bar.myMessage.
Apart from that, though, it works basically seamlessly, and as you'd expect. On-demand, the
Bar.sc file is loaded, compiled, evaluated, made available under the
$file.Bar full path and imported as the
Bar object. Similarly, doing
import $file.sub.Foo to load the
sub/Foo.sc script will make it available as the
Foo object in the
$file.sub package, and import
Foo for your use in your script. This works basically seamlessly, just as the import system of a real scripting language would work.
Importing Libraries
Part of the beauty of working with Scala/SBT or Java/Maven is the dependency management system: rather than trying to remember to
apt-get install or
brew install or
pip install things before running your code, you simply add any libraries you want to your build file (whether
build.sbt or
pom.xml) and they get pulled down automatically for you when you compile your code. No more forgetting to run the correct incantations, or needing to uninstall a library because it clashes globally with another version someone else is using. Just add the dependency to your build file, and the next time you run your code, it just works. If someone else wants to run your code, they just run it, the first time being slightly slower as dependencies get pulled down and cached, and again it just works without manual installation.
This functionality is missing when you want to use the
scala command to run scripts. Sure, you could download the jars manually and use the
-cp flag to add them, but that's a pain in the neck and nobody does it.
With Scala Scripting, the
amm command does all this for you. For example, if I want to use the popular Scalatags HTML templating library to generate some HTML in my script, I might try to do:
> cat Foo.sc import scalatags.Text.all._ println(a("Hello", href := "").render) > amm Foo.sc
But this results in
Foo.sc:1: not found: value scalatags import scalatags.Text.all._ ^ Foo.sc:2: not found: value a ;val res_1 = println(a("Hello", href := "").render) ^ Foo.sc:2: not found: value href ;val res_1 = println(a("Hello", href := "").render) ^ Compilation Failed
Naturally, the
amm script runner doesn't include a HTML generation library. And why should it? It's a script runner, after all, not a web framework! Nonetheless, you can easily pull down libraries using the
import $ivy syntax:
> cat Foo.sc import $ivy.`com.lihaoyi::scalatags:0.6.0`, scalatags.Text.all._ println(a("Hello", href := "").render) > amm Foo.sc :: loading settings :: url = jar:file:/usr/local/bin/amm!/org/apache/ivy/core/settings/ivysettings.xml :: resolving dependencies :: com.lihaoyi#scalatags_2.11-caller;working confs: [default] found com.lihaoyi#scalatags_2.11;0.6.0 in central found org.scala-lang#scala-library;2.11.8 in chain-resolver [2.11.8] org.scala-lang#scala-library;2.11.8 found com.lihaoyi#sourcecode_2.11;0.1.1 in chain-resolver found com.lihaoyi#acyclic_2.11;0.1.2 in central found org.scala-lang#scala-compiler;2.11.0 in central found org.scala-lang.modules#scala-xml_2.11;1.0.1 in chain-resolver found org.scala-lang.modules#scala-parser-combinators_2.11;1.0.1 in m2 found org.scala-lang#scala-reflect;2.11.8 in chain-resolver [2.11.8] org.scala-lang#scala-reflect;2.11.8 <a href="">Hello</a> > amm Foo.sc <a href="">Hello</a>
In the
import $ivy statement, the ivy-organization, library-name, and version are separated by
:s, with the first
:: representing a Scala library (similar to
%% vs
% in SBT). As you can see, the first time you run the script it resolves and pulls down the relevant libraries from Maven Central, and makes Scalatags available in your tiny
.sc script! On subsequent runs, it's cached and can be used instantly.
While downloading a HTML-generation library in a script may seem frivolous, the Java ecosystem has thousands of libraries available in the standard Ivy repositories. Implementations of languages from Ruby to Python to Javascript, tools like the Google Closure Compiler, libraries for dealing with geospatial data or obscure file formats. If there's a library for a task available in any language, chances are it is also available in Java. And anything available in Java is available to Scala Scripts via
import $ivy.
Anything from the simplest library to the most complex framework can be used in your Scala scripts. For example, if you want to spin up a HTTP server for whatever reason, you can simply import Twitter's Finagle Server in your script and get to work using it:
> cat Foo.sc import $ivy.`com.twitter::finagle-http:6.36.0` import com.twitter.finagle._, com.twitter.util._ val service = new Service[http.Request, http.Response] { def apply(req: http.Request): Future[http.Response] = { val response = http.Response(req.version, http.Status.Ok) response.contentString = "Hello!" Future.value(response) } } val server = Http.serve(":8080", service) Await.ready(server) > amm Foo.sc Jul 23, 2016 1:26:13 PM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp INFO: Finagle version 6.36.0 (rev=7efeb4cc2babd7c99731090fa76ad960627fce14) built at 20160708-090553
And immediately you can navigate to
localhost:8080 and see
Hello! show up in your browser.
Startup Performance
While the last two points are comparing the advantages of using Scala Scripting's
amm command over the
scala command, using SBT also satisfies those criterion. SBT allows you to use multiple files in a project, and depend on libraries via your
build.sbt file. Nevertheless, SBT has its own problems. One big one is performance.
For example, if we compare a
.sc script and a
.scala file run using SBT:
> cat Foo.sc println("Hello!") > cat Foo.scala object Foo{ def main(args: Array[String]): Unit = { println("Hello!") } }
The first thing you might notice is that the
.scala file we'll feed into SBT has all sorts of boilerplate around the code we actually care about, while the
.sc file doesn't. It slows you down writing a short script if you need these reams of boilerplate. Nevertheless, the slowdown doesn't stop when you're done writing it:
> time sbt run [info] Loading global plugins from /Users/haoyi/.sbt/0.13/plugins [info] Set current project to test (in build file:/Users/haoyi/test/) [info] Updating {file:/Users/haoyi/Dropbox%20(Personal)/Workspace/test/}test... [info] Resolving org.fusesource.jansi#jansi;1.4 ... [info] Done updating. [info] Compiling 1 Scala source to /Users/haoyi/test/target/scala-2.10/classes... [info] Running Foo Hello! [success] Total time: 1 s, completed Jul 23, 2016 1:43:00 PM real 0m5.334s user 0m13.813s sys 0m0.863s > time sbt run [info] Loading global plugins from /Users/haoyi/.sbt/0.13/plugins [info] Set current project to test (in build file:/Users/haoyi/Dropbox%20(Personal)/Workspace/test/) [info] Running Foo Hello! [success] Total time: 0 s, completed Jul 23, 2016 1:43:06 PM real 0m3.699s user 0m9.042s sys 0m0.592s
As you can see, while the first run (where we need to compile the code) takes more than 5 seconds, subsequent runs still end up taking about 3.7 seconds, which is a lot of time given we just need to run one line of code! Obviously this time does not go up linearly with the size of our code snippet, but nevertheless it is a hefty fixed cost to running any code using SBT. While it might not matter for large, Serious Business applications, it's a pain for small 1-10 line scripts.
With a Scala Script, we can see that the first run isn't much better. We still need to compile the code, and Scala compiles code really slow...
> time amm Foo.sc Hello! real 0m4.632s user 0m15.607s sys 0m0.665s
However, for subsequent runs, this time drops:
> time amm Foo.sc Hello! real 0m0.557s user 0m0.699s sys 0m0.084s
We're looking at about 0.5 seconds of overhead for running arbitrary Scala Scripts, vs ~3.7 seconds using SBT. While 0.5 seconds is still not great - you wouldn't want to call it in a tight loop using
ls | xargs for example - at least it's not infuriatingly-slow.
While it's possible to make SBT projects start up quickly using
sbt stage and running the generated executable, it takes third-party plugins and is fiddly to set up. With a Scala Script, you can just run it directly and benefit from the "reasonable" startup overhead once the script (and any it imports) are compiled during the first run.
Filesystem Operations
Traditionally, when working in a Serious Business language like Scala, there is always a tension between doing your file operations in your Serious Business language, or writing a Bash script and shelling out to run it. The tradeoff is basically:
Doing your file operations in your Serious Business language means you get all the tooling, safety, performance, and other nice properties of your Serious Business language
Doing your file operations in Bash is 5-10x more concise than doing it in your Serious Business language (e.g. using the
java.niopackage)
In the end, both are bad choices: one is tediously verbose, and the other is incredibly sketchy and unsafe.
To solve this dilemma, Ammonite includes a really nice Ammonite-Ops package, which you can use to perform basic filesystem operations:
> cat Foo.sc import ammonite.ops._ println("Starting contents") ls! cwd foreach println mkdir(cwd/"my-folder") println("Contents after creating folder") ls! cwd foreach println write(cwd/"my-folder"/"foo.txt", "Hello Hello") println("Written file contents") println(read! cwd/"my-folder"/"foo.txt") > amm Foo.sc Starting contents /Users/haoyi/Dropbox (Personal)/Workspace/test/Foo.sc Contents after creating folder /Users/haoyi/Dropbox (Personal)/Workspace/test/Foo.sc /Users/haoyi/Dropbox (Personal)/Workspace/test/my-folder Written file contents Hello Hello
As you can see, you can conveniently perform common operations like
ls,
mkdir,
read and
write all from the comfort of your Scala code. No longer do you need to deal with
BufferedReaders or
BufferedWriters or
InputStreamReaders and all that boilerplate.
ammonite.ops also provides a convenient API for spawning subprocesses and printing or capturing their output:
> cat Foo.sc import ammonite.ops._, ImplicitWd._ %git 'init %git('add, "-A") %git('commit, "-am", "First Commit") %git 'branch val log = %%('git, 'log, "--pretty=oneline").out.lines val hashes = log.map(_.split(" ").apply(0)) println("Hashes") hashes.foreach(println) > amm Foo.sc Reinitialized existing Git repository in /Users/haoyi/Dropbox (Personal)/Workspace/test/.git/ [master a604379] First Commit 1 file changed, 4 insertions(+) * master Hashes a604379bc68b9b48a70451c8323a85f7d8713092 06b5fe2352c5842ccff81d6a4b17d97e907c05b8
Here,
% operations run a command immediately and print any output to stdout, while
%% operations like the
git log capture the output and make it available as a value you can use in your program. This makes creating subprocesses just as easy as working in Bash, and without all the boilerplate associated with the
sys.process or
java.lang.ProcessBuilder APIs. For the rare cases where you need extra control those APIs are still available, but for the common case this is far more convenient.
With
ammonite.ops, basic file and process operations are as convenient in Scala as they would be in Bash. That means you no longer feel the urge to shell-out to a Bash script in order to make things more concise: you can comfortably do all your work in Scala, both low-level file operations and high-level logic. Want to shell out to Git, and then take the output and process it using Scala collection operations? Go ahead! The example above has already shown you how to do it.
The 15 Minute Static Blog
The initial selling point for Ruby on Rails was a demo of how you could Build your own Blog Engine in 15 minutes. This was impressive in the 2005 world where fiddling with PHP/MySQL/Apache to first set up a website could easily take hours, and it was impressive how fast Rails let you set up and start making progress.
In 2016, the new hotness is static site generators; unlike dynamic blogs, these generate the bulk of the web page at build-time, only leaving small sections like the comments to be generated dynamically, often by third-party services like Disqus. Here, we will use Ammonite's Scala Scripting functionality discussed above to do something you probably couldn't imagine doing using traditional Scala tooling: building a static blog generator in 15 minutes with zero boilerplate.
Minute 0: Hello world
Just as a reminder, we will start from the simplest possible working script:
> cat Blog.sc println("Hello!") > amm Blog.sc Hello!
This works, and will be our starting point to expand into a fully functional static blog engine
Minute 1: Deciding the Content Layout
Typically, static blog engines take their input as markdown files, with additional metadata (provided in each file, or as separate JSON/YAML files) and use that to generate HTML. For this exercise, let's arbitrarily decide that there will be a
posts/ folder that contains any markdown files we want to convert into blog posts, and each one will be named
1 - My First Post.md
2 - Second Blog Post.md
3 - Yet Another Post.md
- etc.
With the number before the
- indicating the order of the blog post in the final blog. We can create some sample posts at the command line:
> mkdir posts > touch "posts/1 - My First Post.md" > touch "posts/2 - Second Blog Post.md" > touch "posts/3 - Yet Another Post.md" > ls Blog.sc posts > ls posts/ 1 - My First Post.md 2 - Second Blog Post.md 3 - Yet Another Post.md
Make our
Blog.sc file find these posts is easy with the filesystem operations provided to Scala Scripts:
> cat Blog.sc import ammonite.ops._ val postFiles = ls! cwd/'posts val unsortedPosts = for(path <- postFiles) yield { val Array(prefix, suffix) = path.last.split(" - ") (prefix.toInt, suffix, path) } val sortedPosts = unsortedPosts.sortBy(_._1) println("POSTS") sortedPosts.foreach(println)
>)
As you can see, we can easily mix bash-like operations like
ls with full-fledged Scala code like for-loops,
.foreach, or pattern matching on
Array()s to extract the first and second elements, and good old
println for debugging and seeing what's going on. Here, we can see that our script is able to understand the layout of the blog posts, extract their name, and sort them in order.
Minute 4: Generating a HTML Page
A static blog engine needs to generate a static blog, and static blogs are made of HTML pages. We could generate HTML by stitching together strings like
"<div>" + content + "</div>", but doing so is tedious and unsafe, prone to XSS injection if we're not careful. Luckily, in Scala Scripts we can easily import whatever Java or Scala libraries we want, including e.g. the Scalatags HTML generation library:
> git diff diff --git a/Blog.sc b/Blog.sc index a57c0d6..a2cdbac 100644 --- a/Blog.sc +++ b/Blog.sc @@ -1,3 +1,4 @@ +import $ivy.`com.lihaoyi::scalatags:0.6.0` import ammonite.ops._ val postFiles = ls! cwd/'posts @@ -10,3 +11,17 @@ val sortedPosts = unsortedPosts.sortBy(_._1) println("POSTS") sortedPosts.foreach(println) +val HTML = { + import scalatags.Text.all._ + + html( + head(), + body( + h1("Haoyi's Blog"), + for((_, suffix, _) <- sortedPosts) + yield h2(suffix) + ) + ).render +} + +write(cwd/"index.html", HTML)
>) > cat index.html <html><head></head><body> <h1>Haoyi's Blog</h1><h2>My First Post.md</h2><h2>Second Blog Post.md</h2> <h2>Yet Another Post.md</h2></body></html>
Now, the HTML being written to
index.html isn't particularly nicely formatted, but it'll do for now. In fact, you can open up
index.html in the browser and see it:
Not particularly pretty, but it'll do for now, and we can build on top of it.
Minute 8: Rendering Markdown
As sample blog posts, I'm just going to take some generic text from the Github Markdown Guide:
> head -n 5 posts/1\ -\ My\ First\ Post.md Sometimes you want numbered lists: 1. One 2. Two 3. Three
> head -n 5 posts/2\ -\ Second\ Blog\ Post.md # Structured documents Sometimes it's useful to have different levels of headings to structure your documents. Start lines with a `#` to create headings. Multiple `##` in a row denote smaller heading sizes. ### This is a third-tier heading
> head -n 5 posts/3\ -\ Yet\ Another\ Post.md There are many different ways to style code with GitHub's markdown. If you have inline code blocks, wrap them in backticks: `var example = true`. If you've got a longer block of code, you can indent with four spaces: if (isAwesome){ return true }
Perhaps not the most insightful thought-pieces, but they'll do for now. The next question would be, how can we parse the markdown?
It turns out, there are perfectly good markdown parsers in Java,
And we can pick any we'd like. Glancing through them, it seems atlassian/commonmark-java is the most active and has the clearest readme, so let's use that.
The readme gives you the maven snippet necessary to use this parser
<dependency> <groupId>com.atlassian.commonmark</groupId> <artifactId>commonmark</artifactId> <version>0.5.1</version> </dependency>
This directly corresponds to the
$ivy import:
import $ivy.`com.atlassian.commonmark:commonmark:0.5.1`
Note that it's a single
: between the
groupId and the
artifactId, as this is a Java library (Scala libraries would need a
::)
They also give us some Java sample code to get started using the library. Translating into Scala is basically replacing all the local variables with
vals. We can use this in our code, reading the
.md files, transforming them into HTML and writing them into HTML files:
> git diff diff --git a/Blog.sc b/Blog.sc index a2cdbac..d6419a9 100644 --- a/Blog.sc +++ b/Blog.sc @@ -1,4 +1,6 @@ import $ivy.`com.lihaoyi::scalatags:0.6.0` +import $ivy.`com.atlassian.commonmark:commonmark:0.5.1` + import ammonite.ops._ val postFiles = ls! cwd/'posts @@ -11,6 +13,20 @@ val sortedPosts = unsortedPosts.sortBy(_._1)) + write( + cwd/'blog/(suffix.stripSuffix(".md").replace(" ", "-").toLowerCase + ".html"), + output + ) +} val HTML = { import scalatags.Text.all._
You can see the new
for loop in the middle with all the code cribbed from the
atlassian/commonmark-java docs, basically verbatim. We are converting the "raw" names of the files to URL-friendly names using a
.replace and
.toLowerCase; this isn't particularly robust but it'll do for now.
Running this, it will download the
atlassian/commonmark-java library the first time and give you
>) > tree . ├── Blog.sc ├── blog │ ├── my-first-post.html │ ├── second-blog-post.html │ └── yet-another-post.html ├── index.html └── posts ├── 1\ -\ My\ First\ Post.md ├── 2\ -\ Second\ Blog\ Post.md └── 3\ -\ Yet\ Another\ Post.md
As you can see, our
my-first-post.html and
second-blog-post.html files are all in place. You can look at the generated code:
> head blog/my-first-post.html <p>Sometimes you want numbered lists:</p> <ol> <li>One</li> <li>Two</li> <li>Three</li> </ol> <p>Sometimes you want bullet points:</p> <ul> <li>Start a line with a star</li> <li>Profit!</li>
Or open them in the browser:
Sure it's missing all the
<html></html> stuff that's meant to wrap it, but we can fix that.
Minute 12: Cleanup and Links
The last two things to make our blog-engine kinda-functional are:
- Properly wrap the blog posts in
<html><body></body></html>wrappers
- Add links between the pages
This can be done relatively easily:
> git diff diff --git a/Blog.sc b/Blog.sc index d6419a9..98977f9 100644 --- a/Blog.sc +++ b/Blog.sc @@ -8,7 +8,9 @@ val unsortedPosts = for(path <- postFiles) yield { val Array(prefix, suffix) = path.last.split(" - ") (prefix.toInt, suffix, path) } - +def mdNameToHtml(name: String) = { + name.stripSuffix(".md").replace(" ", "-").toLowerCase + ".html" +} val sortedPosts = unsortedPosts.sortBy(_._1) println("POSTS") @@ -22,9 +24,17 @@ for((_, suffix, path) <- sortedPosts) { val document = parser.parse(read! path) val renderer = HtmlRenderer.builder().build() val output = renderer.render(document) + import scalatags.Text.all._ write( - cwd/'blog/(suffix.stripSuffix(".md").replace(" ", "-").toLowerCase + ".html"), - output + cwd/'blog/mdNameToHtml(suffix), + html( + head(), + body( + h1(a("Haoyi's Blog", href := "../index.html")), + h1(suffix.stripSuffix(".md")), + raw(output) + ) + ).render ) } val HTML = { @@ -35,7 +45,7 @@ val HTML = { body( h1("Haoyi's Blog"), for((_, suffix, _) <- sortedPosts) - yield h2(suffix) + yield h2(a(suffix, href := ("blog/" + mdNameToHtml(suffix)))) ) ).render }
We're not doing anything particularly fancy: just wrapping the output in a "standard"
html/
head/
body wrapper (which you can tweak yourself) and doing some string-munging to make sure the links all point at the right files.
We can run it:
> rm -rf index.html > rm -rf blog/ > amm Blog.sc
And it works!
Minute 15: Bootstrap
The last step would be to layer on some Bootstrap CSS over our ugly unstyled page, because that's how you take unstyled content and make it web-scale.
> git diff diff --git a/Blog.sc b/Blog.sc index 98977f9..66bca16 100644 --- a/Blog.sc +++ b/Blog.sc @@ -13,6 +13,13 @@ def mdNameToHtml(name: String) = { } val sortedPosts = unsortedPosts.sortBy(_._1) +val bootstrapCss = { + import scalatags.Text.all._ + link( + rel := "stylesheet", + href := "" + ) +} println("POSTS") sortedPosts.foreach(println) for((_, suffix, path) <- sortedPosts) { @@ -28,7 +35,7 @@ for((_, suffix, path) <- sortedPosts) { write( cwd/'blog/mdNameToHtml(suffix), html( - head(), + head(bootstrapCss), body( h1(a("Haoyi's Blog", href := "../index.html")), h1(suffix.stripSuffix(".md")), @@ -41,7 +48,7 @@ val HTML = { import scalatags.Text.all._ html( - head(), + head(bootstrapCss), body( h1("Haoyi's Blog"), for((_, suffix, _) <- sortedPosts)
Here, we're simply including the
link as described in the Bootstrap docs, except we're converting it to Scalatags syntax and stripping off the security related bits because YOLO.
And there you have it!
The final code, if anyone wants to try it, is
> cat Blog.sc import $ivy.`com.lihaoyi::scalatags:0.6.0` import $ivy.`com.atlassian.commonmark:commonmark:0.5.1` import ammonite.ops._ val postFiles = ls! cwd/'posts val unsortedPosts = for(path <- postFiles) yield { val Array(prefix, suffix) = path.last.split(" - ") (prefix.toInt, suffix, path) } def mdNameToHtml(name: String) = { name.stripSuffix(".md").replace(" ", "-").toLowerCase + ".html" } val sortedPosts = unsortedPosts.sortBy(_._1) val bootstrapCss = { import scalatags.Text.all._ link( rel := "stylesheet", href := "" ) }) import scalatags.Text.all._ write( cwd/'blog/mdNameToHtml(suffix), html( head(bootstrapCss), body( h1(a("Haoyi's Blog", href := "../index.html")), h1(suffix.stripSuffix(".md")), raw(output) ) ).render ) } val HTML = { import scalatags.Text.all._ html( head(bootstrapCss), body( h1("Haoyi's Blog"), for((_, suffix, _) <- sortedPosts) yield h2(a(suffix, href := ("blog/" + mdNameToHtml(suffix)))) ) ).render } write(cwd/"index.html", HTML)
Paste this into a
Blog.sc file, put markdown files in the
1 - Hello.md
2 - Post.md format under
posts/ and run
amm Blog.sc to generate the HTML pages. Once generated, those pages can go anywhere: view them locally, push them to github pages, where-ever. They're static, and require no hosting. The first time you run the script, it will take 2-3 seconds to compile, and every time after you change the code. After that, though, it should be fast and run in about 0.5s. You can edit the markdown files and it will re-generate quickly, as long as you don't touch the Scala code and it does not need to re-compile.
And that's our 15 Minute Static Blog Engine using Scala Scripting, in 60 lines of code!
Although the blog engine we built is in no means complete, it is a start and it's just more of the same to flesh out details: adding a
<meta charset="utf-8" /> tag, adding a Disqus comment box, Google Analytics, or adding custom CSS to make it look more pretty. The source code for this blog itself would be an interesting example of a blog-engine written as Scala Scripts, but fleshed out with all the bells and whistles I wanted in a "real" deployment.
In The Wild
Ammonite's Scala Scripting functionality is new, and although many people are already using the Ammonite-REPL, there hasn't been time for wide adoption of these scripts.
Nevertheless, this blog itself is generated using scala scripts, with three files in the repo, with the added code implementing functionality such as generating timestamps from git history (easy to do by shelling out to git via
$$('git, 'log, ...)), responsive CSS (built using Scalatags CSS) and a flag to push to github pages.
Apart from this blog, other use cases for Scala Scripts include Ammonite's own CI Runner, and my Resume. There are many usages in the wild of people using an older version of Scala Scripts that lacks some of the modern niceties they current provide.
Limitations
Ammonite's Scala Scripts have some limitations that are inherent to their design:
They run on the JVM, and will likely never have a sub-100ms startup cost due to classloading and other overheads. At a few 100ms, the overhead is far more than the 20-30ms it takes to start Python or Bash. While there may be improvements to make the JVM or compiled Scala code start faster, those would be incremental, and won't be the order-of-magnitude improvement to be competitive with other scripting languages.
The first run of a new or changed script will always take several seconds. This is inherent in its reliance on the Scala compiler, famously one of the slowest compilers in the world. Cached scripts start ok-fast, so if you're running scripts much more than modifying them it's not too terrible, but the edit-run cycle will always be far slower than traditional scripting languages. Work done by the Scala team to speed up the Scala compiler will help, but it will never be as fast as modifying and running a Python script.
Conclusion
In this post, we've explored the world of Scala Scripting using the Ammonite script-runner
amm. We've toured the features that make it distinct from existing Scala tools, and demonstrated how you can use it "for real" to make a working static blog engine in 15 minutes and 60 lines of code. We've discussed at the limitations, and seen a few examples of it being used for real work. This initial release of Ammonite's Scala Scripting functionality is new, and thus has some known bugs. These will get ironed out in coming releases as the feature stabilizes.
Realistically, Scala Scripts are always going to be a hybrid of Serious Business Scala code and Python scripts; hopefully less hacky than Python scripts, with static types that help catch errors before code gets run, and hopefully less Serious Business than Scala code living in SBT projects, letting you take advantage of both your Scala/JVM knowledge as well as the wealth of JVM libraries to write small scripts in, without too much overhead.
What's next for Scala Scripting? The next step, after this "launch", would be continual polish work on the existing feature-set - which is largely freshly-build and somewhat raw - and see if this style of programming gains any traction among the Scala community.
Fundamentally, Scala Scripting is an experiment that tests a hypothesis that it is possible to make a script-file format for Scala that is sufficiently pleasant to use that people will actually use it.
"Pleasant" encompasses many things: tool support, startup time, documentation, error-reporting, extensibility, predictability, featureset, and many other things.
The "status quo" script-running functionality clearly did not make the cut, whether using the
scala executable, or SBT. But I believe that this release of Ammonite's Scala Scripting feature is sufficiently improved that it's worth another shot at pushing it to see if it can get traction. In every category mentioned above, Scala Scripting with Ammonite is an significant improvement over the status quo.
Scala.js and the Ammonite-REPL have already shown that the Scala language is useful for far more than just the Serious Business applications it has traditionally been used for. We shall see if Scala Scripting manages to find a niche within the Scala community and ecosystem.
If you yourself are interested in this project, try it out, ask questions on the Gitter Channel, and get involved! While the project has had help from Google Summer of Code, we could definitely use more people to share the vision and help drive the project forward. | https://jaytaylor.com/notes/node/1470864698000.html | CC-MAIN-2020-05 | refinedweb | 5,841 | 58.38 |
From: Geurt Vos (G.Vos_at_[hidden])
Date: 2001-05-07 02:52:18
> >
> > Hmm, I would say if I can assign a function pointer, it is
> > only natural that I can also assign 0. Additionally, since
> > 'if (f)', or better 'if (f !=/== 0)' is possible, it again
> > makes 'f = 0' an obvious possibility.
>
> I completely disagree with this :)
> The conversion to bool (const void*, const undeleteable*, whatever)
should
> be though of as: 'if f evaluates true, then f has some target it will
execute
> if you invoke f's function call operator' not 'f can be implicitly
converted
> to its function pointer'. boost::function deals with function objects, not
> function pointers; function objects are copy constructible and invocable,
and
> function pointers are a refinement of function objects that are also
> assignable and equality comparable. We need to look at the core concept
that
> boost::function handles - the function object - and not inject syntax for
it
> based on the properties of specific refinements of that concept (Jesse -
same
> argument goes for operator==).
>
> I think that allowing 'f = 0' is saying that boost::function is a
> conceptually a pointer (it isn't - a pointer would not have copy
semantics).
> If two boost::function objects end up eventually targetting the same code,
it
> is because the user has given it a "shallow" function object at some
point.
> Comparisons such as 'f == 0' say 'does f point to the zero location' and
> assume we've narrowed all our concepts to the function
> pointer concept.
>
Although I can live with this description, the thing is that the
entire class is too much a pointer to think of it is not a pointer.
Hence the bool conversion _will_ be seen as comparing to a 0 pointer
constant. Moreover, from the docs I quote:
--- "The header <boost/function.hpp> provides several classes that are generalized function pointers" --- Generalized function pointers? And still you say one shouldn't think of it as a pointer? So my advice to this all: either give it full pointer semantics (don't!) or make it a functor only thing, with only a high level interface and lose the ability to assign function pointers. I wouldn't want to go for the first option, because there will be great many problems and all. For one the 0 assignment won't work correctly in the following case (because it's construction, not assignment): void SomeFunc(const boost::function<void> &); Even though there's some optimization for function pointers, I strongly urge this support is removed. For one free functions won't be used that often, and also, explain to the user that function assignment - something they'll barely use - is optimized, whereas member function assignment is not. Concluding, IMO the low-level interface should go and only a high level interface should remain: function(); function(functor) void set(functor) void clear(); bool empty() const; operator()(...) // I mean several, not the ellipsis! I'm uncertain whether op== should be implemented... > > Note that I'm not saying support for 'f = 0' is crucial, I'm > > just saying that IMO 'it's not obvious' is not a very strong > > argument. > > > > > If assign0 is hidden in a detail namespace, user's know they've been warned > > > not to dig that far into Boost internals. Note that the same thing done for > > > "nil" here is done for "ignore" in the tuple code. > > > > One should not repeat the errors of the past... > > I'd be interested in why you think this, but this is the > wrong thread for it. > Don't remove the '...but seriously' part from it. It's quite simple though, if I don't like 'nil' specifically for 'function', it's quite obvious I also don't like something as generic sounding as 'ignore' for something specific (tuple in this case). I don't know the details about this 'ignore' thing, so I might actually like it :) > > ...but seriously, 'nil' specifically for 'function' doesn't > > feel right. There's nothing 'function' about 'nil'. I mean, > > 'nil' on its own is not related to function (at least that > > is not how the user will see it). Having a general boost::nil > > in e.g. nil.hpp or whatever sounds (somewhat) better. > > > > > I think most of us would use NULL if it were truly available, but > unfortunately that isn't going to happen any time soon. So instead we use > '0', which is only really meaningful because we read it as NULL (or we came > from a C backround where typing is willy-nilly). One resolution for this is > to have a 'nil' that has its own specific type and a slightly different > meaning. Instead of NULL or 0 meaning 'pointing to nothing', nil would mean > 'possessing no value' or 'possessing an unusable value'. > > I'd advocate using 'nil' instead of 0 or NULL because it conveys more > information than 0 and is more portable than NULL. Additionally the > difference in meaning allows it to apply to a greater variety > of situations. > Then why isn't there such a thing in boost?? It could be something like: struct nil_t { template <typename Type> operator Type *() const { return 0; } }; const nil_t nil = nil_t(); - 'nil' can be used with any pointer, but not with non-pointers (ints and floats) - it can be used as distinct type: function(nil_t = nil); Now to what it can't do: ...anyone? Geurt
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/05/11620.php | CC-MAIN-2020-50 | refinedweb | 918 | 62.17 |
Cheat::Meta::Spec - How to use the Cheat:: namespace
This document is Cheat::Meta::Spec version 0.0.5
# lib/Cheat/Sheet/Some.perl Some::Module # Short description qw( various exportable symbols if any ); routine( $placeholder, @arguments ); $context = function( @arguments); $object->method(); Some:OtherModule... # lib/Cheat/Sheet/Some.pod =head2 Some::Module Some::Module # Short description qw( various exportable symbols if any ); routine( $placeholder, @arguments ); $context = function( @arguments); $object->method(); =head2 Some::OtherModule ...
But I caution you:
I was quite close to revealing rare and powerful cheat codes.
--The Devil (Dinosaur Comics)
This is a collection of "cheat sheets": highly compressed, abbreviated documentation for various modules. Each module within the bundle covers a top-level namespace or a set of otherwise closely-related modules.
If you want to read cheat sheets, please check out Cheat::Meta. You can just install Bundle::Cheat::Sheet.
If you want to develop cheat sheets themselves, the current file is for you.
I created the Cheat:: top-level namespace and scheme after much hesitation and some encouragement. I discussed several alternatives; none satisfied.
Cheat sheets aren't executable code, not in the aggregate. Each line is formatted as valid Perl; that's its merit. But executing a whole cheat is madness. This presents a problem when searching for a top-level namespace.
Modules under Doc:: seem to be tools for manipulating documentation. Documentation:: is no better. Cheat sheets are not manuals or templates.
Other documentation on the CPAN is usually specific to another module or tightly-related group of modules. For example, Template::Manual documents Template and other modules within Template Toolkit. It wouldn't be nice to drop, say, Template::Cheat::Sheet into that namespace. Also, not every cheat sheet falls neatly under a single namespace. Cheat::Sheet::Util covers Scalar::Util, List::Util, Hash::Util.... Where should it go?
perlcheat.pod is in core. OpenResty::CheatSheet exists and is a fair example of the cheat sheet style; but again, I don't want to pollute other namespaces.
There doesn't seem to be a workable alternative to a new namespace.
To be useful, cheat sheets must be available as plain text files that can be opened in a user's code editor. These are not tutorials or even working demos. User is expected to copy one or more lines directly from the cheat sheet, paste into his ongoing project, and edit to suit.
Syntax highlighting or coloring is important to some users. Colors reveal the intent of various tokens. In a cheat sheet, it's important to be able to see immediately which tokens are literal and which placeholders. It could be argued that an entirely different coloring scheme (than is normal) would be even better; but I don't see that's going to happen.
When a Perl module is loaded, all
use() statements are executed (at "compile-time"), as if they were enclosed in a BEGIN block. Since a cheat sheet naturally includes many such statements, formatting one as an ordinary Perl module causes each cheated module to load. This might be correct from some very formal viewpoint but is rude.
So, presenting the cheat sheet as code in a
.pm file is out. Presentation as POD means that syntax coloring is lost. Also, verbatim code tends to copy out with too much, often incorrect, indentation. However, CPAN demands, and some users expect, code in POD format. So, a
.pod file -- POD only file -- is given.
Cheat sheets aren't tests or scripts. The only rational extension I can see for naming the raw code-only files is
.perl.
When authoring a cheat sheet, I decided to write a base or source file. A little utility script,
util/cheat-build.pl, uses Template (Template::Toolkit, TT) and a pair of
.tt2 template files to generate the usable cheats in
.perl and
.pod forms. I settled on the unusual
.agi extension for these source files. They have nothing whatever to do with AGI.
If you want to write a cheat sheet, great! Obviously, you might start by looking through the files of this distribution. I'm a great believer in consistency.
Each cheat sheet should be released in both POD and raw Perl forms. You're welcome to use existing files as models. Please name a cheat for Foo as Cheat::Sheet::Foo. Expect to install (for some lib/):
Raw Perl code.
Same content as POD verbatim paragraphs.
Please seriously consider contacting the maintainer of Bundle::Cheat::Sheet! I will be happy to assist you and I will include your properly-formatted cheat sheets in the Bundle. You can still package and upload them yourself and get all the credit. I would like to avoid naming conflicts and duplication of effort; please contact me before starting work.
Naturally, suggestions for cheat sheets, new or old, are always welcome.
Xiong Changnian
<xiong@cpan.org>
<xiong@cpan.org>
This library and its contents are released under Artistic License 2.0: | http://search.cpan.org/~xiong/Cheat-Meta-v0.0.5/lib/Cheat/Meta/Spec.pod | CC-MAIN-2017-26 | refinedweb | 826 | 60.01 |
Check the following FAQ list, then click the specific questions to view the answer.
No network connection when you compile
Program compilation failed
Access problem during compilation
When you access to Apsara Stack, after downloading configurations and accessing to mPaaS, compilation is rejected and NullPointerException occurs
How to debug applications
Precautions for using MultiDex in the mPaaS Inside project
Precautions for using MultiDex in the mPaaS Portal and Bundle projects
How to clear the Gradle cache
Upgrade to the latest Gradle plug-in
Camera cannot be turned on through the input file label in Huawei 10 System
How to depend on and use mPaaS in library?
How to fix 608 errors at runtime or native errors with libsgmain
No network connection when you compile
When you are compiling files, if there is no network, the compilation may fail. Follow the steps to confirm that the network of compilation environment is connected.
Confirm that the Internet is connected.
Confirm that the network proxy is not connected, including browser proxy settings and third-party network proxies.
Confirm that the IDE proxy is not configured.
In the gradle.properties file, confirm that the Gradle proxy is not configured. That is, the systemProp.http.proxyHost and the systemProp.http.proxyPort attribute is not configured. If configured, you can delete the relevant attribute.
Program compilation failed
If program compilation failed, you can make troubleshooting and solutions by following the steps.
According to the preceding steps, you can confirm that the network of compilation environment is connected.
Check the Gradle execution log to confirm if the added dependency is valid.
Check if the dependent GAV parameters including
group,
artifact, and
versionare configured correctly.
//Reference the debug pack group:artifact:version:raw@jar bundle "com.app.xxxxx.xxxx:test-build:1.0-SNAPSHOT:raw@jar" //Reference the release pack group:artifact:version@jar bundle "com.app.xxxxx.xxxx:test-build:1.0-SNAPSHOT@jar" manifest "com.app.xxxxx.xxxx:test-build:1.0-SNAPSHOT:AndroidManifest@xml"
In the command line tool built in the system, execute the following command to export the Gradle execute logs:
// Before executing the command, confirm the undefined productflavor attribute. Otherwise, the command will fail to run. // The following command will export the execution log to the log.txt file. gradle buildDebug --info --debug -Plog=true > log.txt
Check the log file exported from the fourth step. In the latest log, you will see the following record, which means the added dependency does not exist.
Caused by: org.gradle.internal.resolve.ArtifactNotFoundException: Could not find nebulacore-build-AndroidManifest.xml (com.alipay.android.phone.wallet:nebulacore-build:1.6.0.171211174825). Searched in the following locations: at org.gradle.internal.resolve.result.DefaultBuildableArtifactResolveResult.notFound(DefaultBuildableArtifactResolveResult.java:38) at org.gradle.api.internal.artifacts.ivyservice.ivyresolve.CachingModuleComponentRepository$LocateInCacheRepositoryAccess.resolveArtifactFromCache(CachingModuleComponentRepository.java:260)
Visit the http link in this log and log on to check the Maven library. For example, the http link can be the third line in the log listed in the preceding step.Note
In the
build.gradlefile, you can check the account name and password that you need to provide when you log on.
Execute the following command to refresh the
gradlecache.
gradle clean --refresh-dependencies
If the Maven library has a relevant dependency, delete the Gradle cache under your personal directory, then recompile.
The method of deleting the Gradle cache is as follows:
In the system such as macOS, Linux, and Unix, run the following commands:
cd ~ cd .gradle cd caches rm -rf modules-2
In the Windows system, by default, the path will be located to
C:\Users\\{Username}\\.gradle\caches. Delete the
modules-2folder.
Access problem during compilation
If there is an access problem during compilation, (you have waited for more than 20 minutes) you can improve the compilation efficiency by following the steps.
According to the preceding steps, you can confirm that the network of compilation environment is connected.
Confirm that the firewall is closed.
Confirm that the network configuration of the IntelliJ IDEA encoder is inactivated.
In the code, load Maven images in advance. See the following code example of Maven images loaded by Alibaba Cloud.
apply plugin: 'maven' buildscript { repositories { mavenLocal() // Load Maven images at first maven{ url ''} maven { credentials { username "Use the known user" password "Use the known password" } url "" } } dependencies { classpath 'com.android.tools.build:gradle:2.1.3' classpath 'com.alipay.android:android-gradle-plugin:2.1.3.3.3' classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8' } } allprojects { repositories { flatDir { dirs 'libs' } mavenLocal() maven { credentials { username "xxxxxxxxx" password "xxxxxxx" } url "" } maven{ url ''} } }
When you access to Apsara Stack, after downloading configurations and accessing to mPaaS, compilation is rejected and NullPointerException occurs
In general, this is the problem of the configuration file, namely, the conf file. You need to check the fields. Check if any of the thirteen fields are missing. Compare with the files downloaded from the public cloud, and confirm if the field name is correct.
How to debug applications
During the development, you need to debug codes. This topic describes two debug methods.
Start the application through the debug mode
Run the application, then start debugging
Start the application through the debug mode
Use cases:
The initial code that you want to use when the debug application launched. For example, initialize the code during application init.
Procedures:
Execute the command
adb shell am start -W -S -D application pack name/The type name of the first page launched by the application. For example, the pack name of the mPaaS Demo is
com.mpaas.demo, and the type name of the first page launched by the application is
com.alipay.mobile.quinox.LauncherActivity. You can use the command line
adb shell am start -W -S -D com.mpaas.demo/com.alipay.mobile.quinox.LauncherActivityto launch the application through the debug mode. See the following picture for the first type name launched.
After the command is executed, the following dialogue box will appear on the mobile phone.
Set the breakpoint to the code line you want to debug. Then attach the breakpoint to the process where the application is. See the following picture.
Run the application, then start debugging
Use cases:
Start debugging after you trigger an event. For example, only when you click a button or redirect to a page, you need to debug.
Procedures:
After running the application, click the attached
button. Or after executing the preceding command, click the attached button, then start debugging.
Precautions for using MultiDex in the mPaaS Inside project
When you access to the mPaaS Inside, we have provided MultiDex. Thus, you can delete the MultiDex provided officially from the implementation.
dependencies{ implementation 'com.android.support:multidex:1.0.3' //Delete this line }
Meanwhile, you are suggested to add
multiDexEnabled trueunder the
androidmodule of the gradle.
android { defaultConfig { multiDexEnabled true } }
If you are using the mPaaS Inside, and not accessing to the hotfix, while MultiDex is needed to support, you have to call
MultiDex.install(this)in the
Application.
public class App extends Application() { public void attachBaseContext(Context context) { super.attachBaseContext(context); MultiDex.install(this); } }
If you are using the hotfix, namely, using the
QuinoxlessApplication, you do not need to perform explicit call in the code.
MultiDex is a solution to the problem that the number of Dex methods or types for the earlier versions of Android 5.0 exceeds 65535.
In the case of the bloating appliation features at the moment, we need to plan for the Dex.
Because the mPaaS access method is based on the bundle, the bundle will try to combine once with the user’s Dex when the bundle packages. However, mPaaS will ensure the first Dex sequence of the user.
In this case, in order to access to mPaaS, avoid adding excessive logic in the Application and make the first dex as small as possible. You can use the
--main-dex-list parameter to specify your type in the first Dex.
For example, if the first Dex types are excessive in your APK, some necessary bundles in mPaaS will fail to combine. As a result, the framework may fail to start in the running environment earlier than Android 5.0. That is, the problems of ClassNotFound or ClassNotDef.
Precautions for using MultiDex in the mPaaS Portal and Bundle projects
Portal and Bundle are not suggested to intervene in the MultiDex. Unless you are in the single portal project, and need to use the
multiDexEnabled true.If your Bundle is too big, you can only continue by the method of splitting the bundle. Do not activate the multidex support in the bundle.
How to clear the Gradle cache
Open the configuration page of the Gradle plug-in, then click Clean Cache button to delete all the cache data of the Gradle plug-in.
Upgrade to the latest Gradle plug-in
The content of this section is only applicable for 10.1.68 baselines. For more information about the baseline of this version, see Introduction to the baseline and Launch instructions of 10.1.68 baselines.
The version of the Android Gradle Plugin provided by Google is 3.5.x at the moment.
mPaaS also provides the plug-in of 3.5.x version as the adapter, which supports the APIs of Google Android Gradle Plugin 3.5.3 and Gradle 6.3.
Change in the access methods
You only need to import our plug-ins by adding the following dependency instead of importing the official plug-in of Android Gradle Plugin. Because of the dependency transmission, the plug-in will be imported automatically.
dependencies { classpath 'com.alipay.android:android-gradle-plugin:3.5.14' }
The version of Gradle Wrapper needs to be upgraded to 5.6 or later versions. Version 6.3 is recommended to use.
Change in the usages
No need to use the
apply plugin:'com.android.application.
If you are in the portal project, you only need to use the
apply plugin:'com.alipay.portal'.
If you are in the bundle project, you need to delete the
apply plugin:'com.android.application'and only need to use the
apply plugin:'com.alipay.bundle'.
If you are in the library project, you need to delete the
apply plugin:'com.alipay.library'and only need to use the
apply plugin:'com.android.library.
If using the latest stable version of Android Studio 3.5 or later versions, you need to add
android.buildOnlyTargetAbi=falsein the gradle.properties.
Our wireless security components do not support V2 signatures at the moment. Thus, if you need to use Android Studio debugging and install your APK, you need to disable V2 signatures. If you use the command line for creation, and your minSdkVersion is greater than or equal to 24, you need to disable V2 signatures as well. See the following method of disabling V2 signatures:
v2SigningEnabled false
After clearing the cache, you need to check if the mini program and HTML5 work.
Camera cannot be turned on through the input file label in Huawei 10 System
There are some differences between the implementations of Huawei 10 system URI and the standard Android. Thus, you may meet problems such as failing to turn on the camera in Huawei 10. You need to execute the following steps to solve this problem.
1. Upgrade baselines
If you are using 32 baselines, you need to upgrade the baseline to 10.1.32.18 or later.
If you are using 60 baselines, you need to upgrade the baseline to 10.1.60.9 or later.
If you are using 68 baselines, you need to upgrade the baseline to 10.1.68-beta.3 or later.
2. Configure FileProvider
You can reuse your current FileProvider or create a new FileProvider.
Create a new Java class to inherit the FileProvider.
import android.support.v4.content.FileProvider; public class NebulaDemoFileProvider extends FileProvider { }
Create a new nebula_fileprovider_path.xml in res/xml.
<?xml version="1.0" encoding="utf-8"?> <paths xmlns: <external-path </paths>
Add configurations in AndroidManifest.
<provider android: <meta-data android: </provider>
Here the value of android:authorities, namely
com.mpaas.demo.nebula.provider is an mPaaS sample code. You need to configure by yourself based on your applications. And the value cannot be configured as
com.mpaas.demo.nebula.provider, which will have conflicts with other mPaaS applications.
3. Implement the H5NebulaFileProvider
Create a new Java class, then implement the H5NebulaFileProvider and the getUriForFile method. In this method, you can call the implemented FileProvider to generate URI.
public class H5NebulaFileProviderImpl implements H5NebulaFileProvider { private static final String TAG = "H5FileProviderImpl"; @Override public Uri getUriForFile(File file) { try { return getUriForFileImpl(file); } catch (Exception e) { H5Log.e(TAG, e); } return null; } private static Uri getUriForFileImpl(File file) { Uri fileUri = null; if (Build.VERSION.SDK_INT >= 24) { fileUri = NebulaDemoFileProvider.getUriForFile(LauncherApplicationAgent.getInstance().getApplicationContext(), "com.mpaas.demo.nebula.provider", file); } else { fileUri = Uri.fromFile(file); } return fileUri; } }
Register the
H5NebulaFileProvider.
After you complete the mPaaS initialization, register the H5NebulaFileProvider before you start the off-line pack. Register once will take effect globally.
H5Utils.setProvider(H5NebulaFileProvider.class.getName(), new H5NebulaFileProviderImpl());
How to depend on and use mPaaS in library?
While using mPaaS, a module needs to be reused sometimes. The reuse is implemented by adding the module as a dependency. This section illustrates this method with an example of reusing scan module.
Prerequisites
The project has been accessed to mPaaS in native AAR mode.
Procedure
Create an Android Library type scan module in the project.
In the build.gradle file of the newly created scan module, add the following codes
api platform("com.mpaas.android:$mpaas_artifact:$mpaas_baseline"). The example is as follows:
dependencies { …… //This line is necessary when using mPaaS in the module. api platform("com.mpaas.android:$mpaas_artifact:$mpaas_baseline") …… }
Install scan component for scan module by Android Studio mPaaS plug-in. The directory is : mPaaS > Native AAR mode > Configure/Update component > Start configuration. After the installation, the scan component will automatically load.
Configure App main project.
plugins { id 'com.android.application' ...... // baseline.config (baseline) must be added in the build.gradle file of app module. id 'com.alipay.apollo.baseline.config' }
Call module.
Import scan module where it is used.
dependencies { api platform("com.mpaas.android:$mpaas_artifact:$mpaas_baseline") .... api project(':scan')//scan module }
How to fix 608 errors at runtime or native errors with libsgmain
If an exception occurs during runtime, search for the keyword
SecExcetpionin the Android Studio runtime log, and find that there is a 608 error code or a native error of libsgmain, you can follow the steps below to troubleshoot.
Drag and drop the APK directly into Android Studio and check if the targetSdkVersion in the Androidmenifest file is a version between 26-28.
Check if the
res/drawable/yw_1222.jpgfile exists.
Check the config file for Base64.
Check if the Gradle plugin
baseline.updateor
baseline.configis applied.
Check META-INF for three files, CERT.SF, MANIFEST.MF, and CERT.RSA.
Turn on
v1SignEnabledin
app/build.gradle.
Whether there is a
pply plugin: 'com.alipay.apollo.optimize'in
build.gradlein the project root directory.
After performing the above check steps and confirming that the result is correct, it means that there is a problem with the signed APK package uploaded on the console. If the signature is incorrect, the APK package needs to be re-uploaded. | https://www.alibabacloud.com/help/en/mobile-platform-as-a-service/latest/access-android-faq | CC-MAIN-2022-27 | refinedweb | 2,545 | 50.23 |
.
<Updated>
Consider the following super simple C# code
namespace SmartDeviceProject1
{
class Program
{
static void Main(string[] args)
{
System.Console.WriteLine("Hello");
}
}
}
Can you guess how many managed Type gets loaded to run this? I was doing some profiling the .NET Compact Framework loader (for entirely unrelated reason) and was surprised by the list that got dumped. 87 177 types**, never could've guessed that...
**This is for the compact framework CLR. Your mileage will vary if you run the same on the desktop CLR.
I.
I'm writing a small application (or rather a screen saver) that animates and demonstrates A* search algorithm. The idea is simple. On screen you see a start and end point and some random obstacles in between them. Then you see animation on how A* algorithm is used to navigate around the board to find a path (close to the shortest possible) between the two.
All of this is fairly standard. However, I got hit by a simple issue. I wanted to have the ability to design this board visually and also let my potential million users do the same. Obviously I don't have time to code up a designer and hence choose the all time manager's favorite technique. Re-use :)
So the final solution I took was to use Microsoft Office Excel as the WYSIWYG editor. I created a xlsx file with the following conditional formatting which colors cells based on the value of the cells.
So in this case
Using this I can easily design the board. The designer in action looks as follows
Since the excel sheet has conditional formatting the user just types in s, e, b, w in the cells and they all light up visually. At the end he/she just saves the file using File => Save As and uses CSV format. This saves the file shown above as
w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w w,,,,,,,,,,,,,,,,,,,,,,,b,,,,,,,,,,,,,,,,,,w w,,,,,,,,,,,,,b,b,b,,,,,,,,b,,,,,,,,,,,,,,,,,,w w,,,,s,,,,,,,,b,b,b,b,b,b,,,,,,b,,,,,,,,,,,,,,,,,,w w,,,,,,,,,,b,b,b,b,,b,b,b,b,,,,,b,,,,,,,,,,,,,,,,,,w w,,,,,,,,,b,b,b,,,,,,b,b,b,,,,b,,,,,,,,,,,,,,,,,,w w,,,,,,,,,b,b,,,,,,,,b,b,,,,b,,,,,,,,,,,,,,,,,,w w,,,,,,,,,b,b,,,,,,,,b,b,,,,b,,,,,,,,,,,,,,,,,,w w,,,,,,,,,,,,,,,,,,,,,,w w,,,,,,,,,,,,,b,b,,,,,,,,,b,,,,b,,,,,,,,,,,,,,w w,,,,,,,,,,,,,,,,,,,,,,,b,,,,b,,,,,,,,,,,e,,,w w,,,,,,,,,,,,,,,,,,,,,,,b,,,,b,,,,,,,,,,,,,,w w,,,,,,,,,,,,,,,,,,,,,,,b,,,,b,,,,,,,,,,,,,,w w,,,,,,,,,,,,,,,,,,,,,,,,,,,b,,,,,,,,,,,,,,w w,,,,,,,,,,,,,,,,,,,,,,,,,,,b,,,,,,,,,,,,,,w w,,,,,,,,,,,,,,,,,,,,,,,,,,,b,,,,,,,,,,,,,,w w,,,,,,,,,,,,,,,,,,,,,,,,,,,b,,,,,,,,,,,,,,w w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w,w
As you see the format is simply a row per line and every column separated by a comma. My simulator reads this file and renders using whatever renderer is configured (console or WPF).
More about the A* simulator to come soon...…
The clock is ticking and the Large Hadron Collider in Cern is going to get switched on today (9/10/2008). Even though there are speculations, CERN is claiming it's perfectly safe and the world won't end. But it's easy to claim that, who'll be around to prove them wrong in case they are :)
It's one of the coolest device at an operating temperature of < -270°C. But I'd get really angry if there's any disturbance to my Birthday celebrations!! | http://blogs.msdn.com/b/abhinaba/archive/2008/09.aspx?PostSortBy=MostViewed&PageIndex=1 | CC-MAIN-2015-27 | refinedweb | 610 | 71.95 |
OTP: a Functional Approach (or Three)
I intially started the OTP challenge as a fun way to write some OCaml. It was, so much so that I wrote solutions in two other functional languages, Haskell and Elixir. I structured all three sets of programs the same so that I could easily see their similarities and differences. Check out the
encrypt program in all three languages and then I’ll share some of my favorite parts. Go ahead, I’ll wait.
Don’t Cross the Streams
One tricky part of the OTP challenge is that you have to cycle over the key if it’s shorter than the plaintext. My initial approaches involved passing around an offset and using the modulo operator, like this:
let get_mask key index = let c1 = List.nth key (index mod (List.length key)) and c2 = List.nth key ((index + 1) mod (List.length key)) in int_from_hex_chars c1 c2
Pretty gross, huh? Fortunately, both Haskell and Elixir have built-in functionality for lazy, cyclical lists, and OCaml (with the Batteries library) has the Dllist (doubly-linked list) data structure. The OCaml code above becomes simply:
let get_mask key = let c1 = Dllist.get key and c2 = Dllist.get (Dllist.next key) in int_of_hex_chars c1 c2
No more passing around indexes or using
mod to stay within the bounds of the array – the Dllist handles that for us.
Similarly, a naïve Elixir approach:
def get_mask(key, index) do c1 = Enum.at(key, rem(index, length(key))) c2 = Enum.at(key, rem(index + 1, length(key))) int_of_hex_chars(c1, c2) end
And with streams activated:
def get_mask(key) do Enum.take(key, 2) |> int_of_hex_chars end
Check out the source code (OCaml, Haskell, Elixir) to get a better sense of cyclical data structures in action.
Partial Function Application
Most programming languages have a clear distinction between function arguments (input) and return values (output). The line is less clear in ML-derived languages like Haskell and OCaml. Check this out (from Haskell’s
ghci interactive shell):
Prelude> let add x y = x + y Prelude> add 5 7 12
We create a function,
add, that (seemingly) takes two arguments and returns their sum.
Prelude> let add5 = add 5 Prelude> add5 7 12
But what’s this? Using our existing
add function, we’ve created another function,
add5, that takes a single argument and adds five to it. So while
add appears to take two arguments and sum them, it actually takes one argument and returns a function that takes one argument and adds it to the argument passed to the initial function.
When you inspect the type of
add, you can see this lack of distinction between input and output:
Prelude> :type add add :: Num a => a -> a -> a
Haskell and OCaml use a concept called currying or partial function application. It’s a pretty big departure from the C-derived languages most of us are used to. Other languages may offer currying as an option, but this is just how these languages work, out of the box, all of the time.
Let’s see this concept in action. To convert a number to its hex representation, you call
printf "%x" num. To convert a whole list of numbers, pass the partially applied function
printf "%x" to
map, like so:
hexStringOfInts nums = concat $ map (printf "%x") nums
For more info on currying/partial function application, check out Learn You a Haskell for Great Good.
A Friendly Compiler
I learned to program with C++ and Java, where
gcc and
javac weren’t my friends – they were jerks, making me jump through a bunch of hoops without catching any actual issues (or so teenage Dave thought). I’ve worked almost exclusively with interpreted languages in the intervening 10+ years, so it was fascinating to work with Haskell and OCaml, languages with compilers that catch real issues. Here’s my original
decrypt function in Haskell:
decrypt ciphertext key = case ciphertext of [] -> [] c1:c2:cs -> xor (intOfHexChars [c1, c2]) (getMask key) : decrypt cs (drop 2 key)
Using pattern matching, I pull off the first two characters of the ciphertext and decrypt them against they key, and then recurse on the rest of the ciphertext. If the list is empty, we’re done. When I compiled the code, I received the following:
decrypt.hs:16:26: Warning: Pattern match(es) are non-exhaustive In a case alternative: Patterns not matched: [_]
The Haskell compiler is telling me that I haven’t accounted for a list consisting of a single character. And sure enough, this is invalid input that a user could nevertheless use to call the program. Adding the following handles the failure and fixes the warning:
decrypt ciphertext key = case ciphertext of [] -> [] [_] -> error "Invalid ciphertext" c1:c2:cs -> xor (intOfHexChars [c1, c2]) (getMask key) : decrypt cs (drop 2 key)
Elixir’s |> operator
According to Programming Elixir, the pipe operator (
|>)
takes the result of the expression to its left and inserts it as the first parameter of the function invocation to its right.
It’s borrowed from F#, so it’s not an entirely novel concept, but it’s certainly new to me. To build our key, we want to take the first argument passed into the program, convert it to a list of characters, and then turn it to a cyclical stream. My initial approach looked something like this:
key = Stream.cycle(to_char_list(List.first(System.argv)))
Using the pipe operator, we can flip that around into something much more readable:
key = System.argv |> List.first |> to_char_list |> Stream.cycle
I like it. Reminds me of Unix pipes or any Western written language. Here’s how I use the pipe operator in my encrypt solution.
* * *
At the end of this process, I think Haskell offers the most elegant code and Elixir the most potential for us at Viget to use professionally. OCaml offers a good middle ground between theory and practice, though the lack of a robust standard library is a bummer, man.
I had a great time writing and refactoring these solutions. I encourage you to check out the code, fork the repo, and take the challenge yourself. | https://www.viget.com/articles/otp-ocaml-haskell-elixir/ | CC-MAIN-2022-21 | refinedweb | 1,021 | 69.31 |
compiler padawanMy adventures learning how to build compilers. Community 5.6.583.20496 (Build: 5.6.583.20496)2009-06-03T08:56:50Zcompiler padawan moves on<p>After nearly 15 years at Microsoft I've decided to leave the company and explore some programming adventures elsewhere. This Friday, February 3, 2012, will be my last day at Microsoft. It has been an absolutely GREAT experience, I have worked with some wonderful people and learned a lot, but I'm excited for a change.</p> <p>If you are interested in following my adventures as I step out of the reality distortion bubble surrounding 1 Microsoft Way and enter some new form of reality distortion bubble, I'm now going to blog at <a href=""></a></p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT dev guy becomes a compiler padawan<p>More.</p> <p.</p> ).</p> <p.)</p> <p>- Danny</p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT WPF 2D Graphics: DrawingVisual<p.</p> <p>For my first attempt I did the easiest thing (and the one most frequently shown in simple WPF graphics demos) which is to use a Canvas and create on it a series of Line objects. The result); }</pre> <p.</p> <p!</p> <p.</p> <p.</p> <p><strong><em>Unfortunately these aren’t abstract so the compiler won’t catch the issue and tell you that you must override them, but if you don’t, then the framework never becomes aware of your child objects and the result is that nothing is displayed.</em></strong></p> <p>Once over that hurdle, everything works great. The final program (equivalent to the one above)]; } }</pre> <p.</p> <p>- Danny</p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT FAQ Relocation and Overhaul<p>It has been quite some time since I last updated the EF Frequently Asked Questions which makes them dramatically less useful to everyone. At its core, I think the reason for this delay is not just that I have been busy doing other things but rather that the way the FAQ was setup and posted made it difficult for others to help keep it up to date. To have any hope of continuing to be a great and useful resource in the long run, I believe this needs to be a community effort rather than one run by a single individual or by the EF team.</p> <p>In an attempt to improve matters, I’ve spent some time over the last few weeks relocating the FAQ to <a href=""></a> (a domain that I registered personally, not something officially owned or maintained by Microsoft) and changing its source format into a series of text files with simple wiki-like markup maintained under mercurial-based source control at the codeplex project <a href=""></a>. In addition, some folks on the EF team have put a lot of effort into adding additional questions and answers and updating things based on EF4 so hopefully the FAQ is in a better place than it has been for a while.</p> <p>Now my hope is that some of you will take a few minutes to help me out in the following ways:</p> <ol> <li>Look over the FAQ—maybe you will learn something. </li> <li>Next time you have a question or answer a question, check the FAQ, and if the answer is already there publish a link so that others can more easily find and benefit from it. Every question has a fragment name based on the section and question #. So if you want a link, for instance to Code Generation Question 12.2 (“Is there any way to use public, easily modifiable templates for EF code generation?”), the link would be <a href=""></a> </li> <li>If the question you have is NOT answered in the FAQ, and you think it is something others may also run into. Please, PLEASE, take the time to enlist in the codeplex project, edit the source, make a fork and send a pull request so that we can incorporate your change and push it up to the site. This is designed to be a pretty easy process, and it can really benefit your fellow programmers. </li> </ol> <p>For the curious, here’s a little background about how the new EF FAQ site is put together: After casting about for a bit, I decided that I wanted to have a static site with pretty simple HTML so that it is simple, quick and search-engine friendly. I wanted more formatting than I could get from strict plain text, but I wanted something simpler and more constrained/uniform than what I could get out of word or something like that. In the end I decided to take the <a href="" target="_blank">markdownsharp</a> library (thanks Jeff and others), extend it a little bit to add the concept of “sections” and “questions” which get auto-numbered, have some specific HTML markup and help to produce an auto-generated index of sections for navigation. I also wrote a simple program to wrap the generated content HTML in a high-level template so that every page gets a header, footer and navigation bar and to manage the workflow of turning the various source files into all the HTML for the site. You can find the source for my “faqbldr” app as part of the codeproject as well, and soon I hope to push my extensibility hooks for markdownsharp back into its repository so that I can remove its source from the codeplex project and just check in a binary nuget-style.</p> <p>By the way, if you are an HTML and CSS guru with a strong stomach, you might want to take a look at template.html in the codeplex project where I have created a horrendous mess that no doubt violates every rule of good HTML design but somehow seems to have landed in a place where the site works and looks OK as long as I don’t touch it any more (If I do I’m pretty sure the whole house of cards will come tumbling down). Someone who wasn’t such an awful html hacker like me could no doubt make it cleaaner, nicer looking and faster.</p> <p>One last comment, while the FAQ has originally been produced by members of the EF team (and likely some EF team members will continue to contribute to it), it is definitely my intention that this not just be a Microsoft effort. This isn’t official documentation. Ideally it should be a helpful resource that contains whatever info you need to help get your tasks programming with the EF done (or more frequently pointers to where to find the more complete details on what you need). This means that if you have negative comments about the EF in some area or positive comments about some other product that you would use in conjunction with the EF or instead of it in some scenario, those belong here as well—as long as they are constructive and help achieve the goal of making people successful at building great software. Happily <a href="" target="_blank">Craig Stuntz</a> has agreed to start contributing to the project, and it’s my hope that we’ll have others join in soon.</p> <p>- Danny</p> <p>P.S. If you visit the site in the next day or two, you will find that the bing search box in the upper left isn’t fully working yet because the site is so new and bing hasn’t indexed it, but that should fix itself in the next few days.</p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT Query Tuning<p>Continuing on the topic of application performance tuning, let’s turn our attention specifically to the EF and take a look at some of what’s going on under the covers with EF queries.</p> <h3>Question #1: Where is the hot path?</h3> <p: <strong><em>Don’t be afraid to make temporary changes to your code.</em></strong> .</p> <p>My next suggestion is: <strong><em>Get a profiler and figure out how to use it. </em></strong: <a href=""></a> and <a href=""></a>.</p> <p>However you decide to measure things, the first level of investigation which I’d recommend when you are looking at an EF-based application is to determine where your biggest bottleneck is: </p> <ul> <li>your application </li> <li>EF one-time startup </li> <li>EF query code </li> <li>network transmission </li> <li>or the database </li> </ul> <p>If you can figure this out, then you can much more quickly home in on where to investigate further and what experiments you might want to try to improve performance.</p> <h3>What happens during the execution of an EF query?</h3> <p>In order to get to the next level of your investigation, it can help a lot to have a general idea of what all is happening and in what order when you execute an EF query. To that end, here’s a rough list:</p> <ul> <li. <br /> </li> <li. <br /> </li> <li>Query: Next the query is translated from LINQ or ESQL into your backend database command (TSQL in the case of SQL Server) and the results assembly plan is created. <br /> </li> <li. <br /> </li> <li. <br /> </li> <li>Network: Send query to the database. Typically this part of the process is inexpensive because the amount of data required to send the query to the database is usually small. Network time is more of a factor when results come back later. <br /> </li> <li. <br /> </li> <li. <br /> </li> <li>Query: Check the ObjectStateManager by key to see if the entity in question is already present (unless MergeOption is NoTracking). <br /> </li> <li>Startup: The first time a particular entity type / query result shape is returned, dynamic code is generated which extracts the relevant data from the DataReader, constructs the object and initializes it with data (called materialization). <br /> </li> <li>Query: Objects are actually materialized. <br /> </li> <li>Query: Objects are added to the ObjectStateManager (unless MergeOption is NoTracking). <br /> </li> <li>Query: Relationships between entities are fixed up. <br /> </li> <li>Application: Results are processed potentially causing additional queries (especially through lazy loading, etc.) </li> </ul> <p>OK. Now you know what goes on under the covers, and you’ve found something that seems to be taking more time than necessary. That brings us to…</p> <h3>What can be done to improve EF query perf?</h3> <p>Here’s a general rule-of-thumb checklist for EF query perf tuning in the order I would tend to investigate (since this is usually the order of highest bang-for-the-buck).</p> <p.</p> <ol> <li><strong>Pre-compiled views. </strong>Usually a very easy to implement change is to turn on pre-compiled views. This will move the time of generating views from the first query in your app-domain to design time. <br /> </li> <li><strong>Avoid distributed transactions.</strong> . <br /> </li> <li><strong>Avoid extra connection opens.</strong> Even if you aren’t getting distributed transactions, if you have a number of operations all in a row, just opening the connection can be expensive. As with the best case of distributed transactions, it can be easily addressed by explicitly opening the connection before the first operation. <br /> </li> <li><strong>Reduce database round trips.</strong> . <br /> </li> <li><strong>Retrieve less data.</strong> . <br /> </li> <li><strong>Reduce post DB-query client processing costs.</strong> . <br /> </li> <li><strong>Reduce pre DB-query client processing costs.</strong> . <br /> </li> <li><strong>Reduce complexity of the DB query.</strong> . </li> </ol> <p>- Danny</p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT Performance Tuning Philosophy<p>I’m in the process of embarking on some investigation and discussion with folks about performance tuning EF apps (in particular I’m going to dive into a critical area or two on my <a href="">D3 project</a>),.</p> <h3>There Are Two Ways to Improve Performance</h3> <p. </p> <p.</p> <p.</p> <p>So while design is very important, tuning is also critical to get a great experience from your app.</p> <h3>Tuning is Experimental Science</h3> <p.</p> .</p> <p.</p> <h3>Tuning Improvements Tend to be the Enemy of Clean Code</h3> <p.</p> <p.</p> <p>The moral of the story? Start clean, measure carefully, tune in biggest-bang-for-the-buck order. Stop tuning as soon as you can. (Which implies that you need to know just how fast is fast enough.)</p> <p>- Danny</p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT: Implementing the WCF Service<p><a href="" target="_blank">D3 release 0.1322</a> is now available. The work that went into this release is all about building out the WCF service which D3 will use as the interface between our mid-tier and the client. I encountered a grab bag of issues along the way, so here’s a list of tips & tricks as well as random thoughts on the topic of building a WCF service with the EF.</p> <h3>Authentication</h3> <p>Somewhere along the way between the last D3 release and this one I had knee surgery. My recovery was actually pretty quick, but I did end up spending most of a week sitting at home with my knee in the air and a computer on my lap. I spent a frighteningly large amount of time trying to read up on ways to do authentication using WCF for the D3 project. One of the best blog posts I found was this:</p> <p><a href=""></a></p> <p>The overall approach really seemed to match my general goals in the sense that I want the service to be essentially secure and WCF to handle that security in a relatively transparent way, but I also want to have my own username and password tracking in my EF model (I don’t want to create windows accounts on a domain controller for every user of D3 or something like that). The issue that drove me nuts is the fact that this approach absolutely requires SSL and at the time I was working on it there really wasn’t a good way to get SSL running (even with just a test cert) inside of VS because Cassini (the mini-webserver built into VS) doesn’t support it. </p> <p>Since then the IISExpress project seems to have gotten off the ground, and it may well be the right long-term answer, but after much banging of my head against this particular wall, I decided that for the current interim release I would completely fake all of this in a hacky way that would leave us setup for the right thing but be clear to implement now. What I did was to add a username and password parameter to every method on the service. We’ll return to this in a later release and make things more secure and nicer to use.</p> <h3></h3> <h3>Service Contract Design</h3> <p>As I have written in other places (such as this short series of MSDN Magazine Articles: <a href="">Anti-Patterns To Avoid In N-Tier Applications</a>, <a href="">N-Tier Application Patterns</a> & <a href="">N-Tier Apps and the Entity Framework: Building N-Tier Apps with EF4</a>), there are lots of patterns/approaches that can and should be considered when designing a service. While especially typical intranet line-of-business applications will benefit from the developer productivity and simplicity of an approach like self-tracking entities or RIA services, in this case my goal is to create a game that will run over the internet which means that I’d like to take more control over the contract. </p> <p>That led me to the decision that D3’s service should be fairly strictly SOA—at least for the most commonly used methods (issuing commands from the client and retrieving events from the server). For these two core commands, I decided that the service should neither accept entities as parameters nor return them as results. We just want the simplest, most basic possible interface, and the result is the following two method signatures:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">[OperationContract, FaultContract(typeof(CommandFault)), ApplyProxyResolver] void ExecuteCommand(string userName, string password, int playerId, string command); [OperationContract] Tuple<IEnumerable<string>, int> GetEventMessages(string userName, string password, int playerId, int lastSeenEventId);</pre> <p>Ignoring the authentication parameters (which as discussed above should go away in a later release), the first method takes a player ID and a string representing the command that the player wants to execute—typically this will be something like “go north”, “get fancy sword” or “attack purple snorklewacker”. You’ll notice that this method does not return any results—the only interesting result is if the command is invalid in which case the service will return a CommandFault with the error message.</p> <p>The way the service returns the results of commands as well as the observable actions of others in the game is through the GetEventMessages method. It also takes a player ID to determine the perspective of the player from which the events are observed, and it takes the ID of the last event which was returned by a previous call to GetEventMessages. This method returns a tuple which contains an enumerable of strings which are messages to display on the client plus the ID of the last event which produced a member of that last event of messages.</p> <p>The idea is that the client will have two asynchronous UI interaction streams going on. First, at any time the player can type a command and send it to the server, and assuming the command was valid the server will process the command using its business logic and create events representing the result of that command. The second thing going on is that the client will periodically poll the server for new event messages observed by the player. The events which produce these messages are both those created as a part of processing this player’s commands as well as events caused by everyone else in the game. Each time the client calls the GetEventMessages method, it will get back any messages that it hasn’t yet seen plus an ID used in the next call to the method to indicate the “high-water mark” of what it has already seen.</p> <p>Obviously the GetEventMessages method will be the most frequently called top-level method in the entire application (every client will poll it frequently in order to maintain its UI). So it will be very perf sensitive. In subsequent blog posts, I’ll take a look at the topic of tracking and tuning the perf of this method.</p> <h3>Unit Testing vs. Integration Testing</h3> <p>If you’ve been reading my blog, you probably know by now that I’m a huge fan of unit testing, and I’ve been putting a lot of work into appropriately unit testing D3 as we go. This has led to finding and fixing a lot of bugs early on, increasing my confidence in my ability to refactor the project frequently without much risk of introducing new subtle bugs and generally a better overall structure for the project. Unit testing, though, isn’t everything. When unit testing we try to isolate small parts of the application so we can focus what we test on a very small piece to make it fast and make sure we know what is responsible when a test fails (so it’s easy to fix). Sometimes, though, you’ve got to move to a coarser granularity and make sure that you have some tests to verify that all the units go together properly. </p> <p>This lesson came home to me in a big way when I decided to write a small integration test that would simulate the most common client / service interactions. I had been unit testing various parts including directly calling the service method implementations, etc., but I hadn’t yet put it all together and run it through WCF. The code I wanted to get working wasn’t exactly huge or hugely complicated. It looked like this:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">[TestMethod] public void KickoffScenario_Works() { using (var svc = new PlayerServiceClient( new BasicHttpBinding(), new EndpointAddress(""))) { svc.CreateUser("simmdan", "password"); var id = svc.CreatePlayer("simmdan", "password", "joe"); svc.GetPlayers("simmdan", "password").Single(p => p.Id == id); var logOnEventId = svc.LogOn("simmdan", "password", id); svc.ExecuteCommand("simmdan", "password", id, "get item1"); var eventMessages = svc.GetEventMessages("simmdan", "password", id, logOnEventId); var message = eventMessages.Item1.Single(); Assert.AreEqual("You pick up the Item1.\n", message); } }</pre> <p>As I discovered, though, there were a surprising number of issues that came up including some things that were quick and easy to find and some others that were pretty subtle and took a fair amount of hunting. Along the way, one thing I found very helpful was this post about turning on WCF tracing: <a title="" href=""></a> Following the directions on that site I added the following XML to the app.config file in D3.Web.Tests:</p> <pre class="brush: xml; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;"><system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Information, ActivityTracing"> <listeners> <add name="log" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\Traces.svclog" /> </listeners> </source> </sources> </system.diagnostics></pre> <h3>Common Issues When Using the EF & WCF Together</h3> <p>So all of that leads me to the following short list of the most common issues I encounter in my own code or when talking with others specific to using the EF and WCF together.</p> <ol> <li><strong>Not understanding the basic design patterns involved.</strong> This is by far and away the most common issue. For this I recommend the magazine articles I referred to in the Service Contract Design section above. <br /> <br /><em>(Side Note: It used to be that at least as common as this one was the issue of trying to figure out how EF’s disconnected APIs work—even if you did understand the basic patterns, it could be very hard to implement them correctly with the EF. Happily, since EF4 shipped the APIs are much easier to use, so this is not so common any more.)</em> <br /> </li> <li><strong>Disposing the context before enumerating a query which is returned from the service method.</strong> This one will get you almost every time. The general recommended pattern is that each service method should create a context instance in a using block so that it (and its underlying connection) will be disposed properly as soon as possible to free up precious server resources. If you have a method that returns an enumerable of some type which comes from a query to the database, though, you have to make sure that you fully enumerate that query before the context is disposed. If you just return the query itself, in a regular CLR method call where the context lifetime is managed some other way, that query would be cast to IEnumerable and the CLR would lazily evaluate it when the results were actually needed. This is simple and efficient, but in a case where the context is being disposed before the method returns, though, the query results won’t be enumerated until the WCF service’s serializer attempts to process things after the method returns, and in that case the context will first be disposed and then later the query will be enumerated, and it will throw an exception because its connection to the database will be gone. The fix? Just call .ToList() on your query before returning. <br /> </li> <li><strong>Using POCO without thinking through WCF’s requirements for objects it serializes.</strong> If you have a WCF service which accepts entities as parameters or returns them as results and you use POCO entities (maybe by using the POCO T4 template or maybe just writing them by hand), then you have to think about WCF’s requirements for its objects because your POCO objects can be almost anything and they may well not be setup properly to work with WCF. The three ways this comes up most commonly for me are if you have dynamic proxies (either just for lazy loading or for full change tracking), if you have inheritance in your model which the DataContract serializer is unaware of, or if you want to return entire graphs of entities which have cycles. You can read about the first two issues (and how to address them) here: <a title="" href=""></a> the key concepts are the DataContractResolver and KnownTypes. For the last issue, the answer is in the IsReference=true parameter to the DataContract attribute. This is a feature that was added in .Net 3.5sp1 which makes the DataContract serializer smarter about graphs of related objects. You can read more about that here: <a title="" href=""></a> <br /> <br />One thing to watch out for is that if you have POCO entities with no DataContract attributes, the DataContract serializer will handle them fine up to a point, but if you start having more advanced scenarios like graphs of related entities or dynamic proxies, then the default conventions which the DataContract serializer uses will no longer work, and in that case the answer is to add the attributes. If you are using the POCO template to generate your entities, for instance, it’s pretty easy to hack these into the T4 template which is what I did for D3. If you are using the default codegen, then all of these things are taken care of for you automatically, but of course there are other advantages to POCO—it’s all a matter of tradeoffs. </li> </ol> <p>OK. I suppose that’s enough for now. Happy WCF-service building.</p> <p>- Danny</p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT – An EF Extensibility Exercise<p>A.</p> <p>This is not all that different from some samples we have put together before. Sanjay, for instance, wrote <a target="_blank" href="">a post</a> quite some time ago showing how to enable an extension which would cause code generation to add an arbitrary attribute to classes or properties. The core capability of adding <a target="_blank" href="">annotations to the conceptual model metadata</a> as described in that post is still the same. Since EF4 has shipped, however, we have some additional options which make the overall task a whole lot easier than it was before. First off, this seemed like a great opportunity to play a bit with the <a target="_blank" href="">EDM Designer Extension Starter Kit</a> <a target="_blank" href="">VSIX extension format</a> makes it easy to wrap all of this up in a neat package and manage installation and versioning of it.</p> <p>The final result of this effort is a <a target="_blank" href="">VSIX package I call EntityRoots</a> <a target="_blank" href="">solution I use to build it</a>.</p> . </p> <p>I created a small sample project, added the POCO template, and then just added a few lines to the portion of the template which outputs the ObjectSet properties:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; highlight: [1,2,3,4,5,8,9,10,11,12,13,25]; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">)#>; <# } }</pre> <p>Essentially we just check to see if any of the entity sets have the annotation. If so, then we only output an ObjectSet if that set has the annotation and it is set to true.</p> <p <a target="_blank" href="">here</a>.</p> <p>- Danny</p><div style="clear:both;"></div><img src="" width="1" height="1">Daniel Simmons - MSFT Business Logic “Hooks” for EF4 Entities<p>Once again a <a href="" target="_blank">question in the EF MSDN Forum</a> has prompted a blog post where I can give a more complete answer. If I understand things correctly, the person asking the question wanted a simple way to add business logic hooks to their entities which would be called whenever they did SaveChanges. In EF4 we made available the basic building blocks for this kind of thing, but unfortunately it’s not quite as easy and discoverable as it ought to be.</p> <p>Since I’m stuck sitting at home with my knee elevated as I recover from minor knee surgery, I thought I’d take a little time today to create something which should make this much easier. You can find the result <a href="" target="_blank">here</a>. It’s a visual studio extension file (VSIX) which installs a new T4 item template called EntityHooks. If you right click on the EF designer background and choose “Add Code Generation Item…”, you should see a new EF EntityHooks Code Generator in the list of templates. If you choose that option, it will add a new TT file to your project that implements the hooks.</p> <p>It’s important to realize that this complements whatever other code generation you have going on—it doesn’t replace it. So if you have already added a code generation item, this will just add another one which generates partial classes that add functionality to your existing partial classes. Unfortunately, if this is the first code gen artifact you have added to your project, the designer doesn’t realize it doesn’t generate the full entities so it turns off the default codegen, and you will either need to add another codegen artifact or in the properties for your EDMX file set the codegen strategy property from None back to default.</p> <p>Once this is done you can write your own partial class for any of your entities and implement any or all of the OnAdded, OnModified and OnDeleted partial methods. These methods will be called whenever an instance of that entity type is in the appropriate state at the time SaveChanges is called, and the call will happen before the framework does any other part of SaveChanges. So, if your method throws an exception, no save will happen. This also means that you can modify other entities if you want, or cancel an operation on the particular entity by calling AcceptChanges on its ObjectStateEntry (which is passed as an argument to the partial method).</p> <p>Here’s a simple example I used to test out the extension. It’s a hook which takes Customer entities marked for deletion and instead of deleting them just prepends “D:” to the front of the company name as a signal that they are deleted but you still want to keep them around for historical info or something like that:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">public partial class Customer { partial void OnDeleted(System.Data.Objects.ObjectStateEntry entry) { entry.ChangeState(EntityState.Modified);Daniel Simmons - MSFT: Test Fixture & Using a Fake Context While Testing a WCF Service<p>First off, I’ll point out that <a target="_blank" href="">D3 Release 0.1012</a> is now available on code gallery. In addition to a foray into a custom, attribute-based text parser (which I’ll not discuss in detail here unless someone really wants me to since it has little bearing on the EF), this release includes the initial outline of the WCF service which should someday soon make it possible to actually have a working game. In the course of putting that together, I continued to invest in testing improvements which will be the primary focus of this post.</p> <p>Before going into those details, though, I have to commemorate the 4 year anniversary of DPMud’s first formal check-in. One of the original members on the project, Josh Lindquist, sent me an email recently which includes a copy of the first auto-generated check-in mail for the project which had these headers:</p> <blockquote> <p><span style="font-family: conso;">From: Daniel Simmons <br />Sent: Friday, April 14, 2006 12:00 AM <br />Subject: [DPMUD] REDMOND\dsimmons: #9: Initial checkin of dpmud.</span> </p> </blockquote> <p>It’s hard to believe that 4 years later I’m still working on this crazy little project, but I am, and if anything I’m enjoying it more than ever! </p> <p>Enough nostalgia, now back to EF testability.</p> <h3></h3> <h3>Simplifying Test Code with a Test Fixture</h3> <p>As we started filling in more tests for business logic on entities in the D3 model, I found that over and over I had code which setup several entities and relationships between them, put that data into a fake context and then used it to perform some action and verify the results. This code was repetitive and prone to small errors. In addition, because it was a bit of custom code each time, it was harder to look at a bunch of tests to pick out what was the core idea of the tests and what patterns were the same or different between them. So I decided to centralize this code and configure my fake context in such a way that I could easily have a common set of initial data.</p> <p>The first step was to add a new constructor to the D3DB object which I use to provide a simple IOC container. Previously the only constructor was one which take an ID3Context instance, but now I’ve added a default constructor and a property on D3DB which is a Func used for creating a D3Context instance if needed:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">public static Func<ID3Context> CreateAction { private get; set; } public D3DB() : this(CreateAction != null ? CreateAction() : null) { }</pre> <p>That way I could add a constructor to each of my test classes with the line:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">D3DB.CreateAction = () => new FakeD3Context();</pre> <p>and then the body of any tests needing to interact with the context would just be surrounded by:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">using (new D3DB()) { ... }</pre> <p>It also means that I can add a static constructor to the real D3Context implementation which sets D3DB up to create a real context any time it isn’t overridden like in the test code above.</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">static D3Context() { D3DB.CreateAction = () => new D3Context(); }</pre> <p>The reason we have to configure this in a static constructor on the real D3Context rather than just in D3DB itself as a default value for the property is that D3DB is part of D3.Model so that entities can have a dependency on it, but that means it can’t have any dependency on any implementation of ID3Context. So we define the basic mechanism in D3DB, setup the default configuration in D3Context and then override that configuration to point to a fake in the tests.</p> <p>Once I had this mechanism in place, my fake context creation was centralized and I could expand on it by creating a test fixture with a standard set of test data. I did that by adding a factory method to FakeD3Context:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">public static FakeD3Context CreateWithTestData() { Room room1, room2; Exit exit1, exit2; Actor actor1, actor2; Player player1; Item item1, item2; User user1; var ctx = new FakeD3Context { Rooms = new FakeD3ObjectSet<Room>( (room1 = new Room { Id = 1,public D3DB() { if (current == null) { if (CreateAction == null) { throw new InvalidOperationException("Must set a CreateAction if you are going to use the default constructor."); } Current = CreateAction(); } else { disposeNeeded = false; } }</pre> <p>And the code for the dispose pattern undergoes a corresponding modification like this:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">private bool disposeNeeded = true; public void Dispose() { if (disposeNeeded) { Dispose(true); GC.SuppressFinalize(this); } }</pre> <p>The result of all this is that I can write a WCF service method whose body is surrounded by a using block which creates a new D3DB instance using the default constructor and that method will create the real context when running in the program directly, but I can also create a test method whose body has a using block creating a new D3DB instance with the result that it will create a fake context, give me an opportunity to interact with it if necessary, and then let me call the service method whose D3DB constructor call will see that a context is already available and just use that rather than making another. Since this same context is used for the whole test, I can then also verify that the service call had the intended effect upon the database.</p> <h3>ExpectedExceptionEx</h3> <p>For my last testing trick of the day, let me tell you about a small new addition to my testing arsenal: ExpectedExceptionExAttribute. The motivating scenario for me was that I added a couple new constraints to the database to enforce that actor and user names were unique even though they aren’t part of the primary key for the entity. </p> <p>Adding those constraints was easy—I just put a couple additional calls to ExecuteStoreCommand in the database creation code on the context, but the more difficult part comes when testing them. it’s easy enough to create two actors with the same name, try to save them both to the database and verify an exception is thrown, but what if that exception was actually the result of some other, subtly different thing and in the end the intended behavior wasn’t being enforced correctly? I really needed a way to verify more than just that an exception was thrown of a particular type—especially since a broad class of failures when saving changes to the database all result in an “UpdateException” being thrown.</p> <p>Happily, the MsTest library has a great extension point for just this sort of thing. I was able to create a new attribute which I can place on my tests that does more than just verify the exception type, it can also optionally check the inner exception type and/or search for strings within the exception message or inner exception message. So now my test for one of the new constraints looks like this:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">[TestMethod, ExpectedExceptionEx(typeof(UpdateException), InnerExceptionType = typeof(SqlException),namespace D3.TestUtils { using System; using Microsoft.VisualStudio.TestTools.UnitTesting; [AttributeUsageAttribute(AttributeTargets.Method, AllowMultiple = false)] public sealed class ExpectedExceptionExAttribute : ExpectedExceptionBaseAttribute { public Type ExceptionType { get; private set; } public Type InnerExceptionType { get; set; } public string ExceptionMessageContains { get; set; } public string InnerExceptionMessageContains { get; set; } public ExpectedExceptionExAttribute(Type exceptionType) { ExceptionType = exceptionType; } protected override void Verify(Exception exception) { Assert.IsNotNull(exception); RethrowIfAssertException(exception); Assert.IsInstanceOfType(exception, ExceptionType); if (InnerExceptionType != null) { Assert.IsInstanceOfType(exception.InnerException, InnerExceptionType); } if (ExceptionMessageContains != null) { Assert.IsTrue( exception.Message.Contains(ExceptionMessageContains), String.Format("ExceptionMessage does not contain '{0}'. Message value: {1}", ExceptionMessageContains, exception.Message)); } if (InnerExceptionMessageContains != null) { Assert.IsTrue( exception.InnerException.Message.Contains(InnerExceptionMessageContains), String.Format("InnerExceptionMessage does not contain '{0}'. Message value: {1}", InnerExceptionMessageContains, exception.InnerException.Message)); } } } }</pre> <p>That’s it for today. Have fun playing with the EF4 final bits and testing your code as you go. If you have any questions or feedback, as always I’d love to hear it.</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT: Testing, TFS and a Teenager<P>It has been a couple months since I’ve put up a new release of D3, but that doesn’t mean I haven’t been working on it. <A href="" mce_href="">Release 0.0814</A> is now available. This release is designed to work with VS2010 RC (available starting last week), but it should also work just fine or with very minimal changes with VS2010 beta 2 if that’s what you have.</P> <H3>The Teenager<A href="" mce_href=""><IMG style="BORDER-RIGHT-WIDTH: 0px; DISPLAY: inline; BORDER-TOP-WIDTH: 0px; BORDER-BOTTOM-WIDTH: 0px; MARGIN-LEFT: 0px; BORDER-LEFT-WIDTH: 0px; MARGIN-RIGHT: 0px" title=Forest border=0 alt=Forest align=right</A></H3> <P>My 14-year old son, Keith, is developing into quite the programmer. You may have heard the .Net Rocks guys read a letter he wrote them in <A href="" mce_href="">episode 520</A> about the new support for complex numbers in .Net 4. Let me assure you that he did that all on his own, and I didn’t even hear about it until he mentioned that he’d gotten a response from them asking for his shirt size so they could send him a .Net Rocks hoodie (which he now wears all the time). You can see a few projects on <A href="" mce_href="">his website</A>, including the picture to the right which is a computer generated image that he created. Anyway, he has been watching me work on DPMud through each of its versions over the last several years, and he keeps me on the straight and narrow by regularly asking what I’ve done on it lately. One of his long-standing requests has been for us to find a way that he could work on the project with me, so in the time since the last release of D3 not only have I been working to evolve the codebase, but I also installed TFS on a server at home and got things setup so that he can begin contributing. The code in the 0.0814 release is still all from me, but he has begun doing code reviews, and in the next release some of the bits will have originated with him. (What a blast it is to code with my son! Go Keith!)</P> <H3>TFS</H3> <P>Before we dive into a discussion of the changes in this release of D3, let me take a moment to mention what a great experience I’ve had using TFS 2010. The new basic installation is AMAZING. I repurposed an old single-core AMD PC I had sitting around the house, upgraded it’s RAM a little so that it now has 2GB, reformatted it with a clean copy of Win7, and installed TFS 2010 beta2 plus VS2010 beta2, and then followed the instructions in <A href="" mce_href="">Jason Zander’s blog post</A> and the post from Brian Harry that he links to. Setting things up was super smooth, which is a huge change. The last time I tried to set up TFS for myself I couldn’t even figure out what software or licenses I would need and eventually got disgusted and just decided to use something else for my source control. This time it was as easy as could be, and the final result is a server I can access from home, work or anywhere on the internet, full integration of source control with VS including simple offline support, a build server that does a continuous integration build whenever one of us makes a checkin and even a bug/work item tracking system which we haven’t fully started using yet, but is ready to go. Oh yeah, and not only can I use the built-in client in visual studio, but if I need to check on something from a machine that doesn’t have VS installed I can just connect to the web interface. Now there’s no excuse for not having good source control.</P> <H3>D3 Changes in this Release</H3> <P>OK. So what have I been doing with D3 this time around? Well, as usual I’ve been making general investments in the code including things like refactoring and renaming the projects so that everything follows a three-part naming convention starting with D3, ensuring that every project had source analysis, stylecop and warnings as errors turned on and then correcting any issues which turned up, etc. The really interesting things, though, are that I switched from the standard codegen to POCO classes generated using a T4 template, separated the POCO entities into one project and the context and other database specific bits into another project, and then began porting over some events and actor actions and writing tests for them.</P> <P>Most of this is pretty straightforward and you can look at the code in the release to see what’s up. I started with one of the POCO templates we’ve release as part of the EF Feature CTP, but because some things have been changing between beta 2 and RC and such, I copied not only the core template but also the EF.Utility.ttinclude file into the project so that I’d be isolated from changes there. Once those things completely settle down, I’ll look at getting ride of the ttinclude file and just using the standard one. On a similar note, I’m still using my customizations to the model-first workflow to generate C# code for creating the database rather than just a SQL file, but .net 4 RC now has methods on ObjectContext which will create a database or check for its existence based on just the SSDL used in the context, so I’ll probably switch to using that in a future release of D3. (It’s always great when I can get rid of some code and just use the EF’s built-in mechanisms).</P> <P>The really interesting part, though, was when I started porting over the first core DPMud capabilities from the previous version code base. Most of what makes DPMud really work is in the Actor actions and the Event entities. This is how we simulate things happening in the virtual world of DPMud—methods are called on the Actor class for various actions like entering or leaving a room, picking up an item, etc. Those methods do some validation, modify the database state as appropriate and then create one or more Event entities which notify other actors about what happened. The normal workflow for a player client in DPMud is a combination of parsing string input to call these action methods and periodically querying the database for new Events and then displaying a string representation of the Events from the perspective of the player. The Event classes contain the business logic to compute the string describing what the event means so that when I see the event that I picked something up it will return a string saying “You picked up foo.” But when someone else sees that same Event, they will see “Danny picked up foo.” or something like that. </P> <P>The key point is that these action methods and the events that go with them are really the core business logic of the system, and doing a good job of testing them is a critical mission for the D3 rewrite of DPMud. Since we’re going to need a lot of tests, and since we want to isolate the part of the system under test, it seems clear that we need to create a fake data access layer. Further, while I’m a big fan of the new IObjectSet<T> interface which is designed to make this process simpler, in this case we’ve got some tricky dependencies. The action methods are members of entity classes which we need to make sure have no dependencies on the DAL, but at the same time those methods need services from the DAL such as the ability to add new events or modify entity state.</P> <H3>Inversion of Control</H3> <P>The standard way to handle this kind of problem is an inversion of control / dependency injection pattern which has three parts:</P> <OL> <LI>We define interfaces as part of the same assembly containing the entities which describe the capabilities that the entities require from the DAL. </LI> <LI>The DAL implements those interfaces, and we can create fake implementations of those interfaces for use by unit tests. </LI> <LI>We create some mechanism to make an implementation of the interfaces available to the entities at runtime—either the “real” DAL implementation when the app is in production, or the fake implementation for the tests. </LI></OL> <P>The first two aren’t terribly difficult… T4 to the rescue! I created a new interface ID3ObjectSet which is essentially the same as IObjectSet but specific to D3 and part of the D3.Model assembly rather than in System.Data.Entity.dll. Then it was easy to make a T4 template which generates an interface representing the context which has an ID3ObjectSet property for each top-level entity set. It was also a simple exercise to modify the template generating the context so that the generated context implements that interface except that ObjectSet<T> which provides the core capabilities for working with the sets in the database unfortunately does not implement the new ID3ObjectSet interface. So I created a simple class which wraps an ObjectSet<T> instance and implements the interface.</P> <P>The third part of the pattern can be done several ways. Sometimes there’s a dependency injection mechanism which supplies the implementation whenever the objects that need it are constructed, but in this case the EF generally creates entity instances as part of materializing queries which makes it harder to inject dependencies at this time. So the approach I took instead was to make a simple IoC container the entities can call to get an implementation from. Often this is done with a general purpose IoC container like Unity or something along those lines, but in this case I created a small special purpose implementation just for the context which I call D3DB. </P> <P>It has two parts. The core piece is just a thread-local-storage static property whose type is the context interface (ID3Context):</P><PRE class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">[ThreadStatic] private static ID3Context current; public static ID3Context Current { get { if (current == null) { throw new InvalidOperationException("Cannot access context without setting it first."); } return current; } set { current = value; } }</PRE> <P mce_keep="true"> </P> <P>This means entities can interact with the DAL (or a fake of it) by calling methods on “D3DB.Current”. I have a support method for actions, for example, which helps with creating new events that looks like this:</P><PRE class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">public void Log(Event e) { if (e == null) { throw new ArgumentNullException("e"); } e.SourceActor = this; e.SourceRoomId = this.RoomId; D3DB.Current.Events.AddObject(e); }</PRE> <P>The second part is a disposable object with a constructor that sets the static property and clears it when the object is disposed. That way we can have a “using” block that sets the implementation and clears it when the block goes out of scope. Also really simple except for the fact that correctly implementing the dispose pattern has several parts:</P><PRE class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">public class D3DB : IDisposable { // thread-local-static property here... public D3DB(ID3Context context) { if (context == null) { throw new ArgumentNullException("context"); } if (current != null) { throw new InvalidOperationException("There already is a current DB."); } Current = context; } // // Dispose pattern // private bool disposed = false; public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (!this.disposed) { if (disposing) { var context = Current; Current = null; context.Dispose(); } disposed = true; } } ~D3DB() { Dispose(false); } }</PRE> <P>This means, for instance, that I can create a fake context which uses HashSets instead of real ObjectSets, and then it’s easy to write a test where the action methods will use the fake context in the test but still have them use the real context in production. Something like this:</P><PRE class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">[TestMethod] public void Enter_ValidRoom_SetsActorRoomAndLogsEnterEvent() { using (new D3DB(new FakeD3Context())) { var actor = new Actor { Id = 1, RoomId = 1 }; var room = new Room { Id = 2 }; Assert.AreNotSame(room, actor.Room); actor.Enter(room); Assert.AreSame(room, actor.Room); Assert.AreEqual(room.Id, actor.RoomId); var e = (EnterEvent) D3DB.Current.Events.Single(); Assert.AreEqual(actor.Id, e.SourceActorId); Assert.AreEqual(room.Id, e.SourceRoomId); } }</PRE> <P>I guess that’s more than enough for one post. Have a look, and let me know if you have questions or feedback. Next release, hopefully we’ll get more of the end-to-end infrastructure together so we can begin to have a working app rather than just a bunch of tests…</P> <P>- Danny</P><img src="" width="1" height="1">Daniel Simmons - MSFT Delete and 1-to-1 (or 1-to-0..1) Relationships<P>Here.</P> <H3>The problem</H3> <P:</P> <BLOCKQUOTE> <P>System.Data.UpdateException: A relationship is being added or deleted from an AssociationSet 'FK_MyObjectDetail_MyObject'. With cardinality constraints, a corresponding 'MyObjectDetail' must also be added or deleted.</P></BLOCKQUOTE> <P>It turns out, he wasn’t the only one running into this problem. Some links (chances are there are others):</P> <UL> <LI><A href="" target=_blankEF Forum: Cascade Delete Fails on One to One Relationships</A> </LI> <LI><A href="" target=_blankstackoverflow: Deleting an entity that has an un-loaded child entity</A> </LI> <LI><A href="" target=_blankstackoverflow: Entity Framework – Different multiplicity in SSDL and CSDL</A> <- scan toward the bottom of this thread to see the issue come up </LI></UL> .</P> <H3></H3> <H3>Some background</H3> <P>The two colliding features are cascade delete and an important related concept we call “relationship span.” To learn more you can take a look at these blog posts:</P> <P>Cascade delete -- </P> <UL> <LI><A href="" target=_blankTip 33 – How cascade delete really works in EF</A> </LI> <LI>From the EF FAQ: <A href="" target=_blankWhy does deleting an entity from my context not delete all related entities when I have cascade delete defined in the model?</A> </LI></UL> <BLOCKQUOTE> <P><EM>Key lessons for us: </EM></P> <OL> <LI><EM>Cascade delete in the model just deletes things that are in memory. If you want to delete things that the database has which haven’t been loaded in memory, then you need to configure cascade delete in the database as well as in the model.</EM> </LI> <LI><EM>Cascade delete in the model kicks in when you delete the principal entity – NOT when you remove the relationship.</EM> </LI></OL></BLOCKQUOTE> <P>Relationship span --</P> <UL> <LI><A href="" target=_blankTip 11 – How to avoid Relationship Span</A> </LI> <LI><A href="" target=_blankJust delete it already!</A> </LI> <LI><A href="" target=_blankFiltered association loading and re-creating an entity graph across a web service boundary...</A> </LI></UL> <BLOCKQUOTE> <P><EM>Key lessons for us: </EM></P> <OL> <LI><EM>When you have independent associations, unless you use MergeOption.NoTracking the EF will automatically retrieve the relationship info (ie. the EntityKey) for relationships where the other side has multiplicity 1 or 0..1.</EM> </LI> <LI><EM>When you delete an entity, the EF deletes all relationships involving that entity. </EM></LI> <LI><EM, </EM><EM>it implies that the EF expects you to delete both entities.</EM> </LI></OL></BLOCKQUOTE> <H3>So what the heck is going on?</H3> .</P> <P.</P> <H3>The workaround</H3> <P.</P> <P>So all we need to do is find ObjectStateEntries representing relationships in this and only this state and call AcceptChanges on them each time right before we do SaveChanges.</P><PRE class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;"); } }</PRE> <P>The key ideas behind this code are:</P> <OL> <LI>We put these methods in the partial class for our context and use the OnContextCreated partial method to make sure we register a handler for the SavingChanges event every time a context instance is created. </LI> <LI>The main part of the event handler is a LINQ to Objects query over the ObjectStateEntries which are deleted 1-to-1 relationships. </LI> <LI. </LI> <LI>Since the relationship entry is in the deleted state, if we call its AcceptChanges method, then it will be detached from the context as though it never existed. </LI></OL> <P>Once we put this code in place, each time SaveChanges is called, the offending relationship entries will be hunted down and killed before a save happens.</P> <P>Oh, and if you prefer VB.net, here’s the VB version of this code. Note that I have not tested the VB version, but others have, and it seems to be OK…</P><PRE class="brush: vb; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;" ' AcceptChanges on this state entry so the update pipeline ignores it -- ' depend on the database to do the cascade entry.AcceptChanges() End If Next<BR> End Class</PRE> <P>- Danny</P><img src="" width="1" height="1">Daniel Simmons - MSFT Foreign-Key Relationships in EF4<p>I try to keep up with blog posts on the net which involve the entity framework, and this afternoon I came across <a href="">this post</a>:</p> <p. </p> <ol> <li. <br /> <br /. <br /> <br /></li> <li. <br /> <br /></li> <li. </li> </ol> <p.</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT Query Results Caching with EF4<p>Recently.</p> <p.</p> <p>The key observations that made this possible were:</p> <p.</p> <p.</p> <p.</p> <p>4) If we are going to add query caching, it’s critical that we also add query compilation, because for LINQ compilation of queries makes them *dramatically* faster. A caching solution without compilation would be pretty much pointless.</p> <h3>Usage</h3> <p>What all this means is that we can write code which looks like this:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">var cust = ctx.QueryWithCaching("query1", "ALFKI", (IQueryable<Customer> customers, string id) => customers.First(c => c.CustomerID == id));</pre> <p.</p> <p>Similarly, if we just want compilation but not caching, we can write:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">var projList = ctx.QueryAutoCompiled("query2", "UK", (IQueryable<Customer> customers, string country) => from c in customers where c.Country == country select new CustomerProjection() { Contact = c.ContactName, Country = c.Country });</pre> <p.</p> <h3>Implementing QueryAutoCompiled</h3> <p>First let’s look at what it takes to implement QueryAutoCompiled, then we’ll take on caching (which uses this method when it actually executes a query against the database). The auto-compiling task has three parts: </p> <ol> <li>Appeasing the Compiler - creating a set of flexible overloads</li> <li>Transforming the Lambda - changing the expression from what is passed to our method into the form required by CompiledQuery.Compile</li> <li>Compiling, Caching & Executing</li> </ol> <p.</p> <h3></h3> <h3>Compiling, Caching & Executing</h3> <p: <a href="">ConcurrentDictionary</a>.</p> <p.</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">private static ConcurrentDictionary<string, object> compiledQueryCache = new ConcurrentDictionary<string, object>();</pre> <p:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;");</pre> <p>The last line actually executes the query and returns the results.</p> <h3>Transforming the Lambda</h3> <p.</p> <p.</p> <p.</p> <p.</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;"); } }</pre> <p>Given that class, the code to rewrite the expression is just these three lines:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar:]);</pre> .</p> <p…</p> <h3>Appeasing the Compiler</h3> <p.</p> <p.</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; highlight: [1,2,3,4,19,20,21]; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;"><#)#>); } <# } #></pre> <p:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;"); }</pre> <p.</p> <p.</p> <p. </p> <p.</p> <p>So the second overload (in T4 form) looks like this:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;"(); }</pre> <p.</p> <p.</p> <h3></h3> <h3>Finally… Implementing QueryWithCaching</h3> <p. </p> <p>In this case we only want to cache the results for the lifetime of the context (not in a static that can be used across multiple contexts). So we just add a regular Dictionary field to the context:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">private Dictionary<object, object> queryResultCache = new Dictionary<object, object>();</pre> <p.</p> <p>The portion of the T4 template for the single result overload of the method ends up looking like this:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;"; }</pre> <p>This template fragment depends especially on the following support function added elsewhere in the template:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;") + "))"; }</pre> <p.</p> <p>The resulting overload for two parameters, for example, looks like this:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;"; }</pre> <p.</p> <p:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">result = this.QueryAutoCompiled(queryName, arg1, arg2, expression).ToList();</pre> <p>That’s it! Queries with simple caching, and all you have to do is drop a template into your project, edit the line at the top to point to your EDMX file, and then change your query execution code to call one of these methods.</p> <p>If you want the whole solution strung together, you can find it <a href="">here</a>. But don’t forget the caveats: This isn’t full production tested code, and the caching only works for the lifetime of a single context instance. Real, full-featured caching will have to wait for a future release of the EF.</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT surroundings for safety<p>And now for something completely different… My wife took this picture recently, and every time I see it I can’t decide whether I want to laugh or run away screaming “NOT SAFE! NOT SAFE!”</p> <p><a href=""><img style="border-bottom: 0px; border-left: 0px; display: block; float: none; margin-left: auto; border-top: 0px; margin-right: auto; border-right: 0px" title="Check Surroundings for Safety" border="0" alt="Check Surroundings for Safety" src="" width="644" height="430" /></a></p> <p>I’ll leave figuring out the context of the picture as an exercise for the reader.</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT Merge Options and Compiled Queries<p>Recently.</p> <h3>Part 1: Merge Options</h3> <p.</p> <p>For the purpose of today’s post, however, let’s home in on another aspect of MergeOptions which is that they are a property of a <strong><em>particular query instance</em></strong>:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">//);</pre> <p:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">var query1 = from c in ctx.Customers where c.Country == "UK" select c; ctx.Customers.MergeOption = MergeOption.NoTracking; var customer1 = query1.First(); Debug.Assert(customer1.EntityState == EntityState.Unchanged);</pre> <p>In the code above, what do you expect? Would the Assert fire or not? What about with this code:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">ctx.Customers.MergeOption = MergeOption.NoTracking; var query2 = from c in ctx.Customers where c.Country == "UK" select c; var customer2 = query2.First(); Debug.Assert(customer2.EntityState == EntityState.Detached);</pre> <p.</p> <p>If you look at the generated code for the Customers property on the context you will see the ObjectQuery caching in action:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">public ObjectSet<Customer> Customers { get { if ((_Customers == null)) { _Customers = base.CreateObjectSet<Customer>("Customers"); } return _Customers; } } private ObjectSet<Customer> _Customers;</pre> <p.</p> <p>Are you with me so far? Put that info on the backburner, and let’s look at how compiled queries work.</p> <h3>Part 2: Compiled Queries</h3> <p.</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">static Func<NorthwindEntities, string, IQueryable<Customer>> compiledQuery = CompiledQuery.Compile((NorthwindEntities ctx, string country) => (from c in ctx.Customers where c.Country == country select c));</pre> <p>When you are ready to execute the query, you invoke the delegate and pass in the context and parameters in order to get back the ObjectQuery.</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">var query = compiledQuery(ctx, "UK");</pre> <p>This query can then be used like any other:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">foreach (var customer in query) { // do some stuff }</pre> <p:</p> <ol> <li><strong>No real work happens until the first time the query is actually executed.</strong> . <br /></li> <li><strong>If you create a new query based on the compiled query, it will work—you just won’t get any benefit from the compilation.</strong> . </li> </ol> <p>The third unexpected behavior deserves a whole section of its own…</p> <h3>Part 3: Combining Merge Options and Compiled Queries</h3> <p.</p> <p>So if you take the following code (using the same compiled query delegate created in the example in part 2 above):</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: false;">using (var ctx1 = new NorthwindEntities()) { ctx1.Customers.MergeOption = MergeOption.NoTracking; var query1 = compiledQuery(ctx1, "UK"); var customer1 = query1.AsEnumerable().First(); Debug.Assert(customer1.EntityState == EntityState.Detached); }</pre> <p>Then the compiled query will use the no tracking merge option so the assert doesn’t fail. If you follow that with this code…<.AsEnumerable().First(); Debug.Assert(customer2.EntityState == EntityState.Detached); }</pre> <p>The second execution will also use the NoTracking merge option even though the ObjectQuery on its context has AppendOnly as its merge option.</p> <p.First(); Debug.Assert(customer2.EntityState == EntityState.Detached); }</pre> <p>Then the assert would fail because the AppendOnly merge option would be used and the entity state would end up Unchanged rather than Detached.</p> <h3></h3> <h3>Summary</h3> <p>What can I say? Compiled queries are tricky, and when you combine them with merge options they get even trickier, but the performance benefit can be huge, so it’s worth learning about how they work. Keep in mind these three potentially unexpected behaviors:</p> <ol> <li>No real work happens until the first time the query is actually executed. </li> <li>If you create a new query based on the compiled query, it will work—you just won’t get any benefit from the compilation. </li> <li>The merge option used with a compiled query is determined by the merge option specified on the ObjectQuery used as the basis for the compiled query at the time the query is first executed. </li> </ol> <p>Now, back to trying to find time to complete the next phase of D3 and get that posted. :-)</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT Concurrency: It’s a Matter of Trust<p><em><font size="1">(with apologies to Billy Joel <grin>)</font></em></p> <p>Last summer I wrote a few articles for MSDN Magazine about N-Tier patterns with the Entity Framework (<a title="Entity Framework- Anti-Patterns To Avoid In N-Tier Applications" href="">Entity Framework- Anti-Patterns To Avoid In N-Tier Applications</a>, <a title="Entity Framework- N-Tier Application Patterns" href="">Entity Framework- N-Tier Application Patterns</a> and <a title="Building N-Tier Apps with EF4" href="">Building N-Tier Apps with EF4</a>). One of the most fun parts of the project has been the emails I’ve gotten from various readers with follow-up questions and interesting discussions. Today I responded to one of those messages and after writing up the email I thought the discussion might be of interest to more than just that one reader, so I’m repurposing the content here. Further thoughts and discussion are welcome.</p> <p>The message was specifically regarding <a href="" target="_blank">Anti-Pattern #3: Mishandled Concurrency</a> from the first article, and essentially the issue was around how to handle concurrency given the design principle that a good service should not assume trust with its clients. The contention was that in order to avoid trusting the client, the service should either re-load data before doing an update in order to check what was really changed and validate consistency with business rules or it should digitally sign the original version of the entity, send it to the client and then verify it when the result comes back before relying on that data.</p> <p>Here’s my response:</p> <p>I agree that trust is a significant issue, but I'm not sure that your two options are the only ones or even that they are the preferable ones in many situations. The critical questions, I think, are: What are we trying to protect? And what kinds of things are we protecting from? </p> <p>First off, we have the issue of whether or not the client is who we think it is (a question of authentication). If it's possible for someone with evil intent to make a call to the service and pretend to be someone other than who we think, then the service might allow that person to accomplish a task that should not be allowed, which of course would be a major issue, but that's a concern of other parts of the solution--not what we're talking about when we are looking at concurrency. If a request is modified between the client and the service (maybe a man-in-the-middle attack or something like that), then that has the same kind of problem, but again that's not so much a matter for the persistence or service implementation part of the application so much as it is for the messaging substrate of WCF or the like. </p> <p>So more relevant to this case we have the issue of whether or not the operation being requested by the client should be allowed (authorization / validation). It seems to me that this has three parts:</p> <p>1) Is the client allowed to change a particular value or not? Most of the time this kind of check is relatively simple / static, and unless I'm missing something it's never really affected by a question of concurrency. In any case we're not really looking at whether or not the client has really changed something compared to what's in the database--it's just a matter of whether the client is allowed to change the values at all.</p> <p>2) Do the requested changes validate? Are they self-consistent? Do they violate any of our business rules? Again, this is a matter just of whether or not the request makes sense. It's not really a matter of whether the client is lying about what it changed or didn't or anything of the kind. The only time this would be an issue is if we had a business rule that said something like "the client is allowed to increase the value of their insurance coverage by up to 5% but no more than that." If that were the case, then of course we would not be able to trust the original value of the insurance coverage sent by the client--we would either need to re-query the database or send round-trip signed data or something, but this kind of rule is much less common, and again, the question is not one of concurrency so much as it is about reference data. (I'll also point out that for this kind of business rule we'd probably want it to actually be that it can't be increased by more than 5% during some particular time period or something like that rather than per request or else we'd be open to other kinds of attacks.)</p> <p>3) What about concurrency? Finally we get to the heart of the matter. Can we trust the client to supply the correct original value for the concurrency token? What if there is an evil or buggy client which gets past the authentication checks and then makes calls to the service with a concurrency token that does not match the original value sent? If the value is modified, there are two possibilities. Either the value is some random thing that doesn't match what's in the database in which case the request will fail (keep in mind that the EF will use that concurrency token when it attempts to update the database so it is checked against the current value in the DB before any changes go through--just more efficiently because that check doesn't require an extra round-trip/it's part of the update statement), or someone else has modified the database in the meantime and the value happens to match the new concurrency token so the update goes through when it shouldn't have. This last one is the only case we really have to worry about, but we do have to keep it in perspective. We're talking about a request that passed the other checks--it's something the client should be allowed to do--just not if someone else has modified the data between when it was read and when the update went through. Further, the client has to correctly anticipate the next value of the concurrency token as well as how many times it has been updated, etc.</p> <p>Yes, I can imagine some cases where the last case could cause a problem if my data is very sensitive or my service is exposed on the internet instead of an intranet, and I allow public access or something like that. Someone might write a client that randomly tries things as a vandalism/denial of service type thing, and in that case I might go to the length of signing things sent to the client or of caching the original values so that I can use them instead of what I get back from the client, but I'd say that in the majority of cases exploiting this kind of concurrency issue isn't realistic and trusting the original value of the concurrency token sent from the client is a good, pragmatic approach.</p> <p>- Danny</p> <p>P.S. Happy New Year! May we all have a blessed 2010.</p><img src="" width="1" height="1">Daniel Simmons - MSFT: Rides Again – Now on VS2010 Beta 2<p>And you thought I might never post on D3 again. I know. I know. Jedi knights don’t wait 6 months for a follow-up blog post.<a href="#_b47d84a174ea44beaf1ed22cba91d1e1"><sup>1</sup></a><sup> </sup>Well, the wait is finally over. I’ve put a <a href="" target="_blank">new release of D3</a> up on code gallery. In this release the code has been updated to work with VS2010/.Net 4 beta 2. The release also includes a reorganization of project names and some new functionality, but I’ll talk about those in a future post or two. For this post, let’s look at the three broad steps involved in upgrading the project since they illuminate EF features and changes you might need to make to move some other project from beta 1 to beta 2.</p> <h3>Step 1 : Basic Compilation</h3> <p>When I loaded the solution in beta 2 and tried to recompile, the first problems I encountered were issues with stylecop—there were a number of errors referring to a file called ".NETFramework,Version=v4.0.AssemblyAttributes." I found a solution in <a href="">this thread</a>.</p> <p>The next change needed was to react to the EF’s breaking change from beta 1 to beta 2 where the ContextOptions property DeferredLoadingEnabled was renamed to LazyLoadingEnabled. The semantics of this property didn’t change—just it’s name, so this was a simple fix.</p> <p>Once compilation was passing, the next step was to start leveraging new EF features in beta 2.</p> <h3>Step 2 : Adding FKs to the Model</h3> <p>Beta 2 not only added runtime support for FKs in the model, but the designer is now smarter about them, and they are turned on by default. So I had a few options:</p> <ol> <li>I could do everything by hand:</li> <ul> <li>Manually add the FK properties.</li> <li>Go to each association and add the referential integrity constraint between the FK property on the dependent entity and the primary key on the principal entity.</li> <li>Remove the association set mappings for the associations since the mapping will now be handled by mapping the FK property on the entity just like any other entity.</li> <li>Finally, either manually add the mapping for that property or regenerate the database from the model. <br /></li> </ul> <li>Since the model is not yet all that big, I could recreate it from scratch, requesting FK properties with each association. <br /></li> <li>Given that I already had a database with foreign key columns (just not exposed on the entities), I could reverse engineer the model from the database with the option turned on to generate FK properties on the entities.</li> </ol> <p>I choose option #3 for no reason except that it was interesting to come full circle from a model-first database to a DB first model and then back to model first when further changes are needed. :-) In the process I encountered one may difficulty which was that reverse engineering from the database uses a simplistic algorithm for naming things that works reasonably well when you only have one relationship between any two entities, but when you have more than one relationship between the same pair of entities things get a bit confusing. In D3 I have two identical relationships between Room and Exit--one for leaving a room by way of an exit and the other for entering a room from an exit. The one tricky thing is determining which association was connected to which navigation and FK property. First you need to right click on the navprop and choose the "select association” menu item. Then you can double click on the association to see a dialog which describes the referential constraint including the FK property.</p> <h3>Step 3 : Updating Model First Workflow & Templates</h3> <p>The final step was to update the D3 model first workflow and templates both because the bug mentioned in <a href="">this previous post</a> was fixed and because of other changes and improvements made by the designer team in beta 2. You can read the <a href="">details of those improvements on the ado.net team blog</a>, but the summary is that there are now two properties available in the EDM designer when you click on an empty place on the designer surface. The “Database Generation Workflow” property allows you to specify which workflow xaml file will be used when you choose to generate the database from the model. The default value picks up the file from one of the VS installation directories, but it can also point to a per user directory (where a new xaml file could be installed by a VSIX package for instance) or to a location relative to your project directory. The “DDL Generation Template” property specifies the T4 template used in the step for generating the data definition language based on the SSDL.</p> <p>I was able to keep the overall strategy the same—for the most part just comparing the original versions of the files to the new versions and reacting to things like the renaming of the namespace Microsoft.Data.Entity.Design.Pipeline.EdmExtension to Microsoft.Data.Entity.Design.DatabaseGeneration.Activities, or the change in xml namespace for the StoreGeneratedAttribute in SSDL from the SSDL namespace to the annotations namespace. </p> <p>In the case of the SsdlToCode template there were more extensive changes, so I took the new template from the system and reapplied the same customization I originally made. Once the updates were complete, I diff’d the new template against the old one and the changes generally fall into the following buckets:</p> <ul> <li>Updating assembly and namespace names</li> <li>Moving re-usable utility code out of the template and into a shared ttinclude file</li> <li>Supporting multiple sql server schemas in the same ssdl file</li> <li>Supporting SQLCE by skipping things it doesn't allow like specific schema names</li> </ul> <p>While I was at it, I also applied a couple of improvements which a colleague suggested could be made to the default template including removing the “WITH NOCHECK” clauses when creating constraints and adding statements to create indexes for each foreign key.</p> <h3>Final Result</h3> <p>The final result of all this is that D3 is up and running on beta 2. We’re back to the functionality we had last June with beta 1 plus we now have FK properties and a somewhat more efficient database. This is just the foundation needed so that we can start filling in more of the overall application.</p> <p>- Danny</p> <h3></h3> <p><span style="font-weight: bold; text-decoration: underline">Footnotes</span> <br /><a id="_b47d84a174ea44beaf1ed22cba91d1e1"></a>1. For some of us increasingly old geezers, the cartoon <a href="">Bloom County</a> by Berke Breathed was a major part of our lives once upon a time. So I can’t help occasionally throwing out obscure references to it. You can see the specific strip I’m referring to <a href="">here</a>.</p><img src="" width="1" height="1">Daniel Simmons - MSFT revisited<p:</p> <p>About a year ago I wrote <a href="" target="_blank">a post presenting a little extension method AttachAsModified</a>.</p> <p.</p> <p>In addition, the earlier version of this method actually had two overloads—one for the case where your entity implemented IEntityWithKey so it could use the key to Attach the entity and another which did not require that interface but did require you to supply the EntitySet name. With the introduction of ObjectSet<T> this can be further simplified because we can just add our AttachAsModified extension method to ObjectSet<T> and remove the need for the overload which takes an entityset name while still supporting a method signature that does not require IEntityWtihKey. Even better, the signature is strongly typed to match the type of the set so you get nicer intellisense support.</p> <p>The final code for the new method is just:</p> <pre class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;">public static void AttachAsModified<T>(this ObjectSet<T> objectSet, T entity) where T : class { objectSet.Attach(entity); objectSet.Context.ObjectStateManager.ChangeObjectState(entity, EntityState.Modified); }</pre> <p>At this point the whole extension method seems pretty superfluous, which I guess is the goal of improving the EF in the first place. :-)</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT’s time to rip up EntityBag and throw it away<p: <a href=""></a> That site also has links to each of the relevant blog posts which is where the real interesting data lives.</p> <p.</p> <h2></h2> <h4><u>If not EntityBag, then what have we been doing for N-tier?</u></h4> <p>The team has been working hard to make N-Tier much easier with EF4, and the result is several key features that are available now when you combine <a href="" target="_blank">Beta 2 of VS2010/.NET 4</a> and yesterday’s release of what we call the <a href="" target="_blank">EF Feature CTP 2</a>. </p> <p>The first critical addition to the product are core changes to the ObjectContext and ObjectStateManager APIs which make working with entities which spend some time detached from a context MUCH easier. These changes were described about a year ago in this <a href="" target="_blank">post to the EF Design blog</a>. There have been a few small adjustments to the APIs as we went through the experience of the first and second betas for this release, but the core changes are all there along with much of the reasoning behind them. </p> <p.</p> <p. </p> <h4><u>How can I learn more about all this?</u></h4> <p>In addition to the posts referenced above, I recommend taking a look at the <a href="" target="_blank">Self Tracking Entities walkthrough</a>:</p> <ol> <li><a href="">Entity Framework: Anti-Patterns To Avoid In N-Tier Applications</a> </li> <li><a href="">Entity Framework: N-Tier Application Patterns</a> </li> <li><a href="" target="_blank">N-Tier Apps and the Entity Framework: Building N-Tier Apps with EF4</a> </li> </ol> <p>So, death to EntityBag! Long live EF4!</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT: Building Great Software is a Battle, Don’t Leave Any Assets on the Sidelines<P><A href="" mce_href="">Another D3 release</A> <A href="" mce_href="">Writing Solid Code</A> to help get me on the path to building world-class, reliable software. While the book is a bit dated these days, the key message I took away is as important to me now as it was then:</P> <BLOCKQUOTE> <P>Writing software is <STRONG><U>hard</U></STRONG>. So hard, in fact, that we need to play every trick in the book (and then some) if we’re going to have any hope of really getting it right.</P></BLOCKQUOTE> <P. </P> <H3></H3> <H3>Often Overlooked Asset #1: Analysis Tools</H3> <P…</P> <P><A href="" mce_href=""><IMG style="BORDER-RIGHT-WIDTH: 0px; DISPLAY: inline; BORDER-TOP-WIDTH: 0px; BORDER-BOTTOM-WIDTH: 0px; BORDER-LEFT-WIDTH: 0px" title=CodeAnalysis border=0 alt=CodeAnalysis</A> </P> <P.</P> <P.</P> ):</P> <DIV id=codeSnippetWrapper><PRE class="brush: xml; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;"><Import Condition="'$(StyleCopPath)' != ''" Project="$(StyleCopPath)\Microsoft.StyleCop.targets" /></PRE></DIV> <P.</P> <P>By default both code analysis and stylecop report things as warnings, but I wanted to make certain everything was fixed so I also added a couple properties to the csproj files that change them into errors rather than warnings:</P> <H3></H3><PRE class="brush: xml; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;"><CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors> <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings></PRE> <DIV>One interesting issue that I ran into is that the stylecop VS integration doesn’t seem to work for VS2010. Most of the time that’s fine because the VS integration is really only about editing the stylecop settings to turn off certain checks and the like. The build integration that I describe above is all you need to make sure your source code is checked and warnings or errors appear in the VS IDE, but if you do have something you want to turn off (and in my case I decided to turn off the requirement to add documentation comments to everything as well as a few other small cosmetic things) then you need to edit the stylecop settings. Fortunately if you double click on a Settings.Stylecop file in windows explorer, it will launch the UI for editing the settings independent of VS. Probably there’s some other way to bootstrap this process, but for me the easiest way was to just launch the VS integration in VS2008 and then copy the settings file to my other project. Once you have the right settings file, you just need to put it in the root directory of your project.</DIV> <H3>Asset #2: Check Constraints in the DB</H3> <DIV>I don’t recommend trying to add check constraints for everything, but there are places where the database is in a good position to enforce constraints on your data. In many cases the EDM itself will enforce constraints—in the case of D3, the association between Actor and Room has cardinality 1:* rather than 0..1:* which means that every Actor must have a Room, and the EF will enforce that. In the case of the association between Item and Room and the association between Item and Actor, we need there to be one and only one of those associations for each Item at any one time, and the EDM doesn’t have any way to represent that. For both of these kinds of issues we will want to add business logic to the classes to make sure these things are prevented well before the time of saving changes, but it’s still good to enforce them in depth.</DIV> <DIV> </DIV> <DIV>In the case of the item associations, I decided to use a check constraint in the database. So I added a partial method call to the generated database creation code so that it’s easy to do this:</DIV><PRE class="brush: csharp; auto-links: true; collapse: false; first-line: 1; gutter: false; html-script: false; light: true; ruler: false; smart-tabs: true; tab-size: 4; toolbar: true;"]"); } }</PRE> <DIV id=codeSnippetWrapper>Which brings me to the last asset in my list for today.</DIV> <H3>Asset #3: Unit Testing Your Brains Out</H3> <DIV>I’m sure this isn’t anything new to most of you, but like many others I’d say that I probably can’t emphasize enough the impact really drinking the unit testing kool-aide can have on your development. D3 truly doesn’t have all that much code yet, and every time I add more tests I discover more issues. Testing makes your code better as you go, plus it creates both documentation and automated verification which will help prevent regressions later.</DIV> <H3>Conclusions</H3> <DIV>There are always things you can do to make your code better. Even more important, though, is to find things you can do to make CERTAIN that your code is better and will stay better. Your software will benefit from it—D3 already has.</DIV> <DIV></DIV> <P>- Danny</P><img src="" width="1" height="1">Daniel Simmons - MSFT: Fun with Concurrency and Model First – Part 2<p>In <a href="">my last post</a>.</p> <p <a href=""><img style="border-bottom: 0px; border-left: 0px; margin: 5px; display: inline; border-top: 0px; border-right: 0px" title="DbGen-xaml" border="0" alt="DbGen-xaml" align="left" src="" width="316" height="389" /></a. </p> <p). </p> .</p> <p.</p> <p>The relevant part of the resulting XAML looks like this:<"><</span><span style="color: #800000">Sequence</span><span style="color: #0000ff">></span><br /> ...<br /> <span style="color: #0000ff"><</span><span style="color: #800000">edm1:SsdlToDdlActivity</span> <span style="color: #ff0000">ExistingSsdlInput</span><span style="color: #0000ff">="[ExistingSsdl]"</span><br /> <span style="color: #ff0000">SsdlInput</span><span style="color: #0000ff">="[Ssdl]"</span><br /> <span style="color: #ff0000">DdlOutput</span><span style="color: #0000ff">="[Ssdl]"</span><br /> <span style="color: #ff0000">TemplatePath</span><span style="color: #0000ff">="DbGen\SsdlUseRowVersion.tt"</span> <span style="color: #0000ff">/></span><br /> ...<br /><span style="color: #0000ff"></</span><span style="color: #800000">Sequence</span><span style="color: #0000ff">></span></pre> </div> <div> </div> <div>For D3, the fix-up we need is to find properties in the SSDL which correspond to properties in the CSDL with the following characteristics: type is binary, maxlength is 8, concurrency mode is fixed and store generation pattern is computed. So we read the EDMX into an XElement tree in memory and then use LINQ to XML to gather the names of properties that need to be changed in the SSDL grouped by the name of the table in which they appear:< query = from property <span style="color: #0000ff">in</span> csdl.Elements(edm + <span style="color: #006080">"EntityType"</span>).SelectMany(t => t.Elements(edm + <span style="color: #006080">"Property"</span>))<br /> <span style="color: #0000ff">where</span> (<span style="color: #0000ff">string</span>)property.Attribute(<span style="color: #006080">"Type"</span>).Value == <span style="color: #006080">"Binary"</span> &&<br /> (<span style="color: #0000ff">string</span>)property.Attribute(<span style="color: #006080">"MaxLength"</span>).Value == <span style="color: #006080">"8"</span> &&<br /> (<span style="color: #0000ff">string</span>)property.Attribute(<span style="color: #006080">"ConcurrencyMode"</span>).Value == <span style="color: #006080">"Fixed"</span> &&<br /> (<span style="color: #0000ff">string</span>)property.Attribute(store + <span style="color: #006080">"StoreGeneratedPattern"</span>).Value == <span style="color: #006080">"Computed"</span><br /> group (<span style="color: #0000ff">string</span>)property.Attribute(<span style="color: #006080">"Name"</span>).Value<br /> by entitySetName(property.Parent, csdl, edm);</pre> <br /></div> <p:</p> ">string</span> entitySetName(XElement entityType, XElement csdl, XNamespace edm)<br />{<br /> <span style="color: #0000ff">return</span> (from set <span style="color: #0000ff">in</span> csdl.Element(edm + <span style="color: #006080">"EntityContainer"</span>)<br /> .Elements(edm + <span style="color: #006080">"EntitySet"</span>)<br /> <span style="color: #0000ff">where</span> entityTypeName(set) == entityType.Attribute(<span style="color: #006080">"Name"</span>).Value<br /> select (<span style="color: #0000ff">string</span>)set.Attribute(<span style="color: #006080">"Name"</span>).Value).Single();<br />}<br /><br /><span style="color: #0000ff">string</span> entityTypeName(XElement entitySet)<br />{<br /> XElement schema = entitySet.Parent.Parent;<br /> <span style="color: #0000ff">string</span> nsPrefix = schema.Attribute(<span style="color: #006080">"Namespace"</span>).Value + <span style="color: #006080">"."</span>;<br /> <span style="color: #0000ff">string</span> aliasPrefix = schema.Attribute(<span style="color: #006080">"Alias"</span>).Value + <span style="color: #006080">"."</span>;<br /> <span style="color: #0000ff">string</span> typeName = ((<span style="color: #0000ff">string</span>)entitySet.Attribute(<span style="color: #006080">"EntityType"</span>).Value).Trim();<br /><br /> <span style="color: #0000ff">if</span> (typeName.StartsWith(nsPrefix))<br /> {<br /> <span style="color: #0000ff">return</span> typeName.Substring(nsPrefix.Length);<br /> }<br /> <span style="color: #0000ff">else</span> <span style="color: #0000ff">if</span> (typeName.StartsWith(aliasPrefix))<br /> {<br /> <span style="color: #0000ff">return</span> typeName.Substring(aliasPrefix.Length);<br /> }<br /> <span style="color: #0000ff">else</span><br /> {<br /> <span style="color: #0000ff">return</span> <span style="color: #006080">"UNEXPECTED TYPENAME -- Where is the namespace?"</span>;<br /> }<br />}</pre> <br /></div> <p.</p> <h3></h3> <h2></h2> <h3>But what about derived types?</h3> <div>The above algorithm is all well and good for a model with no inheritance—every entity’s corresponding SSDL can be fixed up properly, but what if I want to put a concurrency property on an inherited type? As it turns out the EF doesn’t allow that; concurrency properties can only be present on the base type for an entity set. While I generally don’t like limitations of any kind, I can’t really think of any scenario where I’d want a concurrency property only on a derived type, and if I do have them on a base type, then I don’t need a different or additional property on any of the derived types.</div> <div> </div> <div>In fact, for standard TPH and TPC scenarios all of the properties for a particular entity instance appear in a single row of a single table. With TPH all of the instances are in a single table, and in TPC there are multiple tables involved but each type has its own table with all of the properties both from the base type and added in the derived type in that table. So the concurrency property will appear on each table and a rowversion will work just fine because any property changing in that row will cause the concurrency property to change.</div> <div> </div> <div>I got to thinking about TPT, though, and became a little worried. First off, TPT is the default inheritance strategy used by model first. In this strategy the properties from a base type all live in one table which has a row for every single entity in the set, while extra properties added by derived types are placed in their own tables with just those properties and the primary key. So for example, if I have entity type A with an id, scalar properties a and b, and a version, and I have type B which inherits from A and adds scalar property c, then the database would have table A with columns for id, a, b and version PLUS table B with columns for id and c. (A similar sort of case exists with entity splitting or mixed mode mapping, but it’s easier to think about the pure TPH, TPC and TPT scenarios.) In this kind of mapping, what happens if I have an instance of type B and only property c is modified? Then only table B needs an update, and the rowversion would not be updated since SQL Server only maintains it for changes to the row of the table where it resides. Doh!</div> <div> </div> <div>After banging my head against the wall for a little while, I decided to check with some folks on the team to see what might be done to solve this problem. Could we relax the restriction on concurrency properties only being allowed in the base type (in which case maybe for TPT scenarios we could add a second concurrency property on each derived type which would track changes on the extra tables)? As it turns out Colin Meek, one of the brightest guys I know and a key member of the team, was way ahead of me. He was able to tell me that the design for the update code that handles concurrency checks already takes this condition into account, and whenever any part of an entity which has a concurrency token is modified, the update system of the EF guarantees that the table with the concurrency value is “touched”. In cases where there isn’t a real modification being made, a fake update statement is issued which doesn’t really change anything but does force the rowversion to update.</div> <div> </div> <div>So, in the end there was nothing to worry about. The SSDL modification algorithm described above works just fine, and once the updated workflow XAML and template were put into place in the project, model first sales through cleanly and produces code which creates a database with rowversion properties for each entity type. And the whole thing is verified by the successful completion of the unit test described in the first part of the article.</div> <h3>Conclusions</h3> <div>AppWeek has come and gone, and in the end I spent the whole week getting my one test to run, but we have that much more of a stable foundation for D3 work to build on. In addition at least one key bug was found and fixed, and some extensibility scenarios were explored and opportunities for improvement identified. The really good news, though, is that the EF in general is starting to become flexible enough that even if it doesn’t work out of the box in just the way I want, I can customize it for my needs.</div> <div> </div> <div>- Danny</div><img src="" width="1" height="1">Daniel Simmons - MSFT: Fun with Concurrency and Model First – Part 1<P>The second D3 release is now up at: <A title="direct link" href="" mce_href="">d3-0.0026.zip</A> It’s the fruit of my labors pretty much full-time this week because it has been “App Week” on the EF team this week. </P> <P><A href="" mce_href=""><IMG style="BORDER-BOTTOM: 0px; BORDER-LEFT: 0px; DISPLAY: block; FLOAT: none; MARGIN-LEFT: auto; BORDER-TOP: 0px; MARGIN-RIGHT: auto; BORDER-RIGHT: 0px" title=appweek border=0 alt=appweek</A> </P> <P.</P> <P”? :-(</P> <P <A href="" mce_href="">my post about the last release</A>,</SPAN> <SPAN style="COLOR: #0000ff">void</SPAN> VerifyConcurrency<T>(Func<T> create, Action<T> modify) <SPAN style="COLOR: #0000ff">where</SPAN> T : <SPAN style="COLOR: #0000ff">class</SPAN><BR>{<BR> <SPAN style="COLOR: #0000ff">using</SPAN> (var ctx = Utilities.CreateTestContext())<BR> {<BR> var objectSet = ctx.CreateObjectSet<T>();<BR> var obj = create();<BR> objectSet.AddObject(obj);<BR> ctx.SaveChanges();<BR><BR> <SPAN style="COLOR: #0000ff">using</SPAN> (var ctx2 = Utilities.CreateTestContext())<BR> {<BR> var key = ctx.ObjectStateManager.GetObjectStateEntry(obj).EntityKey;<BR> var obj2 = (T)ctx2.GetObjectByKey(key);<BR> modify(obj2);<BR> ctx2.SaveChanges();<BR> }<BR><BR> modify(obj);<BR> ctx.SaveChanges();<BR> }<BR>}</PRE></DIV> <DIV> </DIV> <DIV>I made the method generic and gave it two delegate parameters so that I can easily re-use it for each entity type I currently have plus extend it as I add more types over time. I also designed the code so that it will work with POCO types since I anticipate switching D3 to use POCO classes in the near future. As a part of this release I also introduced a simple utility method which creates the D3Context with a connection string to my little test database—this way it’s quick and easy to type since intellisense finds things for me.</DIV> <DIV> </DIV> <DIV>The basic pattern in this method is that it uses one context to add an entity to the database. I use the create delegate because different entities have different requirements as far as what must be set on them to make them persist successfully. Each of our current entity types requires a non-null Name property, for instance, but while the Room entity doesn’t have any other required properties, an Actor must have a non-null Room reference, etc. </DIV> <DIV> </DIV> <DIV>Once the new entity is added, I then create another context instance (which means another connection to the DB, etc.) and on it retrieve another copy of that same entity. I do this retrieval by having the state manager in the first context lookup the entity for me by reference (works for POCO) and then give me back the EntityKey corresponding to it (also works for POCO even though the entity itself doesn’t have an EntityKey property). Then I can use that key with the second context to load another copy of the entity, and call the modify delegate which makes a unique update to some property on the entity and saves that update to the database.</DIV> <DIV> </DIV> <DIV>At this point the first instance of the entity is being tracked by the first context, and it’s concurrency token (assuming everything is configured correctly) represents the state of the entity before the second context updated anything. So we call the modify delegate on this entity and SaveChanges which throws an OptimisticConcurrencyException because the concurrency token in the database doesn’t match what’s in this context.</DIV> <DIV> </DIV> <DIV>In the actual test, I create a list of Action types which call this method specifying the appropriate type and passing in lambdas for the create and modify delegates. Then it just loops over that list running the action and verifying that each call produces the appropriate exception. You can look at the file Tests\Concurrency.cs in the release if you want the full details. </DIV> <DIV> </DIV> <DIV>The point of all this, though, is that to make this test pass I needed my entities to have concurrency properties that were configured properly in the database. With recent versions of SQL Server, the best way to accomplish that is with a <A href="" mce_href=""><IMG style="BORDER-BOTTOM: 0px; BORDER-LEFT: 0px; DISPLAY: inline; MARGIN-LEFT: 0px; BORDER-TOP: 0px; MARGIN-RIGHT: 0px; BORDER-RIGHT: 0px" title=ConcurrencyProperty border=0 alt=ConcurrencyProperty align=right</A>RowVersion column (previously called a timestamp). This special column type is stored as an 8 byte binary, and the server guarantees that it will have a unique value any time any part of a row in its table is modified. The trick is that this special type only exists in the server—it’s not available in the CLR or the EDM conceptual model. If I were reverse engineering a model from the database, that would be no big deal. The tools would automatically map the RowVersion column in the database to an 8 byte binary property in my model. In this case, though, I’m using model first to generate the database from my model, and when I create the property, model first has no way to know that it should create a RowVersion type rather than a binary type in the DB.</DIV> <P. </P> <P.</P> <P.</P> <P>- Danny</P><img src="" width="1" height="1">Daniel Simmons - MSFT: Release 0.0010<p: <a title="d3-0.0010" href="">d3-0.0010</a></p> <p>First, let’s take a look at a few decisions that came up along the way, and then we’ll dig into some specific code areas.</p> <h3></h3> <h3>Decisions</h3> <ul> <li><strong>Source Control and Licensing</strong>: It is very important to me to make this project as relevant and useful as possible for folks trying to build applications using the EF. So we decided to release the code via MS-PL which is a very permissive license. I’m no lawyer, but my understanding is that this basically means you can copy the source and use any part or all of it in your own projects. Originally it was my hope that I could just put the project on codeplex and use it for source control so that anyone interested could enlist and just sync down changes to their local machine as the project progresses, but there were various hang-ups with this approach, and in the end I decided to just use local source control and then periodically push releases up to code gallery which has a simple site system and where everything is automatically licensed with MS-PL. <br /> </li> <li><strong>Version Numbers</strong>: One of those mundane but required decisions along the way is how we will create version numbers for the releases. Here’s what I decided: Until we have the thing up and running, everything is version 0.something. The last four digits of the version are in the form MMDD where MM is the number of months since I started releasing (so 00 is June ‘09), and DD is the day of the month for that release. I haven’t done it yet, but that allows me to have a very simple algorithm to automate version stamping and such as I make subsequent releases. <br /> </li> <li><strong>EF Versions and Dependencies</strong>: One critical area of decisions was around what EF features do I take dependencies on—particularly when we’re in the state where EF4 is not yet complete. In the end I decided that the main principle was to avoid taking dependencies on EF features that aren’t yet available outside the team so that you can stay up with me if you like. This means, for instance, that I’m not going to take a dependency on foreign-key support even though I have access to an internal build where it’s working great. When FKs are publically available, we’ll take a look at the project at that point and evaluate whether or not it’s worthwhile to refactor the app to take advantage of them (almost certainly we will do that refactoring). Similarly, I’m not yet going to take a dependency on self-tracking entities because it isn’t yet available. It will become available much sooner than FKs, though, so I will plan on taking advantage of it soon. A harder question was code-only. The first CTP of it will also be available soon (in the same download with self-tracking entities), but I already know of some areas where I’ll need more control over the model and database generation than code-only will support in its first CTP, so we’ll have to wait on that refactoring until a later CTP of code-only. So the basic approach will be to use EF4 beta 1 with model-first to make it efficient to design the model and generate the database and code from that model. </li> </ul> <h3>Initial Model</h3> <p>Like every other part of the system, we’re going to start with a simple model and expand. Here’s a first look:</p> <p><a href=""><img style="border-right-width: 0px; display: block; float: none; border-top-width: 0px; border-bottom-width: 0px; margin-left: auto; border-left-width: 0px; margin-right: auto" title="D3ModelDiagram" border="0" alt="D3ModelDiagram" src="" width="475" height="509" /></a></p> <p>A few things to notice: </p> <ul> <li>Each entity has an Id property which is a server-generated integer. <br /> </li> <li>Each of these entities is a “real” thing which players will interact with in the virtual environment of the game, and as such we have given them all a few common properties including Name and Description. Eventually we’ll define an interface for these common properties so that we can reason about the commonality of these objects. We could have an abstract base class for each of these entities, but we want them each in their own entity sets, and the designer doesn’t support more than one entityset with the same base type, so using an interface gets the key aspects we want while allowing us to still use the designer (we don’t really need implementation inheritance here anyway). <br /> </li> <li>The two associations between Room and Exit have each had one of the navigation properties removed so that Room only has one collection of “Exits” (it doesn’t also have a collection of entrances, so to speak), and Exit has only the TargetRoom navigation property. This makes our entities follow the intended model where exits go only one way. If you want two rooms to be connected in a symmetrical fashion, you need two exits one going from room A to room B, and another one going back from room B to room A. This is important because it means that we can easily give different names to the exits (from room A you go East to room B, but from room B you have to go West to get back to room A), and it means that you don’t always have symmetrical paths (when you jump off a cliff you can’t necessarily get back to the top of the cliff using the same way you got down, etc.). <br /> </li> <li>We don’t yet have version fields for concurrency checking. That will come later, because it requires some additional customization to the database generation. <br /> </li> <li>Item has a relationship with Actor and one with Room, but logically an item can only be related to one of them at a time (and it must always be related to one or the other so that items have location). These constraints we will represent in the database rather than the conceptual model. They aren’t currently enforced, but they will be once we make additional customizations to database generation. </li> </ul> <h3>Customizing “Model First”</h3> <p.</p> <p.</p> <p.</p> <p>So, I decided to customize the db generation process now by modifying the SSDL->SQL template to generate a C# method for creating the database schema rather than a SQL script. The process to set things up for customization looks like this:</p> <ol> <li>Copy the workflow xaml file from %ProgramFiles\Microsoft Visual Studio 10.0\Extensions\EntityFrameworkTools\Workflows\DbGen.xaml and the SQL gen template from %ProgramFiles\Microsoft Visual Studio 10.0\Extensions\EntityFrameworkTools\Templates\SsdlToSql10.tt to my project. </li> <li>I renamed the template to SsdlToCode.tt and modified the xaml file’s TemplatePath to just have that name rather than the full path to the original template. </li> <li>Both files I added to my project in VS, but because they aren’t actually used at compile time (they are just used by the designer when I choose the menu option to generate the database) I made sure that the Custom Tool property for each of them is blank and the Build Action property is set to None. </li> <li>I clicked on a blank part of the Entity Designer surface and then set the “Generate Database Script Workflow” property to just DbGen.xaml (just the file name because the xaml file is in the same project and directory with the EDMX). </li> </ol> <p>Now when I choose the option to generate the database it runs the workflow as specified in my project which then uses the template from my project rather than the default versions.</p> <p).</p> <p.</p> <p>All of this enabled me to write the following simple test:<">[TestMethod]<br /><span style="color: #0000ff">public</span> <span style="color: #0000ff">void</span> DropAndCreate()<br />{<br /> <span style="color: #0000ff">using</span> (var ctx = <span style="color: #0000ff">new</span> D3Context(<span style="color: #006080">"name=D3TestContext"</span>))<br /> {<br /> ctx.DropDatabase();<br /> Assert.IsFalse(ctx.DatabaseExists());<br /> ctx.CreateDatabase();<br /> Assert.IsTrue(ctx.DatabaseExists());<br /> }<br />}</pre> </div> <div> </div> <div>Not only is it simple to perform these operations, but also everything is based on the connection string given. So I just created two different connection strings in my app.config file—one called D3Context (the default) and the D3TestContext which is the exact same as the other except that it has a different initial-catalog name for my test database.</div> <div> </div> <h3>The SqlDb Internal Class</h3> <div>It’s probably also worthwhile to take a brief look at the internal class I called SqlDb which does the heavy lifting for these database interactions. The first interesting piece of the puzzle is that in order to create or drop the database with SQL Server (or in this case Sql Express) we need to make sure that we don’t use a connection string with the database in question as the initial-catalog. So, we extract the connection string from the StoreConnection property on the EntityClient connection instance created by the context and create a SqlConnectionBuilder from it. SqlDb’s constructor takes a DbConnection parameter since that’s what ObjectContext exposes and creates the builder like this:< builder = <span style="color: #0000ff">new</span> SqlConnectionStringBuilder(((EntityConnection)entityConnection).StoreConnection.ConnectionString);</pre> <br /></div> <p>Then we store the initial-catalog property into a string field so we know the name of the database we should be working with, and then we replace the initial-catalog with “master” since that database is present on every SQL instance.</p> <p.</p> <p.</p> <h3>Conclusions</h3> <p…</p> <p>- Danny </p><img src="" width="1" height="1">Daniel Simmons - MSFT: EF4 Podcast, N-Tier Anti-Patterns, and some thoughts about DDD<p>Here are a few more resources I hope you will find useful:</p> <p>Recently I had a conversation about EF4 with Carl and Richard of .Net Rocks, and it went up on their site today. Have a listen at <a title="" href=""></a>. </p> <p>In addition, I have an <a href="">article about design/architecture decisions related to n-tier applications in the latest issue of MSDN Magazine</a>. .</p> <p>At the same time that I’ve been busy with these, some other folks have been trying out the beta of EF4 and some very interesting posts are starting to appear like this <a href="">dive into producing a POCO repository with the EF</a> by Keith Patton and a great discussion of <a href="">LINQ Expression Trees and the Specification Pattern</a> by the EF team’s own David DeWinter.</p> <p>- Danny</p><img src="" width="1" height="1">Daniel Simmons - MSFT | http://blogs.msdn.com/b/dsimmons/atom.aspx | CC-MAIN-2015-14 | refinedweb | 20,292 | 54.66 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
Qmail, Daemontools and djbDNS assumes that "errno" can be defined as "extern
int
errno;".
This is a broken assertion.
When Glibc 2.3.2pre1 is emerged, all DJB software immediately stop working.
And compilation doesn't work any more, because "errno" is no more defined as a
global symbol.
Fix : add patches for all DJB software to #include <errno.h> and remove
"extern
int errno"
Reproducible: Always
Steps to Reproduce:
1.
2.
3.
dosfstools breaks too ...
az: shall we just handle this on a case by case basis ?
*** Bug 16344 has been marked as a duplicate of this bug. ***
*** Bug 16355 has been marked as a duplicate of this bug. ***
glimpse (4.15) ebuild also fails with 'undefined reference to errno'.
daemontools is also broken ...
in light of these issues, imo this version of glibc should be masked asap ...
any thought on how it'll affect binary packages ? we can patch source code of
programs but we cant really patch binary packages (vmware) ...
Created an attachment (id=8716) [edit]
Fix errno
Fix use of errno by including errno.h
Created an attachment (id=8717) [edit]
Fix ebuild
Apply errno patch, minor spacing cleanup
Note that the above patches will NOT break building under older glibc versions.
Created an attachment (id=8719) [edit]
Patch to make glimpse compile with new glibc
Attached patch makes glimpse compile with new glibc 2.3.2pre1
Created an attachment (id=8720) [edit]
New ebuild for glimpse that uses errno patch
Attached ebuild uses errno.diff to patch glimpse to compile with new glibc
2.3.2pre1
*** Bug 16370 has been marked as a duplicate of this bug. ***
Yep, havent looked at this yet, but we will have to handle it case by case.
Basically a problem that you need to include errno.h, and not declare errno
external. I already added patches to net-nds/portmap and x11-misc/xloadimage
at the beginning of the year when I did some testing on glibc-2.3.2 cvs.
that is fine but what about binary only packages ?
vmware breaks with the new glibc
also, i still maintain that the new glibc should be masked due to the sheer
volume of broken ebuilds ...
yes we patch them, but we do it on developer boxes and not let the users
experience this too
Ok, I thought ~ was for developers, and users that *wanted to help test*
developmental stuff ?
yes, ~ is for testing/development, but package.mask is for broken (which is
what this is)
I am sorry, but the glibc developers do not consider this as broken. It will
be the same in 2.3.2 official, and following versions. And this is what '~'
if for .. we fix things now for when 2.3.2 is officially out. Do you want this
chaos when 2.3.2 is actually out ?
i wasnt suggesting we mask it now and forever ...
i was suggesting we package.mask it and then on our own dev boxes we try to emerge
and patch as many packages as possible ... you can tell there are, and will be,
plenty of packages that break ...
then, we patched a large number of ebuild's, we release it to ~ and let all the
other random users report the packages we missed ...
however, we still need some kind of solution for older binary only packages ...
even if it's like hacking something into lib-compat ...
*** Bug 16396 has been marked as a duplicate of this bug. ***
There is a hack available for vmware. It is described in in
vmware.for-linux.experimental which I am copying below. It works for me with
the
caveat that for non root users, LD_PRELOAD clears suid. vmware needs suid
access
to /dev/misc/vmmon and /dev/tty0 and /dev/tty7(tty on which current X is
running).
I had to change the permissions on these to a+rw for vmware to run as non-root.
It will complain about /dev/tty8 and not having permission to run full-screen.
However changing permissions on tty8 does not seem to work. I don't rung
full-screen, so not an issue for me, but may be problems for others.
From researching on the web, I believe it is correct for LD_PRELOAD to clear
suid. vmware-ui and vmware-mks have suid propagated to them from vmware.
However the LD_PRELOAD, breaks the chain.
I don't know what problems can be created by changing the permissions on the
tty0 and tty7. So far I haven't seen any. But I am on a non-secure workstation.
If security is important, this might be an issue. Hopefully vmware will
release a patch sometime soon, as redhat 8.1 I believe will require that.
Otherwise they might just move people to version 4.0 which is in beta now.
Naren
------------------------------------------------
In the interest of posterity (and helping anyone else using vmware with
rawhide glibc), here's what ended up working.
Compile the attached file q.c into q.so. Put q.so into /usr/lib/, and
chmod it 555. Then rename the files /usr/lib/vmware/bin/vmware-{ui,mks},
giving them a ".bin" extension. Then create two new files named
/usr/lib/vmware/bin/vmware-{ui,mks} instead that contain:
-------------------
#!/bin/bash
LD_PRELOAD=q.so exec "$0.bin" "$@"
-------------------
chmod a+rx these new files, and all should be well.
The basic problem is that vmware binaries have their own copy of errno and
related functions, and there is a clash with the new glibc ones. q.so will
resolve this problem. The need for the scripts above arises from the fact
that each of the binaries launched by the initial /usr/bin/vmware call
needs to have the q.so preloaded.
q.c was written by Petr Vendrovec, and many thanks go to him for the work
he put into the debugging of the problem and the attached code!
In summary, q.c qorreqts the formerly inqurable qrashes.
Cheers,
--
Ami Fischman
usenet@fischman.org
-------------------------------------------------------
/*
* Build with: gcc -W -Wall -shared -o q.so q.c
*/
#include <dlfcn.h>
#include <sys/mman.h>
#include <stdlib.h>
#include <stdio.h>
void go(void) __attribute__((constructor));
void go(void) {
void* qh;
unsigned char *__real_errno_location, *__vm_errno_location;
qh = dlopen("libc.so.6", RTLD_GLOBAL);
__real_errno_location = dlsym(qh, "__errno_location");
__vm_errno_location = dlsym(NULL, "__errno_location");
printf("Got eroloc %p & %p\n", __vm_errno_location, __real_errno_location);
if (__real_errno_location && __vm_errno_location && __real_errno_location !=
__vm_errno_location) {
unsigned int errnobase = (int)__vm_errno_location;
unsigned int mpbase = errnobase & ~0xFFF;
unsigned int mplen = 4096;
if (errnobase + 5 > mpbase + mplen) {
mplen = mplen + 4096;
}
mprotect((void*)mpbase, mplen, PROT_READ|PROT_WRITE|PROT_EXEC);
*__vm_errno_location = 0xE9;
*(int*)(__vm_errno_location + 1) = __real_errno_location -
__vm_errno_location - 5;
mprotect((void*)mpbase, mplen, PROT_READ|PROT_EXEC);
}
}
----------------------------------
Another comment. There is an ongoing discussion on the wine lists as to how to
solve this. This is a problem for wine. So wine will not work at this time, and
the last I know there is no real solution for this yet. The same is true for
the mono compiler and tools. There are issues there, and the wine and mono
teams are trying to figure out a solutions.
look at the following for more information.
*** Bug 16420 has been marked as a duplicate of this bug. ***
Idea: Is it feasible to create an eclass that would facilitate fixing source
files, changing "extern int errno;" to "#include <errno.h>" during the
src_unpack stage? i.e.
# grep pattern intentionally loose
find ${S} -type f -name '*.[ch]' | xargs grep -l errno | while read f; do
einfo "Fixing errno usage in ${f}"
mv ${f} ${f}.orig
# why are sed regexes so lame?
perl -p -e 's|(extern\W+int\W+errno\W*;)|/* \1 */\n#include <errno.h>\n|;'
\
<${f}.orig >${f}
done
Then the ebuilds of packages with problems just need to inherit fix_errno (or
whatever) and we don't need a bazillion patches everywhere.
I've tested the above script on qmail and it works correctly. I think it will
work on the majority of cases.
BTW, looks like mysql is another app that breaks with glibc-2.3.2. (I only
tried 4.0.11a, but certainly older versions are affected as well.)
well, in addition to fixing it, we should be talking to upstream maintainers
about this bug
glimpse fixed in cvs
kerberos5 1.2.7 is also affected by the glibc bug. I tried to ship around with
-DHAVE_ERRNO_AS_DEFINE=1 but it didn't work.
I submited a seperate bug for kerberos5 1.2.7 itself
The Intel Fortran Compiler, ifc, is also broken by this. Presumably also their
C Compiler, icc.
The intel c/c++ compiler icc & icpc Version 7.0 Build 20030212Z
works without any problems on glibc 2.3.2. I have built an internal project on this and it is running with no bugs so far.
Another package affected by this is ezm3, needed to compile cvsup.
any updates on the wine situation?
The last I read, for wine they are still working on a solution. The reason for
the delay is the intent to also fix the threading model which is definitely
more work.
So for now, people needing wine should not upgrade glibc.
fix:
while OOo is extracting add this line:
#include <errno.h>
to this file:
/var/tmp/portage/openoffice-1.0.2/work/oo_1.0.2_src/dmake/extern.h
^ for openoffice that is. fixes the dmake error
Right, we should just replace all declarations of errno with '#include
<errno.h>'.
What binary packages except for WineX (the one you pay for), and VMWare are
affected ?
Sorry but please explain this to me... How come glibc developers don't consider
this
broken? There is appearently a change in 2.3.2 that breaks a majority of apps
even if
they are compiled with the latest gnu compiler... but seam to work some
none-gnu
ones... Then this must be a gcc bug then really or?
To me keep on adding patches to apps making them work seam to be a temporary
solution not THE solution?
Its rather simple: all programs that was coded like so:
------------ broken code ----------------
extern int errno;
-----------------------------------------
breaks. The right way (and why the glibc devs do not really bother), is to
include errno.h like so:
------------- the right way --------------
#include <errno.h>
------------------------------------------
System header files is there for a reason, and doing things like
declaring 'errno' as an 'extern int' when the proper system header
do exist, is just wrong (or in the glibc devs's books).
Crossover 1.2 is affected, I believe 1.3 suffers from the problem as well.
Fixed krb5-1.2.7 ebuild with a construct like in comment 24.
libmpeg3-1.5-r1 also has this problem.
what about the wine thing? could wie just replace the wrong errno things in
code? or any other fix?
Got a quick fix for wine* from irc:
'export LD_ASSUME_KERNEL=2.2.5' before you run wine.
Thanks to taviso in #gentoo.
The related url is:
-phoen][x-
Ok, I was a bit busy, but I'll try now to get to everything. Could you
guys add a bug for all packages that is affected by this? I can then
start sorting them out.
NOTE: please add a *new* bug, with me as owner.
*** Bug 16406 has been marked as a duplicate of this bug. ***
In xloadimage, I had to add:
#include <errno.h>
to the top of img.c in order for it to compile as well.
app-games/mindless-1.2 as well (needed to include errno.h in http_fetcher.c)
glibc-2.3.2 is marked stable, all packages are out w/errno fixes, and we should
be
good to go :)
nice work all
*** Bug 22248 has been marked as a duplicate of this bug. *** | http://bugs.gentoo.org/16267 | crawl-002 | refinedweb | 1,980 | 77.13 |
RealNames CEO Talks Back 207
jasoncart writes: "Keith Teare, former CEO of RealNames, has updated his homepage with his opinions regarding his the companies downfall. Obviously he's annoyed as he has lost his job, but he makes some good points about Microsoft's monopoly - 'Microsoft seems to be playing the role of the referee who decides whether any innovations succeed'"
Sour Grapes... (Score:4, Insightful)
Not that I blame him, and not that he's not completely without merit here, but I don't really think RealNames had a viable product to begin with (as several of the comments last time suggested).
If anything, I think this company failed to adapt to changes in technologies.
Re:Sour Grapes... (Score:1)
Re:Sour Grapes... (Score:3, Informative)
However, the same can't be said for average computer users whose native language can't be rendered in ASCII (i.e. most of Asia). RealNames made it possible for them to go to websites by typing in words in their native language, instead of words in a character set they may not be familiar with.
Re:Sour Grapes... (Score:2)
Re:Sour Grapes... (Score:2)
But let's assume that RealNames sells the Japanese word for "Cookies" to the Japanese division of Nabisco.
What does this mean to Fuji Cookies? They can't have the same word linked to their website with this mechanism. Whereas at least with a search engine, while Nabisco might get top placement because they paid more, other companies would also be listed.
Again, as others have pointed out, what is needed is google.jp or something similar... that is all.
Google has several search language options (Score:3, Informative)
Re:Sour Grapes... (Score:2, Interesting)
What did he expect would happen?
After reading his whining, I sent him an email:
I just finished reading the comments posted on your web page detailing your
feelings about your previous company's dealings with Microsoft and felt
compelled to comment.
It certainly seems likely that Microsoft is guilty of, if not illegal, then
certainly less than ethical business practices. You seem surprised at this.
As an officer of a company, it is your responsibility to your employees and
your investors to investigate any company with which you plan on entering a
contract. It is clear that you did not do this.
Look at this quote from your page:
"The browser is now back under Microsoft's control and it is possible that -
having learned much from RealNames - it will develop its own version of our
resolution service."
Had you bothered to do the most cursory investigation of Microsoft you would
have found that this is one of their common business practices.
A few companies who Microsoft have been *convicted* of doing this to in the
past are:
Stacker
Syn'x Relief
There are several other cases where they have been convicted, and numerous
others where the developers in question simply could not afford to pursue the
cases. A simple internet search will reveal this to you, as it would have
before entering negotiations
While it is arguably wrong of Microsoft to have done what they did,
the fault of the failure of your company lies squarely with your failure to
follow the adminition of any grade school teacher:
Do your homework.
perhaps sour grabes but it's still good news (Score:2)
It's disappointing that Microsoft weilds such power that they can, with a single business decision, cause the ruin of a company, however it's good to see that Microsoft's own potentially highly successful forray into the internet-as-a-platform space is not gaining the full support of the company. This division within the company allows other providers to keep their foot in the door, and particularly may be the saving grace of JAVA. Only time will tell.
--CTH
Re:perhaps sour grabes but it's still good news (Score:2)
What type of business model relies entirely on one company?
Poor business model for them.
Obsolete technology (Score:5, Insightful)
If search wasn't so cheap that companies compete to give it away, we'd need something like this. But we don't.
Re:Obsolete technology (Score:2)
Maybe, maybe not. Microsoft's motivations for killing RealNames will be revealed when they either create their own version of the technology - validating Teare's statements - or not.
Re:Obsolete technology (Score:2)
Full circle (Score:2)
And that requirement usually results in more ads to be thrown in. Which means, weeding through more and more inappropriate hits as time goes on. I've wound up once too often on a vendors web site whose product I have already eliminated from my shortlist.
Thus, the success in attracting advertizer revenue is precisely what does a service in for me.
Frankly, if Google went subscription I'd buy it to the exclusion of all other search engines, provided my money prevents me from seeing any paid-for links.
Re:Obsolete technology (Score:2)
Of course, many moons ago, Google themselves delivered RealNames results [google.com], as did Altavista [realnames.com] and others.
But then the paid-for-placement leanings of Overture nee Goto [overture.com] became more tempting, and ultimately triumphed. Why make only cents per click, when you can make much more from fools willing to pay more [overture.com]?
To say Microsoft killed Realnames is a simplification. MS may have been Realnames's biggest customer, but they were at death's door regardless.
rOD.
Re:Obsolete technology (Score:3, Insightful)
a) DNS is only able to make use of 7 bit ASCII - 26 characters in the English alphabet and the 10 in the numerical system, 0-9, plus the hyphen (37 total characters), in forming a name. 7 bit ASCII cannot handle foreign characters, creating a significant problem for languages with non-Roman scripts.
b) DNS cannot guarantee quality of service in delivering content. A DNS resolution points a user to a physical resource and is at the mercy of bandwidth constraints and traffic peaks.
c) DNS is a poor global naming system. A company with multiple sites worldwide has to give each of them different names [ibm.com; ibm.co.uk etc].
d) DNS has no inbuilt reporting capabilities. In fact, reporting on DNS traffic is so complex and essential that an industry has arisen to provide the imperfect reports that are available today.
URIs and URLs have weaknesses as well:
a) DNS gave birth to the URI. These long strings - again restricted to ASCII - allow naming of a wider set of resources. The URI can address individual web pages (with URLs), but the URI can also address people's email address - as in mailto:person@company.com - and even their phone number - as in phoneto:16504865555.
b) The URI is a major breakthrough as a means of addressing an unlimited number and type of resources on the Internet, but it is not a naming system. Rather it is a physical addressing system. Naming systems match a physical resource with an alias. A phone number, for example, is simply a memorable (one hopes!) alias to a physical switch address. A DNS name is an alias to an IP number. Physical addresses that are also forced to play the role of names are a bad idea because an identity is then tied to a physical resource identifier. If the resource moves or changes, the name will break. No persistent naming system for the Web was built, and the URL was adopted as the only available alternative. This is widely accepted to be a huge error.
c) In addition, the URI is incapable of being human friendly. Home page URLs for well known things barely pass muster as human friendly, intuitive identifiers. is OK, but how could one expect to intuitively understand that the URL for the US Fish and Wildlife Service is.
d) URLs cannot be consistent pointers to all content across all network access devices Wireless URLs and Web URLs point to different versions of content.
e) The URL, like DNS, cannot use non-ASCII characters, although it can use a wider set of ASCII characters than the DNS. Limitations in DNS and URIs spawned search engines - which compensate for the lack of a manageable, human friendly naming architecture for network resources.
While they solve a specific and relevant problem, search engines also have weaknesses:
a) Search Engines can only index "static" web pages on the public network. These are pages with a physical existence on a web server. Today less than 25% of web pages are "static". Search engines cannot provide pointers to protected content. Similarly, search engines cannot provide access to dynamic content that is refreshed frequently, or content that resides in a content management server or searchable database.
b) Search Engines employ a "full text index" approach to content. Even with algorithms which attempt to elevate one site above another based on relevance rankings, search engines inevitably find it hard to distinguish between a home page for an entity [a company, a product, a famous person] and a reference to that entity by a third party. Search is great for research but of limited value for navigation.
That is not the point (Score:2)
The point here is, that Microsoft wants to control everything they can, if they can't they'll cancel support. As a result of this Microsoft is not a good business partner to have for startups: if you don't have success, well, then they'll dump you (who wouldn't, no problem there), but if you are successful then it might either be against microsofts interests to follow up on that technology, or, if they think it's worth following up, they'll want to do it themselves. In both cases they'll kick you in the back, because if it is to be done, then it's them who want to do it.
Playing the game (Score:2, Insightful)
That's especially true when a well used and Free alternative to your product exists and is in wide use.
- Serge Wroclawski
Off-Topic: Has Microsoft Ever Dealt Fairly? (Score:2)
Can anyone name a single company that got in bed with Micros~1 that didn't later get attacked/sabotaged/destroyed by them in some way?
Schwab
Whine whine whine (Score:3, Insightful)
Now they are blaming Microsoft for their own short sightedness.
Microsoft has no obligation to keep these people in business just for the sake of keeping them in jobs.
Their weird naming standards didn't make much sense in the first place, with the crash of the
Re:Whine whine whine (Score:1)
...now microsoft will take RealNames idea and making it their own... along with the money....
every day I get more and more pissed off at the pseudo-government that Microsoft is becoming...
when will it end?
The fuck? (Score:1, Funny)
Somebody needs to call this dude a whaaaaaaaaaaaaaaaaambulance!
bullshit. (Score:5, Insightful)
all realnames had was a database that paired together words with webaddresses. this is not innovation. this is novelty at best. save me the sob story about monopolies and start working on real innovation. had it not been for the monopoly of microsoft, realnames would never have gained any kind of recognition in the first place.
-c
Hey it took 79 people to maintain buddy (Score:3, Funny)
It was all about sales (Score:1)
It was also necessary as what they where selling, was worthless.
They betted on fear. That some corporations and organizations would pay for first time dumb internet users who would just enter a word and expect to find exactly what they were looking for, as if everything consisted of monopolies nowadays.
Good Riddance (Score:3, Interesting)
I have had exactly the same scripted cold-call telephone conversation with these idiots twice within an hour. Any company that behaves like that deserves to crash and burn imho.
isnt this what aol does? (Score:1)
i've never used aol
but i've seen alot of commercials where you here soemthing to the effect of:
for more info go to aol keyword 'goatsex'
if this is the case, im not sure i really feel sorry for this guy, i think any schmuck could have thought of this.
Re:balderdash (Score:2)
Which amounts to just another second-level namespace. It's no different, in principle and application, from creating a top-level domain called ".realnames" except that you can't delegate it any further, because RealNames had no concept of hierarchy.
Oh yeah, with Unicode-style names. That's nice but not very interesting to most of the Western world.
Whoop-de-doo.
Is it just me seeing this, or was Keith Teare totally oblivious to the utter uselessness of his "technology" to the Western market?
so let me get this straight... (Score:5, Insightful)
No one would feel sorry for a hardware vendor that made hardware that would only work for Dells, and then went other because kingston/micron/western digital, etc could do it for less, and Dell went with them when it was time to renegotiate the contract.
ostiguy
GoogleBar for IE (Score:1)
RealNames could do exactly the same thing, without paying millions of dollars in cash and 20% of their stock to Microsoft.
Re:GoogleBar for IE (Score:2)
Wouldn't work. People install Google's toolbar because it adds some efficiency to their user experience.
RealNames just made things more confusing and had no coherent value proposition, so nobody had any interest in going out of their way to use it.
Maybe they could have made a deal with Audiogalaxy or someone to have the RealNames URL Befuckulator surreptitiously installed as a secret browser add-on.
This reminds me of Loki Games (Score:2, Insightful)
Contracts written during the boom which returns to kill the company now. I wonder how many of the dotcoms died because of that kind of deals.
I'm with Microsoft on this one (Score:4, Insightful)
When I want to find RandomCo online, unless they're a seriously huge company I don't just guess at randomco.com. That's not reliable enough. I've also long since ceased to visit directory sites to look up RandomCo. What I do instead is go to Google, type in "RandomCo RandomProduct" and find it immediately. This is infinitely more applicable to documents that are not sponsored by huge corporations, given the corporate dominance and limited range of the DNS hierarchy.
RealNames didn't even have a shot without Microsoft's dominance of the browser market, so Teare's parting shots at Microsoft (while very accurate) smack of hypocrisy. Dollars to doughnuts RealNames loved the fact that there was a single company to deal with in their bid to propagate their technology.
Yeah, right. (Score:1, Funny)
They didn't have much of a business plan (Score:4, Insightful)
Uhh no.. (Score:2)
Microsoft was the only one willing to use your crappy "product". Then they realized it was crap, and decided to stop using it. It's not even like most stuff where they buy/steal/copy it, they just didn't want it anymore because it was stupid.
Re:Uhh no.. (Score:2)
More intellectually void bias. (Score:3, Flamebait)
No he doesn't. This has nothing to do with MS, and everything to do with a failed/flawed concept. Why does
Re:More intellectually void bias. (Score:1)
Re:More intellectually void bias. (Score:2)
There's a difference?
Re:More intellectually void bias. (Score:2)
Realnames former CEO is a whining little bitch (Score:5, Insightful)
Re:Realnames former CEO is a whining little bitch (Score:1)
aem
Lay down with dogs (Score:3, Informative)
jep, typical M$ (Score:2, Insightful)
seems, nomather how bad the company may be, quite unfair to me. And this quite confirm one of my previous postings (
its too bad. (Score:3, Funny)
Write to MSN? (Score:1)
If I were him, I would just be happy that I didn't have to deal with their tactics anymore.
Innovation? (Score:1)
It's like DNS being controlled by a single company who charge what they like; there was no reason for the company to succeed. What do they really offer over DNS?
But hey, let's blame MS for quashing innovation. If Microsoft had set up this scheme, everyone would be outraged that they were trying to take over from the DNS system. Ah, bollocks.
Real professional, buddy.... (Score:2)
I particularly like the "I testified on Microsoft's behalf when I stood to make a boatload of money from them, but now that they've cancelled that arrangement, I think they're stifling innovation" bit. Pretty telling, IMHO.
Cheers
-b
What Did He Expect? (Score:1)
2. Jump in bed w/ MS
3. Wam, Bam, Thank you 'Mam
When you make a deal with the Devil... (Score:2)
No pity here, I'm afraid.
Mike
The referee (Score:1)
Live by the sword, die by the sword (Score:5, Insightful)
Nope, not much sympathy from me.
A friend's idea for a startup 5 years ago never got off the ground because at least two vulture capitalists refused to fund, on the grounds that if it became sucessful, M$ would jump in, make an offer we would be literally fools to refuse, and the VCs would not get enough return on their investment. I had long since been avoiding anything M$, just because of their nonethics attitude, and the friend was a real M$ junkie. Woke him up a bit. Maybe Teare will wake up a bit. Maybe others will wake up a bit.
I was offered a job by some people ... (Score:2)
Like that's real ambitious ain't it?
Right now I'm sorry I didn't take the job and that they never got noticed before the VC money ran out, but that's mainly because of Bin Laden ruining my life, career prospects and my (old) neighborhood.
Re:Live by the sword, die by the sword (Score:2)
If DRDOS was so bad, how come M$ had to stoop to that kind of crap? They took it out of the release version, presumably because they got so much flak.
You may also not remember, but Win95 was publicized as requiring MSDOS. Integrated, so to speak, unlike 3.1. Allegedly you couldn't make it work with DRDOS because DRDOS was "inferior". Novell (or whoever owned DRDOS then) made Win95 work with DRDOS just fine.
Netscape may not be much competition for IE now, but when it was better, there were enough tricks dumping on NN. Maybe you are just too young to know better.
Re:Live by the sword, die by the sword (Score:2)
What impact do you think this actually had on software sales? I mean come on, sheesh.
"You may also not remember, but Win95 was publicized as requiring MSDOS."
No, Win95 was publicized as no longer requiring DOS because the functionality as integrated.
"Novell (or whoever owned DRDOS then) made Win95 work with DRDOS just fine."
That's nice, but it has no benefit to the consumer.
"Netscape may not be much competition for IE now, but when it was better, there were enough tricks dumping on NN. "
Of course Netscape was dumping their own tricks, purposefully not following the W3C standards process... dumping their product on the market so as to kill Spyglass and other competitors.
But back to that beta version... What were you thinking?
In Case it gets Slashdotted (Score:1, Informative)
M$oft turning back on .NET? (Score:1)
Can anybody tell me what this is supposed to mean?
If you choose to dance with an elephant... (Score:2)
If you choose to base the entire success of your company on the whims of a company like Microsoft, then don't be surprised if the whims of Microsoft don't go your way. Microsoft will do what's in its best interests, and that often doesn't coincide with the interests of others.
I take exception to Teare's comment:.
Microsoft denied his company nothing that wasn't legitimately its to deny. Microsoft chose not to renew a consentual agreement between it and RealNames. There is nothing wrong with that. It's the basis of a free market.
This is hardly an example of Microsoft attempting to stifle innovation. If Microsoft were buying their company, then closing the business, maybe. If Microsoft were writing incompatibilities into their code, maybe.
Microsoft opted to not engage in further agreements with RealNames. Too bad for RealNames. Get back up, brush the dust off, and find a business model that doesn't depend on the good intentions of Microsoft.
Re:If you choose to dance with an elephant... (Score:3, Informative)
Also you might want to take exception to the lack of factual basis. Plenty of TLDs already support non-ASCII characters in DNS, and have for some time. Check out, for example, NUNIC's Worldnames project [worldnames.net].
Dot Com Whining (Score:4, Insightful)
Re:Dot Com Whining - AOL (Score:1)
Now AOL is shifting to Netscape. AOL has a business with their Keywords. Are we to expect now that AOL will implement keyword features in the netscape browser, and also try to get beyond AOL users to generic internet users with the same consumer leadware?
Don't blame microsoft!! (Score:1)
So let me get this straight.... (Score:3, Insightful)
2) Party can not pay Microsloth what they agreed to and provides a note
3) Party proposes alternate options to original agreement and MicroSloth decides against the agreement because it is not financially appealing in the long run
Hrm...they made what seems like a smart business decision without breaking any law or taking advantage of any loophole.
I don't see the issue.
Re:So let me get this straight.... (Score:2)
Maybe that's what he's complaining about; He was expecting Microsoft to fight dirty, and instead they played fair.
Technology? (Score:3, Insightful)
This illustrates the problem with technology: it is only valuable if you can build something that is not easily imitated or replaced.
If you hire the ten sharpest people around and you take a year to develop something and then stand still, your competition is going to have no trouble catching up, even if it takes them a little longer or more resources. This is how many popular open source projects such as GIMP and OpenOffice are surviving. They've caught up with the real thing; not entirely, but to the point that they're good enough for a number of users.
Of those 80 people at RealNames, how many were driving technology forward? Did their entire technology consist of a database mapping keywords to URLs? Three people at Microsoft could probably do that--and scale--in six months.
The page mentioned that the Microsoft contact got moved to the Natural Language group; maybe MS is coming out with technology that allows you to type natural language queries instead of having to know the exact static keyword. Now that's technology that is not easily imitated or replaced, and it's already here in one form: the Search Assistant in XP.
I feel sorry for the employees of RealNames that have to find jobs in this economy (which is hopefully picking up!), but it is not Microsoft's job to singlehandedly sustain an unsustainable business, and based on the web page in the article that's what was going on.
One side note: If RealNames had acquired a patent on their "technology"--the kind we all love to hate--they could have survived if MS is planning on replacing it and not just ditching it altogether.
Re:Technology? (Score:2, Funny)
Huh? I could do it in 20 minutes.
The hard part was making anyone want to use it. That's what the 80 people were supposed to be doing.
Re:Technology? (Score:2)
Unlike Hotmail, where authentication and storage servers require complex interactions, the RealNames task scales linearly with the hardware you throw at it. Periodically replicating a near-static (i.e., daily updates) database across the machines is trivial and remains so no matter what the aggregate query volume.
Re:Technology? (Score:2)
It is basicly a half thought out bussiness idea.
What a Whiner! (Score:1)
PULEEZ!
Come on Keith Teare (founder of RealNames). Grow up! You climbed into bed with M$ because their majority ownership of the browser market was the only possible way to make your rotten egg fly. Now that they're turning their back on you, the crying starts. Just accept the fact that it was a flawed idea, born out of greed during the domain name gold rush and move on.
And good luck to you.
Good riddiance.... (Score:2)
Cry me a river.
If you deal with jerks, don't expect them to not be jerks in the future.
Plus, this twit had a patent on thist stupid "invention" [uspto.gov]
Dear god, (Score:1)
the company's downfall
Please, at leat have correct spelling/grammar on the front page.
Re:Dear god, (Score:1)
I did that on purpose. Honest.
*mutter*
Remind you of anyone? (Score:1)
Of course IBM are now A Good Thing but the only difference is that they had the money needed to hit back.
Time for some political will to sort this mess out.
wtf is RealNames? (Score:1)
Yeah.. sure.. blame MS. (Score:2)
As much as I'd like to see something besides DNS used to locate websites... because I think there should be no more new TLD's, and the system should be left as-is, to force the world to come up with a better way... realnames wasn't that solution.
Uh? RealNames was an innovation? (Score:1)
I don't suppose it ever ocurred to him that their idea was just plain stupid and little more than an attempt to skim money from those foolish enough to pay them.
The reality is not that Microsoft pushed them out of business... the reality is Microsoft was propping them up by implementing this junk in their browser. And now that MS isn't going to prop them up anymore, they're history.
Idea (Score:2)
how about a new record type for websites? A record that includes both an IP address and a port.
That would rock.
Re:Idea (Score:2)
Um, you mean like SRV records [menandmice.com]?
Sour grapes. (Score:2)
Business 101 (Score:1)
M$ screwed us *click* M$ screwed us *click* M$ s-- (Score:1)
Doesn't anybody remember those old movies where the guy makes a deal with the devil, only to get burned later on?
If you ask me it's their own fault if they're going to fall for M$'s tired old shell game.
If this doesn't just say it all (Score:2)
IP is the asset, people are free agents.
Then...
MSFT recruiting would be a positive message to employees.
A positive message doesn't pay the mortgage. Thanks guys. You're wonderful people.
This perfectly illustrates the non-future that W-4 employment is becoming. Oh, and for all you boardroom-apologists: this happens ALL THE TIME. Don't even try to argue about it.
I wonder which of these managers are free agents. While we're at it, if people are so "free" why do they make the hiring process more grueling than becoming an astronaut.
It's sickening.
Microsoft's changing attitude... (Score:2)
So it's interesting how the slashbot editors have tried to spin this as a negative, as this is exactly the type of responsible attitude that they have previously desired to come from Microsoft.
No point (Score:2)
Well, only if there's a point to doing so.
Excellent question.
If you already know what you want to do, maybe we should begin with that.
Yes. We should.
OK, well, We aren't going to continue to bet on Keywords
MEETING ADJOURNED.
Don't stand there and dance for quarters. Pick up your stuff and leave. Sell to buyers, not skeptics. Bill 'em for your time too. They could have sent this by e-mail. This kind of arrogance is designed to do two things:
Don't believe the hype. The only way to negotiate with arrogance is to close the door, while it still belongs to you. Meeting adjourned. ^^
well (Score:1)
For great justice (Score:4, Funny)
Keith Teare wants us to email 'zig'? For great justice?
What you say!! Someone set him up the bomb!
-Erik
Re:For great justice (Score:2)
I never thought I would be glad for the existence of spambots.
RealCEO with Unreal Expectations (Score:3, Informative)
RealNames was a marketing ploy, taking advantage of Microsoft's dominance above actual internet standards, and exchanging that monopoly for material gain. The fact that it failed is a testament to the capability of standards over proprietary schemes, and is hardly an example of the evils of Microsoft's monopoly.
The evils of Microsoft's monopoly is the reason RealNames existed in the first place, not the reason it was torn down.
Blind Leading Dumb (Score:2)
While, in other news, Microsoft VP Jim Allchin admits
Interesting dichotomy, that.
ok way to blame your problems on microsoft (Score:3, Interesting)
So he started offering delayed repayment plans. Microsoft accepted them for a while and then they stopped. Well nobody is required to accept delayed repayment plans. Its their money after all.
So that guy tries to hide the fact that his bussiness failed by saying that Ms refused to accept his "innovations". Well the market refused to accept his "innovations" too. His bussiness did not succeed.
And as far as the innovations go lets be realistic here. All he did was try to hijack domain names. I am actually glad he did not succeed. I dont want some private co connected to microsoft in control of the naming system. At least icann pretends to be community governed.
A possible opportunity (Score:3, Insightful)
There are more than 100,000 customers including many well known ones like IBM, Xerox [who made RealNames partner of the year last year], EBay, Mattel - who have Keywords on every Barbie Box, and many more.
....
What can you do? Probably nothing.
I think that there is something that people can do.
Create their own name tool.
It seems to me that there are enough 'big movers' in this process that a consortium to re-install a naming process into IE is possible. Not only that, but it could be done in an 'open' manner such that the same naming mechanism could be used for IE, Netscap, Mozilla and any other browser that was interested in doing so.
Yes, this might require that realnames restart it's process, to a certain extent, but they will have to do this anyways if the company is to thrive. Microsoft is *NOT* necessary to this. They were the best way to get the process kick-started. Now that people know what realnames is capable of, it's possible to now take this to the next level -- but without any fealty payments to Microsoft.
This could be the death of realnames, or it could be a new beginning.
If realnames really wants to take on this task, one of the first things to do would probably be to create an add-on/plugin, and put some add hooks into the links created by real-names such that people know where to find the new extension. Then people at various large sites would need to put links allowing people to find the addin as well.
Time is short, but the opportunity is as large is the problem.
RealNames was useful. (Score:2)
I'd type "? Windows Media Guide" into my address bar and get the site for it, because I could never remember the link and didn't want to favorite place it.
Typing in a search ? $SEARCH usually yielded the RealNames keyword of what I was looking for. This was especially useful searching for band web pages where the band's web site and name don't necessarely coincide.
With the release of an API for the Google database, I'd like to see MS license it and convert addressbar "? $SEARCH" searching using MSN search to using Google search-it'd be a ton better and still do the same thing.
Plus if MS dropped it, google wouldn't go under.
Faust (Score:2)
In ASCII, this means I have no sympathy with this man. Microsoft has a long record of screwing it's partners and to be honest, these people should know better. Microsoft was caught stealing code by Apple, the makers of Softimage and others and regularly works "with" "partners" in order to "embrace and extend" the product once they have sent the former "partner"(e.g. IBM) off into the wilderness. There would have been hundreds if not thousands of people in the business world that would have warned him not to trust Microsoft, IF he would have bothered to stop checking his bank account every 5 minutes and listend to what they had to say, but greed is a powerful motivating factor. Microsoft could not find find any partners for it's hailstorm/passport strategy for a reason: No one trusts them.
This man would have had a better chance of long term success if he had worked with the opensource crowd to get the technology accepted.
Huh? (Score:2)
Keith Teare's (Score:2)
Hoist by his own petard (Score:2)
Dislike for MSN (Score:2)
For someone who seems bitter about Microsoft, it's interesting that he still uses their free mail service...
:-)
Re:what a lameass (Score:3)
Probably not his main mailbox, but just a drop box for all the "we told you so -- if you sleep with the devil don't complain when you get raped" mails. He would be foolish to put his real business e-mail address under such a whiny piece...
Re:what a lameass (Score:1)
Hehe, this way BillG can read his e-mail too
...
Oh, his real address for "I told you so" e-mails is keith@teare.com. His old keith@realnames.com probably won't work anymore, for obvious reasons.
-Martin
One language (Score:2)
My feeling is that the Internet is best served by standards that all (or most) people can use and understand. Having multi-language support is antithetical to this goal.
Please don't call me an English-only bigot. I freely admit that I know no other written/verbal language. However, I truly wouldn't care what language was used, as long as it was the univerally understood standard.
Perhaps one language is too few.. maybe 3 or 4 languages would be better... IF everybody (or most everybody) could use them. I don't want the Internet to become segregated.
Find a standard language, ANY language... Use it exclusively in a global medium. This promotes global communication and prevents people from having their own little private "Internets" where the rest of the world can't understand a thing they're saying, much less search and browse through their "private world".
Right now, English is the standard, right or wrong, for better or for worse. Whether or not English should be the standard is a different debate to me. My point is that we should have a standard language and maintain it in order to keep the whole thing all together and on the same (web) page.
If you want to reach out and communicate with people across the globe, you have to have common ground. On the Internet, that's langauge. So, on the Internet, España is "Spain".
Vortran out | http://slashdot.org/story/02/05/12/1750223/realnames-ceo-talks-back?sdsrc=prevbtmprev | CC-MAIN-2014-35 | refinedweb | 6,002 | 64.1 |
For background task processing and deferred execution in Python with Django, Web developers and system admins can try out Celery.
Asynchronous mass email delivery, clickstreams like the number of hotels being watched or the number of likes, image resizing, video processing, connecting to third-party APIs, enhancing the user experience by executing tasks in the background rather than in the request-response loop, or executing a large number of queries on a database for reporting… these are some of the scenarios in which you might want to use distributed job queues in order to scale your application. Job queues also separate the tasks to execute in real-time from those to be scheduled for later. There are many use cases in which job queues help you achieve better user experiences. This article introduces readers to the use of Celery to leverage this in Python and Django applications.
Celery is based on distributed message-passing for asynchronous task queues/job queues. It is capable of supporting both real-time operations as well as scheduled jobs. The tasks can be run on a single worker or multiple workers concurrently, to take advantage of multi-processing. Celery provides a powerful and flexible interface for defining, executing, managing and monitoring tasks. A Celery system can consist of multiple workers and brokers, yielding high availability and horizontal scaling. Celery is suitable for applications that need to achieve any of the following:
1. Execute tasks asynchronously.
2. Distributed execution of expensive processes.
3. Third-party API usage.
4. Periodic/ scheduled tasks.
5. Retrying tasks.
6. Enhancing the user experience.
Celery architecture
Task queues are used to distribute work across workers. Celery task queues are based on the Advanced Message Queue Protocol. By default, it uses RabbitMQ as its message broker; however, users are not limited to RabbitMQ but can use Redis, MongoDB or Beanstalk too. Figure 1 depicts this process.
Step 1: The AMQP receives a task from the client, which may be a Web app or a Python program.
Step 2: The workers constantly monitor the queue; as soon as a message is dropped in the queue, one of the workers picks it up and executes it.
Step 3: Depending on the configuration, it may or may not save the result, once it has finished processing the execution.
Setting up Celery
Although the choice of message broker is entirely your decision, for this article, I assume we are using RabbitMQ (it’s what I use in production, too). Before installing Celery, you must have RabbitMQ installed and running (start it with rabbitmq-server start). Then, all you need to install Celery is pip install U celery and you’re ready to create your first program using Celery.
Make a project folder, and in it, create a file tasks.py, which will contain the tasks you want to perform using Celery. Here’s a sample program I will be using to fetch JSON and read its contents:
from celery import Celery #Configure celery. celery = Celery('tasks', broker='amqp:[email protected]//') @celery.task #Decorator which defines the underlying function as a celery task. def fetch_data(json_name): sleep(10) url_to_open = "" % json_name req = urllib2.Request(url_to_open) opener = urllib2.build_opener() f = opener.open(req) data_fetched = simplejson.load(f) print data_fetched return data_fetched
Now run the celery daemon from the terminal using the following command:
celery worker -A tasks --loglevel=INFO:
These are the minimum arguments you need to pass to start the service. Other options like events, concurrency levels and CeleryBeat can also be passed as arguments. You will learn about them later in the article.
In another terminal, use the Python interpreter to call the tasks module file
>>> from tasks import add, fetch_data >>> result = fetch_data.delay('sample.json')
Next, track the task state/fetch the result. There are a variety of ways to achieve this, depending on your use case. For example:
- You just want to execute the task, and don’t want to save the result.
- You might want to check if the task has finished executing, or is still pending.
- Do you want to save the result in the message queue itself, or in MySQL or a back-end of your choice?
- To achieve the third, you need to configure this setting in your tasks.py file, as follows:
celery = Celery('tasks', backend='amqp', broker='amqp:[email protected]//')
Now the message queue is configured to save the result of the job. You can configure any back-end that you wish to use here. This is how an immediate task is executed, though there might be use-cases wherein you would want to run scheduled jobs. To run a task as a scheduled task, you need to define the schedule in the decorator of the task, as follows:
@periodic_task(run_every=datetime.timedelta(minutes=1)) def print_name(): print "Welcome to Tutorial"
The entry for period can be in the form of timedelta or in the form of cron too. Now the command for running the daemon would be:
celery worker -A tasks --loglevel=INFO B # Where B means running
CeleryBeat is used for periodic tasks; if the argument B is not passed, then it will not run periodic tasks. Next, let’s look at how we can integrate Celery with Web frameworks;in this case, Django.
Integrating Celery with Django
Create a Django project using django-admin.py startproject simple_django_project, and then create an app in the project with python manage.py start app celery_demo. Next, install django-celery with pip install django-celery and then modify the settings.py file to configure the message queue, as shown below:
import djcelery djcelery.setup_loader () INSTALLED_APPS = ( ... 'Djcelery' , ) BROKER_HOST = "localhost" BROKER_PORT = 5672 BROKER_USER = "myusername" BROKER_PASSWORD = "mypassword" BROKER_VHOST = "myvhost"
Next, sync the database with python manage.py syncdb, after which create tasks.py inside the app. Now you can create a URL entry in urls.py that maps to a function in view, which will be used to call the tasks that we have defined in tasks.py. Run the Celery daemon now, with the following commands:
python manage.py celeryd -l info -c 2 # Without CeleryBeat python manage.py celeryd -l info -c 2 B # With CeleryBeat
This is a simple method to integrate Celery with Django.
Adding multiple workers
Now scaling workers is not a concern all you need is to ship your tasks app to a new machine, set up Celery and just start running it. The Celery daemon will start talking to the message queue, and multiple workers will start executing tasks. Celery makes sure that your task is executed once, and not by multiple workers.
Monitoring
As your application grows, so will the need to make it more stable and robust. To achieve this, you need to monitor all the components of your Celery set-up.
Monitoring RabbitMQ
To check the number of messages in your queues via the console, simply run rabbitmqctl-status, which will list all queues, with the number of messages in each queue. For a GUI-based output, you can simply install the RabbitMQ management plug-in.
Monitoring Celery
First of all, to manage Celery, you need to switch events on start the Celery daemon with the option -E, so the command would become python manage.py celeryd -l info -c 2 B E. This starts to capture events, and now you can monitor your workers, task states, etc, using:
- Celery command-line utilities
- Django-Celery admin
- Flower: A real-time Celery Web-monitor
Celery is one of the most stable systems available. It is very easy to get started with, very simple to configure, fast at executing millions of tasks, and flexible, as almost any component of Celery can be used on its own, changed, or configured as per requirements. Some other great features of Celery are:
1. Designing workflows: To chain multiple tasks, you can use canvas to divide your tasks into sub-tasks.
2. Webhooks: To enjoy the power of Celery using other languages like PHP, Ruby, etc.
3. Routing: Send tasks to a particular queue rather than any queue, and to implement all the routing mechanisms that the message broker supports.
There are loads of other great features of Celery, which are beyond the scope of this article. I am sure that if you have a use-case, chances are that you can do it with Celery. | http://opensourceforu.com/2013/12/introducing-celery-pythondjango/ | CC-MAIN-2017-04 | refinedweb | 1,387 | 63.7 |
.
#include <stdio.h>
void main()
{
int c, d;
printf("Press Keys. Press Q to exit.\n");
for(;;){
c = _getch();
if(c != 0x00 && c != 0xE0){
printf("Char = %x\n", c);
if(c == 'Q'){
break;
}
}
else{
// It's an F key
d = _getch();
switch(c){
case 0x00:
printf("F%d\n", d - 0x3a);
break;
case 0xE0:
printf("F%d\n", d - 122);
break;
}
}
}
}
BOOL YourDialog::PreTranslateMe
{
if (pMsg->message == WM_KEYDOWN) {
int nVirtKey = (int) pMsg->wParam;
if (nVirtKey == VK_F1)
{
///do what you want here
}
if (nVirtKey == VK_UP)
{
///do something else here
}
return CDialog::PreTranslateMessa
}
So you can't do this with standard C++. But you can use OS-specific functions and features to do this.
Is this for a 32bit windows console program? If so you probably want to use the ReadConsole() function to read the input. You can do this without the line buffering, so the user doesn't have to press enter.
I can elaborate on that, but first, is this for a 32 bit console?
Azure has a changed a lot since it was originally introduce by adding new services and features. Do you know everything you need to about Azure? This course will teach you about the Azure App Service, monitoring and application insights, DevOps, and Team Services.
If console then it's easy. You can use microsoft specific functions to get keystrokes.
As I posted before:
Maybe a bit of background information information on your application would be helpful.
VK_F1 is defined in winuser.h from the Windows SDK. I'm sure that even Borland C has this include as it's fundamental. Usually you don't include it directly but rather it's brought in by windows.h.
For a Windows GUI application you need to handle the WM_KEYDOWN message.
>from keyboard is received.
BTW, this is NOT how to do this in a Windows program. It's incredibly bad programmig practice. Never loop waiting for user input. Set things up so that you receive a message when the user input you are waiting for happens.
It seems to me that you are unfamiliar with the most basic Windows programming concepts. I suggest you get the book, "Programming Windows" by Charles Petzold. It covers how to get information from the keyboard in great detail but more importantly, it will teach you how to write a Windows program.
This is a console application I'm trying to create. Suggestions Pacman?
I'd like to know what header file to include and what the syntax is for whatever function I need to be using.
jhance, you had another approach to the loop I was suggesting. Can you show me how this is done?
just read then with getchar()
#include <stdio.h>
int main(void)
{
int c;
/*
Note that getchar reads from stdin and is line buffered; this means it will not return until you press ENTER.
*/
while ((c = getchar()) != '\n')
printf("%c", c);
return 0;
}
It returns an integer. And it also needs ENTER. I've modified the while arguement, but still int. How do I make it so that it shows those hex numbers you were talking about?
>> t returns an integer
Just treat it as a character.
But I really think you want to switch to ReadConsole(), getchar() works weirdly with fucntion keys.
switch(d)
Look CLOSER, you're wrong!
I know. ;-) | https://www.experts-exchange.com/questions/10304525/Function-Keys.html | CC-MAIN-2018-26 | refinedweb | 555 | 75.91 |
Frees the object tree when a smux tree is unregistered.
SNMP Library (libsnmp.a)
#include <isode/snmp/smux.h>
void smux_free_tree ( parent, child)
char *parent;
char *child;
The smux_free_tree subroutine frees elements in the Management Information Base (MIB) list within an SNMP multiplexing (SMUX) peer. If the SMUX peer implements the MIB list with the readobjects subroutine, a list of MIBs is created and maintained. These MIBs are kept in the object tree (OT) data structures.
Unlike the smux_register subroutine, the smux_free_tree subroutine frees the MIB elements even if the tree is unregistered by the snmpd daemon. This functionality is not performed by the smux_register routine because the OT list is created independently of registering a tree with the snmpd daemon. The unregistered objects should be removed as the user deems appropriate. Remove the unregistered objects if the smux peer is highly dynamic. If the peer registers and unregisters many trees, it might be reasonable to add and delete the OT MIB list on the fly. The smux_free_tree subroutine expects the parent of the MIB tree in the local OT list to delete unregistered objects.
This subroutine does not return values or error codes.
The character strings are names or dot notations representing object identifiers.
This subroutine is part of the SNMP Application Programming Interface in the TCP/IP facility.
The snmpd command.
The readobjects subroutine, smux_register subroutine.
List of Network Manager Programming References.
SNMP Overview for Programmers in AIX 5L Version 5.1 Communications Programming Concepts. | http://ps-2.kev009.com/wisclibrary/aix51/usr/share/man/info/en_US/a_doc_lib/libs/commtrf2/smux_free_tree.htm | CC-MAIN-2022-33 | refinedweb | 247 | 50.73 |
Flexjoly
Backend php Divaeloper former_Netbeans_fan
- Total activity 12
- Last activity
- Member since
- Following 0 users
- Followed by 0 users
- Votes 0
- Subscriptions 4
Flexjoly created a post,Answered
How to enable time tracking report for generic task trackerHi, We use teamwork.com for projectmanagement. Today I managed to enable the generic task-tracker server for this. And can load our tasks now. I also enabled the timetracking. But the report button...
Flexjoly created a post,
How to add @deprecated to multiple classes/filesI would like to mark whole directories/namespaces as deprecated. Because of our saas-applications we have multiple versions which need to stay online, until nobody needs it anymore. Our namespace/...
Flexjoly created a post,
Reference of all supported phpdoc tagsHi,After a long search I found the wonderful tag I needed desperately: @mixin. But that was not easy to find. (ok I am still a newby on phpstorm, using it for about 2 months, coming from 10 years o...
Flexjoly created a post,
Menu very slowI am using phpStorm now for a few weeks and on the latest eap 2021.2#4 Sometimes right-click in file or opening a menu like navigation or code are very slow. It takes about 10 sec. before it is sho... | https://intellij-support.jetbrains.com/hc/en-us/profiles/382711393479-Flexjoly | CC-MAIN-2021-43 | refinedweb | 210 | 57.98 |
Evaluating the security of OpenWRT (part 2)
In my last post I covered how I setup an OpenWRT build, to examine a small subset of indicators of security of the firmware.
In this follow-up post we will examine in detail the analysis results of one of the indicators: specifically, the RELRO flag.
A first look - what are the defaults?
The analysis here is specific to the Barrier Breaker release of OpenWRT, but it should be noted that during experiments with the OpenWRT development trunk the results are much the same.
Before diving into RELRO, lets take a look at the overall default situation.
Here is the checksec report for the Carambola2 device (MIPS processor) build. It is a sea of red…
The ‘run as root’ errors can be ignored: those programs are actually absolute symbolic links which do not resolve in the host system. Relative symbolic links resolve correctly but are filtered out of the analysis.
The x86 build paints a similar picture:
(Notably for x86, the NX flag is correctly set, but that is a topic for another time.)
Note, the rest of this post describes how to modify OpenWRT to enable RELRO. There may be perfectly valid reasons to not enable the flag (for example, using RELRO may have a performance impact, and for a given system the adverse security risk may be judged low), so I have ensured that the suggested mitigation if applied remains a choice in the configuration menu of the system. For the moment my patch also retains backward compatibility by defaulting to off.
Inside the OpenWRT build system
After a brief look at the build logs, the reason is obvious: the typical gcc linker command is missing the flags needed to enable RELRO:
-Wl,-z,relro -Wl,-z,now (or the direct linker equivalents,
-z relro -z now)
What could be done to address this?
OpenWRT provides a hook for appending to the global compiler
CFLAGS but there is no similar hook for the linker stage. We could add those flags to the global
CFLAGS and they can in fact flow through to the linker for many programs, but that would also be redundant as the flags are irrelevant to the compiler. In the end I decided I would modify the OpenWRT source to add a new global CONFIG option, which adds
-Wl,-z,relro -Wl,-z,now to the global
LDFLAGS instead.
The following patch achieves that (note, I have left out some of the help for brevity): ```text diff –git a/rules.mk b/rules.mk index c9efb9e..e9c58d8 100644 — a/rules.mk +++ b/rules.mk @@ -177,6 +177,10 @@ else endif endif
+ifeq ($(CONFIG_SECURITY_USE_RELRO_EVERYWHERE),y) + TARGET_LDFLAGS+= -Wl,-z,relro -Wl,-z,now +endif + export PATH:=$(TARGET_PATH) export STAGING_DIR export SH_FUNC:=. $(INCLUDE_DIR)/shell.sh; diff –git a/toolchain/Config.in b/toolchain/Config.in index 7257f1d..964200d 100644 — a/toolchain/Config.in +++ b/toolchain/Config.in @@ -38,6 +38,19 @@ menuconfig TARGET_OPTIONS
Most people will answer N.
- config SECURITY_USE_RELRO_EVERYWHERE
- bool “Enable RELRO and NOW for binaries and libraries” if TARGET_OPTIONS
- default n
- Apply -z relro -z now flag to the linker stage for all ELF binaries and libraries.
menuconfig EXTERNAL_TOOLCHAIN bool ``` Having attched OpenWRT, and enabled the new flag, lets rebuild everything again and run another checksec scan.
The results shown above are for x86, the picture is similar for the Carambola2 MIPS image.
The new results indicate that the RELRO flag is present on some binaries but not all of them. From this we can predict that some packages do not fully honour the global OpenWRT build system linker flags. I soon confirmed this**; the implication is that the new flag CONFIG_SECURITY_FORCE_RELRO is useful, however, a caveat in the Kconfig help is required. In particular, a statement to the effect that the efficacy depends on proper coding of OpenWRT packages (with ideally all packages maintained by the project being fixed to honour the flag.)
** For example: the package that builds libnl-tiny.so does not pass LDFLAGS through to the linker; this and some other base system packages needed patching to get complete coverage. it is likely that there are other packages that I did not have selected that may also need tweaking.
Another notable package is busybox. Busybox it turns out uses
ld directly for linking, instead of indirectly via gcc, and thus requires the flags in the pure form
-z relro -z now. (The busybox OpenWRT package Makefile also happens to treat the global TARGET_LDFLAGS differently from the TARGET_CFLAGS although I am unsure if this is a bug; but that turned out to be a red-herring.) Oddly, this solution worked for MIPS when I tried it previously, but is presently not successful for the x86 build, so further investigation is needed here; possibly I incorrectly noted the fix in previous experiments.
Fun and Games with uClibc and busybox
The other recalcitrant is the uClibc library. I spent quite a bit time trying to work out why this was not working, especially having confirmed with verbose logging that the flags are being applied as expected. Along the way I learned that uClibc already has its own apply RELRO config item, which was already enabled. Even more oddly, RELRO is present on some uClibc libraries and not others, that as far as I could tell were being linked with identical linker flag sets.
After some digging I discovered hints of bugs related to RELRO in various versions of binutils, so I further patched OpenWRT to use the very latest binutils release. However that made no difference. At this point I took a big diversion and spent some time building the latest uClibc externally, where I discovered that it built fine using the native toolchain of Debian Wheezy (including a much older binutils!) After some discussion on the uClibc mailing list I have come to the conclusion that there may be a combination of problems, including the fact that uClibc in OpenWRT is a couple of years old (and additionally has a set of OpenWRT specific patches.) I could go further and patch OpenWRT to use the trunk uClibc but then I would have to work through refreshing the set of patches which I really don’t have time or inclination to do, so for the moment I have deferred working on resolving this conundrum. Eventually someone at OpenWRT may realise that uClibc has undergone a flurry of development in recent times and may bump to the more recent version.
Along the way, I discovered that Debian actually runs security scans across all packages in the distribution - take a look at.
It is worth noting that whenever changing any build-related flag it is worth cleaning and rebuilding the toolchain as well as the target packages and kernel; I found without doing this, flag changes such as the RELRO flag don’t fully take effect as expected.
For maximum verboseness, run with
make V=csw although I had to dig through the code to find this out.
I was going to repeat all the testing against a third target, another MIPS-based SOC the RALINK 3530 but at this point I don’t really have the time or inclination, I am sure the results will be quite similar. It would probably be useful to try with an ARM-based target as well.
I should also try repeating this experiment with MUSL, which is an alternative C library that OpenWRT can be built with.
Conclusion
Out of the box, OpenWRT has very limited coverage of the RELRO security mitigation in a standard firmware build. By applying the suggested patches it is possible to bring OpenWRT up to a level of coverage, for RELRO, to that approaching a hardened Gentoo or Ubuntu distribution, with only a small subset of binaries missing the flag.
References
My Github account includes the repository openwrt-barrier-breaker-hardening. The following branch include the completed series of patches mentioned above: owrt_analysis_relro_everywhere I hope it will remain possible to apply these changes against the official release for a while yet.
The patch that enables the latest binutils is not in that branch, but in this commit.
posted in embedded, hacking, openwrt, security | http://blog.oldcomputerjunk.net/2014/evaluating-the-security-of-openwrt-part-2/ | CC-MAIN-2017-09 | refinedweb | 1,365 | 57.81 |
ShaderEffector: IsDirty(DIRTYFLAGS::DATA) always true?
- fwilleke80 last edited by fwilleke80
Hello,
I noticed that the Shader Effector always returns true for
IsDirty(DIRTYFLAGS::DATA). Even if it doesn't do anything.
This somehow destroys caching in our plugins that support Effectors.
Test it by simply creating a scene with just a Shader Effector, attaching a Python Tag to the effector, and putting in this code:
import c4d def main(): effectorOp = doc.SearchObject("Shader") dirtyMatrix = effectorOp.IsDirty(c4d.DIRTYFLAGS_MATRIX) dirtyData = effectorOp.IsDirty(c4d.DIRTYFLAGS_DATA) dirtyCache = effectorOp.IsDirty(c4d.DIRTYFLAGS_CACHE) dirtyAll = effectorOp.IsDirty(c4d.DIRTYFLAGS_ALL) print(str(effectorOp)) print("dirtyMatrix: " + str(dirtyMatrix) + ", " + "dirtyData: " + str(dirtyData) + ", " + "dirtyCache: " + str(dirtyCache) + ", " + "dirtyAll: " + str(dirtyAll))
Any time the tag is executed (be it by playing back the animation, changing anything in the scene, or just by wildly waving the mouse around),
DIRTYFLAGS_DATAreturns
true.
Is there any way to know if the Effector is actually dirty?
Thanks & greetings,
Frank
Hi,
this is probably a safeguard against the uncertainty of shader dirty state evaluation. While a shader node can be evaluated for its dirty state like any other node, that does say very little about what the shader does in its sample function. The shader could be dependent on the craziest combination of scene graph attributes and unravelling that is not that easy / simply impossible. So the effector samples the shader every time it has a chance to do so.
A very ugly solution to the problem would to be to coarsely sample the effector on a carefully chosen bounding box and and cache these samples. Upon evaluation you would resample the effector testing for it to match your cache. The obvious flaws would be that this is computationally expensive and also prone to False Negatives when your cache grid stride steps over some finer details in the shader.
Cheers,
zipit
- PluginStudent last edited by
What do you want to learn by checking dirty?
A Shader Effector could use a "Noise" shader. In that Noise shader, you can enable "Animation Speed".
So the Shader effector will create a different result for every frame, even no parameter values change.
- fwilleke80 last edited by
Yeah, I know shaders can depend on basically anything in the scene.
And I guess there is no way to know for sure (would be cool if shaders could return a flag the indicates if it actually is animating or changing without changes to the container).
So, it really seems they are always dirty because of this. Damn :D I would prefer them to only be dirty if the shader node changes. That way it would be up to the develop if they rely on the dirty results or not.
Cheers,
Frank
Hi @fwilleke80, unfortunately, I confirm the shader effect is almost always dirty. I don't have any real workaround maybe you can retrieve all noises and compute their dirtiness, but this is indeed a very hard topic to handle all cases because you will also need to handle the actual parameter of the shader effector itself.
Cheers,
Maxime. | https://plugincafe.maxon.net/topic/12578/shadereffector-isdirty-dirtyflags-data-always-true | CC-MAIN-2020-50 | refinedweb | 505 | 54.73 |
On Saturday 18 December 2004 14:44, Andy Firman wrote: > On Thu, Dec 02, 2004 at 02:24:13AM -0500, William Ballard wrote: > > Google's becoming evil. > > > > Not only have they screwed up groups layout, showing a few columns of > > text and leaving a big space for ads, now they're pimping their own > > group namespace beside usenet. > > > > It's horrible. > > I agree 100%. > > > Is there any other usenet archive? > > This is one good resource for me: > > > Since Google groups sucks for finding answers now, what else is everyone > using? There is a topic "Suggestion/Feature request" in I'm being overly optimistic, but maybe some strong feedback make them abort this new ugly creature and give us back our good old Google Groups. Cláudio | http://lists.debian.org/debian-user/2004/12/msg02727.html | CC-MAIN-2013-48 | refinedweb | 125 | 77.06 |
Opened 8 years ago
Closed 3 years ago
#12974 closed Bug (duplicate)
Admindocs app introspection omits several model methods
Description
The problem is that some methods are preprocessed (especially those with a @decorator) are ignored by admin docs, because they take no arguments as opposed to a single (self) argument. This patch changes a single character (from '==' to '<=').
Attachments (2)
Change History (17)
Changed 8 years ago by
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
comment:3 Changed 8 years ago by
Firstly - please generate patches against the root of the source tree.
Secondly, due to the absence of a concrete test case (either in code, or in the problem description), I can't work out what problem it is that this patch is trying to solve.
Changed 8 years ago by
diff'd against the root
comment:4 Changed 8 years ago by
When admindocs displays methods for a model, it only displays ones w/ no arguments, or rather with only the (self) argument. The reason for this is no doubt to display only those methods which are valid for use in a template.
This patch is trying to solve the following problem:
Occasionally a model can have a method with no arguments, due to preprocessing @decorators or the use of the django.utils.curry function. These methods are then overlooked by admindocs, which checks for exactly one argument.
With this patch, it will check for fewer than or equal to one required argument.
The 'test' case that brought this problem to my attention was using that django-basic-apps blog app, whose Post model defines a get_absolute_url method. This method has a @permalink decorator before it, which results in Post.get_absolute_url having zero arguments. Given that the current code checks for exactly one argument, this method (which is totally valid for use in a template) is not displayed on the docs page.
Looking deeper into the code, I found that any get_absolute_url (even without a @permalink) gets curryd by the base model, and would therefore never be displayed by admindocs.
comment:5 Changed 8 years ago by
comment:6 Changed 8 years ago by
Ok - problem now confirmed. For future reference: While I was eventually able to piece together an example that broke, it took some work. The best test case isn't a longer, more verbose verbal explanation -- it's a working sample of code that demonstrates the breakage, along with a short set of instructions for how to cause the breakage. For example:
Sample model:
class Thing(models.Model): name = models.CharField(max_length=100) @models.permalink def get_absolute_url(self): "Get the URL" return ('blog_detail', None, {}) def button(self): "Push the button" pass
If you inspect
/admin/doc/models/myapp.thing/ (the admin docs for the Thing model), button will be displayed, but get_absolute_url will not.
The best part is that this programatic test case can form the start of the test for your patch. Your patch doesn't have a test case, so the 'needs test case' flag still applies. The fact that this particularly feature doesn't have any tests at the moment doesn't give you a pass to not include tests -- it's an opportunity to add tests to make sure the feature will continue to work.
I've also turned on patch needs improvement -- At this point, the check for function arguments is redundant, since you can't have a function with a negative argument count.
comment:7 Changed 8 years ago by
Could you explain the 'redundant' comment? The function must not have any required arguments (at least that is my impression of the desired behavior), and the check is therefore still necessary. Are you referring to some line other than the one that I modified?
comment:8 Changed 7 years ago by
comment:9 Changed 7 years ago by
comment:10 Changed 7 years ago by
comment:11 Changed 7 years ago by
Please note that even if you remove
@models.permlink from above example
get_absolute_url still fails to be documented in the admin documentation. Further more functions that take no arguments that are template friendly also fail to be documented (see model definition below).
However this is not the reason I am adding to this ticket. I believe the assumption that admindoc should only document template friendly functions is flawed. For the admindoc app to be useful as a documentation portal for both the template designer and django programmer all functions should be included; template friendly functions can simply be differentiated visually.
Documenting all functions simply means removing
len(inspect.getargspec(func)[0]) == 1 from the if statement on line 242. If needed I can submit a patch.
Thank you
class Example(models.Model): ### django native method def get_absolute_url(self): """ I'm not in the admin """ pass def button(self): "I'm in the admin" pass ### extra model functions def get_next_order(): """ I'm not in the admin """ pass name = models.CharField(help_text='Name', max_length=20)
comment:12 Changed 7 years ago by
(adding myself to cc)
patch | https://code.djangoproject.com/ticket/12974 | CC-MAIN-2018-22 | refinedweb | 845 | 59.84 |
Credit: BecomingHuman
In this post, you will learn how simple Bayes can be used to do what rnn can do, like text classification, by drawing our main intuition from how rnn works,and how word and character representation work
In the world of Natural language processing, rnn has been used in language modeling in order to build a machine translation, help in voice synthesis and much likes.
What is RNN? : Since we are coming from the idea of first principles, i would like to neglect the deep definition of shared weight or hidden state or cells, if you know all that, that good. But the major idea is that rnn helps to map an input to an output.
In language modeling, characters or word are seen as a sequence of numbers, in such a system in which the input is a word and the output is a word, then the input and output will be converted to a sequence of numbers.
Trending AI Articles:
1. How ethical is Artificial Intelligence?
2. Predicting buying behavior using Machine Learning
3. Understanding and building Generative Adversarial Networks(GANs)
4. Building a Django POST face-detection API using OpenCV and Haar Cascades
So, that brings us to the idea of a function that map input to output, such a function can learn the representation of each word(input) to another word(output)
For instance, if we are to build a machine translation from English to French, we could build a functional system that takes an English word input and output it french representation. provided we are giving an input word
go and ask to provide the French representation. let say
f is a function that maps the input to output.
But, there exist some difficulties for such a system, since for most language a word could mean different things based on the context at which they are being used. e.g
f("go")=>"va" Since the system has learn to map
go to va that is for every sentence it sees go in English is mapped to va in French.
Even for a more complex language like mine, I speak Yoruba, for instance, a word like
ewa could mean different things entirely in English, it could mean
beauty, beans based on the context they are being used.
This is just to tell you that RNN is more than just mapping word to word. But still, a system that maps words to words, but also contains information about their statistical distribution can still help create good language modeling.
Bayesian Theorem
Probability is used to measure the uncertainty or level of doubt or the likelihood of an event.
But the best way to simply get the grasp is through examples. imagine two boxes containing two different types of fruits. The color of the boxes is blue and red. Each of these boxes contains apples and oranges, the blue box contains 3 apples and 1 orange and the red box contain 2 apple and 6 oranges.
To get the probability of picking an apple from the red box, we use the probability equation according to the frequentist:
P(F=o|B=r) = On/Ft, where On stand for the total number of orange in the red box and Ft stand for the total number of fruit in the red box, such a process can be repeated for each fruit in each box. And also we can calculate the probability of picking any of the fruit from the boxes, e.g the probability of picking fruit from blue or orange is given by P(B=bi) = Ni/N, where Ni stand for the number of i-element of box and N stand for the total number of box we observed.
This kind of probability can easily be inferred using this frequentist method, but what about the case of, let say: we are told that a fruit (orange) was picked from a box and we are told to predict which of the box it was picked from.
Using the frequentist method here will fail us, this is a conditional probability, that given the fact we’ve picked a fruit from a box, so tell us which of the box was it picked from.
The previous method we’ve used to calculate the probability, just give us for a particular box, but what about we want no know the probability of orange over the two boxes, we used the method called sum rule.
Since sum rule deals with the summation of probability of orange in each box, we can just say then p(0) = p(F=o|B=r) + p(F=o|B=b), but the probability is more than that, what we are trying to do is summation of joint probability not summation of probability.
Considering this type of summation: p(0) = p(F=o|B=r) + p(F=o|B=b), nothing is actually wrong with it, since rules are applied, that we only pick a fruit from a box at a time and not both at the same time so both events are dependent on each other.
But while picking a fruit from a box two thing are happening at the same time, we are picking fruit and at the same time selecting a box, so to calculate the probability we consider the two events which are independent, it is independent because one occurring does not stop the other from occurring,
So considering the independent event, we represent:
p(0) =p(F=0|B=r)p(B=r) + p(F=0|B=b)p(B=b)
We can called this summation of joint probability, but p(F,B)=P(F|B)P(B), and this called product rule.
Bayes equation helps us to determine the probability of the posterior, using the sum and product rule.
The above image shows the full the details of Bayes equation, for our examples, we can represent the Box as C and the fruit as X.
Now let say, given the fact we’ve picked an orange from a box, what is the color of the box, that means, we are to determine P(B|F=0). remember to get this all we have to do is to get the value for each of the variable and we test it on the boxes. The box with the highest posterior probability is picked has the box.
P(F=0|B=r); we choose the red box first, since their are 4 oranges in the red box and the total fruit is 6, therefore P(F=0|B=r) = 4/6 = 0.67.
P(B=r) is the probability of picking red, this can be gotten in two ways, based on the number of time we pick or number of trials and the second way is based on the number of the box.
Based on the number of trial: let say we pick red 40% of the time and the blue 60% of the time. then P(B=r) = 40/100 = 0.4
Base on the number of box: we have two boxes, so P(B=r) = 1/2=0.5
P(F=o) is gotten by calculating the sum of joint probability;
p(F=o) = p(F=0|B=r)p(B=r) + p(F=0)p(B=b);
since we’ve calculated p(F=o|B=r) as equal to 0.67, and p(F=o|B=b) will be equal to 1/4 = 0.25, Hence;
p(F=o) = (0.67 * 0.5) + (0.25*0.5) = 0.46
since we’ve get a value for each variable needed to compute the probability for the posterior:
p(B=r|F=o) = (0.67*0.5)/0.46 = 0.72
Their is high probability of picking from the red box compare to the blue box
Proof of Concept
Having gotten some intuition as to what Bayes is, let’s try to used Bayes to classify text as belonging to a particular language;
Let’s create a simple Bayes to classify text based on language:
Let create a dataset and label:
data = ["El ex presidente sudafricano, Nelson Mandela, ha sido hospitalizado la tarde del sábado, según confirmó un hospital de Pretoria a CNN. Al parecer se trata de un chequeo médico que ya estaba previsto, relacionado con su avanzada edad, según explicó el portavoz de la presidencia Sudafricana Mac Maharaj.",
"Trabajadores del Vaticano escalaron al techo de la Capilla Sixtina este sábado para instalar la chimenea de la que saldrá el humo negro o blanco para anunciar el resultado de las votaciones para elegir al nuevo papa.La chimenea es el primer signo visible al público de las preparaciones que se realizan en el interior de la capilla donde los cardenales católicos se reunirán a partir de este martes para el inicio del cónclave.",
.",
"Just months ago, demonstrators here and around Egypt were chanting for the end of military rule. But on Saturday, as a court ruling about a soccer riot set off angry mobs, many in the crowd here declared they now believed that a military coup might be the best hope to restore order. Although such calls are hardly universal and there is no threat of an imminent coup, the growing murmurs that military intervention may be the only solution to the collapse of public security can be heard across the country, especially in circles opposed to the Islamists who have dominated post-Mubarak elections. ",
"L'Italie a été gouvernée pendant un an par un homme qui n'avait pas été élu par le peuple. Dès la nomination de Mario Monti au poste de président du conseil, fin 2011, j'avais dit :Attention, c'est prendre un risque politique majeur. Par leur vote, les Italiens n'ont pas seulement adressé un message à leurs élites nationales, ils ont voulu dire : Nous, le peuple, nous voulons garder la maîtrise de notre destin. Et ce message pourrait être envoyé par n'importe quel peuple européen, y compris le peuple français.",
"Il en faut peu, parfois, pour passer du statut d'icône de la cause des femmes à celui de renégate. Lorsqu'elle a été nommée à la tête de Yahoo!, le 26 juillet 2012, Marissa Mayer était vue comme un modèle. Elle montrait qu'il était possible de perforer le fameux plafond de verre, même dans les bastions les mieux gardés du machisme (M du 28 juillet 2012). A 37 ans, cette brillante diplômée de Stanford, formée chez Google, faisait figure d'exemple dans la Silicon Valley californienne, où moins de 5 % des postes de direction sont occupés par des femmes. En quelques mois, le symbole a beaucoup perdu de sa puissance.",
"David."]
label = ["spanish","spanish","english","english","spanish","spanish","english"]
After creating the dataset, we create a function called train, which accept the words and labels
def train(word_arr, label):
words = {}
labels = {}
for i,sentences in enumerate(word_arr):
for word in sentences.split(" "):
if word not in words:
words[word] = 0.0
words[word] +=1.0
if label[i] not in labels:
labels[label[i]] ={}
if word not in labels[label[i]]:
labels[label[i]][word] = 0.0
labels[label[i]][word] +=1.0
return words,labels
In the above function, a dictionary(a store) is created for each label, then each word is categorized base on the label they belong to, their count are stored, that is, the number of time they appear in each label, and also create a word array that store the number of occurrence of each word
words,labels = train(data,label)
output[1]:
#labels
{'english':{'': 1.0,
'2011,': 1.0,
'30,': 1.0,
'Although': 1.0,
'American': 2.0,
'Arabia.': 1.0,
'Arabian': 1.0,
'Ballet': 2.0,
'Belgian': 1.0,
'Bolshoi': 1.0,
'But': 2.0,},
'spanish':{
'El': 1.0,
'Elle': 1.0,
'En': 1.0,
'Et': 1.0,
'Google,': 1.0,
'Il': 1.0,
'Italiens': 1.0,
"L'Italie": 1.0,
"Lorsqu'elle": 1.0,}
}
We then create a basic utility function to help us count the total number of item a dictionary,
def count(word):
count = 0
for key, value in word.items():
count +=value
return count
def prob_count(table):
count_sum= {}
total = 0
for key, value in table.items():
total += value
for key, value in table.items():
count_sum[key] = value / total
return count_sum
The prob_count is used to implementing the frequentist probability, which is base on number of occurrences, p(W=wi|L=li), so for each word let say in ‘English’ we calculate their probability, which is the number of time the word appears in the English label divided by the total number of word frequency in English.
word_count = count(words)
english_count = count(labels['english'])
spanish_count = count(labels['spanish'])
lab = {}
lab["english"] = prob_count(labels["english"])
lab["spanish"] = prob_count(labels["spanish"])
This variable help us to easily calculate the Bayes probability,
def probs_f(char,labels=["english","spanish"]):
p_b = [0.0,0.0]
for i, l in enumerate(labels):
p_w_l = (lab[l][char] if char in lab[l] else 0.0) * 0.5
denom = 0.0
for key,value in lab.items():
if char not in value:
continue
denom += value[char]*0.5
p_l_w = p_w_l / denom
p_b[i] = p_l_w
return p_b
The above code, calculates the posterior probability
probs_f("pour")
output[]:
[0.0,1.0]
One of the problems with this method is that is not flexible enough to accept new word, although this process used what is similar to word embedding but assuming we just keep the occurrence of each character and assign it to each label, the method will be more flexible to new words not seen in the training text. this method of using characters instead of the word can be seen as character embedding
Instead of just using the normal Bayes to make an inference here, we can use the inference method used by the recurrent neural network, which is similar
P(y|L,y(1),………,y(i))
Since in rnn, the prediction is not just made, previous input are also considered, so for the next Bayes we are going to compute, we are going to consider the probability of a choosing the next character after a particular character
For example, we can compute the probability of selecting e after selecting b so to do this:
Each unique character will be storage for another character, this character which is being stored under this unique character is a list of character that does come after this unique character in sentences
E.g for word like
be, beat, big, bees,bag, bond if we are to create a character table for
b
{'b':{e:3,a:1,i:1,o:1},
'e':{a:1,e:1},
'a':{t:1,g:1},
etc
}
After getting this count, it will now be easy to calculate the probability
#probability
{'b':{ e:0.5,a:0.16,i:0.16,o:0.16}.....}
So the probability of
e coming after
b is 0.5 and probability of
a coming after
b is 0.16.
This method of getting the probability of next word still be improved, in order for our Bayesian system to give a better inference.
This method is similar to Skip gram and n-grams
The above method is similar to 1-gram for the character, or just taking window across 1 character at once, if we are to make it 2-gram
{'be':{a:1,e:1,},
'ea':{t:1},..}
We can continue like that for each word, but to me, I think skip gram will work better, also n-gram will work, to read more about skip-gram check here.
def train(word_arr,label):
words = {}
word_2word = {}
labels = {}
for i, sentence in enumerate(word_arr):
for word in sentence.split(" "):
store = [None]* len(word)
for j, char in enumerate(word):
if char not in words:
words[char] = 0.0
words[char] +=1.0
if label[i] not in labels:
labels[label[i]] = {}
if char not in labels[label[i]]:
labels[label[i]][char] = 0.0
labels[label[i]][char] += 1.0
if char not in word_2word:
word_2word[char] = {}
if j==0:
store[j] = char
continue
if char not in word_2word[store[j-1]]:
word_2word[store[j-1]][char] = 0.0
word_2word[store[j-1]][char] +=1.0
store[j] = char
return (words,word_2word,labels)
The above function creates the character after character count as we discussed above.
words,word_2word,label_t = train(names,labels)
prob_lab = {}
for key, value in label_t.items():
prob_lab[key] = prob_count(value)
prob_w2wl = {}
for key,value in word_2wl.items():
prob_w2wl[key] = {}
for key1, value1 in value.items():
prob_w2wl[key][key1] = prob_count(value1)
The prob_lab and prob_w2w contain the probability of word base on label and character to character.
def char_prob2(text,label):
probs = [0.0]* len(label)
for l_i, l in enumerate(label):
p_l_w = 0.0
for i,c in enumerate(text):
p_w_l = (prob_lab[l][c] if c in prob_lab[l] else 0.0) * (1.0/ float(len(label)))
denom = 0.0
for key, value in prob_lab.items():
if c not in value:
continue
denom += value[c] * (1.0/ float(len(label)))
p_w_l_w = p_w_l / denom
# print(c,": prob= ", p_w_l_w,": for label:",l)
# p_j_w = 1.0
for j in text[:i]:
p_w_w = (prob_w2w[j][c] if c in prob_w2w[j] else 0.0) * (1.0/ float(len(prob_w2w)))
j_denom = 0.0
for pkey,pvalue in prob_w2w.items():
if c not in pvalue:
continue
j_denom += pvalue[c] * (1.0/ float(len(prob_w2w)))
# print(p_w_w,"j_denom",j_denom)
p_c_l_w = p_w_w / j_denom
# print(c,": prob= ", p_c_l_w,": for w.r.t:",j)
p_w_l_w *=p_c_l_w
p_l_w += p_w_l_w
probs[l_i] = p_l_w
return probs
In the above function, we create the probability array, whose length equal the length of the unique label, then we calculate probability by looping through the label and text, so probability is calculated based on the label, and character before the present character
text[:i]is used to get the character before the present character, and we loop through this character and calculate the posterior probability, note that the probability from one character has been calculated has prob_w2w. so it make it easy to get the probability: let say the probability of e given b can be gotten from
{b:{e:0.7,d:0.2} has
prob_w2w['b']['e']
Since the posterior probability is p(y|L,y(1),….y(i)): can be calculated has
p(y|L,y(1),y(2)) = p(y|L)*p(y|y(1))*p(y|y(2))
that is why you can see in the function
p_w_l_w *= p_c_l_w
label = set(labels)
char_prob2("EL",label)
#Spanish
You can try it on different dataset, and see how it works.
The major interesting thing is how the Bayes method could work fine with small data set, compare to rnn.
Remember this just a proof of concept, this method can still be improved
- Using skip-gram or n-gram
- Their some tricks used to calculate Bayes to make it work perfectly which is not implemented in this post
- We can embed the Bayes method in optimization algorithm in which they could be a loss function, which then update the count/probability. e.g . if
vais predicted has English which is french, we can increase the v and a count by 1 in the french dict or increase the probability of v to a
- We can create a character to character probability based on label
{'french':{
'v':{'a':.0.6,....},
....
}
One of the beauties of RNN that Bayesian could not capture is a representation, rnn learn the word representation as a body of matrix H which is the hidden state, which is shared throughout the time steps.
In my next post, we will see how we can make RNN do the same thing.
Don’t forget to give us your 👏 !
Credit: BecomingHuman By: oni stephen | https://nikolanews.com/rnn-from-first-principlesbayes-can-do-what-rnn-can-do/ | CC-MAIN-2021-17 | refinedweb | 3,290 | 59.13 |
The backpropagation algorithm is the classical feed-forward artificial neural network.
It is the technique still used to train large deep learning networks.
In this tutorial, you will discover how to implement the backpropagation algorithm from scratch with Python.
After completing this tutorial, you will know:
- How to forward-propagate an input to calculate an output.
- How to back-propagate error and train a network.
- How to apply the backpropagation algorithm to a real-world predictive modeling problem.
Let’s get started.
- Update Nov/2016: Fixed a bug in the activate() function. Thanks Alex!
- Update Jan/2017: Changed the calculation of fold_size in cross_validation_split() to always be an integer. Fixes issues with Python 3.
- Update Jan/2017: Updated small bug in update_weights(). Thanks Tomasz!
How to Implement the Backpropagation Algorithm From Scratch In Python
Photo by NICHD, some rights reserved.
Description
This section provides a brief introduction to the Backpropagation Algorithm and the Wheat Seeds dataset that we will be using in this tutorial.
Backpropagation Algorithm
The Backpropagation algorithm is a supervised learning method for multilayer feed-forward networks from the field of Artificial Neural Networks.
Feed-forward neural networks are inspired by the information processing of one or more neural cells, called a neuron. A neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body. The axon carries the signal out to synapses, which are the connections of a cell’s axon to other cell’s dendrites.
The principle of the backpropagation approach.
Technically, the backpropagation algorithm is a method for training the weights in a multilayer feed-forward neural network. As such, it requires a network structure to be defined of one or more layers where one layer is fully connected to the next layer. A standard network structure is one input layer, one hidden layer, and one output layer.
Backpropagation can be used for both classification and regression problems, but we will focus on classification in this tutorial.
In classification problems, best results are achieved when the network has one neuron in the output layer for each class value. For example, a 2-class or binary classification problem with the class values of A and B. These expected outputs would have to be transformed into binary vectors with one column for each class value. Such as [1, 0] and [0, 1] for A and B respectively. This is called a one hot encoding.
Wheat Seeds Dataset
The seeds dataset involves the prediction of species given measurements seeds from different varieties of wheat.
There are 201 records and 7 numerical input variables. It is a classification problem with 3 output classes. The scale for each numeric input value vary, so some data normalization may be required for use with algorithms that weight inputs like the backpropagation algorithm.
Below is a sample of the first 5 rows of the dataset.
Using the Zero Rule algorithm that predicts the most common class value, the baseline accuracy for the problem is 28.095%.
You can learn more and download the seeds dataset from the UCI Machine Learning Repository.
Download the seeds dataset and place it into your current working directory with the filename seeds_dataset.csv. The dataset is in tab-separated format, so you must convert it to CSV using a text editor or a spreadsheet program.
Tutorial
This tutorial is broken down into 6 parts:
- Initialize Network.
- Forward Propagate.
- Back Propagate Error.
- Train Network.
- Predict.
- Seeds Dataset Case Study.
These steps will provide the foundation that you need to implement the backpropagation algorithm from scratch and apply it to your own predictive modeling problems.
1. Initialize Network
Let’s start with something easy, the creation of a new network ready for training.
Each neuron has a set of weights that need to be maintained. One weight for each input connection and an additional weight for the bias. We will need to store additional properties for a neuron during training, therefore we will use a dictionary to represent each neuron and store properties by names such as ‘weights‘ for the weights.
A network is organized into layers. The input layer is really just a row from our training dataset. The first real layer is the hidden layer. This is followed by the output layer that has one neuron for each class value.
We will organize layers as arrays of dictionaries and treat the whole network as an array of layers.
It is good practice to initialize the network weights to small random numbers. In this case, will we use random numbers in the range of 0 to 1.
Below is a function named initialize_network() that creates a new neural network ready for training. It accepts three parameters, the number of inputs, the number of neurons to have in the hidden layer and the number of outputs.
You can see that for the hidden layer we create n_hidden neurons and each neuron in the hidden layer has n_inputs + 1 weights, one for each input column in a dataset and an additional one for the bias.
You can also see that the output layer that connects to the hidden layer has n_outputs neurons, each with n_hidden + 1 weights. This means that each neuron in the output layer connects to (has a weight for) each neuron in the hidden layer.
Let’s test out this function. Below is a complete example that creates a small network.
Running the example, you can see that the code prints out each layer one by one. You can see the hidden layer has one neuron with 2 input weights plus the bias. The output layer has 2 neurons, each with 1 weight plus the bias.
Now that we know how to create and initialized a network, let’s see how we can use it to calculate an output.
2. Forward Propagate
We can calculate an output from a neural network by propagating an input signal through each layer until the output layer outputs its values.
We call this forward-propagation.
It is the technique we will need to generate predictions during training that will need to be corrected, and it is the method we will need after the network is trained to make predictions on new data.
We can break forward propagation down into three parts:
- Neuron Activation.
- Neuron Transfer.
- Forward Propagation.
2.1. Neuron Activation
The first step is to calculate the activation of one neuron given an input.
The input could be a row from our training dataset, as in the case of the hidden layer. It may also be the outputs from each neuron in the hidden layer, in the case of the output layer.
Neuron activation is calculated as the weighted sum of the inputs. Much like linear regression.. This helps here and later to make the code easier to read.
Now, let’s see how to use the neuron activation.
2.2. Neuron Transfer
Once a neuron is activated, we need to transfer the activation to see what the neuron output actually is.
Different transfer functions can be used. It is traditional to use the sigmoid activation function, but you can also use the tanh (hyperbolic tangent) function to transfer outputs. More recently, the rectifier transfer function has been popular with large deep learning networks.
The sigmoid activation function looks like an S shape, it’s also called the logistic function. It can take any input value and produce a number between 0 and 1 on an S-curve. It is also a function of which we can easily calculate the derivative (slope) that we will need later when backpropagating error.
We can transfer an activation function using the sigmoid function as follows:
Where e is the base of the natural logarithms (Euler’s number).
Below is a function named transfer() that implements the sigmoid equation.
Now that we have the pieces, let’s see how they are used.
2.3. Forward Propagation
Forward propagating an input is straightforward.
We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer.
Below is a function named forward_propagate() that implements the forward propagation for a row of data from our dataset with our neural network.
You can see that a neuron’s output value is stored in the neuron with the name ‘output‘. You can also see that we collect the outputs for a layer in an array named new_inputs that becomes the array inputs and is used as inputs for the following layer.
The function returns the outputs from the last layer also called the output layer.
Let’s put all of these pieces together and test out the forward propagation of our network.
We define our network inline with one hidden neuron that expects 2 input values and an output layer with two neurons.
Running the example propagates the input pattern [1, 0] and produces an output value that is printed. Because the output layer has two neurons, we get a list of two numbers as output.
The actual output values are just nonsense for now, but next, we will start to learn how to make the weights in the neurons more useful.
3. Back Propagate Error
The backpropagation algorithm is named for the way in which weights are trained.
Error is calculated between the expected outputs and the outputs forward propagated from the network. These errors are then propagated backward through the network from the output layer to the hidden layer, assigning blame for the error and updating weights as they go.
The math for backpropagating error is rooted in calculus, but we will remain high level in this section and focus on what is calculated and how rather than why the calculations take this particular form.
This part is broken down into two sections.
- Transfer Derivative.
- Error Backpropagation.
3.1. Transfer Derivative
Given an output value from a neuron, we need to calculate it’s slope.
We are using the sigmoid transfer function, the derivative of which can be calculated as follows:
Below is a function named transfer_derivative() that implements this equation.
Now, let’s see how this can be used.
3.2. Error Backpropagation
The first step is to calculate the error for each output neuron, this will give us our error signal (input) to propagate backwards through the network.
The error for a given neuron can be calculated as follows:
Where expected is the expected output value for the neuron, output is the output value for the neuron and transfer_derivative() calculates the slope of the neuron’s output value, as shown above.
This error calculation is used for neurons in the output layer. The expected value is the class value itself. In the hidden layer, things are a little more complicated.
The error signal for a neuron in the hidden layer is calculated as the weighted error of each neuron in the output layer. Think of the error traveling back along the weights of the output layer to the neurons in the hidden layer.
The back-propagated error signal is accumulated and then used to determine the error for the neuron in the hidden layer, as follows:
Where error_j is the error signal from the jth neuron in the output layer, weight_k is the weight that connects the kth neuron to the current neuron and output is the output for the current neuron.
Below is a function named backward_propagate_error() that implements this procedure.
You can see that the error signal calculated for each neuron is stored with the name ‘delta’. You can see that the layers of the network are iterated in reverse order, starting at the output and working backwards. This ensures that the neurons in the output layer have ‘delta’ values calculated first that neurons in the hidden layer can use in the subsequent iteration. I chose the name ‘delta’ to reflect the change the error implies on the neuron (e.g. the weight delta).
You can see that the error signal for neurons in the hidden layer is accumulated from neurons in the output layer where the hidden neuron number j is also the index of the neuron’s weight in the output layer neuron[‘weights’][j].
Let’s put all of the pieces together and see how it works.
We define a fixed neural network with output values and backpropagate an expected output pattern. The complete example is listed below.
Running the example prints the network after the backpropagation of error is complete. You can see that error values are calculated and stored in the neurons for the output layer and the hidden layer.
Now let’s use the backpropagation of error to train the network.
4. Train Network
The network is trained using stochastic gradient descent.
This involves multiple iterations of exposing a training dataset to the network and for each row of data forward propagating the inputs, backpropagating the error and updating the network weights.
This part is broken down into two sections:
- Update Weights.
- Train Network.
4.1. Update Weights
Once errors are calculated for each neuron in the network via the back propagation method above, they can be used to update weights.
Network weights are updated as follows:
Where weight is a given weight, learning_rate is a parameter that you must specify, error is the error calculated by the backpropagation procedure for the neuron and input is the input value that caused the error.
The same procedure can be used for updating the bias weight, except there is no input term, or input is the fixed value of 1.0.
Learning rate controls how much to change the weight to correct for the error. For example, a value of 0.1 will update the weight 10% of the amount that it possibly could be updated. Small learning rates are preferred that cause slower learning over a large number of training iterations. This increases the likelihood of the network finding a good set of weights across all layers rather than the fastest set of weights that minimize error (called premature convergence).
Below is a function named update_weights() that updates the weights for a network given an input row of data, a learning rate and assume that a forward and backward propagation have already been performed.
Remember that the input for the output layer is a collection of outputs from the hidden layer.
Now we know how to update network weights, let’s see how we can do it repeatedly.
4.2. Train Network
As mentioned, the network is updated using stochastic gradient descent.
This involves first looping for a fixed number of epochs and within each epoch updating the network for each row in the training dataset.
Because updates are made for each training pattern, this type of learning is called online learning. If errors were accumulated across an epoch before updating the weights, this is called batch learning or batch gradient descent.
Below is a function that implements the training of an already initialized neural network with a given training dataset, learning rate, fixed number of epochs and an expected number of output values.
The expected number of output values is used to transform class values in the training data into a one hot encoding. That is a binary vector with one column for each class value to match the output of the network. This is required to calculate the error for the output layer.
You can also see that the sum squared error between the expected output and the network output is accumulated each epoch and printed. This is helpful to create a trace of how much the network is learning and improving each epoch.
We now have all of the pieces to train the network. We can put together an example that includes everything we’ve seen so far including network initialization and train a network on a small dataset.
Below is a small contrived dataset that we can use to test out training our neural network.
Below is the complete example. We will use 2 neurons in the hidden layer. It is a binary classification problem (2 classes) so there will be two neurons in the output layer. The network will be trained for 20 epochs with a learning rate of 0.5, which is high because we are training for so few iterations.
Running the example first prints the sum squared error each training epoch. We can see a trend of this error decreasing with each epoch.
Once trained, the network is printed, showing the learned weights. Also still in the network are output and delta values that can be ignored. We could update our training function to delete these data if we wanted.
Once a network is trained, we need to use it to make predictions.
5. Predict
Making predictions with a trained neural network is easy enough.
We have already seen how to forward-propagate an input pattern to get an output. This is all we need to do to make a prediction. We can use the output values themselves directly as the probability of a pattern belonging to each output class.
It may be more useful to turn this output back into a crisp class prediction. We can do this by selecting the class value with the larger probability. This is also called the arg max function.
Below is a function named predict() that implements this procedure. It returns the index in the network output that has the largest probability. It assumes that class values have been converted to integers starting at 0.
We can put this together with our code above for forward propagating input and with our small contrived dataset to test making predictions with an already-trained network. The example hardcodes a network trained from the previous step.
The complete example is listed below.
Running the example prints the expected output for each record in the training dataset, followed by the crisp prediction made by the network.
It shows that the network achieves 100% accuracy on this small dataset.
Now we are ready to apply our backpropagation algorithm to a real world dataset.
6. Wheat Seeds Dataset
This section applies the Backpropagation algorithm to the wheat seeds dataset.
The first step is to load the dataset and convert the loaded data to numbers that we can use in our neural network. For this we will use the helper function load_csv() to load the file, str_column_to_float() to convert string numbers to floats and str_column_to_int() to convert the class column to integer values.
Input values vary in scale and need to be normalized to the range of 0 and 1. It is generally good practice to normalize input values to the range of the chosen transfer function, in this case, the sigmoid function that outputs values between 0 and 1. The dataset_minmax() and normalize_dataset() helper functions were used to normalize the input values.
We will evaluate the algorithm using k-fold cross-validation with 5 folds. This means that 201/5=40.2 or 40 records will be in each fold. We will use the helper functions evaluate_algorithm() to evaluate the algorithm with cross-validation and accuracy_metric() to calculate the accuracy of predictions.
A new function named back_propagation() was developed to manage the application of the Backpropagation algorithm, first initializing a network, training it on the training dataset and then using the trained network to make predictions on a test dataset.
The complete example is listed below.
A network with 5 neurons in the hidden layer and 3 neurons in the output layer was constructed. The network was trained for 500 epochs with a learning rate of 0.3. These parameters were found with a little trial and error, but you may be able to do much better.
Running the example prints the average classification accuracy on each fold as well as the average performance across all folds.
You can see that backpropagation and the chosen configuration achieved a mean classification accuracy of 95.238% which is dramatically better than the Zero Rule algorithm that did slightly better than 28.095% accuracy.
Extensions
This section lists extensions to the tutorial that you may wish to explore.
- Tune Algorithm Parameters. Try larger or smaller networks trained for longer or shorter. See if you can get better performance on the seeds dataset.
- Additional Methods. Experiment with different weight initialization techniques (such as small random numbers) and different transfer functions (such as tanh).
- More Layers. Add support for more hidden layers, trained in just the same way as the one hidden layer used in this tutorial.
- Regression. Change the network so that there is only one neuron in the output layer and that a real value is predicted. Pick a regression dataset to practice on. A linear transfer function could be used for neurons in the output layer, or the output values of the chosen dataset could be scaled to values between 0 and 1.
- Batch Gradient Descent. Change the training procedure from online to batch gradient descent and update the weights only at the end of each epoch.
Did you try any of these extensions?
Share your experiences in the comments below.
Review
In this tutorial, you discovered how to implement the Backpropagation algorithm from scratch.
Specifically, you learned:
- How to forward propagate an input to calculate a network output.
- How to back propagate error and update network weights.
- How to apply the backpropagation algorithm to a real world dataset..
That’s what I was looking for. Write a neural network without any libraries (scikit, keras etc.) Thnak you very much!
I’m glad to hear it!
where can i see your data set, i want to see how it looked like
Hi MO.
The small contrived dataset used for testing is listed inline in the post in section 4.2
The dataset used for the full example is on the UCI ML repository, linked in the section titled “Wheat Seeds Dataset”. Here is the direct link:
in two class classification for 0 the expected value is [1,0] for 1 its is [0,1].
how will be the output vectors for more than two class??
Hi prakash,
For multi-class classification, we can extend the one hot encoding.
Three class values for “red”, “green” “blue” can be represented as an output vector like:
1, 0, 0 for red
0, 1, 0 for green
0, 0, 1 for blue
I hope that helps.
Hi, Jason.
You’ve mentioned that there are 3 output classes.
How do we check the values which come under the 3 classes / clusters?
Could we print the data which fall under each class?
Hi Rakesh,
The data does belong to 3 classes. We can check the skill of our model by comparing the predicted classes to the actual/expected classes and calculate an accuracy measure.
I’m confused why the activation method iterates from 0 to len(inputs) – 1 instead of from 0 to len(weights) – 1. Am I missing something?
Hi Alex,
The length of weights is the length of the input + 1 (to accommodate the bias term).
We add the bias term first, then we add the weighted inputs. This is why we iterate over input values.
Does that help?
When I step through the code above for the ‘forward_propagate’ test case, I see the code correctly generate the output for the single hidden node but that output doesn’t get correctly processed when determining the outputs for the output layer. As written above in the activate function ‘for i in range(len(inputs)-1):’, when the calculation gets to the activate function for the output node for class=0, since ‘inputs’ has a single element in it (the output from the single hidden node), ‘len(inputs) – 1’ equals 0 so the for loop never executes. I’m assuming the code is supposed to read ‘for i in range(len(weights) -1):’ Does that make sense?
I’m just trying to make sure I don’t fundamentally misunderstand something and improve this post for other readers. This site has been really, really helpful for me.
I’m with you now, thanks for helping me catch-up.
Nice spot. I’ll fix up the tutorial.
Update: Fixed. Thanks again mate!
# Update network weights with error
def update_weights(network, row, l_rate):
for i in range(len(network)):
inputs = row
if i != 0:
inputs = [neuron[‘output’] for neuron in network[i – 1]]
for neuron in network[i]:
for j in range(len(inputs)-1):
neuron[‘weights’][j] += l_rate * neuron[‘delta’] * inputs[j]
neuron[‘weights’][-1] += l_rate * neuron[‘delta’]
In this fragment:
for j in range(len(inputs)-1):
neuron[‘weights’][j] += l_rate * neuron[‘delta’] * inputs[j]
neuron[‘weights’][-1] += l_rate * neuron[‘delta’]
If inputs length = 1, you are not updating weights, it’s correct? You are updating only bias, because in hidden layer is only one neuron.
Hello. In method update_weight you are doing for j in range(len(inputs) – 1). If inputs lenght = 1, you aren’t updating weights. It’s correct? Hidden layer have one neuron so in output layer weights aren’t updated
Hi Tomasz,
The assumption here is that the input vector always contains at least one input value and an output value, even if the output is set to None.
You may have found a bug though when updating the layers. I’ll investigate and get back to you.
Thanks Tomasz, this was indeed a bug.
I have updated the update_weights() function in the above code examples.
Hi, Thanks for the tutorial, I’m doing a backpropagation project at the moment so its been really useful.
I was a little confused on the back-propagation error calculation function. Does “if i != len(network)-1:” mean that if the current layer isn’t the output layer then this following code is run or does it mean that the current layer is an output layer?
Glad to hear it Michael.
The line means if the index i is not equal to the index of the last layer of the network (the output layer), then run code inside the block.
I have another question.
Would it be possible to extend the code from this tutorial and create a network that trains using the MNIST handwritten digit set? using a input unit to represent each pixel in the image. I’m also not sure whether/how I could use feature extractors for the images.
I have a project where I have to implement the Backpropagation algorithm with possibly the MNIST handwritten digit training set.
I hope my question makes sense!
Sure Michael, but I would recommend using a library like Keras instead as this code is not written for performance.
Load an image as a long list of pixel integer values, convert to floats and away you go. No feature extraction needed for a simple MLP implementation. You should get performance above 90%.
Hi Jason,
Great post!
I have a concern though:
In train_network method there are these two lines of code:
expected = [0 for i in range(n_outputs)]
expected[row[-1]] = 1
Couldn’t be the case that expected[row[-1]] = 1 will throw IndexError, as n_outputs is the size of the training set which is a subset of the dataset and row basically contains values from the whole dataset?
Hi Calin,
If I understand you correctly, No. The n_outputs var is the length of the number of possible output values.
Maybe put some print() statements in to help you better understand what values variables have.
Hmm..I ran the entire code (with the csv file downloaded from), added some breakpoints and this is what I got after a few iterations:
n_outputs = 168
row[-1] = 201
which is causing IndexError: list assignment index out of range.
I’ve got the same error, That my list assignment index is out of range
Sorry to hear that, did you try running the updated code?
This is error of csv read. Try to reformat it with commas. For me it worked
What was the problem and fix exactly Ivan?
The data file () has a few lines with double tabs (\t\t) as the delimiter — removing the double tabs and changing tabs to commas fixed it.
Thanks for the good article.
Thanks for the note Bob.
I’ve had the same error at the ‘train_network’ function. Is your dataset fine? I’ve had some problems because the CSV file wasn’t loaded correctly due to my regional windows settings. I’ve had to adjust my settings and everything worked out alright.
Thanks for such a good article.
Just one question: in the equation “weight = weight + learning_rate * error * input”, why there is an “input”? IMO it should be: “weight = weight + learning_rate * error”?
The var names and explanation are correct.
The update equation is:
For the input layer the input are the input data, for hidden layers the input is the output of the prior layer.
Jason,
Thanks for the code and post.
Why is “expected” in expected = [0 for i in range(n_outputs)] initialized to [0,0] ?
Should not the o/p values be taken as expected when training the model ?
i.e for example in case of Xor should not 1 be taken as the expected ?
Hi Madwadasa,
Expected is a one-hot encoding. All classes are “0” expect the actual class for the row which is marked as a “1” on the next line.
Hello, I have a couple more questions. When training the network with a dataset, does the error at each epoch indicate the distance between the predicted outcomes and the expected outcomes together for the whole dataset? Also when the mean accuracy is given in my case being 13% when I used the MNIST digit set, does this mean that the network will be correct 13% of the time and would have an error rate of 87%?
Hi Michael,
The epoch error does capture how wrong the algorithm is on all training data. This may or may not be a distance depending on the error measure used. RMSE is technically not a distance measure, you could use Euclidean distance if you like, but I would not recommend it.
Yes, in generally when the model makes predictions your understanding is correct.
Hi Jason,
in the excerpt regarding error of a neuron in a hidden layer:
“Where error_j is the error signal from the jth neuron in the output layer, weight_k is the weight that connects the kth neuron to the current neuron and output is the output for the current neuron.”
is the k-th neuron a neuron in the output layer or a neuron in the hidden layer we’re “on”? What about the current neuron, are you referring to the neuron in the output layer? Sorry, english is not my native tongue.
Appreciate your work!
Bernardo
It would have been better if recall and precision were printed. Can somebody tell me how to print them in the above code.
You can learn more about precision and recall here:
Hello Jason, great tutorial, I am developer and I do not really know much about this machine learning thing but I need to extend this your code to incorporate the Momentum aspect to the training, can you please explain how I can achieve this extension?
Sorry, I don’t have the capacity to write or spell out this change for you.
My advice would be to read a good book on the topic, such as Neural Smithing:
Hi Jason,
I have my own code written in C++, which works similar to your code. My intention is to extend my code to convolutional deep neural nets, and i have actually written the convolution, Relu and pooling functions however i could not begin to apply the backpropagation i have used in my shallow neural net, to the convolutional deep net, cause i really cant imagine the transition of the backpropagation calculation between the convolutional layers and the standard shallow layers existing in the same system. I hoped to find a source for this issue however i always come to the point that there is a standard backpropagation algorithm given for shallow nets that i applied already. Can you please guide me on this problem?
I”d love to guide you but I don’t have my own from scratch implementation of CNNs, sorry. I’m not best placed to help at the moment.
I’d recommend reading code from existing open source implementations.
Good luck with your project.
Thank you, I was looking for exactly this kind of ann algorith. A simple thank won’t be enough tho lol
I’m glad it helped.
The best way to help is to share the post with other people, or maybe purchase one of my books to support my ongoing work:
Great one! .. I have one doubt .. the dataset seeds contains missing features/fields for some rows.. how you are handling that …
You could set the missing values to 0, you could remove the rows with missing values, you could impute the missing values with mean column values, etc.
Try a few different methods and see what results in the best performing models.
What if I have canonical forms like “male” or “female” in my dataset… Will this program work even with string data..
Hi Manohar,
No, you will need to convert them to integers (integer encoding) or similar.
Great job! this is what i was looking for ! thank you very much .
However i already have a data base and i didn’t know how to make it work with this code how can i adapt it on my data
Thank you
This process will help you work through your predictive modeling problem:
Thanks for such a great article..
I have one question, in update_weights why you have used weight=weight+l_rate*delta*input rather than weight=weight+l_rate*delta?
You can learn more about the math in the book on the topic.
I recommend Neural Smithing:
Thanks for a good tutorial.
I have some IndexError: list assignment index out of range. And I cannot fix it with comma or full-stop separator.
What is the full error you are getting?
Did you copy-paste the full final example and run it on the same dataset?
line 151 :
expected[row[-1]] = 1
IndexError : list assignment index out of range
Is this with a different dataset?
if it is a different dataset, what do i need to do to not get this error
The dataset that was given was for training the network. Now how do we test the network by providing the 7 features without giving the class label(1,2 or 3) ?
You will have to adapt the example to fit the model on all of the training data, then you can call predict() to make predictions on new data.
Ok Jason, i’ll try that and get back to you! Thank you!
Just a suggestion for the people who would be using their own dataset(not the seeds_dataset) for training their network, make sure you add an IF loop as follows before the 45th line :
if minmax[i][1]!=minmax[i][0]
This is because your own dataset might contain same values in the same column and that might cause a divide by zero error.
Thanks for the tip Karan.
Thanks jason for the amazing posts of your from scratch pyhton implementations! i have learned so much from you!
I have followed through both your naive bayes and backprop posts, and I have a (perhaps quite naive) question:
what is the relationship between the two? did backprop actually implement bayesian inference (after all, what i understand is that bayesian = weights being updated every cycle) already? perhaps just non-gaussian? so.. are non-gaussian PDF weight updates not bayesian inference?
i guess to put it simply : is backpropagation essentially a bayesian inference loop for an n number of epochs?
I came from the naive bayes tutorial wanting to implement backpropagation together with your naive bayes implementation but got a bit lost along the way.
sorry if i was going around in circles, i sincerely hope someone would be able to at least point me on the right direction.
Great question.
No, they are both very different. Naive bayes is a direct use of the probabilities and bayes theorem. The neural net is approximating a mapping function from inputs and outputs – a very different approach that does not directly use the joint probability.
How did you decide that the number of folds will be 5 ? Could you please explain the significance of this number. Thank You.
In this case, it was pretty arbitary.
Generally, you want to split the data so that each fold is representative of the dataset. The objective measure is how closely the mean performance reflect the actual performance of the model on unseen data. We can only estimate this in practice (standard error?).
Dear Jason,
thank you for the reply! I read up a bit more about the differences between Naive Bayes (or Bayesian Nets in general) and Neural Networks and found this Quora answer that i thought was very clear. I’ll put it up here to give other readers a good point to go from:
TL:DR :
– they look the same, but every node in a Bayesian Network has meaning, in that you can read a Bayesian network structure (like a mind map) and see what’s happening where and why.
– a Neural Network structure doesn’t have explicit meaning, its just dots that link previous dots.
– there are more reasons, but the above two highlighted the biggest difference.
Just a quick guess after playing around with backpropagation a little: the way NB and backprop NN would work together is by running Naive Bayes to get a good ‘first guess’ of initial weights that are then run through and Neural Network and Backpropagated?
Please note that a Bayesian network and naive bayes are very different algorithms.
Hi Jason,
Further to this update:
Update Jan/2017: Changed the calculation of fold_size in cross_validation_split() to always be an integer. Fixes issues with Python 3.
I’m still having this same problem whilst using python 3, on both the seeds data set and my own. It returns an error at line 75 saying ‘list object has no attribute ‘sum” and also saying than ‘an integer is required.’
Any help would be very much appreciated.
Overall this code is very helpful. Thank you!
Sorry to hear that, did you try copy-paste the complete working example from the end of the post and run it on the same dataset from the command line?
Yes I’ve done that, but still the same problem!
Hello jason,
please i need help on how to pass the output of the trained network into a fuzzy logic system if possible a code or link which can help understand better. Thank you
Awesome Explanation
Thanks!
Hello Jason
I m getting list assignment index out or range error. How to handle this error?
The example was developed for Python 2, perhaps this is Python version issue?
Thanks but I think python is not a good choice…
I think it is a good choice for learning how backprop works.
What would be a better choice?
Hey, Jason Thanks for this wonderful lecture on Neural Network.
As I am working on Iris Recognition, I have extracted the features of each eye and store it in .csv file, Can u suggest how further can I build my Backpropagation code.
As when I run your code I am getting many errors.
Thank you
This process will help you work through your modeling problem:
Could you please convert this iterative implementation into matrix implementation?
Perhaps in the future Jack.
Hi Jason,
In section 4.1 , may you please explain why you used ### inputs = row[:-1] ### ?
Thanks
Yes. By default we are back-propagating the error of the expected output vs the network output (inputs = row[:-1]), but if we are not the output layer, propagate the error from the previous layer in the network (inputs = [neuron[‘output’] for neuron in network[i – 1]]).
I hope that helps.
Thanks for your respond. I understand what you said , the part I am no understanding is the [:-1] . why eliminating the last list item ?
It is a range from 0 to the second last item in the list, e.g. (0 to n-1)
In function call, def backward_propagate_error(network, expected):
how much i understand is , it sequentially pass upto
if i != len(network)-1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron[‘weights’][j] * neuron[‘delta’])
My question is which value is used in neuron[‘delta’]
delta is set in the previous code block. It is the error signal that is being propagated backward.
Thanks very much!
You’re welcome.
Hi Jason
Thank you very much for this awesome implementation of neural network,
I have a question for you : I want to replace the activation function from Sigmoid
to RELU . So, what are the changes that I should perform in order to get
correct predictions?
I think just a change to the transfer() and transfer_derivative() functions will do the trick.
Awesome !
Thank you so much
You’re welcome.
Hi Jason,
Thank you very much for this wonderful implementation of Neural Network, it really helped me a lot to understand neural networks concept,
n_inputs = len(dataset[0]) – 1
n_outputs = len(set([row[-1] for row in dataset]))
network = initialize_network(n_inputs, 2, n_outputs)
train_network(network, dataset, 0.5, 20, n_outputs)
What do n_inputs and n_outputs refer to? According to the small dataset used in this section, is n_inputs only 2 and n_outputs only 2 (0 or 1) or I am missing something?
Input/outputs refers to the number of input and output features (columns) in your data.
Is the program training the network for 500 epochs for each one of the k-folds and then testing the network with the testing data set?
Hi Yahya,
5-fold cross validation is used.
That means that 5 models are fit and evaluated on 5 different hold out sets. Each model is trained for 500 epochs.
I hope that makes things clearer Yahya.
Yes you made things clear to me, Thank you.
I have two other questions,
How to know when to stop training the network to avoid overfitting?
How to choose the number of neurons in the hidden layer?
You can use early stopping, to save network weights when the skill on a validation set stops improving.
The number of neurons can be found through trial and error.
I am working on a program that recognizes handwritten digits, the dataset is consisting of pictures (45*45) pixels each, which is 2025 input neurons, this causes me a problem in the activation function, the summation of (weight[i] * input[i]) is big, then it gives me always a result of (0.99 -> 1) after putting the value of the activation function in the Sigmoid function, any suggestions?
I would recommend using a Convolutional Neural Network rather than a Multilayer Perceptron.
In section 3.2. Error Backpropagation, where did output numbers came from for testing backpropagation
‘output’: 0.7105668883115941
‘output’: 0.6213859615555266
‘output’: 0.6573693455986976
Perhaps from outputs on test forward propagation [0.6629970129852887, 0.7253160725279748] taking dd -> derivative = output * (1.0 – output), problem is they don’t match, so I’m a bit lost here…
thanks!
Awesome article!!!
In that example, the output and weights were contrived to test back propagation of error. Note the “delta” in those outputs.
Hi Jason,
I am using the MNIST data set to implement a handwritten digit classifier. How many training examples will be needed to get a perfomance above 90%.
I would recommend using a CNN on MNIST. See this tutorial:
Hi Jason,
Your blog is totally awesome not only by this post but also for the whole series about neural network. Some of them explained so much useful thing than others on Internet. They help me a lot to understand the core of network instead of applying directly Keras or Tensorflow.
Just one question, if I would like to change the result from classification to regression, which part in back propagation I need to change and how?
Thank you in advance for your answer
Thanks Huyen.
You would change the activation function in the output layer to linear (e.g. no transform).
Hi Jason,
I am playing around with your code to better understand how the ANN works. Right now I am trying to do predictions with a NN, that is trained on my own dataset, but the program returns me one class label for all rows in a test dataset. I understand, that normalizing dataset should help, but it doesn`t work (I am using your minmax and normalize_dataset functions). Also, is there a way to return prediction for one-dimensional dataset?
Here is the code (sorry for lack of formatting):
def make_predictions():
dataset = [[29,46,107,324,56,44,121,35,1],
[29,46,109,327,51,37,123,38,1],
[28,42,107,309,55,32,124,38,1],
[40,112,287,59,35,121,36,1],
[27,43,129,306,75,41,107,38,1],
[28,38,127,289,79,40,109,37,1],
[29,37,126,292,77,35,100,34,1],
[30,40,87,48,77,51,272,80,2],
[26,37,88,47,84,44,250,80,2],
[29,39,91,47,84,46,247,79,2],
[28,38,85,45,80,47,249,78,2],
[28,36,81,43,76,50,337,83,2],
[28,34,75,41,83,52,344,81,2],
[30,38,80,46,71,53,347,92,2],
[28,35,72,45,64,47,360,101,2]]
network = [[{‘weights’: [0.09640510259345969, 0.37923370996257266, 0.5476265202749506, 0.9144446394025773, 0.837692750149296, 0.5343300438262426, 0.7679511829130964, 0.5325204151469501, 0.06532276962299033]}],
[{‘weights’: [0.040400453542770665, 0.13301701225112483]}, {‘weights’: [0.1665525504275246, 0.5382087395561351]}, {‘weights’: [0.26800994395551214, 0.3322334781304659]}]]
# minmax = dataset_minmax(dataset)
# normalize_dataset(dataset, minmax)
for row in dataset:
prediction = predict(network, row)
print(‘Expected=%d, Got=%d’ % (row[-1], prediction))
I would suggest exploring your problem with the Keras framework:
Hi Jason!
In the function “backward_propagate_error”, when you do this:
neuron[‘delta’] = errors[j] * transfer_derivative(neuron[‘output’])
The derivative should be applied on the activation of that neuron, not to the output . Am I right??
neuron[‘delta’] = errors[j] * transfer_derivative(activate(neuron[‘weights’], inputs))
And inputs is:
inputs = row[-1]
if i != 0:
inputs = [neuron[‘output’] for neuron in self.network[i-1]]
Thank you! The post was really helpful!
Hello Jason!
This is a very interesting contribution to the community 🙂
Have you tried using the algorithm with other activation functions?
I tried with Gaussian, tanh and sinx, but the accuracy was not that high, so I think that I omitted something. What I altered were the activation functions and the derivatives. Is there something else that needs to be changed?
Sigmoid was the defacto standard for many years because it performs well on many different problems.
Now the defacto standard is ReLU.
Sigmoid and ReLU are transfer functions right ?
Activation function is just the sum of all weights and inputs
You are correct, but in some frameworks, transfer functions are called activation functions:
Thanks for the great post. Here is some observation that I am not able to understand. In the back ward propagate you are not taking all the weights and only considering the jth. Can you kindly help understand. I was under the impression that the delta from output is applied across all the weights,
for neuron in network[i + 1]:
error += (neuron[‘weights’][j] * neuron[‘delta’])
I understand you do not want to take in the bias weight hence the exclusion of the last weight in neuron. I kind of get stumped on bias.
Thanks for the great article. In the backward propagate, the delta value is applied for each weight across the neuron and the error is summed. I am curious why is the delta not applied to individual weights of the neuron and the error summed for that neuron. Can you please clarify?
Why don’t you split the data into TrainData and TestData, like 80% of the dataset for training and 20% for testing, because if you train with 100% of rows of the dataset and then test some rows of the dataset the accuracy will be good . But if you put new data on the seeds.csv the model will work with less accuracy, Right?
You can, k-fold cross validation generally gives a better estimate of model performance.
Once we have the estimate and choose our model, we can fit the final model on all available data and make predictions on new data:
Thanks for the post! I have a question about cross-validation. The dataset of seeds is perfect for 5 folds but for a dataset of 211? I’ll have uniformly sized subset right? (211/5) Can you give me a suggestion how I could handle that ?
Thanks in advanced.
One way is that some records can be discarded to give even sized groups.
Thanks so much for the tutorial. It was really helpful!
I’m glad it helped.
Hello Jason,
any advice on how to handle multi-classifier problems when the classes have high cardinality ?
I’m thinking about input data of search engines linked to choosen urls.
Ouch, consider modeling it as regression instead (e.g. a rating or recommender system).
Ok thank you very much Jason.
But it wont work with searches unseen by the algorithm.
I red something in the books “Programming collective intelligence” about a neural net from scratch for this king of problem but I don’t understang how it works for the moments…
Consider focusing on one measure/metric that really matters in your domain, then try a suite of framings of the problem and different algorithms to get a feeling for what might work best.
I am not able to understand the above code.So, I request you to explain me the above code
Which part do you not understand exactly?
Hey Jason..am a novice in machine learning..have a small question…how can I track the timesteps involved in the algorithm with accordance with the code?
What do you mean by time steps?
Hi, Jason. I am so confused, in the result, why there are 4 set of [output,weight,delta]
like this:
[{‘output’: 0.9999930495852168, ‘weights’: [0.9315463130784808, 1.0639526745114607, 0.9274685127907779], ‘delta’: -4.508489650980804e-09}, {‘output’: 0.9992087809233077, ‘weights’: [-2.4595353900551125, 5.153506472345162, -0.5778256160239431], ‘delta’: 1.940550145482836e-06}]
[{‘output’: 0.01193860966265472, ‘weights’: [2.3512725698865053, -8.719060612965613, 1.944330467290268], ‘delta’: -0.0001408287858584854}, {‘output’: 0.988067899681387, ‘weights’: [-2.2568526798573116, 8.720113230271012, -2.0392501730513253], ‘delta’: 0.0001406761850156443}]
after the backpropagation we find the optimal weights to get minimum error, what does these 4 group means?
E
That is the internal state of the whole trained network.
hi Jason
thanks for your code and good description here, i like it so much.
i run your example code and encounter with an error same others whom left note here
the error is:
expected[row[-1]] = 1
IndexError: list assignment index out of range
how i can fix this error?
The code was written for Python 2.7, confirm that this is your Python version.
Also confirm that you have copied the code exactly.
Dear Jason,
i have this question about Back Propagate Error
1- derivative sigmoid = output * (1.0 – output)
That is ok
2- error = (expected – output) * transfer_derivative(output)
Ok but it also means that error == 0 for output = 1 whatever the expected is because transfer_derivative(1) ==0
So, whatever the expected , error is nil if output is 1 …
Is there something rotten here?
Thanks
Jerome
Thank you Jason, It’s a great tutorial and really helpful for me!
But I have to say that trying to reimplement your code strongly increased my ability of debugging 🙂
Thanks.
Hi Jason,
Thanks for sharing your code. I’m a PhD candidate in machine learning, and I have a doubt about the weights update in section 4.1:
weight = weight + learning_rate * error * input
Should not it be as follows?
weight = weight – learning_rate * error * input
Thanks again for sharing this.
Regards,
Victor.
I didn’t say anything, my mistake in understanding.
Thanks again for sharing your work.
Hi..
Thanks for ur coding. It was too helpful. can u suggest me how to use this code for classifying tamil characters. i have tried in cnn and now i need to compare the result with bpn. can u pls suggest me.
thank you
Perhaps this tutorial on classifying with a CNN would be more useful to you:
Thank you sir. With this tutorial i have implemented cnn sir. but for BPN i am getting error rate 687.203 sir. i dnt know what to do sir. can u help me sir.
Thank you
What is the problem exactly?
Classification of Tamil characters sir. I have 144 different classes. I have taken 7 glcm features of each character and I need to train this features in backpropagation and predict the character to which class it belongs.
Sound like a great project!
Hi, so I wasn’t following this tutorial when implementing my neural network from scratch, and mine is in JavaScript. I just need help with the theory. How do I calculate the error for each node in the net so that I can incrementally change the weights? Great tutorial btw
Hahaha nevermind, it was my code
Multidimensional arrays and stuff boggle the mind hah
Glad to hear you worked it out.
[ 6.38491205 5.333345 4.81565798 5.43552204 9.96445304 2.57268919 4.07671018 1.5258789 6.19728301 0 1 ]
Dear sir,
the above mentioned numerical values are extracted from the dental x-ray image using gray level co occurrence matrix [10 inputs and 1 output]. This dataset is used as a input for BPN classifier. whether the same data set as[.csv] file can be used as the input for DEEP Convolutional Neural Network technique ? and can i get the output as image ? for example if i give the dental x ray images as numerical values i have to get the caries affected teeth as the output for the given dataset.
That sounds like a great problem. It may be possible.
I would recommend using deep CNNs.
Perhaps this tutorial will give you some ideas on how to get started:
You may want to look at some papers on object localization in images. I don’t have material on it sorry.
can i get the example code for dental caries detection using deep Convolutional Neural Network for the given dataset as x ray images.
I do not have sample code for this problem, sorry.
Very nice explanation, thank you.
I have some questions.
1) weight = weight + learning_rate * error * input
Do I really need to multiply it with input ? For example here they dont it multiply it with input. At least, I think that…
2) Is your method same as in?
i think yes, but again, Im not sure and Im confused by that input multiplication.
3) What is exactly loss function in your example (I usually found some derivations of loss (cost ?) function (in other explanations), not transfer function derivation)? Im actually very confused by notation which I find around …
4) momentum and weight decay. In your example, you can implement them that you substract calculated decay and add calculated momentum (to weight update) ? Again, I found forms which substract both and weight update as w + deltaW, so again I’m mega confused by notation for backpropagation which I found…
Sorry for dumb questions, … math is not my strong side, so many things which can be inferred by math sense are simply hidden for me.
*substract both and weight update as w + deltaW, so again
I found above sentence as nonsense, must be side effect of my confusion …
Hang in there.
Pick one tutorial and focus on it. Jumping from place to place will make things worse for sure.
Hi John, good questions.
According to my textbook, yes.
I can’t speak for random sites on the internet sorry.
Loss is prediction error. You can change this to other forms like MAE or MSE.
No decay or momentum in this example. Easy to add if you want. There are many ways to dial in the learning process. No hard and fast rules, just some norms that people reuse.
A VERY GOOD TUTORIAL SIR…
Sir i am implementing remote sensed image classification using BPN neural network using IDL.
I am not finding good resources on constructing features for input dataset and also number of hidden layers and number of neurons in hidden layer.
Any resources you know, can help me?
Thanks
The CNN will perform feature extraction automatically, you could explore using different filters on the data to see if it helps the network.
The number of layers and neurons/filters per layer must be found using trial and error. It is common to copy the designs from other papers as a starting point.
I hope that helps.
Nice tutorial, very clean and readable code. =) thank you!
Thanks pero.
1. I expect that this code is deep learning (many hidden layer) but not. One sentence is easy (“you can add more hidden layer as explained”) but to do is not as easy as you said.
2. I think your code is wrong.
neuron[‘delta’] = errors[j] * transfer_derivative(neuron[‘output’])
but
Error = Target – ActivatedOutputNode
Delta = Error * Derivative(NONActivatedOutputNode)
I mean you use the same ‘output’ variable both error and delta. But in error it must be activated one, in delta it must be NONactivated one.
Nice tutorial, very clean code and beginner-friendly. Thank you very much!
Thanks, I’m glad you found it useful!
I only have one slight issue: I implemented this in Ruby and I tried to train it using the IRIS dataset, keeping the network simple (1 input layer, 1 hidden layer, 1 output layer) and after decreasing for a while the error rate keeps increasing. I tried lowering the learning rate, even making it dynamic so it decreases whenever the error increases but it doesn’t seem to help. Could you give me some advice? P.S sorry for my bad English
Here is an example of backprop I developed in Ruby:
Hi Jason, I enjoy so much your tutorials. Can you do a tutorial implementing BackPropagation Through Time? Thanks man.
Thanks for the suggestion.
I have a few posts on the general topic, for example:
Hello Jason,
Thank you for the great tutorial!
I would like to know how I can obtain the weight*input for every single neuron in the network…
I’ve been trying these lines –
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron[‘weights’], inputs)
neuron[‘output’] = transfer(activation)
new_inputs.append(neuron[‘output’])
but the activation variable here is a single value…what I understand is that if I have set n_hidden = 5 (number of hidden layers), I should get N*5 (N = number of features in the dataset) outputs if I print the activation…
Kindly help 🙂
Thank you!
Dear Jason,
I have a question on the delta calculation at the output layer, where
the primary value is the difference between the neuron output and
the expected output. And we are then multiplying this difference
with the transfer_derivative. where transfer_derivative is a function
of neuron’s output.
My question is, is it correct to find the difference between the
neuron’s output and the expected output?
In this case of the example, you have chosen digital outputs [0,1]
and hence it may not have come up .. but my point is…
one is already subjected to a transfer function, and one is not.
The neuron’s output is always subjected to a transfer function and
hence will be in a specific range, say -.5 to +.5 or something..
But the expected output is the user’s choice .. isnt it?
user can have an expected value of say 488.34, for some stock price
learning.. then is it still correct to find this primary difference
between the expected output and the neuron output, at the output
layer delta calculation?
shoulnt the expected output also be subjected to the same transfer
function before finding the difference? Or the otherway, like
shoulnt the neuron ouptut be subjected to a reverse transfer function
before comparing with the expected output directly?
Thanks and Regards,
Jose Panakkel
Hello Jason, great tutorial that helped me a lot!
I have a question concerning the back-propagation : what if instead of having an error function I only have a desired gradient for the output (in the case of an actor-critic model for example)?
How can I change your backprop function to make it work? Or can I just use the gradient as the error?
Sorry, I don’t follow, perhaps you can restate your question with an example?
Hi Jason , thank you for providing this tutorial. I’m confused of how can I implement the same backpropagation algorithm with output not binary. Since I noticed that your example has binary output. Like predicting for stock price given the open, high, low and close values. Regards.
Use a library like Keras. Start here:
Hi Jason,
great article. I have an interest in NN but I am not that good at python.
Want I wanted to try was to withhold say 5 rows from the dataset and have the trained network predict the results for those rows. these is is different from what I think the example does which is rolling predictions with the learning. Removing 5 rows from the dataset is of course easy but my pitiful attempts at predicting with unseen data like below fail ((I guess network is not in scope at the end): any help appreciated!
# predict unseen data
unseendataset = [[12.37,13.47,0.8567,5.204,2.96,3.919,5.001],
[12.19,13.2,0.8783,5.137,2.981,3.631,4.87],
[11.23,12.88,0.8511,5.14,2.795,4.325,5.003],
[13.2,13.66,0.8883,5.236,3.232,8.315,5.056],
[11.84,13.21,0.8521,5.175,2.836,3.598,5.044],
[12.3,13.34,0.8684,5.243,2.974,5.637,5.063]]
for row in unseendataset:
prediction2 = predict(network, row)
print(‘Predicted=%d’ % (prediction2))
I would recommend starting with Keras rather than coding the algorithm from scratch.
Start here:
Hi Jason, I am trying to generalize your implementation to work with a variable number of layers and nodes. However, whenever I try to increase the number of nodes too much it stops working (the network freezes at one error rate and all output nodes are active, i.e. giving 1). Although the code would work if I decreased the layers and the errors will go down.
Is there something I am missing when using too many layers? The concepts should be the same.
I trained a network with 4 layers: [14,10,10,4] and it worked.
I trained a network with 4 layers [14,100,40,4] and it is stuck. Same dataset.
My code is here if you are looking in more details:
Thanks
What problem do you get exactly?
Hi, Jason Brownlee,
can we extend this code for 2 or more hidden layers ?
Sure.
Dear Mr. Brownlee,
I’m trying to alter the code to represent a regression problem (sigmoid on hidden layer, linear on output layer). As far as I know, the main part of the code that would have to be modified is the FF algorithm. I’ve rewritten the code as below:
With this code, I’m getting an “OverflowError: (34, ‘Result too large’)” error. Could you please tell what I’m doing wrong? All the other parts of the code are as you’ve written.
What did you change exactly? Can you highlight the change for me?
Also, try using pre tags.
(I don’t know how to highlight the change, sorry!)
I got the hidden layer (network[0]), and I applied your algorithm (calculate activation, transfer the activation to the output, append that to a new list called “new_inputs”).
After that, I get the output layer (network[-1]), I calculate the activation with the “new_inputs”, but I do NOT apply the sigmoid transfer function (so, the outputs should be linear). The results are appended to a new list, which is set to be the return of the function.
Would that be the best way to remove the sigmoid function from the output layer, making the code a regression, instead of a classification?
Sounds good. I don’t have any good ideas, I’d recommend stepping through some calculations to help spot where it is going wrong.
You may want to consider moving to an open source neural net library, such as Keras:
Hi Jason, nice posting and it really helps a lot
for j in range(len(layer)):
neuron = layer[j]
neuron[‘delta’] = errors[j] * transfer_derivative(neuron[‘output’])
Should the neuron[‘output’] be the output of the activation function instead of the transfer function here?
hi jason, nice post its really helps alot.
please tell me how we can change the neuron in hidden layer and in output layer?
and what will be the result when we change the neuron in hidden layer and in output layer?
in this tutorial u take one hidden layer,so can we use more than one hidden layer? and how?
please tell me i m waiting
Perhaps you would be better served by starting with a neural network library such as Keras:
Dear Mr. Brownlee,
I’m trying to adapt the code to support many hidden layers. I’ve adapted the code as below, with a new input called “n_layers”, to insert N hidden layers in the network.
# Initialize a network with “n_layers” hidden layers
def initialize_network3(n_inputs, n_hidden, n_layers, n_outputs):
network = list()
for i in range(n_layers):
hidden_layer = [{‘weights’:[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{‘weights’:[random() for i in range(n_hidden)]} for i in range(n_outputs)]
network.append(output_layer)
return network
When I try to run the code, it shows the error below. Do you have any idea why?
in backward_propagate_error(network, expected)
78 error = 0.0
79 for neuron in network[i + 1]:
—> 80 error += (neuron[‘weights’][j] * neuron[‘delta’])
81 errors.append(error)
82 else:
IndexError: list index out of range
Hi,
In the output/last layer when we are calculating the backprop error why are we multiplying with the transfer derivative with the (expected-output)?? transfer derivative is already canceled out for the the last layer , the update should be only (expected-output)*previous_layer_input , ???
Thanks | https://machinelearningmastery.com/implement-backpropagation-algorithm-scratch-python/ | CC-MAIN-2017-43 | refinedweb | 11,154 | 64.3 |
Red Hat Bugzilla – Bug 502977
panic in posix_lock_file() with GFS2 over NFS
Last modified: 2016-04-26 10:05:30 EDT
Description of problem:
With the error messages seen in the logs as follows:
----------------
May 5 14:00:19 heim1 kernel: lockd: grant for unknown block
May 5 14:00:19 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:00:19 heim1 kernel:
May 5 14:01:52 heim1 kernel: lockd: grant for unknown block
May 5 14:01:52 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:01:52 heim1 kernel:
May 5 14:02:36 heim1 kernel: lockd: grant for unknown block
May 5 14:02:36 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:02:36 heim1 kernel:
May 5 14:04:28 heim1 kernel: lockd: grant for unknown block
May 5 14:04:28 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:04:28 heim1 kernel:
May 5 14:04:37 heim1 kernel: lockd: grant for unknown block
May 5 14:04:37 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:04:37 heim1 kernel:
May 5 14:04:50 heim1 kernel: lockd: grant for unknown block
May 5 14:04:50 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:04:50 heim1 kernel:
May 5 14:06:52 heim1 kernel: lockd: grant for unknown block
May 5 14:06:52 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:06:52 heim1 kernel:
May 5 14:08:06 heim1 kernel: lockd: grant for unknown block
May 5 14:08:06 heim1 kernel: dlm: dlm_plock_callback: lock granted after lock request failed; dangling lock!
May 5 14:08:06 heim1 kernel:
----------------
Kernel Oops on a cluster node:
This is node 1 of a two node cluster set up to NFS export home directories.
Mar 17 11:02:50 heim1 kernel: Unable to handle kernel NULL pointer dereference at 0000000000000010 RIP:
Mar 17 11:02:50 heim1 kernel: [<ffffffff800e4e68>] posix_lock_file+0x6/0xf
Mar 17 11:02:50 heim1 kernel: PGD 221e6d067 PUD 22227d067 PMD 0
Mar 17 11:02:50 heim1 kernel: Oops: 0000 [1] SMP
Mar 17 11:02:50 heim1 kernel: last sysfs file: /devices/pci0000:00/0000:00:00.0/irq
Mar 17 11:02:50 heim1 kernel: CPU 4
Mar 17 11:02:50 heim1 kernel: Modules linked in: ip_vs nfsd exportfs lockd nfs_acl auth_rpcgss sunrpc autofs4 ipmi_devintf ipmi_si ipmi_msghandler lock_dlm gfs2(U) dlm configfs bo\
nding ipv6 xfrm_nalgo crypto_api dm_emc dm_round_robin dm_multipath scsi_dh video hwmon backlight sbs i2c_ec i2c_core button battery asus_acpi acpi_memhotplug ac parport_pc lp par\
port ide_cd i5000_edac sg e1000e edac_mc bnx2 cdrom serio_raw pcspkr dm_raid45 dm_message dm_region_hash dm_mem_cache dm_snapshot dm_zero dm_mirror dm_log dm_mod qla2xxx scsi_tran\
sport_fc ata_piix libata shpchp megaraid_sas sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd
Mar 17 11:02:50 heim1 kernel: Pid: 3982, comm: gfs_controld Tainted: G 2.6.18-128.1.1.el5 #1
Mar 17 11:02:50 heim1 kernel: RIP: 0010:[<ffffffff800e4e68>] [<ffffffff800e4e68>] posix_lock_file+0x6/0xf
Mar 17 11:02:50 heim1 kernel: RSP: 0018:ffff810221ee3ea0 EFLAGS: 00010246
Mar 17 11:02:50 heim1 kernel: RAX: 0000000000000000 RBX: ffff81012c695000 RCX: 0000000000000000
Mar 17 11:02:50 heim1 kernel: RDX: 0000000000000000 RSI: ffff81012c695070 RDI: ffff81022d47a380
Mar 17 11:02:50 heim1 kernel: RBP: ffff81012c695070 R08: 0000000000000000 R09: 7fffffffffffffff
Mar 17 11:02:50 heim1 kernel: R10: 000000000000000c R11: 000000000003a2b4 R12: ffff81022d47a380
Mar 17 11:02:50 heim1 kernel: R13: ffff81022f14e8e0 R14: ffffffff88677fdf R15: 000000000cae4450
Mar 17 11:02:50 heim1 kernel: FS: 00002b6752d64a10(0000) GS:ffff81022fc1ed40(0000) knlGS:0000000000000000
Mar 17 11:02:50 heim1 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Mar 17 11:02:50 heim1 kernel: CR2: 0000000000000010 CR3: 00000002243e9000 CR4: 00000000000006e0
Mar 17 11:02:50 heim1 kernel: Process gfs_controld (pid: 3982, threadinfo ffff810221ee2000, task ffff81022f489040)
Mar 17 11:02:50 heim1 kernel: Stack: ffffffff885377f6 0000000100000001 0000000200000000 000000010000000c
Mar 17 11:02:50 heim1 kernel: 0007000200000000 000000000003a2b4 0000000000000000 7fffffffffffffff
Mar 17 11:02:50 heim1 kernel: 000000000000000c ffff81022fe658c0 0000000000000040 00007fff57cd43c0
Mar 17 11:02:50 heim1 kernel: Call Trace:
Mar 17 11:02:50 heim1 kernel: [<ffffffff885377f6>] :dlm:dev_write+0x157/0x207
Mar 17 11:02:50 heim1 kernel: [<ffffffff8001659e>] vfs_write+0xce/0x174
Mar 17 11:02:50 heim1 kernel: [<ffffffff80016e6b>] sys_write+0x45/0x6e
Mar 17 11:02:50 heim1 kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
Mar 17 11:02:50 heim1 kernel:
Mar 17 11:02:50 heim1 kernel:
Mar 17 11:02:50 heim1 kernel: Code: 48 8b 78 10 e9 fd fb ff ff 41 57 49 89 ff 41 56 41 55 41 54
Mar 17 11:02:50 heim1 kernel: RIP [<ffffffff800e4e68>] posix_lock_file+0x6/0xf
Mar 17 11:02:50 heim1 kernel: RSP <ffff810221ee3ea0>
Mar 17 11:02:50 heim1 kernel: CR2: 0000000000000010
Mar 17 11:02:50 heim1 kernel: <0>Kernel panic - not syncing: Fatal exception
The filesystem is GFS2 and the panic happens on the node which is exporting the GFS2 filesystem through NFS configured in cluster.
Version-Release number of selected component (if applicable):
This panic has been reproduced on 2.6.18-150.el5.
How reproducible:
100%
Steps to Reproduce:
This how I reproduced it - may not represent what the customer was doing.
1. Setup cluster with two nodes and a GFS2 filesystem.
2. Export the GFS2 filesystem via NFS from node A and mount on node B
3. Export the GFS2 filesystem via NFS from node B and mount on node A
4. On both NFS clients run this:
for i in `seq 1 1000`
do
./flock $i &
done
[see attached flock.c for flock program]
One of the nodes will panic within 15 mins.
Additional info:
crash> bt
PID: 5229 TASK: ffff81022b1cc0c0 CPU: 1 COMMAND: "gfs_controld"
#0 [ffff81021d989c00] crash_kexec at ffffffff800aaa19
#1 [ffff81021d989cc0] __die at ffffffff8006520f
#2 [ffff81021d989d00] do_page_fault at ffffffff80066e1c
#3 [ffff81021d989df0] error_exit at ffffffff8005dde9
[exception RIP: posix_lock_file+6]
RIP: ffffffff800e4e68 RSP: ffff81021d989ea0 RFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8101bd756200 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff8101bd756270 RDI: ffff8101bcb4a980
RBP: ffff8101bd756270 R8: 0000000000000000 R9: 7fffffffffffffff
R10: 0000000000000000 R11: 000000000003a2b4 R12: ffff8101bcb4a980
R13: ffff81022e1e64e0 R14: ffffffff88675fdf R15: 0000000006119280
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#4 [ffff81021d989ea0] dev_write at ffffffff885357f6
#5 [ffff81021d989f10] vfs_write at ffffffff8001659e
#6 [ffff81021d989f40] sys_write at ffffffff80016e6b
#7 [ffff81021d989f80] tracesys at ffffffff8005d28d (via system_call)
RIP: 00000034130c56a0 RSP: 00007fffa5d97f88 RFLAGS: 00000246
RAX: ffffffffffffffda RBX: ffffffff8005d28d RCX: ffffffffffffffff
RDX: 0000000000000040 RSI: 00007fffa5d98080 RDI: 000000000000000a
RBP: 0000000000000002 R8: 000000003a0698eb R9: 00000000702bc85c
R10: 0000000049d63f88 R11: 0000000000000246 R12: 0000000006119280
R13: 00000000061192c0 R14: 0000000006119280 R15: 0000000006119280
ORIG_RAX: 0000000000000001 CS: 0033 SS: 002b
crash>
crash> dis posix_lock_file
0xffffffff800e4e62 <posix_lock_file>: mov 0x10(%rdi),%rax
0xffffffff800e4e66 <posix_lock_file+4>: xor %edx,%edx
0xffffffff800e4e68 <posix_lock_file+6>: mov 0x10(%rax),%rdi
0xffffffff800e4e6c <posix_lock_file+10>: jmpq 0xffffffff800e4a6e <__posix_lock_file_co
nf>
crash> px *(struct file *)0xffff8101bcb4a980
$2 = {
f_u = {
fu_list = {
next = 0x0,
prev = 0xffffffff8003c0d9
},
fu_rcuhead = {
next = 0x0,
func = 0xffffffff8003c0d9 <file_free_rcu>
}
},
f_dentry = 0x0,
f_vfsmnt = 0x0,
f_op = 0xffffffff88570a40,
f_count = {
counter = 0x0
},
f_flags = 0x8000,
f_mode = 0xd,
f_pos = 0x0,
f_owner = {
lock = {
raw_lock = {
lock = 0x1000000
}
},
pid = 0x0,
uid = 0x0,
euid = 0x0,
security = 0x0,
signum = 0x0
},
f_uid = 0x96,
f_gid = 0x96,
f_ra = {
start = 0x0,
size = 0x0,
flags = 0x0,
cache_hit = 0x0,
prev_page = 0xffffffffffffffff,
ahead_start = 0x0,
ahead_size = 0x0,
ra_pages = 0x20,
mmap_hit = 0x0,
mmap_miss = 0x0
},
f_version = 0x0,
f_security = 0x0,
private_data = 0x0,
f_ep_links = {
next = 0xffff8101bcb4aa50,
prev = 0xffff8101bcb4aa50
},
f_ep_lock = {
raw_lock = {
slock = 0x1
}
},
f_mapping = 0xffff8101be425478
}
We've crashed because the struct file pointer passed to posix_lock_file() appears to have been freed (ie f_count is zero and other fields have been reset - in particular f_dentry which we panicked trying to dereference).
I suspect it's a bug in dlm (ie it needs to take an additional reference on the struct file when it's saved off in dlm_posix_lock() to prevent it from being freed before gfs_controld gets to it).
This issue looks similar to:
and
but since the problems still exist in 2.6.18-150 they weren't fixed in those BZs.
Created attachment 345702 [details]
Program to reproduce panic
also similar to
but that patch should be in the -150 kernel as well.
I'm testing with 5.4 beta, 2.6.18-151.el5xen.
nodes xen1 and xen2 are exporting,
# cat /etc/exports
/gfs *(rw,insecure,no_root_squash)
node xen3 mounts from xen1, node xen4 mounts from xen2.
xen3 and xen4 have been running my own test as well as the flock test in comment 1, and they seem to all work fine. Given that these are all vm's on one host, everything is very slow.
I've tried both gfs1 and gfs2 as the shared fs between xen1 and xen2.
I'll next try xen2 mounting xen1's export, and xen1 mounting xen2's export,
although I wouldn't be too surprised if that arrangement produced an odd problem somewhere (and I wouldn't be too concerned about it.)
Initial results from xen2 mounting xen1's export and xen1 mounting xen2's export.
The flock test often stops and doesn't make any progress on either node, I don't know why, I didn't notice this using separate clients and servers. I've seen a couple "lockd: grant for unknown block" messages on each node after running for a few minutes.
I've not been able to reproduce this. Lachlan, could you try this again with separate nodes exporting and importing? I was not able to reproduce in either case, but nodes both exporting and importing the same fs isn't a configuration we want to worry about.
Okay. I have a 4 node cluster with node 1 exporting the GFS2 filesystem via NFS to node 3 and node 2 exporting to node 4 with the test running on the NFS clients on nodes 3 and 4. I'm running 2.6.18-150 again on all nodes. I still see the "grant for unknown block" and "dangling lock" messages but so far no panic. I'll let it run overnight. Separating the NFS client and servers onto different nodes may change the load/timing enough to avoid the problem but the bug will still be lurking.
It ran all night without panicking. This morning I noticed the flock processes had been killed so I tried to unmount the NFS filesystems on nodes 3 and 4 and got EBUSY on both. The flock processes had not terminated yet and still had references to the filesystem. Slowly they terminated but not before node 1 panicked in dlm:dev_write() as above.
OK, thanks, I'll get a cluster set up to try this again.
This bug is manifesting under GFS1 as well as GFS2 (on a production system)
Both GFS and GFS2 use the same code to deal with posix locks, so its quite likely that any bugs in this area will be shared between the two code bases.
Have all four normal test nodes back. Testing with upstream kernel 2.6.32-rc5
because it's easier to debug and should be about the same code in this area.
Reproduced the same bug with much less load.
node1 and node2 have gfs mounted
node3 mounts node1:/gfs /gfs
node4 mounts node2:/gfs /gfs
node3 and node4 each run three instances of looping flock test all in
foreground (and modified to show output on each iteration), on files 1, 2, 3
flock-loop 1
flock-loop 2
flock-loop 3
This ran for several minutes, periodically one to three of the flock-loop
instances would block for up to a minute at a time before resuming; reason
unknown.
While running, node1 had a single "lockd: grant for unknown block" message, and
node2 had none. Neither reported a "dlm_plock_callback: lock granted" message.
Eventually the original oops occured on node2:
BUG: unable to handle kernel NULL pointer dereference at 00000000
00000050
IP: [<ffffffff8110deee>] posix_lock_file+0x8/0x13
PGD 6cc05067 PUD 6cc06067 PMD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/devices/system/cpu/cpu3/cache/index2/shared_cpu_map
CPU 3
Modules linked in: nfsd nfs_acl auth_rpcgss exportfs gfs2 dlm configfs
ipt_REJEC
T xt_tcpudp iptable_filter ip_tables x_tables bridge stp autofs4 lockd sunrpc
ip
v6 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi cpufreq_ondemand
dm_mult
ipath video output sbs sbshc battery ac parport_pc lp parport sg serio_raw
butto
n tg3 libphy i2c_nforce2 i2c_core pcspkr dm_snapshot dm_zero dm_mirror
dm_region
_hash dm_log dm_mod qla2xxx scsi_transport_fc shpchp mptspi mptscsih mptbase
scs
i_transport_spi sd_mod scsi_mod ext3 jbd uhci_hcd ohci_hcd ehci_hcd
Pid: 9630, comm: dlm_controld Not tainted 2.6.32-rc5 #2 ProLiant DL145 G2
RIP: 0010:[<ffffffff8110deee>] [<ffffffff8110deee>] posix_lock_file+0x8/0x13
RSP: 0018:ffff88006cc63e88 EFLAGS: 00010202
RAX: 0000000000000000 RBX: ffff88005e90c270 RCX: ffff88006ce5cbb8
RDX: 0000000000000000 RSI: ffff88005e90c2e0 RDI: ffff88007ee752f8
RBP: ffff88006cc63e88 R08: ffffffffa02de53a R09: ffffffff8132af3c
R10: ffffffff810dd0e1 R11: 0000000000000206 R12: ffff88007eaac128
R13: ffff88007ee752f8 R14: ffffffffa02c2ca2 R15: 000000000133df60
FS: 00007fc9471286e0(0000) GS:ffff880083a00000(0000) knlGS:00000000f777a6c0
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000050 CR3: 000000006cc04000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process dlm_controld (pid: 9630, threadinfo ffff88006cc62000, task
ffff88006ce5c
4c0)
Stack:
ffff88006cc63f08 ffffffffa02de495 ffff88006cc63ec8 ffffffff8115bee1
<0> 0000000100000001 0000000200000000 0000000200026817 6ce2339d00000000
<0> 00000000000308a8 0000000000000000 0000000000000000 0000000000026817
Call Trace:
[
Code: f1 fe ff ff 41 bc db ff ff ff e8 93 d7 21 00 eb cd 48 83 c4 38 44 89 e0
5b 41 5c 41 5d 41 5e 41 5f c9 c3 55 48 8b 47 18 48 89 e5 <48> 8b 78 50 e8 c4 fb
ff ff c9 c3 48 8b 47 20 55 49 89 d0 48 89
RIP [<ffffffff8110deee>] posix_lock_file+0x8/0x13
RSP <ffff88006cc63e88>
CR2: 0000000000000050
---[ end trace 8a173e5ae5b588d3 ]---
dlm_controld ? 0000000000000003 0 9630 1 0x00000080
ffff88006cc63c08 0000000000000046 0000000000000000 ffff88006ce5c4c0
ffff88006cc63bd8 ffff88006ce5c4c0 ffff88007f414400 ffff88006ce5c870
00000001045b7c19 0000000000000046 ffffffff814f4018 ffffffff814f4000
Call Trace:
[<ffffffff81047730>] do_exit+0x655/0x66e
[<ffffffff8132c2fb>] oops_end+0xb2/0xba
[<ffffffff810280b5>] no_context+0x1ec/0x1fb
[<ffffffff810282ea>] __bad_area_nosemaphore+0x16c/0x18f
[<ffffffff8102834c>] __bad_area+0x3f/0x48
[<ffffffff81028373>] bad_area+0xe/0x10
[<ffffffff8132d750>] do_page_fault+0x1fb/0x2db
[<ffffffffa02c2ca2>] ? nlmsvc_grant_deferred+0x0/0x15a [lockd]
[<ffffffff8132b80f>] page_fault+0x1f/0x30
[<ffffffffa02c2ca2>] ? nlmsvc_grant_deferred+0x0/0x15a [lockd]
[<ffffffff810dd0e1>] ? fget_light+0x4f/0xe9
[<ffffffff8132af3c>] ? _spin_unlock+0x26/0x2a
[<ffffffffa02de53a>] ? dev_write+0x203/0x221 [dlm]
[<ffffffff8110deee>] ? posix_lock_file+0x8/0x13
[
Based on lachlan's analysis, I'm going to investigate how the xop->file might
be getting freed after dlm_posix_lock() and before the callback, and what is,
in theory, supposed to prevent that (if anything).
Created attachment 366868 [details]
debugging patch for reference
Collected the following info from printks in this patch, but not completely analyzed yet.
Oct 30 15:52:24 bull-01 kernel: lockd: grant for unknown block
Oct 30 15:52:24 bull-01 kernel: lockd: fl ffff88013f54b398 owner 7740398493674204011 start 7740398493674204011 end 7740398493674204011
Oct 30 15:53:28 bull-01 kernel: lockd: grant for unknown block
Oct 30 15:53:28 bull-01 kernel: lockd: fl ffff88003f9fedf8 owner 18446612137670821488 start 0 end 0
Oct 30 15:53:28 bull-01 kernel: dlm: dlm_plock_callback: 199873 fl ffff88003f9fedf8 lock granted after lock request failed; dangling lock!
Oct 30 15:53:28 bull-01 kernel: dlm: start 0 end 0
Oct 30 15:52:29 bull-02 kernel: dlm: dlm_plock_callback: 199884 fl ffff88013ed3d630 file ffff88012d369068 dentry (null)
Oct 30 15:52:29 bull-02 kernel: lockd: grant for unknown block
Oct 30 15:52:29 bull-02 kernel: dlm: dlm_plock_callback: 199884 fl ffff88013ed3d630 lock granted after lock request failed; dangling lock!
Oct 30 15:52:29 bull-02 kernel: dlm: start 7740398493674204011 end 7740398493674204011
This problem first appears in bug 466677 which was probably never fully understood or fixed.
vfs_cancel_lock is being called constantly during these tests which seems strange. I'd like to understand the whole end-to-end picture of what the lock/unlock cycle is supposed to look like, how it's supposed to work, and what role cancel has in it. gfs translates a CANCELLK into an UNLOCK with a "Hack" comment.
From comment 20 it sounds like the problem appears with a single client talking to a single server.
I've spent the day studying lockd/svclock.c along with the dprintk output for a
simple lock/unlock. From the dlm side everything seems to be working correctly; it's only when the dlm calls back into lockd that there seems to be problems with the nlm_block structures. My hunch is that it's cancel that somehow leads to the problems. Overall, the comments in the svclock.c code are not encouraging; it sounds like there are plenty of gaps for things to go wrong even without the new async/DEFERRED behavior introduced by gfs.
When lockd is operating correctly, one call to vfs_lock_file() should have one corresponding nlmsvc_grant_deferred() callback. With a debugging patch, I'm setting a new B_IN_FS flag on a block when vfs_lock_file() is called, and then clearing it in nlmsvc_grant_deferred(). Before setting the flag I check that it's not already set, and before clearing the flag I check that it is set.
The first sign of problems is when lockd calls vfs_lock_file() on a block that already has B_IN_FS set. After this, a lot of other similar errors quickly pile up indicating that the lockd block's are out of sync with the dlm locks.
I don't have any ideas why lockd may be calling vfs_lock_file() on a block that's currently busy in the fs, but I believe the root of the problems are in that direction.
My suspicion is that the file object is being freed because we don't take a reference on it before saving it away and later on we retrieve it after it has been freed. I just ran this patch through a quick test and it hasn't crashed.
--- linux-2.6.18.x86_64/fs/dlm/plock.c.orig 2009-11-04 14:54:45.000000000 +1100
+++ linux-2.6.18.x86_64/fs/dlm/plock.c 2009-11-04 14:58:00.000000000 +1100
@@ -11,6 +11,7 @@
#include <linux/poll.h>
#include <linux/dlm.h>
#include <linux/dlm_plock.h>
+#include <linux/file.h>
#include "dlm_internal.h"
#include "lockspace.h"
@@ -106,6 +107,7 @@ int dlm_posix_lock(dlm_lockspace_t *lock
locks_init_lock(&xop->flc);
locks_copy_lock(&xop->flc, fl);
xop->fl = fl;
+ get_file(file);
xop->file = file;
} else {
op->info.owner = (__u64)(long) fl->fl_owner;
@@ -187,6 +189,7 @@ static int dlm_plock_callback(struct plo
log_print("dlm_plock_callback: vfs lock error %llx file %p fl %p",
(unsigned long long)op->info.number, file, fl);
}
+ fput(file);
rv = notify(fl, NULL, 0);
if (rv) {
The "grant for unknown block" and "lock granted after lock request failed ..." messages are still appearing though so there may be more to fix.
The patch in comment #25 doesn't look right. All locks are removed when the file is closed, so it should be impossible to have a closed file on which there are remaining locks.
Comment #25 is a logical fix for the NULL-file oops, but I believe that it fixes a symptom and doesn't address the root cause. (Fixing symptoms can be sensible to do, too, but I think our main goal right now is finding the root cause.)
One thing that I couldn't see was any testing for the FL_CLOSE flag and I wonder if that needs to be handled specifically or whether it doesn't matter.
It does not appear to me that the problems are in the direction of the vfs, but rather in the direction of lockd. Remember, there are a lot more than file's and file_lock's involved here, and we don't have any known problems with local locking tests, only nlm locking tests. I'm fairly confident in the struct lifetimes/references among the first three items in this list, but not among the last three items in the list:
- struct file
- struct file_lock
- struct plock_op
- struct nlm_file
- struct nlm_block
At the moment I don't have the impression this is even a struct lifetime or reference counting issue at the core. As I said above, the first sign that things are off is when lockd seems to call vfs_lock_file() on a lock that's currently in the middle of vfs_lock_file().
Created attachment 367874 [details]
debugging patch
This patch adds the B_IN_FS flag to a block that's busy in the fs as mentioned in the earlier comment. It also takes it one step further in an attempt to fix (or at least avoid) the problems by *not* going ahead with another vfs_lock_file() if the block in question is busy with an earlier vfs_lock_file().
In my testing it has so far been successful in avoiding the problems (e.g. no "dangling locks" or "unknown blocks"), but it would be good to try the other tests that have shown problems.
My big remaining question is whether or not lockd is behaving correctly when it calls vfs_lock_file() on a block that's currently busy with a previous vfs_lock_file(). If not, then I'll pass this off to the lockd experts to debug why it's happening. If it's legitmate or difficult to avoid, then we'll need to detect when it happens (with B_IN_FS or something equivalent) and abort it.
We're getting multiple outages in production systems due to this bug (one machine goes down, a clustermate picks up the load and then crashes, etc) and users aren't happy.
Test RPMS would be handy...
Not sure if related. Posted per Dave's request:
I'm seeing panics on machines when running local rsync between GFS1 and GFS2 filesystems.
Neither FS is NFS exported, both were mounted as GFS local only (lock_nolock).
The filesystem content is 350Gb of Imap Mdir folders - approximately 3 million mostly tiny files. Some directories may contain 10k+ files but most hold far less than this. (usually a few hundred at most)
Probably not related. I had another report of something that sounds like this recently. What the original reporter didn't say was that it was GFS1 -> GFS2 so when I tried to reproduce it I did GFS2 -> GFS2 and I didn't manage to reproduce it. I've not had a chance to try with GFS1 on the sending end so far.
If you have any log messages from that issue, I'd like to know, but its probably not appropriate for this bz.
Re comments 31 and 32, are the outages and panics due to the oops in
posix_lock_file? Or are they other bugs not yet recorded in bugzilla?
The posix_lock_file bug we're working on here should not appear if you're using
lock_nolock, only lock_dlm; and it should also not appear unless gfs is
exported via nfs and clients are doing locking.
Created attachment 367889 [details]
flock-loop test program
In my testing I run
flock-loop file1
flock-loop file2
flock-loop file3
on two nfs clients, each client mounting from a separate server.
I have now seen a couple errors even with the patch from comment 20:
lockd: grant for unknown block, result 0
grant fl t 2 p 14197 o ffff88013e1bbaa8 0-0
lockd: nlm_block list
b ffff88013eacad58 flags 0 file ffff88013eb06508 fl t 1 p 14199 o ffff88013e1bbaa8 0-0
b ffff88013eaca930 flags 8 file ffff88013eae2760 fl t 1 p 14198 o ffff88013e1bbaa8 0-0
dlm: dlm_plock_callback: 30ccc fl ffff88007e262598 lock granted after lock request failed; dangling lock!
This is a different kind of error from most of the "unknown block" / "dangling lock" cases I was seeing without the patch. In this case the fl described in the callback has type 2 (F_UNLCK) which should never be the case in a callback.
And then separately,
nlmsvc_grant_deferred block ffff88007ee736d0 not B_IN_FS
Another thing to try changing is nlmsvc_unlock() which does:
nlmsvc_cancel_blocked()
vfs_cancel_lock()
vfs_lock_file(F_UNLCK)
Since gfs/dlm does not have the ability to cancel locks, it converts the vfs_cancel_lock() call into an ordinary unlock. So, lockd ends up calling unlock twice back to back, first from vfs_cancel_lock() and second from vfs_lock_file(F_UNLCK). I'll probably try removing the call to nlmsvc_cancel_blocked() altogether and see what happens.
Removing the calls to nlmsvc_cancel_blocked() did seem to improve things, but I don't have any specific examples. I was still seeing cases were a nlmsvc_grant_deferred() callback would occur with bogus fl data that would fail to match the fl in the block it should have matched. This is similar to the bug we recently fixed to pass the pointer of the original fl into the callback instead of the flc (copy of the original fl) because the flc ranges are modified by the vfs. Since it appears the original fl is being clobbered, causing it to not match the lock it's supposed to, I changed the dlm to make a second copy (flc2) of the original fl to pass back to nlmsvc_grant_deferred(). This appears to have fixed the problem of nlmsvc_grant_deferred() failing to find any matching blocks. I'm still seeing an occasional occurance of nlmsvc_lock() called on a block that is currently busy in the dlm from a previous nlmsvc_lock() call. I'm still dealing with that by aborting and returning in those cases.
Summary of the functional changes I've made:
1. In dlm_plock_callback(), create a second copy of the original file_lock, and pass this copy into the fl_grant callback instead of the pointer to the original file_lock.
2. In lockd set B_IN_FS in block->b_flags before calling vfs_lock_file(), and clear it in nlmsvc_grant_deferred() (if DEFERRED was returned, otherwise clear it right after the vfs call)
3. In nlmsvc_lock(), check if B_IN_FS is set, and return without calling into the fs if it is.
4. In nlmsvc_unlock(), remove the call to nlmsvc_cancel_blocked().
5. In nlmsvc_cancel_blocked(), return immediately without doing anything.
Response to #34:
#31 crashes are all posix lock related on nfs exported filesystems
#32 crashes are glocks on non-exported filesystems
addition to comment 40,
6. In dlm_plock_callback(), check if the saved struct file is invalid (non-null file->f_path.dentry), and if so don't call posix_lock_file() (which oopses if passed a bad struct file).
Since most of the changes I've been trying are work-arounds anyway (I've mostly given up for now on finding root problems), I tried get_file/fput, but killing flock-loop on the clients triggered fs/locks.c locks_remove_flock() BUG from the fput().
I'm relatively confident in the broad reasons behind the problems we're seeing. I have the impression that the problems are structural/design ones, and not traceable to a specific root bug that can be fixed (although I'd sure like to be wrong on that.)
lockd sends off async plock ops to the fs.
lockd is not careful to collect the async reply from the fs before doing something else that may interfere with the op it has sent.
Instead, lockd can do a number of things after it sends off the op and before it has processed a reply for it:
- it can send off another lock op on the same file for the same holder
- it can send off an unlock op on the same file for the same holder
- it can close and free the file
The first two can confuse the dlm plock code, although I've added some additional checking to detect and ignore them (partially anyway).
The first two can also result in callback errors where lockd has forgotten about the op it fired off and can no longer match up the reply when it arrives,
i.e. lockd: grant for unknown block
The last one is especially troubling, because the dlm needs the struct file to be "valid" at the time the op completes so that it can do the vfs "bookkeeping". If lockd has pulled the file out from under the dlm, it will result in oopses of various kinds (even if the dlm adds its own get_file reference) in the vfs locking code (fs/locks.c).
The dlm assumes that the plock caller is "well behaved", where that's defined as the behavior that it would see from a local process doing a plock operation. lockd is not well behaved in this sense; it behaves differently in the three ways (at least) listed above.
The lockd changes that were made to accomodate async locks were minimal. They assumed that lockd could work in largely the same way for sync or async implementations, and seemd to ignore the issue of things happening in the async window which could interfere with an incomplete call.
It wouldn't surprise me if you're right about lockd.
We already found that nfs utils aren't wonderfully coded and had to wrap all /usr/share/cluster/nfsclient.sh exportfs calls with flock statements in order to have multiple NFS services start/stop without tripping over each other.
I'm guessing that switching to NFSv4 won't help matters?
NFSv4 won't change the dlm/lockd interactions, but it may have some effect on the server/client interactions, which may have some indirect effects on dlm/lockd parts, I don't know.
Created attachment 369346 [details]
latest testing/debugging patch
My recent attempts have been to approach this as much as possible from the dlm side and avoid lockd changes. This current patch does seem to be holding up better than average, although there are still issues.
FWIW: All the nfs exports here are sync, not async.
To be honest at this point I think that lockd/nfs work would be more productive overall, but I'm a big fan of belt+braces+safetypin approaches.
The Samba project's CTDB project () has some NFS work included which might be helpful.
I don't seem to have any of the locking problems at all when using nfs4!
nfs4 is not using the dubious async lock completions like lockd does, but ordinary synchronous calls like local processes do. I'm not sure why we've not realized this before, and why we've spent so much time trying to make lockd work on gfs/dlm (many other bz's before this one) rather than simply limiting nfs+gfs file locking to nfs4 configurations. I'm going to seriously consider removing the async plock code from the dlm altogether.
Can you verify that switching to nfs4 solves all your file locking problems?
If so we can close this bz.
Unfortunately we can't completely remove nfs3 (or 2) from the servers - there are older OSes involved which don't have NFSv4.
I'm in the process of migrating the RH clients to NFS4 - which should help a lot, but it's already exposed that cluster.conf doesn't support "mount --bind"
After a few hours of testing: Superficially NFSv4 seems to work without causing GFS issues, but it's not been tested in anger yet.
Additionally there are other problems:
Client delegation doesn't work (syslogging lots of errors as a result)
IMPORTANT: file locking doesn't work at all! We have /home on NFS and being unable to lock .Xauthority + KDE startup files means that GUI logins don't work.
Serverside is exported using rw,insecure,no_subtree_check,sync,nohide
Clientside has: rw,nosuid,nodev,_netfs,fsc,acregmin=10,acregmax=120,acdirmax=120,timeo=600,retry=1,lock,hard,intr
I realise this is a bit beyond the scope of the ticket, but getting this working would allow us to drop v3 on 100+ clients and fully test if it gets round the crash.
Sorted the file locking issue - broken kernel requiring 2.6.18-164.6.1.el5 or later.
Under heavy load NFSv4 is basically unusable. We had to revert to v3 so that people could work - so we're back in the crossfire of this posix locking issue :(
Perhaps we should open a new bz to look at the nfs4 performance problems?
After two weeks working on nfs3 posix locks without a fix, I think we should spend a little time seeing if we can address the nfs4 performance.
Is anyone looking into the nfs4 performance issues?
Not as such. Most of the performance issues turned out to be self-inflicted (dialled down the number of nfsd threads, unaware this also affects nfsd4, lack of documentation is a cow).
Even with 2.6.18-164.6.1.el5 there are still serious nfs4 issues apparent which make it effectively unusable in a cluster environment (not least of which is that with v4recovery on a cluster disk as recommended, clients are only able to mount shares from one server. If different subsets are on different servers the other shares are unreachable)
On the NFS3/GFS front, I've found it only takes ONE client doing sustained heavy nfs writes(*) on an otherwise idle disk to be able to knock a server over. Given that not all clients are NFS4 capable this is still a serious shortcoming which needs addressing.
(*) Client was running a 100Gb+ scp from a OSX box on the LAN to nfs mounted disk.
While trawling bugzilla I ran across BZ 531493 and thought it was worth testing on nfs3+GFS - the server got very sluggish after about 16G had been xferred so the test got aborted.
Is this related or am I chasing mirages?
Alan, I doubt BZ 531493 is related to this bug. That bug results in the system hanging whereas this bug results in a crash preceded by lockd/dlm errors messages.
In our case the recent spate of crashes have been preceeded by the servers running sluggishly thanks to a user trying to put 20+Gb files on the gfs fs via nfs.
We have lock/dlm errors logged every couple of minutes under normal circumstances but only occasional crashes unless large file xfers are underway as well.
Just a few notes about using NFS+GFS in general.
Active/Active usage of NFS on GFS1/2 is not presently supported. Active/Passive (i.e. failover) of NFS on GFS1/2 is supported and should work fine.
Here are the relevant docs on this:
"Note that this configuration is not a "high capacity" configuration in the sense that more than one server is providing NFS service. In this configuration, the floating IP moves about as needed, but only one server is active at a time."
"Do not use NFS locks in a failover situation such as the one described in this configuration, as this may cause a duplicate lock situation. The GFS file system correctly distributes the locks between nodes.
The Linux NFS server as used in Red Hat Enterprise Linux 5 is not cluster aware. NFS state information is held on a per machine basis and there is a slight but definite chance of file corruption should two NFS servers try to write to the same file at the same time, even in a GFS environment. Because of this the current recommendation is to only export any given file system on one NFS machine only."
So presently the only supported configuration is a single NFS server at a time using GFS1/2 filesystem.
Though it should be noted that if you completely disable NFS locking that multiple NFS servers should work fine. But this is not officially a supported configuration.
.
Nonetheless the bug manifested even with _all_ nfs activity coming off one physical machine in the cluster (3 machines) - the problem with this configuration is the load spikes to more nfs processes than one box can provide (there's a hard limit around 368 processes)
FWIW the same corruption risks apply to the EL5 version of samba, which is why the Samba project developed CTDB and have put a lot of effort into clustered NFS too. Bringing this work into RH clustering would solve a number of issues but probably not this one.
(In reply to comment #64)
> .
Ok, reading through the bz comments I only saw mention of the active/active configuration. But if this problem exists on a single NFS server configuration, then I agree with you.
It has long been abundantly clear that this bug is due to incompatibility between lockd and gfs/dlm. See comment 44 for the explanation of that. That has nothing to do with active/active vs active/passive.
However, if you are doing active/passive, what's the point of sending the locks into gfs/dlm to "clusterize" them at all? There is not point. Don't do that, let the locks be managed only by nfs/lockd/vfs on the active node, just like a local fs. Recovery (shifting from failed active node to passive one) should then also be handled like a local fs.
I believe that if you mount gfs with the "localflocks" option it should turn off the clusterization of all plock/flock calls and make them equivalent to a plock/flock calls on a local fs.
comment #67 assumes that the GFS filesystem in question is ONLY being accessed via NFS (in this case why use GFS at all?)
The first thing that springs to mind requiring clusterisation of locks is other cluster nodes accessing the filesystem via GFS while running other tasks or another cluster node acting as a samba server.
*NUDGE*
Is anything being done on this problem? More specifically has any thought been given to sorting out the NFS suite?
Even if NFSv4 can be nmade to behave properly in a clustered environment, there will be v3-only clients for a long time to come.
Created attachment 396587 [details]
example patch
This untested patch allows us to control how plocks from nfs are handled in gfs,
without resorting to localflocks. If this new "nfslocks" mount option is used, then gfs will pass nfs plock requests on to the dlm to be clusterized (like gfs2 has done since 5.3). Without the nfslocks mount option, nfs plocks are handled locally and not clusterized (like gfs always did prior to 5.3).
I expect gfs2 developers may want to tweak this patch according to their taste.
We should also test to verify it works as expected, of course.
Steve, could you take a look at the patch in comment 72?
The patch looks ok, but I'm not sure I follow the problem that we are trying to solve with it. Why do we need to make nfs do something different to local fcntl locks? I think its likely to be rather confusing.
It's GFS, not ext4. eg: Other cluster nodes may be accessing the filesystem. (and in our case, they ARE.)
How long before this makes its way into a test kernel?
The issues related to the combination of posix locks, nfs, and gfs have
been very confusing to everyone. This includes developers like myself who
haven't understood the extent of the technical issues until recently.
There may still be some problems in the mix that we don't know or
understand because the issues span several subsystems. Given all the
confusion, different people and documents have been saying different
things about what works or what doesn't work or why.
Here's an attempt to clarify the current situation.
It's easiest to begin by describing things without nfs in the picture.
When normal programs/processes acquire posix locks (plocks) on gfs files
using fcntl(2), the plocks are "clustered", i.e. plocks between two
processes on different nodes work the same way as two processes on the
same node; locks from different nodes are aware of each other.
In this context it's helpful to understand what the "localflocks" gfs
mount option does. When localflocks is used, any plocks (fcntl(2)) or
flocks (flock(2)) acquired on that fs are not clustered among nodes and
remain local to the node, and behave like plocks/flocks on say ext3.
Whether localflocks is acceptable depends, of course, on the specific
applications using gfs.
What changes when nfs is added to the picture?
1. Most fundamentally, the locks are no longer finally held by processes
on the cluster nodes themselves, but by processes on nfs client nodes.
This means that when a cluster node fails, the plocks of all processes on
that node cannot simply be discarded by other cluster nodes during
recovery. Instead, the locks need to be recovered from the nfs client
nodes. This recovery requires a significant amount of new design and
development across multiple subsystems owned by multiple groups. This
includes defining public interfaces between different components which is
especially difficult and time consuming. The only current development I
am aware of is related indirectly, in the area of pnfs.
2. Processes are no longer acquiring plocks from userspace via fcntl(2),
but the nfs kernel server (lockd for nfs3), are calling into gfs to
acquire the locks. lockd behaves differently from userland processes and
does not follow the same conventions. Perhaps the worst example of this
is that it will give up on locks after a certain amount of time and try to
cancel them. This creates a race condition that will require serious
changes to fix. Attempts at fixing this race have only reduced the
occurance, and suggest that a complete fix may well extend beyond the
boundaries of the isolated cluster-fs-specific code. Changing general
lockd code for the sake of cluster file systems would be especially
challenging.
The only way to address issue 1 currently is to prevent nfs plocks from
being passed into gfs since the capability to recover them simply does not
exist in any form. One way to prevent passing nfs plocks into gfs is to
use the localflocks option to make all plocks/flocks local. A second way
is a patch like
which is more discriminating than localflocks.
The current solution to issue 1 obviates any patches to address issue 2.
Why all the recent confusion?
These issues have recently become prominent because of an unfortunate
kernel patch upstream and in RHEL 5.3 that changed the default behavior of
nfs plocks on gfs. Prior to RHEL 5.3, nfs plocks had always been local to
the node, regardless of the underlying filesystem type (gfs, ext3, etc).
There was no code or mechanism to pass plocks from nfs into gfs. This
changed in RHEL 5.3 with the introduction of interfaces (from the GPFS
group at IBM) to allow this passing of plock operations between nfs and an
underlying cluster fs like gfs (bz 196318).
Unfortunately, these interfaces were put to use by default on gfs, under
the mistaken assumption that nfs plocks could now be clustered just like
plocks used by local processes via fcntl(2). This decision failed to
account for the fact that there is much more to be done in the area of
recovery coordination before nfs plocks can truely be clustered for gfs.
The added confusion of active/passive nfs on gfs.
The context thus far has been about the most "natural" way to export nfs
from gfs: all gfs nodes exporting the same file system at the same time.
However, some people are interested in an active/passive configuration
where only one of the gfs nodes does the nfs export at a time. If the
exporting node fails, rgmanager is used to export the same fs from a
different node and move a virtual ip address.
In this configuration, we do not want nfs plocks to be passed into gfs
(there is no reason to do so); we want them to be handled and recovered in
the same way as a local fs like ext3. If this is done (e.g. pre-5.3
behavior, localflocks or a patch that disables nfs locks being handled on
gfs), then the underlying fs is not a factor and nfs plocks should work
even if gfs is the underlying fs.
Dave says in #78
> The context thus far has been about the most "natural" way to export nfs
> from gfs: all gfs nodes exporting the same file system at the same time.
This is our preferred method....
> However, some people are interested in an active/passive configuration
> where only one of the gfs nodes does the nfs export at a time.
We are running this setup because RH have explicitly warned that NFS isn't cluster safe and any given filesystem must only be NFS exported on one node at a time or there is risk of file corruption due to nfs write locks not passing between nodes
(In other words: active:passive is a workaround, not the preferred configuration. I believe this is the general case across most entities running A:P configs)
(RH also warn customers that in an a:p setup, all nfs locks are lost in the event of the nfs service switching between nodes. This isn't generally a problem.)
HOWEVER:
In a multinode active:active configuration containing 3 or more nodes it's a fair bet that any given filesystem won't be NFS exported on all nodes at all times, even if that fs may be exported on multiple nodes.
In such a configuration, the odds are reasonably high that customers will want
failover of a failed node's NFS service to another machine in the cluster.
In this case the same requirements as single export active:passive take effect.
There used to be (still is?) a userspace nfsd - development was stalled a long time ago in favour of kernelspace nfsd because of speed issues. Perhaps it's worth revisiting the userspace daemon for clustered purposes.
I'm not sure that moving to a userspace nfs lockd would improve matters here. There is still the issue of how the locks are to be failed over between nodes.
Looking at the gfs2 code we have this:
if (cmd == F_CANCELLK) {
/* Hack: */
cmd = F_SETLK;
fl->fl_type = F_UNLCK;
}
which looks to me like it is not performing lock cancellation at all, but instead queuing an unlock, unless the userland code has some other way to distinguish between these requests? I presume from the comment that this might not be the case.
I'm still looking into the NFS code to try and figure out what exactly is required of the fs in that case.
Dave, could we get some clarification please?
Without your patch, is the bug currently present in both GFS and GFS2 or is it only in GFS now?
If fixed in GFS2, do you know which RH test/release kernel?
The patch in comment 72 (adding the nfslocks option) has not been included in any build or release of gfs1 or gfs2. The gfs developers will need to do that if they approve of it.
Without that patch or something like it, the only way I know of to keep nfs plocks out of gfs is to mount gfs with localflocks.
The question is, why is localflocks not enough? I'll put the patch in if we can justify it in some way, but I don't understand the reasons for not just using localflocks.
> The question is, why is localflocks not enough?
localflocks == no clustered filesystem.
No clustered filesystem == no point in running GFS.
localflocks is independent from the filesystem's internal locking. The only reason that this other option would be needed is if there is a requirement for flock() and fcntl() locks to have different configs wrt cluster/single node.
If the application doesn't use flock() but only fcntl() locks then there is no difference between the proposed patch and the localflocks option.
localflocks *is* enough, the point is that it may be too much for some people.
localflocks means the fs won't do any clustering of flocks or plocks, even for local processes. nfslocks only stops clustering of nfs locks, but any plocks or flocks by other processes are still clustered.
The really critical issue, though, is that nfslocks is *off by default*, which returns us to the original pre-5.3 behavior of nfs locks being local. We really need nfslocks to be off by default or people will continue to run into these problems (inconsistent plocks, oopses).
Redhat Support advised us that filesystems mounted with localflocks should NOT be clustered and warn of file corruption risks. For our setup, the advice was to run all services on one node ONLY with the others powered down - hardly a High Availability situation to say the least.
We were sold GFS as a High Availability solution for clustered NFS/Samba operations and weren't even advised that NFS/Samba should only be operated on one node of a cluster until well AFTER we had the thing running (on RHEL4)
To get to the situation of clustered hardware being switched off in order to have safe NFS fileserving under RHEL5 defeats the whole purpose of using GFS.
Users and management will put up with issues for a _limited_ period, however the current situation is approaching the absolute limits of their patience.
If this issue isn't resolved in a reasonable timescale then we see little alternative but to remove GFS.
Are you confusing lock_nolock with localflocks? Those are very different things.
If you have any other technical questions or confusion I'd be happy to clarify them.
Not particularly. I had explicit warnings not to use localflocks on clustered systems whilst discussing lock_nolock
GFS man page says:
===.
===
On that basis I'd prefer not to play russian roulette with a clustered production filesystem.
I'm not necessarily suggesting you use localflocks. A person needs to be aware of the applications' requirements with respect to file locks (flocks and posix locks) before knowing whether it is acceptable to use localflocks. I said this in comment 78.
But we've been led off on the localflocks tangent; localflocks is not the most pertinent question, the proposed nfslocks patch is.
If you are exporting nfs from a single gfs node:
- the nfslocks patch will prevent kernel oopses
- posix locks from nfs clients will be local to the single exporting node
- posix locks will "work" among all nfs clients
- posix locks will be clustered for processes using fcntl() on all gfs nodes
If you are exporting nfs from multiple gfs nodes:
- the nfslocks patch will prevent kernel oopses
- posix locks from nfs clients will be local to the server/node they mount from
- posix locks will "work" among nfs clients mounting from the same server/node,
but not among nfs clients mounting from different servers
- posix locks will be clustered for processes using fcntl() on all gfs nodes
Dave, I know you're not suggesting we use localflocks, however others within RH are stating it is the only viable solution.
Given the current issues with multiheaded NFS exporting we'll still be sticking to one nfs server per filesystem, but we _must_ have stable clustered operation with NFS exports active on at least one node.
Created attachment 402691 [details]
updated patch
Updated patch, added "nonfslocks" in addition to "nfslocks", now matches upstream version. I have tested the upstream patch, it works as expected.
Created attachment 402696 [details]
patch
fix patch conflicts
Created attachment 402920 [details]
patch
fix
I have tested and verified the patch in comment 94, in build:
Hm, Steve has rejected this patch since he doesn't understand the problem.
Steve, this bug is now completely in your hands.
Steve, what is the current state of this bug?
Alan, I don't think there is anything to fix here. The bz has unfortunately got rather confused from the original report. Several different issues have been reported along the way. Let me try and clarify the situation....
We do not support:
o Mixed samba/nfs exports of GFS/GFS2 filesystems
o Active/active nfs exports with nfs lockd support (active/active should work without locking, and with udp nfs only)
o Mixed nfs and local applications on GFS/GFS2 filesystems
o Mixed samba and local applications on GFS/GFS2 filesystems
Active/passive nfs exports should work with nfs lockd, but you must set the "localflocks" mount option on each GFS/GFS2 mount.
If there are any issues other than those relating to NFS locking, they should be reported in different bugzillas.
We would like to be able to support both samba and nfs mixed and also active/active with locking support in the future. We have bz #580863, for example open to track upstream effort required to implement the features we need in order to do this. It will not be an easy thing to do unfortunately.
If there has been some confusion in the information supplied by support and/or any other part of Red Hat then please accept my apologies for that. If you are still experiencing problems, then please drop support a line and they will do their best to assist.
As for this bug, the originally reported issue has been resolved, so I'm now intending to close it.
Firstly, the original problem has NOT been resolved. There are still panics.
Secondly your comment about not supporting Mixed Samba/NFS exports of GFS is at odds with the original sale - which was specifically for this purpose...
Alan, I think your set up does not bear much relation to the original report for which this bz was opened. The details of that are:
o A contrived example set up by our support team to debug another customers problem
o Active/active with nfs locking (not supported)
o Using nfs on each of the two nodes to mount gfs2 from the other node (also not supported)
o No use of samba at all
I apologise if someone at Red Hat has given you incorrect information. I would be very interested to know who gave you that information.
If you can drop our support team a line, then we'll try and work with you to come up with a solution for your situation.
The reason that mixed samba/nfs is a problem is basically down to locking. Samba has an internal cache of information which uses posix leases to keep itself uptodate. GFS2 doesn't support leases when it is clustered, although it does when it is run single node (lock_nolock). NFS supports posix fcntl locks and so does GFS2 (if lockd support is enabled, i.e. when clustered and not using the localflocks option). There is however a problem in trying to use that interface active/active in that NFS doesn't have cluster recovery support so that the lock state cannot in that case be recovered in case of node failure.
The combination of these issues mean that we cannot support mixed samba and nfs at the moment, even though we would certainly like to do so.
Closing this bug on the basis that the original report was for an unsupported situation which has since been resolved. If there are other issues then they should be reported under a different/new bugzilla to avoid further confusion.
Upstream development work for active/active support of nfs lockd can be found in bz #580863
Please state what exactly is "unsupported" about NFS exporting running on a single GFS cluster node running by itself that is also exporting the same filesystems via samba and/or running local operations on the filesystem.
The reason I ask this question is that the same crash mode seen in the original report is exactly the same as seen on our systems in the configuration I describe.
We were referred to this ticket by Redhat Support because our crashdumps matched. Opening a new BZ for the _same_ issue simply causes more confusion.
The crash happens more often if there are 2 GFS nodes, even if one is completely quiescent. It's disruptive because we experience 10-15 minute cluster downtime for every event _and_ the ~1Tb filesystems eventually have to be taken down for a day in order to fsck.
This is not a contrived situation. It's an issue which occurs under normal network loadings and which didn't manifest prior to RHEL5.2
This is a very real problem, happening on supported configurations. It needs to be addressed properly, not shoved under the carpet.
Samba and NFS both maintain some state (lock state specifically) not in the kernel.
As things stand today, there is no coordination between Samba (a user space process) and the NFS server when exporting the same partition.
This is something that commercial NAS appliances provide and something worth investigating/implementing in Linux so we are interested in hearing about customers that would like this. Please fill in a "FEAT" request if you are interested, this BZ is not intended to be a feature request.
Best regards,
Ric
As per some of the previous comments, would it be supported using Mixed nfs and local applications on GFS/GFS2 filesystems using NFSv4? If I've understood it right, it should work
I doubt it will work correctly if recovery takes place. It is not tested and thus NFS is only supported on its own and not mixed with local applications. | https://bugzilla.redhat.com/show_bug.cgi?id=502977 | CC-MAIN-2016-22 | refinedweb | 9,463 | 66.07 |
Java. You can get JSON-P reference implementation from the link below:
JsonGenerator (javax.json.stream)
JsonGenerator makes it very easy to create JSON. With its fluent API the code to produce the JSON very closely resembles the resulting JSON.
package blog.jsonp; import java.util.*; import javax.json.Json; import javax.json.stream.*; public class GeneratorDemo { public static void main(String[] args) { Map<String, Object> properties = new HashMap<String, Object>(1); properties.put(JsonGenerator.PRETTY_PRINTING, true); JsonGeneratorFactory jgf = Json.createGeneratorFactory(properties); JsonGenerator jg = jgf.createGenerator(System.out); jg.writeStartObject() // { .write("name", "Jane Doe") // "name":"Jane Doe", .writeStartObject("address") // "address":{ .write("type", 1) // "type":1, .write("street", "1 A Street") // "street":"1 A Street", .writeNull("city") // "city":null, .write("verified", false) // "verified":false .writeEnd() // }, .writeStartArray("phone-numbers") // "phone-numbers":[ .writeStartObject() // { .write("number", "555-1111") // "number":"555-1111", .write("extension", "123") // "extension":"123" .writeEnd() // }, .writeStartObject() // { .write("number", "555-2222") // "number":"555-2222", .writeNull("extension") // "extension":null .writeEnd() // } .writeEnd() // ] .writeEnd() // } .close(); } }
Output
Below is the output from running the GeneratorDemo.
{ "name":"Jane Doe", "address":{ "type":1, "street":"1 A Street", "city":null, "verified":false }, "phone-numbers":[ { "number":"555-1111", "extension":"123" }, { "number":"555-2222", "extension":null } ] }
JsonParser (javax.json.stream)
Using JsonParser we will parse the output of the previous example to get the address information. JSON parser provides a depth first traversal of events corresponding to the JSON structure. Different data can be obtained from the JsonParser depending on the type of the event.
package blog.jsonp; import java.io.FileInputStream; import javax.json.Json; import javax.json.stream.JsonParser; import javax.json.stream.JsonParser.Event; public class ParserDemo { public static void main(String[] args) throws Exception { try (FileInputStream json = new FileInputStream("src/blog/jsonp/input.json")) { JsonParser jr = Json.createParser(json); Event event = null; // Advance to "address" key while(jr.hasNext()) { event = jr.next(); if(event == Event.KEY_NAME && "address".equals(jr.getString())) { event = jr.next(); break; } } // Output contents of "address" object while(event != Event.END_OBJECT) { switch(event) { case KEY_NAME: { System.out.print(jr.getString()); System.out.print(" = "); break; } case VALUE_FALSE: { System.out.println(false); break; } case VALUE_NULL: { System.out.println("null"); break; } case VALUE_NUMBER: { if(jr.isIntegralNumber()) { System.out.println(jr.getInt()); } else { System.out.println(jr.getBigDecimal()); } break; } case VALUE_STRING: { System.out.println(jr.getString()); break; } case VALUE_TRUE: { System.out.println(true); break; } default: { } } event = jr.next(); } } } }
Output
Below is the output from running the ParserDemo.
type = 1 street = 1 A Street city = null verified = false
MOXy and the Java API for JSON Processing (JSR-353)
Mapping your JSON to domain objects is still the easiest way to interact with JSON. Now that JSR-353 is finalizing we will integrating it into MOXy’s JSON-binding. You can track our progress on this using the following link:
And again Java is the leader in verbosity. I love Java and JVM, but more and more I ask myself “What they were thinking of developing their API?”. If we are talking about fluent API, and the key word here is FLUENT, why make it so verbose? Doesn’t the code below look better?
jg.object()
.put(“name”, “Jane Doe”)
.object(“address”)
.put(“type”, 1)
.put(“street”, “1 A Street”)
.put(“city”, null)
.put(“verified”)
.end()
.array(“phone-numbers”)
.object()
.put(“number”, “555-1111”)
.put(“extension”, “123”)
.end()
.object()
.put(“number”, “555-222”)
.end()
.end()
.end()
I know one can object saying “C’mon, it’s just a method name, once you know it it’s simple”. But one unnecessarily long name here, another there and you realize that in order to read what some code does you have to read 3 times more of its meta-data.
With your proposal the number of lines of code remains the same, your point is the method names are unnecessarily long. This is a hard balance to strike. I find the current method names work well without having to read their Javadoc. I’m not sure what I would think your method “end” would do if I saw it come up as a possible method as part of code completion. These APIs are also similar to the StAX (JSR-173) APIs so there is less of a learning curve for XML developers who want to do JSON processing.
One thing to remember is that all the Java SE and Java EE standards are developed in the open. If you have input on what you think APIs should look like, it is most effective to raise those points while the specifications are still in development. | http://www.javacodegeeks.com/2013/04/java-api-for-json-processing-jsr-353-stream-apis.html | CC-MAIN-2016-07 | refinedweb | 750 | 52.36 |
I'm in the process of creating a windows service by coding in c#. How can I send email from it?
Please give me some clue.
Thanks.
You can use the System.Web.Mail namespace to send e-mail in C# (). Your Windows service must be configured to run under a user account with the necessary permissions to send mail.
Phil Weber
Please post questions to the forums, where others may benefit.
I do not offer free assistance by e-mail. Thank you!
Phil,
Due to some reason, I cannot see the System.Web.Mail namespace in my system. Do I need to install anything extra on my workstation?
Please let me know.
Thanks.
You must add a reference to System.Web.dll.
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?142516-c-Windows-Service-email&p=422994 | CC-MAIN-2015-11 | refinedweb | 141 | 70.39 |
Arduino Service - almost like "real" software ....
Arduino Service - almost like "real" software ....
"Finally".....
Step 1 download the latest "bleeding edge" build from the build server...
.png)
At the time of this writing it was at intermediate.751.20120901.0645 - press the lastest link..
Download this release by pressing ... (all files in zip)
After its done downloading, extract the contents somewhere...
Double click the myrobotlab.bat or .sh depending on operating system...
If all goes well you should see MRL startup - click over to the Runtime tab (it's the BORG + number tab).
This allows you to select what services you want in MRL.
Many of them require additional components to run. Arduino needs all the avr stuff and other doo-hickeys..
Right click on Arduino, and select install...
It "should" start downloading the appropriate parts depending on your flavor of operating system... It won't work through a proxy server, and we have had problems with some of the service components being blocked by anti-virus software... Some services (opencv) require native libraries (dlls or so's) which some filewalls will not allow... hopefully, you'll get a happy green check-mark :D
The windows install of Arduino is larger than the Linux one - being that Linux already has many of the components to run avr cross compiler... so if you have windows you'll have to wait a bit...
Restart - now MRL should allow you to start an Arduino service. Oh... heh, did I mention it was a good time to plug your Arduino in at this point :D
I "think" I can support almost any Arduino, I put some work in to suppor the different pin configuration of the Mega 2560... but let me know if "it no worky" for you... I'll try to get more boards supported...
go back to the tab and right-click -> start
Now you'll need to name the service... you could call it George or Mary, or some other silly name.. it does not matter, but it needs to be unique.. This name is used for the different services to talk to one another...
I named my with the un-imaginitive name of "arduino"
Now you should see an Arduino service tab in the GUI.
Select the appropriate board & serial port under the tools menu.
wait for the "good times...." :)
The first sub-tab which comes up is the "editor" and a file called MRLComm.ino
MRLComm.ino is a Arduino script file which allows communication from MRL to an Arduino based board.
The first thing you want to do is compile it - with the arduino compile like button :)
Then upload it to the board.
Please be patient with the serial port ... for some reason it takes a long time to connect - on Windows 7 its the worst... but I'll look into it (later)
Next after uploading - you want to re-establish a connection to the boar - the "upload" has to tear down the connection. But we want to connect - hit the connect button..
Now if everything went correctly, you should be able to tab to the oscope - and turn on a few pins analog or digital and see the results. Sometimes I plug in a plain wire to an analog pin to see how much it affects noise :)
Sometimes its more convienent to have different screens showing at the same time or in different positions, most tabbed components you can right click - and detach for more effective screen space
So now hopefully you have a working Arduino service - the service itself is not the gui, but the GUI is a service. These means it can run just the same without a gui, it can be controlled programmatically. Jython is a service which you can write scripts to do things with the Arduino service. Or multiple Arduino services. Everything in MRL speaks to one another in a standard message format. So Arduinos can speak to other Arduinos, or OpenCV can tell the Arduino to move Servos or Motors etc etc...
Well, let me know how it works for you or if you have ideas on how to make it better..
Thank you..
good times....
Closer...
The sketch didn't go in.
Ok, played around... I converted my Duemilanove into an Uno (which did not help). From there, I zapped-in the sketch a few more time, each time wondering how it was going in so fast. I did indeed see the RX/TX blink, but for a very quick second only. In the end, I installed a blink sketch, then zapped in the MRL sketch via MRL. After that sketch was sync'ed, my blink sketch was still blinking --It didn't go in. Again, RX/TX lights did their little flicker thing, but proof is in pudding or something, and in this case proof was a blinking pin13, still blinking.
I cut and pasted the MRL standard sketch into the regular Arduino IDE (023) and zapped it to the Uno. It went in like any other sketch. I fired up MRL again and connected. I have a scope --Which is quite fantastic, by the way, but no manual access to pins.
So far, I have tried a pot on one analog input and I have tried to turn pin13 on and off via the "pins" tab. I am getting nothing from my pin13 led. I am moving on to a meter so I can test some of the PWM sliders and I need to jump 5v into some of the digitals to be sure I can see them on the scope as well.
I really want this to work, dude --The scope alone is incredibly beautiful and simple and fast and great.
I'll let you know.
Yay !
Your up (ish)....
maybe flip the pin 13 out to in then back to out ?
pin 13 has a different default initialization of pinmode if I remember correctly...
Anyhoo, its great that its working(ish) for you..
So, the MRLComm.ino was a little rough, and you had to use the real Arduino IDE to get it "worky" ... hmm need to add more logging..
I got pins
I have no idea what the hell is going on with my 13...
Yup, I have all analogs and all the digitals/ pwm's work via the "pins" tab. I have digital inputs on the scope too. 100%
Oddly enough, I am getting exactly 1.75v on my pin13 when I send it high. Every other digial pins works just fine with a clean 5.04v on each. Once again, the blink sketch works great but when run via MRL, only 1.75v. Friggin' weird, dude.
Oh well, whatever, I don't need to stinkin' pin13 --screw it.
Everything else is working. I am going to go play with it now.
Hardware issue?
Pin 13 saw one to many LEDs ?
Pin13 was always a chump
I always had bad feelings about this particular pin13 on this board. I could tell he was going to be trouble maker. Well, its his loss, he could be triggering a ping or enabling a motor, but HA! You ain't doin' any of that now, are ya, pin13?
As of now, there is no pin13... He is dead to me.
Fixed ....data sent to the
Fixed ....
So, theoretically if you got a new enough version you can simply press :
Help -> About -> I feel lucky, give me the bleeding edge ! - to get updates
Thanks.
Here's a 2560 Mega ... I
Here's a 2560 Mega ... I had to "zap" the pde in from the Arduino IDE... this is after fighting Window 7 64bit with drivers for an hour Grrr... Found some more parts to fix (of course), but it is possible !
So everything can be
So everything can be controlled by Python script.. It's the glue between the services...
If you got your Arduino working previously throught the gui, you "should" be able to use this script & simply copy/paste it to the jython tab - then hit the
execute button..
You'll probably have to adjust the COM setting appropriately, but it "should' start an Arduino - and add a digital & analog trace..
Initial startup is slow - but that's because of a delay in the RXTXComm package - the Arduino IDE has this same delay starting on windows 7 64 bit...
from time import sleep
from org.myrobotlab.service import Arduino
arduino = runtime.createAndStart('arduino','Arduino')
# set and open the serial device
# arduino.setSerialDevice('/dev/ttyUSB0', 57600, 8, 1, 0)
arduino.setSerialDevice('COM6', 57600, 8, 1, 0)
sleep(4)
arduino.pinMode(14, Arduino.INPUT)
# arduino.digitalReadPollingStop(7)
arduino.analogReadPollingStart(14) # A0
arduino.digitalReadPollingStart(7)
copy/paste/click
wait & view | http://letsmakerobots.com/node/34061?page=3 | CC-MAIN-2016-07 | refinedweb | 1,450 | 83.86 |
Musings about ASP.NET and more...by G. Andrew Duthie
Since one of the attendees at my PDC pre-con asked about this, I thought I'd pass it along. I've updated the sample code from my ASP.NET Overview pre-con session to add the necessary code for performing authentication when sending mail using an SMTP server that requires authentication (and requiring authentication for your SMTP server is a Good Thing®).
System.Web.Mail is just a wrapper over CDOSYS, and the v1.1 framework provides a new Fields member (on the MailMessage class) that provides access to any fields available in CDOSYS, by specifying the namespace for that field (more info on the fields available can be found here). So the code for authenticating a mail request is:
'Use the following fields to add authentication, if necessary
'0 = None / 1 = Basic / 2 = NTLM
Mail.Fields("") = 1
Mail.Fields("") = "myUser"
Mail.Fields("") = "myPwd"
Do keep in mind that when using basic authentication, credentials are passed in clear text, so it is especially important to make use of SSL encryption if you are using basic authentication.
Thanks to Darren Jefford, from Microsoft UK, for working this out.
Hi Andrew,
Thanks for sharing. Mike pin a very useful thread on the Forums about SMTP Mailing as well.
FYI:
Your my hero! I've been looking for this for a very long time...
The code is very useful and it works magically | http://weblogs.asp.net/gad/archive/2003/11/02/35288.aspx | crawl-002 | refinedweb | 241 | 66.33 |
a=['123','2',4] b=a[4] or 'sss' print b
I want to get a default value when the list index is out of range (here:
'sss').
How can I do this?
In the Python spirit of “ask for forgiveness, not permission”, here’s one way:
try: b = a[4] except IndexError: b = 'sss'
In the non-Python spirit of “ask for permission, not forgiveness”, here’s another way:
b = a[4] if len(a) > 4 else 'sss'
In the Python spirit of beautiful is better than ugly
Code golf method, using slice and unpacking
b=a[4:4+1] or 'sss'
Nicer than a wrapper function or try-catch IMHO, but intimidating for beginners. Personally I find tuple unpacking to be way sexier than list[#]
using slicing without unpacking:
b,=a[4:5] or ['sss']
or, if you have to do this often, and don’t mind making a dictionary
d = dict(enumerate(a)) b=d.get(4,'sss')
You could create your own list-class:
class MyList(list): def get(self, index, default=None): return self[index] if len(self) > index else default
You can use it like this:
>>> l = MyList(['a', 'b', 'c']) >>> l.get(1) 'b' >>> l.get(9, 'no') 'no'
another way:
b = (a[4:]+['sss'])[0]
You could also define a little helper function for these cases:
def default(x, e, y): try: return x() except e: return y
It returns the return value of the function
x, unless it raised an exception of type
e; in that case, it returns the value
y. Usage:
b = default(lambda: a[4], IndexError, 'sss')
Edit: Made it catch only one specified type of exception.
Suggestions for improvement are still welcome!
try: b = a[4] except IndexError: b = 'sss'
A cleaner way (only works if you’re using a dict):
b = a.get(4,"sss") # exact same thing as above
Here’s another way you might like (again, only for dicts):
b = a.setdefault(4,"sss") # if a[4] exists, returns that, otherwise sets a[4] to "sss" and returns "sss"
I’m all for asking permission (i.e. I don’t like the
try…
except method). However, the code gets a lot cleaner when it’s encapsulated in a method:
def get_at(array, index, default): if index < 0: index += len(array) if index < 0: raise IndexError('list index out of range') return array[index] if index < len(a) else default b = get_at(a, 4, 'sss')
For a common case where you want the first element, you can do
next(iter([1, 2, 3]), None)
I use this to “unwrap” a list, possibly after filtering it.
next((x for x in [1, 3, 5] if x % 2 == 0), None)
or
cur.execute("SELECT field FROM table") next(cur.fetchone(), None)
Using try/catch?
try: b=a[4] except IndexError: b='sss'
Since this is a top google hit, it’s probably also worth mentioning that the standard “collections” package has a “defaultdict” which provides a more flexible solution to this problem.
You can do neat things, for example:
twodee = collections.defaultdict(dict) twodee["the horizontal"]["the vertical"] = "we control"
Read more:
If you are looking for a maintainable way of getting default values on the index operator I found the following useful:
If you override
operator.getitem from the operator module to add an optional default parameter you get identical behaviour to the original while maintaining backwards compatibility.
def getitem(iterable, index, default=None): import operator try: return operator.getitem(iterable, index) except IndexError: return default
If you are looking for a quick hack for reducing the code length characterwise, you can try this.
a=['123','2',4] a.append('sss') #Default value n=5 #Index you want to access max_index=len(a)-1 b=a[min(max_index, n)] print(b)
But this trick is only useful when you no longer want further modification to the list | https://exceptionshub.com/getting-a-default-value-on-index-out-of-range-in-python.html | CC-MAIN-2021-21 | refinedweb | 651 | 56.59 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.