On Thursday 26 July 2007 20:21:59 Simon Arlott wrote:
> On 26/07/07 00:49, Dave Johnson wrote:
> > ipv6_addr_type() doesn't check for 'Unique Local IPv6 Unicast
> > Addresses' (RFC4193) and returns IPV6_ADDR_RESERVED for that range.
> >
> > +    if ((st & htonl(0xFE000000)) == htonl(0xFC000000))
> > +        return (IPV6_ADDR_UNICAST |
> > +                IPV6_ADDR_SCOPE_TYPE(IPV6_ADDR_SCOPE_GLOBAL)); /* RFC 4193 */
>
> But ULA's scope isn't global, shouldn't it be IPV6_ADDR_SCOPE_ORGLOCAL?

Yes it is - global.

--
Rémi Denis-Courmont
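For reference, the patch classifies addresses in the RFC 4193 Unique Local Address block fc00::/7, i.e. the top seven bits are 1111110. A minimal standalone sketch of the same test, operating on the raw 16-byte address rather than the kernel's internal 32-bit word (the function name and signature are illustrative, not from the patch):

#include <cstdint>

// Returns true when the 16-byte IPv6 address falls inside fc00::/7,
// the Unique Local Address range defined by RFC 4193.
bool is_unique_local(const uint8_t addr[16]) {
    // Mask off the low bit of the first byte: 0xfc and 0xfd both match.
    return (addr[0] & 0xFE) == 0xFC;
}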
http://lkml.org/lkml/2007/7/26/345
CC-MAIN-2017-09
en
refinedweb
Wiki javarosa / OpenRosa Workgroup Skype Chat Logs 17 Nov - 18 Nov 2011 @ 4.24pm

[11/17/2011 4:41:49 PM] *** Anton de Winter added Jonathan Jackson ***
[11/17/2011 4:42:01 PM] *** Anton de Winter added Clayton Sims ***
[11/17/2011 4:42:05 PM] *** Anton de Winter added Drew Roos ***
[11/17/2011 4:42:09 PM] *** Anton de Winter added Yaw Anokwa ***
[11/17/2011 4:42:12 PM] *** Anton de Winter added Carl Hartung ***
[11/17/2011 4:46:04 PM] Anton de Winter: Hey guys, created this group chat for OpenRosa API discussions
[11/17/2011 4:46:17 PM] Anton de Winter: see the emails I sent out to the OR Workgroup for more deets
[11/17/2011 4:49:18 PM] *** Anton de Winter added Cory Zue ***
[11/17/2011 4:56:07 PM] Jonathan Jackson: can skype still do public links to chats?
[11/17/2011 4:56:08 PM] Jonathan Jackson: let the JR room know this is here.
[11/17/2011 4:57:15 PM] *** Anton de Winter has changed the conversation topic to "OpenRosa API Chat" ***
[11/17/2011 4:57:59 PM] Anton de Winter: cool, found the public URI
[11/17/2011 4:57:59 PM] Anton de Winter: [Thursday, November 17, 2011 4:57 PM] sys: <<< skype:?chat&blob=2pSByFPIcAva_JLXE2kR8r2qBavK2_4mlpyViJGZr596luQTeQNe2iZ4CYTF1bkuMMVg5jtheE6CE3zZ_VC7LXv0XY_U556LKY1INu9eDfuOOlAfvv4p7w
[11/17/2011 5:23:28 PM] *** Anton de Winter added Matt Adams ***
[11/17/2011 5:23:44 PM] Anton de Winter: Hey everyone
[11/17/2011 5:23:54 PM] Anton de Winter: welcome to Matt Adams with Radical Dynamic.
[11/17/2011 5:24:54 PM | Edited 5:25:14 PM] Anton de Winter: They're working on a data collection product based on ODK that uses JR
[11/17/2011 5:26:59 PM] Matt Adams: Thanks Anton.
[11/17/2011 8:40:04 PM] Yaw Anokwa: read over the spec...
[11/17/2011 8:40:15 PM] Yaw Anokwa: i don't think any of the metadata should be required.
[11/17/2011 8:41:10 PM | Edited 8:46:40 PM] Yaw Anokwa: it'd also be good if we standardized on what prefix. we use jr: everywhere else, and in metadata we use orx: and that'd clean up the need for attributes on the data node too.
[11/17/2011 8:43:06 PM | Edited 8:48:30 PM] Yaw Anokwa: i think any change you make to the form is a new version. i don't like the version/uiversion thing. i see why it's there and i'm not going to fight it, but i don't like it.
[11/17/2011 8:44:35 PM] Yaw Anokwa: i think we can get rid of fillMechanismUri unless there is a clear usecase.
[11/17/2011 8:48:47 PM | Edited 8:49:47 PM] Yaw Anokwa: do we really need the deprecatedId. is there a clear usecase?
[9:05:00 AM] Cory Zue: +1 on removing the version/uiversion distinction
[9:05:54 AM] Cory Zue: no idea what the fill mechanism is
[9:06:29 AM] Cory Zue: deprecation is a useful concept, though what we do currently, which i don't like, is use the same uid and assume that if the content is different it's an overwrite
[9:06:45 AM] Cory Zue: if we don't think we care about standardizing on the edit workflow am happy to remove it
[9:07:26 AM] Cory Zue: i would also propose getting rid of the "uuid:" "openid:" prefixes in all the examples, unless we actually expect the clients to do that. seems a bit silly to me
[9:08:23 AM] Cory Zue: the jr/orx distinction is a purist one, i think, where jr is an implementation of the orx apis (but also is free to add new stuff). i could take it or leave it
[11:10:50 AM] Anton de Winter: Agreed on namespacing. We're all using JR: we should stick to that for 1.0 and can change it for 1.1 if need be.
[11:11:14 AM] Anton de Winter: Also not sure what fill mechanism is for so going to cut it
[11:11:58 AM] Anton de Winter: There was a ton of discussion around having the prefixes (uuid: openid: etc)
[11:12:04 AM] Anton de Winter: so I think in the interest of getting things to 1.0, we should keep it in.
[11:12:16 AM] Drew Roos: the reason for the jr: / orx: split was because jr: was a namespace for xform customizations, and orx: was a namespace to be included in instances
[11:12:48 AM] Drew Roos: and i've also never liked version/uiVersion
[11:13:52 AM] Anton de Winter: yeah, we don't use it properly anyway... Does ODK?
[11:15:40 AM] Anton de Winter: so I'd suggest that we merge uiVersion and Version (=> Version), which indicates any change in the xform
[11:16:02 AM] Drew Roos: that shows up as a mushroom for me
[11:16:27 AM] Anton de Winter: ?
[11:17:59 AM] Anton de Winter: What is a mushroom?
[11:18:14 AM] Cory Zue: i think drew is saying that ( = > renders as a mushroom on his skype
[11:18:31 AM] Anton de Winter: oh haha
[11:19:12 AM] Drew Roos: my original plan was an X.Y style version
[11:19:24 AM] Drew Roos: where X ~= version and Y ~= uiVersion
[11:19:43 AM] Drew Roos: can we just put version in the xmlns?
[11:19:55 AM] Clayton Sims: Drew: We discussed that originally
[11:19:58 AM] Drew Roos: we must have found a reason not to the last time 'round
[11:20:38 AM] Clayton Sims: it makes it difficult to recognize the same data without doing a lot more processing
[11:21:19 AM] Drew Roos: "how does every other xml app on the planet handle this?"
[11:21:58 AM] Clayton Sims: the version/uiversion thing that we used came from an article about how intel deals with this
[11:22:28 AM] Drew Roos: but isn't it clear the uiVersion at least has been a failure?
[11:22:38 AM] Drew Roos: because it's never updated when it's supposed to be
[11:23:00 AM] Jonathan Jackson: if the one group trying to use it correctly hasn't, I would argue yes.
[11:23:07 AM] Jonathan Jackson: doesn't that also mean our backend is getting no value out of it?
[11:23:11 AM] Clayton Sims: uiVersion clearly doesn't work
[11:23:17 AM] Clayton Sims: I'm fine with only using one version #
[11:23:37 AM] Clayton Sims: the whole notion of UIVersion was way, way too long-term
[11:25:05 AM] Anton de Winter: Ok, so I think I'm going to merge the two to just "version" and let it indicate any changes to the form
[11:25:14 AM] Clayton Sims: I think that's a good approach
[11:25:27 AM | Edited 11:25:32 AM] Anton de Winter: Are we settled on ORX: versus JR: ?
[11:25:49 AM] Anton de Winter: Leaning heavily towards making it JR and re-investigating for 1.1
[11:26:07 AM] Clayton Sims: So... I don't get this
[11:26:13 AM] Clayton Sims: you mean for namespaces?
[11:26:18 AM] Anton de Winter: yeah
[11:26:30 AM] Clayton Sims: are you using jr:/orx: as placeholders for the actual namespaces?
[11:27:01 AM] Clayton Sims: I think openrosa makes the better namespace
[11:27:12 AM] Clayton Sims: but really don't care
[11:29:05 AM] Anton de Winter: I don't think it adds much processing overhead to have it in a different namespace
[11:29:10 AM] Anton de Winter: so maybe it's best to leave it in
[11:29:11 AM] Drew Roos: i think the question is same or different namespaces
[11:29:14 AM] Anton de Winter: (it sure doesn't hurt)
[11:29:26 AM] Clayton Sims: Oh, I'm pro same namespaces
[11:29:33 AM] Jonathan Jackson: it should be orx
[11:29:36 AM] Clayton Sims: What do we think should be in a different namespace?
[11:29:37 AM] Jonathan Jackson: its just jr because we made it jr.
[11:29:40 AM] Jonathan Jackson: and never changed it
[11:29:55 AM] Jonathan Jackson: i'm sure of this because I made up the orx namespace
[11:29:56 AM] Drew Roos: @jon you know the prefix is irrelevant
[11:29:59 AM] Jonathan Jackson: with the intention of it replacing jr
[11:30:22 AM] Jonathan Jackson: yes, its only per form, right?
[11:30:27 AM] Drew Roos: we could replace jr with orx right now as long as you map orx: to the same namespace uri
[11:30:38 AM] Jonathan Jackson: right.
[11:30:45 AM] Drew Roos: though some of the itext stuff expects hard-coded jr:
[11:30:52 AM] Jonathan Jackson: that should change.
[11:31:17 AM] Anton de Winter: that's an implementation problem though.
[11:31:24 AM] Drew Roos: i don't feel a compelling reason to switch from jr:
[11:31:58 AM] Drew Roos: doesn't javarosa as a name have much more traction than openrosa?
[11:31:59 AM] Clayton Sims: Drew, I'm fine with jr =
[11:32:30 AM] Drew Roos: xmlns:
[3:56:48 PM] Jonathan Jackson: yeah (see the sample xml)
[3:56:50 PM] Mitch Sundt: whatever you give me is what it is tagged with...
[3:56:59 PM] Drew Roos: ok
[3:57:05 PM] Drew Roos: well formID is completely redundant
[3:57:13 PM] Clayton Sims: I'm way, way, way, pro on eliminating formid and using xmlns
[3:57:20 PM] Mitch Sundt: Form Id is either the xmlns attribute value or the value of the id attribute.
[3:57:39 PM] Drew Roos: <data xmlns="..." id="..."> ?
[3:57:48 PM] Matt Adams: @Clayton, same here.
[3:57:50 PM] Mitch Sundt: try running some forms through an xml validator sometime...
[3:57:59 PM] Ian Robert Lawrence: we use <model> <instance> <data id="2rtoaucm">
[3:58:15 PM] Drew Roos: i'm fine with that
[3:58:47 PM] Mitch Sundt: exactly. People like simple names, like "2rtoaucm" -- the XML spec requires xmlns be a URI.
[3:59:28 PM] Mitch Sundt: Form id refers to either usage. Use id="blah" or form up a big URI and go to town with xmlns
[4:00:44 PM] Anton de Winter: Mitch can you summarize what you'd like to see for a solution w.r.t the version+uiVersion issue
[4:01:08 PM] Drew Roos: for the record, the xml spec says relative URIs as namespaces are deprecated
[4:01:14 PM] Anton de Winter: Drew/Clayton can you do the same?
[4:02:03 PM | Edited 4:02:50 PM] Drew Roos: a single version attribute, format and meaning are undefined; servers make best-faith effort to process version #s intelligently; fine with id/xmlns "form id" scheme
[4:02:17 PM] Clayton Sims: xmlns as formid, single version attribute as integer
[4:02:49 PM | Edited 4:03:24 PM] Yaw Anokwa: alphanumeric thing as formId, single version attribute as incrementing integer. fine with using xmlns as form id as a stop gap, but i don't love it.
[4:03:02 PM] Matt Adams: I'm with Yaw on this.
[4:03:10 PM] Anton de Winter: What I'd imagine is that we have a simple version indicator made by humans for humans to easily keep track of changes in the form (any changes). This version indicator shouldn't be relied upon for internal data merging by the server. Something like the xmlns or formID (or a diff of some kind) should be used by the server to determine if relevant changes have happened.
[4:04:05 PM] Mitch Sundt: On the server, formId must match for the server to attempt ANY merge of data sets / form versions.
[4:04:23 PM] Jorn Klungsoyr: Hi, I may be completely off. But why would you restrict version to an integer? Are you using the version number to communicate to the client the version number to use (that is, always use the latest)?
[4:04:27 PM] Mitch Sundt: The version is metadata to track, as the form creator desires, the differences among tweaks to a form.
[4:04:35 PM] Mitch Sundt: This assumes uiVersion is eliminated.
[4:04:44 PM] Anton de Winter: yep
[4:05:43 PM] Mitch Sundt: Whether the server can handle model changes while the formID remains unchanged is server-specific.
[4:06:05 PM] Mitch Sundt: Aggregate, for example, can easily merge data for changes that don't alter the data model.
[4:07:12 PM] Jorn Klungsoyr: If the version is only used by the backend to handle things - then there should be no reason to require it to be an integer.
[4:07:32 PM] Mitch Sundt: Why make it more complex?
[4:07:51 PM] Anton de Winter: Jorn, it won't only be used by the backend
[4:08:51 PM] Jorn Klungsoyr: But what would a client use it for? Other than being able to properly identify the version the data being submitted belongs to.
[4:08:58 PM] Mitch Sundt: And if you want the version to have more structure, the only thing that makes sense is the original uiVersion + version integers.
[4:09:47 PM] Mitch Sundt: and I think we've decided to keep it simple.
[4:09:48 PM] Anton de Winter: Jorn, in the case when an admin modifies the form and the user (on the client) needs to identify the form they want
[4:10:12 PM] Mitch Sundt: Yes, that is the other use case.
[4:10:39 PM] Mitch Sundt: If you pull the list of forms available on the server, the version is presented and the client app can then know whether it has the same version.
[4:10:41 PM] Jorn Klungsoyr: But still, I don't see why that needs to be restricted to an integer. Any system should ensure that a version is unique - whether string or integer.
[4:11:43 PM] Anton de Winter: Well, I think string or arbitrary value isn't a great idea, because of determining on the server side which is newer: which between "foo" and "bar" is newer?
[4:11:54 PM] Anton de Winter: constraining it makes life simpler for server implementors
[4:12:10 PM] Mitch Sundt: yes
[4:12:27 PM] Anton de Winter: Integer is simple, easy for a client with 0 software experience to also identify which is newer (the higher number)
[4:12:37 PM] Anton de Winter: 2.3.4.x could be confusing to someone with no software experience
[4:13:02 PM] Anton de Winter: and there appears to be little benefit to having x.x.x versus x
[4:13:09 PM] Anton de Winter: for our purposes, here.
[4:15:31 PM] Jorn Klungsoyr: Well, I think you also restrict which servers can act as backends. If somebody has another versioning scheme, then it adds an extra unnecessary level. The backend can implement integer or string - or whatever - as long as it is possible to identify it.
[4:15:45 PM] Mitch Sundt: Note that a change to version/uiVersion in the Metadata schema would also be reflected in the FormListAPI, where the majorMinorVersion would just become a version (it was version + "." + uiVersion ).
[4:16:50 PM] Anton de Winter: Jorn, that's a fair point but it's also why we're pushing these APIs: if you comply with the API you agree what the versioning format should be
[4:17:08 PM] Anton de Winter: you know what to expect, as the server, and deal with it accordingly
[4:17:13 PM] Mitch Sundt: Integers can be stored as strings, but strings can't be stored as integers. Restricting v1.0 to integers allows more servers to function, not less.
[4:17:15 PM] Anton de Winter: similarly your form creators need to follow that spec
[4:17:57 PM] Mitch Sundt: yes, it does restrict the form construction tools
[4:18:46 PM] Anton de Winter: but that's not a bad thing
[4:19:00 PM] Anton de Winter: (nor a hard thing to make work on the formdesigner side, honestly)
[4:19:20 PM] Mitch Sundt: it is easier to restrict now, and wait for demand to enlarge
[4:19:28 PM] Mitch Sundt: than to try to narrow later.
[4:19:31 PM] Anton de Winter: +1
[4:19:59 PM] Drew Roos: i don't care enough to continue this conversation
[4:20:15 PM] Mitch Sundt: I need lunch
[4:20:23 PM] Yaw Anokwa: i need a beer
[4:20:39 PM] Jorn Klungsoyr: If the spec just requires the version to be a unique string (at the API level), then you don't restrict. Well I can say for openXdata this would be too restrictive.
[4:21:10 PM] Jonathan Jackson: lets go get beers, circle back in 30
[4:21:18 PM] Jonathan Jackson: no one leaves until we've accomplished our job.
[4:21:25 PM] Jorn Klungsoyr: I need to sleep, over and out |-)
[4:21:39 PM | Edited 4:21:46 PM] Cory Zue: i'm with jorn on this. Your server can limit the support if you want, but no reason to restrict it at the spec level
https://bitbucket.org/javarosa/javarosa/wiki/OpenRosa%20Workgroup%20Skype%20Chat%20Logs%2017%20Nov%20-%2018%20Nov%202011%20@%204.24pm
CC-MAIN-2017-09
en
refinedweb
Type: Posts; User: syedarshadali

Just try that:
var tt = new Ext.ToolTip({
    target: averageWorkExpCombo.getEl(),
    title: 'Mouse Track',
    width: 200,
    html: 'This tip will follow the mouse while it is over...

I am also facing the same problem in IE. Would you like to share some code? Thanks.

How do you people compare ExtJS with SmartClient?

Problem is fixed just by adding the scope attribute in the handler, scope: this
this.grid = new Ext.grid.GridPanel({
    store: draftTicketsStore,
    cm: new Ext.grid.ColumnModel({
        defaults: {
            //width: 120,
            sortable: true
        },
        columns: [ // sm,

I am also getting the same problem. What's the solution? Or what am I doing wrong?

Ok, if it's not possible, then is it possible to extract data from the Pie Chart, data that shows percentages, and then we could display that information in a separate area / table. Basically the...

Can we place Pie Chart legends in this way: Or inside the pie chart?

Are there any events to capture next / previous button operations, like onNext, onPrevious?

Condor, how should I call it with correct scope? Any sample code you can provide? Thanks.

//In utils.js, I have a function
this.currentUser = function( callback ) {
    //PHP class method called by ExtJS Direct
    var f = CMS.Direct.UserInfo.getJSON.createDelegate(null, [callback]);
    f();...

Will this solution work for ExtJS 3.0?

I want to reset PagingToolbar but the suggested code does not work for me, giving me a javascript error: this.afterTextEl is undefined

Thanks Animal for your reply, which I received in email but is not visible here, don't know why. Anyhow, in the ExtJS doc for TabPanel: TabPanel uses their header or footer element (depending on the...

Can we add a 'refresh' tool in TabPanel like the 'close' tool? I want to have a 'Refresh' icon in every tab, like we have a close icon in every tab.

Problem has been solved through the ref attribute, updated code:
SearchEmployeePanel = new Ext.extend(Ext.Panel, {
    constructor: function(conf) { ...

I am creating a custom component, but when I create 2 instances of the same component in a page, both components share the same ids (I guess) for child elements.
var searchField;
var frmSearch;...

I can call my specified function on the paging toolbar's refresh button's click by:
grid.getBottomToolbar().refresh.on('click', resetSearch );
But I want to load my store with different params...

Does any reset method exist in FormPanel? Oh, we have to call getForm() from FormPanel, then call the reset method :)

Is there any event available on pressing the Ext.PagingToolbar Refresh button?

Thanks man, BoxComponent works... Btw, one of the main reasons to use ExtJS is the quick support :)

Following code giving me the error: c.getHeight is not a function
ch = c.getHeight();
Ext.namespace('cms.ui');
var ui = Ext.namespace("cms.ui");

Yup, I have just upgraded Opera from 7 to 10, it's working now.

Web Desktop demo included in ExtJS 3.0 is not working in Opera. I am using Opera version 7.54. Any idea?

Thanks Condor, yes it's working after adding the root attribute. Yes, I know about the built-in JSON encoder thingy.
https://www.sencha.com/forum/search.php?s=fdac6c807665bd0dfeb55c16be55e689&searchid=18955928
CC-MAIN-2017-09
en
refinedweb
How to make a QML Component a Singleton?

In this tutorial I'll walk you through how to define a custom QML Component as a singleton.

UPDATE: I uploaded a new sample code which better outlines the difference between using an instance of MyStyleObject and using the singleton MySingletonStyleObject. You can download it here.

Currently I'm working on a rather large Qt Quick application which will be deployed on an embedded device. I'm starting to notice performance issues as we add more and more screens to the application. I decided to run the QML Profiler that is part of Qt Creator, and one of the things I noticed was that one of our QML Components is being created 1500+ times. This component is an Item which holds a number of properties defining different styling options, like what the colour Blue is for our purposes; it's our style sheet. The issue is that it loads non-standard fonts, which is expensive, so as it turns out we're loading the fonts 1500+ times.

Each screen/page in the application instantiates its own copy of the style object, as we weren't really keen on defining one globally and simply referring to it. There is nothing wrong with that approach per se, but it does rely on a dependency: that someone instantiated the object somewhere in the application and that the object's id never changes. So how can I reduce the number of instantiated objects without depending on a locked-in-stone id name?

Enter the Singleton

As it turns out, Qt Quick does have a mechanism to let you define a QML component as a singleton. I didn't even think to check whether that was an option; a colleague mentioned it to me in passing, and they only knew about it from a one-line statement on a blog post. Ok, so it's a thing, but how do you do it? There are three steps: first you need to use the pragma Singleton keyword in your QML script, then you need to register the component as a singleton, and lastly you need to import it into the other QML scripts which are going to use it.

Step 1: Declaring the QML Component as a Singleton

Let's say this is your style sheet QML object that you want to make a singleton.

import QtQuick 2.0

Item {
    readonly property string colourBlue: "blue"
    readonly property string colourRed: "red"
    readonly property int fontPointSize: 16
}

To declare it as a singleton you need to add the keywords pragma Singleton to the top of the script file.

pragma Singleton
import QtQuick 2.0

Item {
    readonly property string colourBlue: "blue"
    readonly property string colourRed: "red"
    readonly property int fontPointSize: 16
}

Step 2: Registering the Component

Now you have two options: you can register the Component either in C++ or by using a qmldir file. To register the singleton via C++, somewhere in your C++ code you need to call qmlRegisterSingletonType().

#include <QtQml>
...
qmlRegisterSingletonType( QUrl(""), "ca.imaginativethinking.tutorial.style", 1, 0, "MyStyle" );
...
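For context, here is a minimal sketch of how that registration call might be wired into an application's entry point. The qrc:/ resource paths and the main.qml file name are illustrative assumptions, not taken from the tutorial:

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Register MyStyleObject.qml as a singleton named MyStyle under the
    // tutorial's import URI. The qrc:/ path is an assumption; use whatever
    // URL resolves to your MyStyleObject.qml file.
    qmlRegisterSingletonType(QUrl("qrc:/MyStyleObject.qml"),
                             "ca.imaginativethinking.tutorial.style",
                             1, 0, "MyStyle");

    QQmlApplicationEngine engine;
    engine.load(QUrl("qrc:/main.qml")); // assumed application entry point
    return app.exec();
}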
If adding the call to qmlRegisterSingletonType() won't work for you (maybe this is a Qt Quick UI project, i.e. no C++), then you can add a file called qmldir to the directory where your MyStyleObject.qml file exists. When importing a directory, the QML Engine first looks for the qmldir file and uses it to import the scripts found in that directory; if the file does not exist it will import the scripts with default values (i.e. non-singletons, using the file name as the component name).

The qmldir file can define different names to be used instead of the file name and can also tell the import to register the script as a singleton. Here is what the directory structure should look like:

/root
 + absolute
 |   + path
 |   |   + qmldir
 |   |   + MyStyleObject.qml
 |   |   + AnotherObject.qml
 |   |   + MyButton.qml
 |   |   + MySwitch.qml
 |   + main.qml

Here is how to define the qmldir file:

singleton MyStyle 1.0 MyStyleObject.qml
MyOtherObject 1.0 AnotherObject.qml
MyButton 1.0 MyButton.qml

Step 3: Importing and Using the Singleton

If you used the C++ option above, then in order to import and use the singleton in your QML script you need to import the module you defined via the second parameter of qmlRegisterSingletonType(), then access the object using the registered name (parameter three of qmlRegisterSingletonType()).

import QtQuick 2.0
import ca.imaginativethinking.tutorial.style 1.0

Rectangle {
    anchors.fill: parent
    color: MyStyle.colourBlue // <-- Notice that to access the singleton I use the object name, not an id (i.e. capital M)
}

If you used the qmldir approach, then you simply need to import the directory, which will register all the scripts under that directory.

import QtQuick 2.0
import "path"

Rectangle {
    anchors.fill: parent
    color: MyStyle.colourBlue // <-- Notice that to access the singleton I use the object name, not an id (i.e. capital M)
}

Note here, however, that if main.qml was within the path directory you would still have to import the path, as it does not appear that the qmldir file gets used when you rely on the automatic lookup path.

So what does this buy us? Example time: let's say we create an app that shows 100 blue boxes with red borders on the screen. We'll use the MyStyleObject component to hold what shade of blue Blue is and the thickness of the border. We'll achieve this by using a GridView with an integer model of 100, and the MyStyleObject will be created within the delegate.

GridView {
    anchors.fill: parent
    model: 100
    delegate: gridDelegate
    cellHeight: 50
    cellWidth: cellHeight
}

Component {
    id: gridDelegate
    Rectangle {
        width: 50
        height: width
        color: myStyle.colourBlue
        border.color: myStyle.colourRed
        border.width: myStyle.borderSize
        Text {
            anchors.centerIn: parent
            text: index
            font.pointSize: myStyle.fontPointSize
            color: myStyle.colourWhite
        }
        MyStyleObject {
            id: myStyle
        }
    }
}

We can run the QML Profiler in Qt Creator (Analyze -> QML Profiler); it will switch Qt Creator over to the Analyze view and launch our application. Once our application has launched we can press the stop button on the Analyze view to terminate the application and stop the profiler. It will then load the data into the view.

By instantiating an instance of the MyStyleObject within the delegate, you can see it gets created 100 times, which takes 12.084ms; it is bound to 99 times, which takes 1.130ms; and the compile phase for the object (which only happens once) takes 708.820µs. If we change this into a singleton and run the same profiler, we'll find that the object is now only created once (474.917µs); although the compile phase does take longer (1.917ms), you're still saving about 10ms. This might not seem like a lot overall, but when you're trying to keep the presentation of your application at 60 FPS, every little bit helps.
GridView {
    anchors.fill: parent
    model: 100
    delegate: gridDelegate
    cellHeight: 50
    cellWidth: cellHeight
}

Component {
    id: gridDelegate
    Rectangle {
        width: 50
        height: width
        color: MyStyle.colourBlue // <-- Capital-M MyStyle means I'm now accessing the one and only instance of the MySingletonStyle object
        border.color: MyStyle.colourRed
        border.width: MyStyle.borderSize
        Text {
            anchors.centerIn: parent
            text: index
            font.pointSize: MyStyle.fontPointSize
            color: MyStyle.colourWhite
        }
    }
}

So there you go: that is all you have to do to define a QML Component as a singleton so that it only gets created once. You can download a sample application which illustrates the above here: QML Singleton Sample 1.1 - Updated to be more clear.

Thank you

I hope you have enjoyed and found this tutorial helpful. Feel free to leave any comments or questions you might have below and I'll try to answer them as time permits. Until next time, think imaginatively and design creatively.

Hello, I don't understand this situation very clearly. A singleton component can only be created once. So if we replace MyStyleObject with the MySingletonStyle component, the application fails because more than one delegate is created. So you want to use your example to demonstrate the creation of a singleton component? If we move MyStyleObject outside of the delegate declaration, there is just one creation of it. Did I understand your purpose? Thanks

Thank you for your comment Tan Do. Yes, after looking at this post and the sample app again, I fear I was not being clear enough; sorry about that. When switching to use the MySingletonStyleObject object, the creation of the MyStyleObject within the delegate is not changed to MySingletonStyleObject (which will fail, as you found out) but is actually removed, as the instantiation of the singleton object will be done the first time you use it. All uses of the id reference myStyle within the delegate are changed to MyStyle (notice the capital M), which is what we registered the MySingletonStyleObject as in the qmldir file. So instead of: we would write

I've updated the post and the sample app to show this more clearly. Again, thank you for your comment. Until next time, think imaginatively and design creatively.

It's great, Brad! Your blog helps me so much when investigating Qt/QML! Keep up the good job! 🙂

Hi, thank you for the post. I am developing with Qt 5.4.1 QML/JS. I'm not able to import the singleton into my QML script (Principal.qml): QML module not found. My steps:

1) I create the MySingleton.qml file:

pragma Singleton
import QtQuick 2.0

Item {
    property var _driver;
    property string _codeapp;

    function getDriver(){ return _driver; }
    function setDriver(driver){ _driver=driver; }
    function getCodeApp(){ return _codeapp; }
    function setCodeApp(codeapp){ _codeapp=codeapp; }
}

2) I created the qmldir file (Add new file -> General -> Empty File) and added the following line:

singleton MySingleton 1.0 MySingleton.qml

Principal.qml, MySingleton.qml and qmldir are placed in the same directory. Can you help me? Thank you in advance.

Hi, I could do it looking in this post too. Thank you
http://imaginativethinking.ca/make-qml-component-singleton/
CC-MAIN-2017-34
en
refinedweb
A MOSFET is a voltage-driven switch that controls the flow of current in an electronic circuit. The devices are made from a doped semiconductor material. Unlike magnetic power control devices, MOSFETs have a very small form factor and they do not have moving parts. This means that MOSFETs can operate much faster than magnetic switching devices. As shown in Figure 4, the devices have three basic external connections: the source, the drain and the gate. The source is connected to ground, the drain is connected to the load, and the MOSFET is switched ON when a positive voltage is applied to the gate.

Flash Memory Cell

The charge of the floating gate determines the flow of current from the source to the drain. The floating gate can be neutral, positively or negatively charged. If the floating gate is neutral, then the storage transistor behaves like a normal MOSFET. A positive charge on the control gate creates a conducting channel in the p-substrate and current flows from the source to the drain. Lastly, a negative charge on the floating gate prevents the formation of a channel in the p-substrate. Another important parameter is the threshold voltage: the minimum voltage at the control gate which can make the channel conductive.

Operations which can be performed on a flash memory cell include programming the cell and erasing the cell. When we program a flash memory cell, what we are physically doing is placing electrons into the floating gate. On the other hand, when we remove the charge from the floating gate, we are essentially erasing the memory cell. However, the detailed process of trapping or removing electrons from the floating gate is beyond the scope of this article.

Arduino Flash Memory

Flash memory, also known as program memory, is where the Arduino stores and runs the sketch. Since the flash memory is non-volatile, the Arduino sketch is retrieved when the microcontroller is power cycled. However, once the sketch starts running, the data in the flash memory can no longer be changed; modification can only be done when the program is copied into SRAM memory. The table below shows the amount of flash memory available on some different Arduino boards:

Arduino EEPROM

In some instances, we may need to store the states of certain input and output devices on the Arduino for long periods. For that, we save the data to EEPROM memory with the help of Arduino libraries or third-party EEPROM libraries. This helps us to remember the information when we power up the Arduino again. Most Arduino boards have built-in EEPROM memory, but in some cases, certain programs may require the use of an external EEPROM. The functions below help us to interact with the Arduino EEPROM:

#include <EEPROM.h>

EEPROM.write(address, value);
EEPROM.read(address);
EEPROM.update(address, value);
EEPROM.get(address, data);
EEPROM.put(address, data);

To update or write to EEPROM, we need the address to write to and also the value to write or update. The read function accepts the address to read from and returns the value stored at that address. The get() and put() functions operate just like the read() and write() functions respectively, except that they allow us to store other data types such as floats, structs or integers.
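As a quick illustration of the calls above, here is a small hypothetical sketch (not from the article) that keeps a boot counter in EEPROM so the value survives power cycles:

#include <EEPROM.h>

const int bootCountAddr = 0;  // EEPROM address used by this sketch (assumed)

void setup() {
  Serial.begin(9600);
  byte bootCount = EEPROM.read(bootCountAddr);  // value persists across resets
  bootCount++;
  EEPROM.update(bootCountAddr, bootCount);      // writes only if the value changed
  Serial.print("Boot count: ");
  Serial.println(bootCount);
}

void loop() {
}

Using update() here instead of write() spares EEPROM wear: the cell is only rewritten when the stored value actually differs.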
https://www.circuitbasics.com/types-of-memory-on-the-arduino/
CC-MAIN-2020-29
en
refinedweb
In Part One of File Handling in C++, we introduced the ios class, the parent of all stream classes, and learned some of its main features (manipulators and format flags). In this article, we will revisit the ios class one more time, and then talk about the istream and ostream classes.

The ios Class Member Functions

Besides the manipulators and format flags, the ios class has a list of functions that control formatting. In this section, we are going to investigate some of the commonly-used functions and illustrate them by example.

The setf() and unsetf() Functions

As I told you in the previous article, the setf() function is used to set (enable) format flags. Its opposite, unsetf(), disables the effect of a flag.

The fill() Function

This function works in two modes: it can get the fill character currently in use, or set a new one. Its use is equivalent to using the setfill() manipulator function.

The width() Function

This also has two modes: getting the current field width, and setting a new one. It is equivalent to using the setw() manipulator function.

Example

Consider the following code that illustrates the use of the fill() and width() functions:

#include <iostream>
using namespace std;

int main()
{
    int num = 3490;
    char ch;
    cout << "Number: ";
    cout.fill('*');
    cout.width(10);
    cout << num << endl;
    ch = cout.fill();
    cout << "Currently using " << ch << " as filling character." << endl;
    return 0;
}

This program should produce the following output:

The istream and ostream Classes

Derived from the same parent class ios, both the istream and ostream classes inherit the features of the ios class and extend them with features of their own.

istream Class

As its name implies, this class is responsible for input operations. We have already used some of the functions defined in this class in many examples. For instance, we have been using the extraction operator >> since the first few articles. In this section, we are going to review what we already know and learn some new functions.

The Extraction Operator >>

Mostly used with the cin object (the standard input stream), the extraction operator >> extracts input from the input stream on its left and assigns it to the variable on its right.

The get() Function

We learned about this function when talking about Strings. The get() function has many forms of operation depending on the list of provided arguments. For example, the function will wait for a single-character input if provided a char variable as an argument.

#include <iostream>
using namespace std;

int main()
{
    char choice;
    cout << "A) Print the hostname\n"
         << "B) Print the IP Address info\n"
         << "C) Print the system date and time\n\n"
         << "Enter your choice: ";
    cin.get(choice);
    cout << "\nYou entered " << choice << endl;
    return 0;
}

This will behave as follows:

The get() function can also be used to read single-line input, or multi-line input if a delimiter character is specified.

The getline() Function

Another way to extract input text. This function is similar to the get() function, with slightly different syntax.

Example

Consider the following program:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string str;
    char delimit = '#';
    cout << "\n Enter text: \n";
    getline(cin, str, delimit);
    cout << "\nYou entered: \n" << str << endl;
    return 0;
}

This will read user input until the user enters #.

The gcount() Function

This function returns the number of characters read in the last input operation.
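To make that concrete, here is a small sketch of getline() and gcount() working together; it is my illustration rather than code from the original article:

#include <iostream>
using namespace std;

int main()
{
    char line[32];
    cout << "Enter text: ";
    cin.getline(line, sizeof(line));
    // gcount() reports the characters extracted by the last unformatted
    // input call; for getline() this includes the discarded newline.
    cout << "Extracted " << cin.gcount() << " characters." << endl;
    return 0;
}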
ostream Class

In contrast to the istream class, the ostream class is responsible for output operations. We have already used one of the utilities defined in this class: the insertion operator <<, which we have used so far in this series to write to the standard output.

Besides the insertion operator and the other functions inherited from its parent class ios, like fill(), clear(), and good(), the ostream class has methods of its own, such as:

- put() and write() for unformatted output.
- tellp() and seekp() for getting and setting the position inside a file.

Summary

In this article, we have continued what we started in Part One in the context of I/O.
- The fill() function gets/sets the padding character in output.
- The width() function gets/sets the field width.
- The istream and ostream classes are derived from the ios class.

That was Part Two. See you in Part Three, where we cover File I/O.
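Before Part Three, here is a parting sketch (my own illustration, not from the tutorial) of the unformatted output functions put() and write() mentioned above:

#include <iostream>
using namespace std;

int main()
{
    // put() writes a single character to the stream.
    cout.put('A');
    cout.put('\n');

    // write() performs unformatted output of a block of characters.
    const char msg[] = "Hello, streams!\n";
    cout.write(msg, sizeof(msg) - 1); // exclude the trailing '\0'

    return 0;
}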
https://blog.eduonix.com/system-programming/learn-inputoutput-file-handling-c-part-2/
CC-MAIN-2020-29
en
refinedweb
Trove/MeetingAgendaHistory

Trove Weekly Meeting Agenda History (2015)

Agenda for Jan 28th
- Review #147908: Try to gain some consensus on how to resolve the issues itemized in the final review comment.

Agenda for Jan 21st
- Review #131610: Security concerns, continued from the previous meeting + general review

Agenda for Feb 4th
- Meeting canceled due to mid-cycle in Seattle

Agenda for February 18 2015
- Trove pulse update:
- Trove related presentations/talks in the Liberty Summit -- Please Vote!
- Trove Liberty Mid Cycle Sprint -- Initial planning
- Continue discussion of proposal to classify datastores and strategies
  - Has been updated with more details and review comments thus far. I would like to request discussion and hopefully get a green light to complete implementation.

Agenda for February 25 2015
- Trove pulse update:
- [amrith] Need reviewers for 157140 [1]
  - Would like to have people look at this and consider it for merging quickly as there are other patch sets that will be impacted by it, especially the one reorganizing the guestagent code, anything that wants to add a new datastore, etc.
  - There have been some comments (Simon, Ashleigh, Sushil) that I've tried to address in a reply but I would like some positive (or negative) reviews. Ok, I'd like some positive reviews ;)
- [doug] Discuss 147908 [2]
  - There are a couple of options for how to proceed on this change and it would be good to get it into Kilo-3.
- [schang] Discuss 136918 [3]
  - Discuss the comment thread at point #3 under the Guest Agent section. Need to decide what to do with the common code.
- [amrith] Trove Liberty Mid Cycle Sprint -- Initial planning
  - Am hoping we can nail down a date.
  - Currently, the week of August 24th is winning.
- [sushilkm] Need reviewers for Vertica work
  - Looking forward to people looking at the Vertica work: the spec, trove-integration and trove patches.
  - The list of patchsets for the work is as follows:
    - trove-specs:
    - trove-integration:
    - trove:

Agenda for March 4 2015
- Trove pulse update:
- Feature proposal freeze at the end of this week.
- Currently open specs:
- Feature request (vkmc) - SSH keys injection on Trove instance creation. Has it been discussed before? Are there any cons to adding it?
- Review #131610 - Anna

Trove Meeting, March 11, 2015
- [SlickNik] Trove pulse update:
- [SlickNik] Feature Freeze Next Week (March 19)
  - Please prioritize reviews for BPs:
- [amrith] Coming down the pike for Liberty
  - oslo-config-generator
  - Heads-up, I'll be submitting BPs or changes for these as appropriate and arrange to have the code ready post Kilo freeze.

Trove Meeting, March 18, 2015
- [SlickNik] Trove pulse update:
- [SlickNik] OpenStack Feature Freeze this week for kilo-3:
- [SlickNik] Switch from oslo namespace packages to "oslo_" style modules
- [peterstac] Come up with a strategy to deal with Mock()ing utility methods
  - We have several patchsets that are mocking execute_with_timeout, which can cause unexpected behaviour.
  - For context see:
- [sushilkm] Different timeouts in configuration based per datastore

Trove Meeting, March 25, 2015
- [SlickNik] Trove pulse update:
- [vgnbkr] Instance Storage for Replication
- [sushilkm] Different timeouts in configuration based per datastore

Trove Meeting, April 1, 2015
- [SlickNik] Trove pulse update:

Trove Meeting, April 8, 2015
- [SlickNik] Trove pulse update:
- [vgnbkr] Exceptions in Unit Tests
  - Propose that when unit tests test for generic errors such as RuntimeError, we check with assertRaisesWithRegexp for the actual error message rather than just the type. Propose this for all changes going forward, but without fixing existing tests at this time.
- [SlickNik] Liberty Design Summit sessions

Trove Meeting, April 15, 2015 CANCELED

Trove Weekly Meeting Agenda History (2014)

Agenda for Mar. 19
- Moving away from mockito
- Clustering API: Container vs. Joins approach.

Agenda for Mar. 26
- Summit sessions [hub_cap]
- Data Store abstraction start/stop/status/control [amrith]
- Icehouse, RC1, and Juno

Agenda for Apr. 2
- Icehouse RC1 cut and Juno branch open
- ATL Dev Summit topics
- Open discussion

Agenda for Apr. 9
- Refactoring backup/restore strategies [denis_makogon]
- Moving the docs [grapex]
- Design Summit Session Proposals

Agenda for Apr. 16
- Discuss the expectations of an agenda item for the weekly meeting [core/amcrn]
  - Goal: Explain what is expected of an agenda item to guarantee the conversation stays on topic and is headed toward meeting a goal or resolving an issue.
  - amcrn (talk) 07:13, 17 April 2014 (UTC):
    - Result: majority vote agrees that each agenda item should have an objective and be goal-oriented.
    - Solution: as a result, guidelines for agenda items have been added to the top of this wiki and the core team has agreed to shepherd folks for the next few weeks on how to fine-tune their agenda items.
    - Next Steps: enforce guidelines/rules.
- Datastore and Datastore Version Concat in Trove Horizon Dashboard [michael-yu]
  - Question to Answer: Should there be two columns in list-view, and two dropdowns in launch-view, for datastore and datastore-version respectively? Or should they be concatenated into one?
  - See the ongoing conversation.
  - amcrn (talk) 07:13, 17 April 2014 (UTC):
    - Result: majority vote agrees that datastore-type and datastore-version should be consolidated into one dropdown for the launch-instance workflow and one column for list-view. Long-term, many believe this to be a hint/indication that datastore-type and datastore-version should be combined into a single field for the v2 API, but this is yet to be seen.
    - Next Steps: michael-yu updating his horizon patch-set to be a single dropdown/column, and amcrn updating trove-integration to default to a naming scheme that plays well with this model
- Perform integration testing for the heat based instances [Shivamshukla]
  - What could be the way to test the instance workflow through heat?
  - amcrn (talk) 07:39, 17 April 2014 (UTC)
    - Summary: some tests for a heat-based workflow require changing trove-taskmanager conf values, which then require a restart of the trove-taskmanager. The problem with requiring a restart is that it makes the naive assumption that it's not a remote system.
    - Solution: add a test.conf flag that indicates whether non-remote tests should be run (i.e. tests that require the restart of trove-taskmanager). Long-term, a majority agree that a separate heat gate should be birthed.
    - Next Steps: shivamshukla to update the patch-set to include the test.conf flag mentioned above.
- Neutron Support, SecGroup Management, and Networking [annashen, denis_makogon]
  - amcrn (talk) 07:57, 17 April 2014 (UTC):
    - Result: annashen will unilaterally/single-handedly own the blueprint + implementation for supporting neutron in trove. The three related blueprints are to be merged into one. A majority agrees that this consolidated blueprint needs to clearly articulate the requirements, beyond just stating "support running neutron instead of nova-network".
    - Next Steps: annashen to deliver unified blueprint and implementation.
- Refactoring backup/restore strategies (re-visiting from previous meeting) [denis_makogon]
  - amcrn (talk) 07:57, 17 April 2014 (UTC)
    - Result: Ran out of time, was not discussed in the meeting.

Agenda for Apr. 23
- Juno Mid-cycle Meetup - discuss venue/timing proposal and confirm

Agenda for Apr. 30
- Establish policy for disk image builder elements for datastores [mat-lowery]
  - Do we require elements for both Ubuntu and Fedora when a Gerrit change introduces a new datastore or version?
    - Cassandra and Couchbase are examples of datastores that don't currently have Fedora elements. There are examples of Gerrit changes that have been -1ed because of lack of Fedora elements.
    - Isn't one working element better than none? If both are required, is that inviting lower quality in the other element the user is required to code but never uses?
    - Are Fedora elements tested in an automated way as the Ubuntu elements are? Reasoning: Yes, they were submitted with the Gerrit change, but do they work?
    - Is the policy that Trove requires both Fedora and Ubuntu support for each manager on day one? There is already a lack of parity across datastore managers -- not all can do backup and restore (but ultimately I assume they will).
      Why can't Trove take distro support piecemeal just like it does the datastore operations?
    - mat-lowery (talk) Wed Apr 30 19:17:28 UTC 2014
      - Result: <SlickNik> Let's table this for now.
  - What about Package Install vs. Tarball Install vs. Source Install? Is there a requirement on how the bits are laid down?
    - If packages are required, what are the acceptable sources? (Example: Redis currently uses a PPA.)
    - Does the decision here have repercussions on datastore upgrades?
    - mat-lowery (talk) Wed Apr 30 19:17:28 UTC 2014
      - Result: <SlickNik> So allow [PPAs]. But let's make a best effort to follow this order: distro package, project maintainer PPA, some random PPA.
- Open Items
  - Clarification on commit message style guidelines (slicknik): We've been having a lot of -1 reviews on commit message style and I wanted to clarify some of this so that we can reduce review churn caused by this. Goal: Have a clear understanding of when a commit message should be -1'ed.

Agenda for May 7
- How do Gerrit changes get approved? [mat-lowery]
  - Goal: To clarify the Gerrit change approval process used by Trove core (for the benefit of core and non-core).
    - Core potentially benefits by establishing a process that all core follows (and possibly swapping best practices).
    - Non-core potentially benefits by making the process transparent and setting expectations.
  - "Hey core, please review <change>" is inefficient and unfair. The priority (such as that calculated by ReviewDay) should be the sole indicator of review/approval priority.
  - Goal: To reduce the time from submittal to approval and prevent Gerrit change starvation.
    - My first-ever Trove submittal is nearly three months old. Why is that?
  - Establish Gerrit "etiquette."
    - No leaving -1s and then disappearing. A reviewer that leaves a -1 has an obligation to respond to follow-up questions.
    - No leaving -1s for nice-to-haves. That's what 0 is for.
    - No leaving -1s for questions about why something was done. Again, use a 0.
    - No leaving -1s for something minor when there are five +1s.
    - Leaving a -1 has the potential to cause the author to "reset the counter" (because he has to submit a new revision) on age-influenced priorities (such as ReviewDay). Think hard before you leave that -1.
  - More details on the mailing list.
- Meetings cancelled next week since most folks will be at the ATL summit [slicknik]

May 14 meeting canceled

Agenda for May 21
- Informational PSA on OpenStack Logging Standards (cp16net)
- Follow up on ATL discussion on Code Reviews
- Clustering python-troveclient Interface
  - Reference to Clustering API:
  - Question: How to model heterogeneous instances for a cluster in a single python-troveclient command?
  - See mockups.

Agenda for May 28
- Refactor/Re-write notifications [denis_makogon]
  - We need to discuss the current contract/payload for all notifications since there are a lot of concerns about the current design and implementation

Agenda for Jun 4
- Juno-1 release to be cut next week (SlickNik)

Agenda for Jun 11
- Code reviews summary + state of the gate
  - We've been doing a better job with code reviews, but we still need more reviews (both from non-core and core!).
  - Code Review summary for the last 30 days:
    - Total reviews: 542 (18.1 per day)
    - Total reviewers: 33 (0.5 per reviewer per day)
    - Total reviews by core team: 282 (9.4 per day)
    - Core team size: 6 (1.6 per core per day)
- RDJenkins gate is currently broken because the image built doesn't use the correct cloud-init data source
  - steve-leon and slicknik investigating
- FYI: We cut the Juno-1 trove release! (SlickNik)

Agenda for Jun 25
- Integration-tests update (SlickNik)
- "Vertica Datastore patch review": why no reviews? (SnowDust)

Agenda for July 2

Agenda for July 9
- Scrub / Update Docs (SlickNik): The following need to be scrubbed. I'd like to get a few volunteers to take a look at each one of these, identify areas where they are deficient, and help fix them. Areas to update are being tracked.
  - Developer Docs
  - Deployment Guide
  - API Doc
  - Config Options help strings
  - Manual Trove Install Doc

Agenda for July 16
- Propose and discuss initial Clustering UI options (michael-yu).
- Doc Scrub Update:
- Juno-2 next week:

Agenda for July 23
- Juno-2 cut:
- Doc Scrub Update:
- Other matters:
  - [amrith] adam_g raised the issue on IRC re: the bug. I believe this can be fixed without breaking the API contract by extending the API to include a new parameter. If this proposal is acceptable, I'll volunteer to resubmit the patch set.

Agenda for Aug 06
- Juno Midcycle meetup proposed agenda
- Trove related talks for the Kilo Summit

Agenda for Aug 13
- Juno Midcycle meetup proposed agenda
- Heads up: Clustering implementation has landed in Gerrit:

Agenda for Aug 27
- Juno-3 cut next week
- Clusters
  - See the walkthrough of the operations and some notes.

Agenda for Sep 3
- Juno-3 cut this week

Agenda for Sep 10
- Clusters Migration and Downgrades (schang)
  - The new v32 clusters migration script creates a foreign key constraint between the clusters table and the database_versions table, but its downgrade script does not clean up the table and the foreign key constraint, causing the v16 downgrade to fail on dropping the database_versions table.
  - To ensure error-free downgrades moving forward, we have these options:
    - All migration scripts' downgrades need to clean up tables and foreign key constraints:
      - Nobody currently does downgrade migrations on a production system. Shouldn't downgrade's purpose be allowing devs to perform hard resets on their test databases? If test data preservation is needed, the dev should manually back up and restore the table.
    - Establish version points at which downgrade is not permitted. e.g. Downgrade v32 -> v25 is allowed, but you can't downgrade from latest to anything pre-v25.
    - Only test the upgrade path in the test script.
    - Remove all downgrade routines and don't support downgrade migrations at all.
  - Action Item: Discuss the above options and decide on an approach.
  - Review:
  - Error log:
  - Related IRC discussions:

Agenda for Sep 17
- trove-specs is underway for Kilo (slicknik)
  - All blueprints going forward (for Kilo) must include a spec proposed to the trove-specs repo.
  - Will give folks a lead time of a week -- so if you're submitting a bp on or after Sept 22, please make sure you have a spec linked to it.

Agenda for Sep 24
- Juno RC1
- Design Summit details

Agenda for Oct 8
- Discuss if a change to rdservers-related logic in load_mgmt_instance is acceptable as part of a bugfix
- FOSS Outreach Program Projects

Agenda for Oct 15th
- Discuss the timing of trove-guestagent refactoring.
  This is a big change and it would help to reduce rebasing if this was fast-tracked or put on hold until other patches land. (rmyers)

Agenda for Oct 29th
- [denis_makogon]
  - During bug discussion (using report comments) an informal decision was made to mark all running backups as FAILED if the instance is being deleted. But this checkin was marked with -2 by Auston (for more info see the checkin comments). I'd like to discuss it and find a way suitable for all of us.
  - As we all know, an incremental backup can silently be executed as a full backup because there's no incremental strategy for a given backup strategy. This fix aims to fix that, but Amrith keeps voting with -1 because he thinks the fix can be shorter (more info in the checkin comments). My idea is to have a common validation method that executes specific logic depending on the operation type. So, I'd like to hear concrete feedback from other contributors and find a way suitable for all of us.
  - We need to find a proper way to handle the cluster extension procedure using the CLI. Also, as was discussed, once clustering is merged we might proceed with changing the existing python-troveclient framework (for more information see the GIST).
- [amrith]
  - Since the meeting next Monday (BP meeting) will likely be preempted by the summit in Paris, I'd like to get some focus on a blueprint to allow users to retrieve guest log files. Please review. I have re-written this specification and (with her permission) listed Iccha as the mentor of the person who will be implementing this.

Agenda for Nov 19th
- Looking for reviews of oslo-incubator changes
  - 129292, Obsolete oslo-incubator modules - unused modules
  - 129294, Obsolete oslo-incubator modules - timeutils
  - 129714, Obsolete oslo-incubator modules - gettextutils (now oslo.i18n)
  - 129664, Obsolete oslo-incubator modules - network_utils (now netutils)
  - 129654, Obsolete oslo-incubator modules - excutils
  - 129295, Obsolete oslo-incubator modules - lockutils
  - 129663, Obsolete oslo-incubator modules - strutils
  - 129378, Obsolete oslo-incubator modules - importutils
  - 129668, Obsolete oslo-incubator modules - jsonutils (now oslo.serialization)
  - 133068, Obsolete oslo-incubator modules - wsgi
  - 133051, Obsolete oslo-incubator modules - exception
- Better documentation for Image building
  - Goal: This was identified as an area of confusion at the summit. Identify what the next steps here are and establish a timeline for fixing this.
- Trove Mid Cycle Sprint in Seattle, WA
  - Goal: Close on a date for the mid-cycle sprint so that other logistics can follow
- Trove Cross Project Liaisons
  - Goal: Identify volunteers for Cross Project Liaisons

Agenda for Nov 26th
- Dates for Trove Kilo Mid-Cycle sprint
- Trove BP meeting -- to bp or not to bp?

Agenda for Dec 3rd
- [amrith] Discuss merge order for various OSLO related patches
- [amrith] Discuss status of the patch set moving the guest agent to its own repository

Trove Blueprint Meeting Agenda History (2014)

For those interested, we have blueprint meetings in #openstack-trove weekly, Mondays at 18:00 UTC. Feel free to add items to the agenda below.

Agenda for Mar. 31
Networking related blueprints:
- Network attribute management [denis_makogon]
- Neutron Support for Trove [annashen]
- Trove Managed Instances on Private Network [juice]
Others:
- Point in time recovery [denis_makogon]
- Data volume snapshot [denis_makogon]
- Instance metadata [imsplitbit]

Agenda for Apr. 7
- Point in time recovery [denis_makogon]
- Data volume snapshot [denis_makogon]
- Networking [denis_makogon]
- Cross-region backups [esmute]
- Descriptions to Datastore Configuration Group Parameters [cp16net]
- Categorize the trove-manage commands [cp16net]
- Neutron Network Support [annashen]
- Trove-Managed Instances [juice]

Agenda for Apr. 14
- Datastore Capabilities [k-pom]
- Descriptions to Datastore Configuration Group Parameters [cp16net]
- Categorize the trove-manage commands [cp16net]
- Point in time recovery [denis_makogon]
- Data volume snapshot [denis_makogon]
- Networking [denis_makogon]
- Neutron [annashen]
- Move the Trove Guest Agent to its own module [robertmyers]
- Unify common code in guest agent configurations [amrith and snowdust]
- Adding datastore and version to the backup API [robertmyers]

Agenda for Apr. 21
- Unify common code in guest agent configurations [amrith and snowdust]
- Adding datastore and version to the backup API [robertmyers]
- Move the Trove Guest Agent to its own module [robertmyers]
- Database log files manipulations [denis_makogon]
- Resource management driver [denis_makogon]
- Add backup and restore support for single instance couchbase [michael-yu]
- Populate endpoint URLs from Keystone service catalog [esmute]
- Enable specification of Cinder Volume Types
- Add ability to create read-only users via the users API

Agenda for April 28
- Percona support [mattgriffin]
- Support for Vertica [yogeshmehra]
- Resource management driver [denis_makogon]
- Managed Instances [juice]
- List all datastore types and versions by a single API call [nehav]

Agenda for May 5
- Percona support [mattgriffin]
  - slicknik (talk): We can look into this (streaming to Swift NA yet, but there are some new features)
  - slicknik (talk): This will be supported through our mysql datastore manager
  - slicknik (talk): We do not support clustering yet, but we will look into this once v1 support for clustering arrives.
- Instance database log manipulations [denis_makogon]
  - slicknik (talk): dmakagon to fix the wiki page conf section with the exact API
- Update database instance name [nehav]
  - slicknik (talk): Approved to update name (only in trove and not in nova)
- Allow configs to be rendered based on datastore version [cp16net]

Meeting Cancelled May 12

Agenda for May 19
- Trove client for cross-region-backup [esmute]
  - amcrn (talk) 18:24, 19 May 2014 (UTC): Meeting Minutes: this was originally scheduled due to concerns with introducing a new field called "region". At the time, there was no consensus as to whether namespacing should be used (arn-style), etc. During the summit, we reached a consensus (in the clustering talks) that a "region" field is the way to go, and therefore there are no action items here.
- Add visibility filter to datastores [iccha]
  - amcrn (talk) 18:24, 19 May 2014 (UTC): Meeting Minutes: consensus is that if is_active is 0, then only if the tenant-id is in a whitelist (new table?) will they be able to use and see the datastore-version. This is more aligned with how glance deals with images vs. what's proposed in the original spec here (a simple admin-only).
- Database log files manipulations [denis_makogon]
  - amcrn (talk) 18:54, 19 May 2014 (UTC): Meeting Minutes: the following modifications are required for approval: (1) adding posix created/modified timestamp fields on the response so that a user knows when the log was last touched and/or rotated; (2) use configuration-groups for configuration parameters (vs. introducing the mapping in the code); (3) fix a few bugs in the blueprint, including inconsistent usage of underscore vs. hyphen in naming, and datastore_log_files should be an array.

amcrn (talk) 19:04, 19 May 2014 (UTC): the items below by boden are being tabled at the moment (pending confirmation of sponsorship and a discussion surrounding the precedent this sets). amrith to send an email to boden to sync up.
- Pluggable conductor manager [boden]
- Configurable DB plugins [boden]
- Multi-path plugin support; added notes to existing BP [boden]

Agenda for Jun 2
- Trove should use Keystone Trusts for Authentication instead of hard-coding the credentials in configuration files.
  - Please refer to this blueprint
  - Changes are similar to the Keystone trusts blueprint in Tuskar
  - Using keystone trusts will eliminate the risk of keeping login credentials in a configuration file.
  - Can be seen as a security enhancement through better practice.
  - slicknik (talk) 21:22, 2 June 2014 (UTC): There were concerns around whether keystone trusts is fully done yet, and what other OpenStack projects are going to use this. We had a vote, and the unanimous result was to wait and watch for now.
- Flavor per datastore association [iccha]
  - slicknik (talk) 21:22, 2 June 2014 (UTC): Approved. There was one small concern about backward-compat with the GET /flavors api call which should be addressed.
- Conductor phase 2 [denis_makogon]
  - slicknik (talk) 21:22, 2 June 2014 (UTC): Needs more detail as to what phase 2 will actually entail. Denis to update the bp with more details.
- Add created/updated timestamps and instance count to configuration groups list and details calls [tvoran and iccha]
  - slicknik (talk) 21:22, 2 June 2014 (UTC): Approved
- Pluggable conductor manager [boden]
  - slicknik (talk) 21:22, 2 June 2014 (UTC): Approved
- Configurable DB plugins [boden]
  - slicknik (talk) 21:22, 2 June 2014 (UTC): Needs more information. Boden to update the BP.
- Multiple API extension paths [boden]
  - slicknik (talk) 21:22, 2 June 2014 (UTC): Ran out of time and didn't get to this. Tabled for a later meeting.

Agenda for Jun 23
- Heat integration [denis_makogon]
- Configurable DB plugins [boden]

Agenda for Jul 14
- New API call to get the default configuration values for a specific datastore version flavor without an instance (cp16net)
- configurable db plugins [boden]
  - Based on our previous discussion I propose we drop this BP as (a) no other projects expose this level of integration and (b) this couples consumers to an internal impl detail which is not guaranteed to be supported long term.
- multiple API extension paths [boden]
  - See email note here (bullet c):

Agenda for Jul 21
- [denis_makogon] Datastore log files operations
- [boblebauce] Use oslo-rootwrap
- [denis_makogon] Disk space validation coefficient; Ceilometer integration. Notifications (revisiting after fleshing out).

Agenda for Jul 28
- [boden] Dynamic extension loading using stevedore

Aug 18 Meeting Canceled
- Most trove folks traveling for Trove Day + Midcycle event in Cambridge, MA

Agenda for Aug 25
- [amrith] What do we do about SUSE? There are bugs coming in at a steady clip fixing little things that don't work in SUSE, but in some cases no test cases are being proposed, nor is there a clear plan to get SUSE on the supported platform list. How do we want to proceed?
I raise this question because I don't want to use the review process as the place to discuss on a patch-by-patch basis whether a certain code change should be considered or not; rather raise it and address it as a larger issue of how we get to a supported SUSE platform including testing on an ongoing basis, etc., Sep 1 Meeting Canceled - Labor Day, so most folks are out Agenda for Sep 8 - [iartarisi] - SUSE support - I created a blueprint to track the SUSE support in trove. We could discuss this and see what else is needed and how to proceed. (also see the full specification link) - [kevinconway] - SSL support - [denis_makogon] - ceilometer integration Agenda for Sep 15 - [zhiyan] - OSProfiler integration - [denis_makogon] - Cassandra clustering - Clustering int-tests Agenda for Sep 22 - [denis_makogon] - Cassandra clustering - Oracle 12c support over Fedora 20 - [zhiyan] - OSProfiler integration: Agenda for Oct 22nd - [SlickNik] Discuss trove design summit sessions in the etherpad - [amcrn] Ask folks to review and (re-)categorize bugs accordingly. - [amrith and AJaeger] AJaeger started working on. This same issue was debated in and after some deliberation we eventually didn't take the change. My recollection is that this was discussed at the mid-cycle in Cambridge as part of the conversations that led to and we decided that we wouldn't take H301. Maybe that was a decision for the Juno cycle only; I don't recall. After speaking with AJaeger, I've put this on the agenda for this meeting because I would like to make sure that we're in favor of this change now before he spends much more time on it, and the attendant fixes to H302 and H304 which produce numerous failures. - [robertmyers and schang] Discuss guestagent refactoring progress. - passed and ready for review, but gate seems to be broken. - Prepare for other guestagent merges. Discuss status of other in-flight commits. Agenda for Oct 27 - (Add RAM, Cores, and Volume Count to Quotas) - (Make Rsync Optional For Guest) - (Enance Mgmt-Show To Support Deleted Instances) - (Guest Agent RPC Ping Pong Mgmt API) Agenda for Nov 3 - Allow users to retrieve guest log files Agenda for Nov 5 Meeting canceled Agenda for Nov 12 Meeting canceled Agenda for Nov 17 - Add support for DB2 Express-C datastore () - Mariam John - Add support for Apache CouchDB datastore () - Mariam John [grapex] - Example Snippet Generator - Tim Simpson [Riddhi] - Flavors per datastore - - sgotliv (Sergey Gotliv) and peterstac (Peter Stachowski) wanted to discuss the relevant API changes. Agenda for Nov. 24 - [flaper87, sgotliv, amrith] Switch Trove to use OSLO Messaging [4] - Since a spec in trove-specs is required for all projects related to blueprints that are intended for merge in Kilo, this spec has been submitted for review based on a document that was already in the old wiki format. A procedural approval similar to the ones granted to other specs [flavor id's and datastore versions] is requested for this. - [iccha, amrith] Allow users to retrieve guest log files [5] - [dougshelley66] Add instance name as parameter to various CLI commands [6]
https://wiki.openstack.org/w/index.php?title=Trove/MeetingAgendaHistory&diff=prev&oldid=78027
CC-MAIN-2020-29
en
refinedweb
I'm working on a tool that's invoked primarily from the command line and scripts. The easiest way for me to interact with it is through the console. So in PyCharm, this is fairly easy to do, just Tools->Run Python Console... Then I can work with my code directly in there as per normal. The problem is that PyCharm doesn't really seem to get me anything over running from the regular Python console! I really want to be able to set a breakpoint in my code, so I could do something like this:

D:\Tools\Python27\pythonw.exe -u C:\Program Files (x86)\JetBrains\PyCharm 1.2.1\helpers\pydev\console\pydevconsole.py 52626 52627
import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['C:\\Program Files (x86)\\JetBrains\\PyCharm 1.2.1\\helpers', 'D:\\Code\\Python\\PyHello'])
Python 2.7.1 (r271:86832, Nov 27 2010, 17:19:03) [MSC v.1500 64 bit (AMD64)] on win32
>>> from fibo import Fibo
>>> fib = Fibo(boundary=10)
>>> fib.show()
1
1
2
3
5
8
>>>

Pretty straightforward. But I'd like to set a breakpoint in the __init__ or show() functions and that doesn't work. Is there a way to start the console in debug mode? I found this thread, but it's not quite what I want to do: that involves starting a debug session first, then opening the shell afterwards. I want to be able to debug from right within the console.

Hello Rick,

Right now there is no possibility to run the console in debug mode. Running your script in the debugger, putting a breakpoint somewhere and then showing the command line console is the only possibility at the moment.

> >>> from fibo import Fibo
> >>> fib = Fibo(boundary=10)
> >>> fib.show()

--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!"

Oh. Well, that'd be useful, so consider this a vote for that :-) What I've done is to create a go.py script that contains the stuff I'd normally do in the console and then I run that. That works OK, but it's not as flexible because of course I can't just try different things as I go along. Thanks for the quick response.

Hi Rick, There is already an issue for that so you can vote ;-)
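For reference, a minimal sketch of the go.py driver-script workaround mentioned above, reusing the Fibo example from the console session quoted earlier (the fibo module and its boundary parameter are taken from this thread, not from any real project):

# go.py -- run this file under the PyCharm debugger instead of the console;
# breakpoints set inside fibo.py (e.g. in __init__ or show) will then be hit.
from fibo import Fibo

fib = Fibo(boundary=10)  # the same calls you would otherwise type interactively
fib.show()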
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205804329-Debugging-from-the-Python-Console
CC-MAIN-2020-29
en
refinedweb
From: atkinson julian (julian.atkinson71_at_[hidden])
Date: 2008-07-22 16:56:35

Hi,

The combination of asio and coroutines would seem to represent a little bit of magic (layering a linear code model over an async event framework, without the attendant issues involved in multithreading). However, the coroutine library seems to throw an exception in an unanticipated way when performing a wait on a future<> during the execution of an asio async operation.

I am following the documentation and am reasonably familiar with asio from other projects, but have only just worked through some of the examples from the coroutines documentation. I am using the vault freeze version of coroutines from 2006. I understand from perusing these lists that the library author is currently rewriting boost::coroutine around a continuation framework, to simplify it and in the hope that the library may become an official boost inclusion. Much kudos for contributing to c++ in a significant way ;-)

The documentation seems to have some errors:
- guidance is given to use shared_coroutine in preference to coroutine in an asio context, although this is not present in the example
- asio uses the boost::system error classes in the boost namespace when not freestanding
- the example calls wait() on source, which is a socket type rather than a future

I have attempted to work up an example which looks logical, but can't get around the throw that occurs at /boost/coroutine/detail/context_base.hpp:213

// linux 2.6.23, g++ (GCC) 4.1.2
// g++ -g main.cpp -lboost_coroutine-mt -lboost_system-mt

#include <iostream>

#include <boost/bind.hpp>
#include <boost/ref.hpp>

#include <boost/coroutine/coroutine.hpp>
#include <boost/coroutine/generator.hpp>
#include <boost/coroutine/future.hpp>

#include <boost/asio.hpp>

namespace coro = boost::coroutines;
using namespace coro;

typedef boost::asio::io_service service_type;

//typedef coro::coroutine< void()> foo_coroutine_type;
//typedef coro::coroutine< void( service_type &service)> foo_coroutine_type;
typedef coro::shared_coroutine< void( service_type &service)> foo_coroutine_type;

void foo( foo_coroutine_type::self &self, service_type &service)
{
    typedef boost::system::error_code error_type;
    typedef boost::asio::ip::tcp::resolver::query query_type;
    typedef boost::asio::ip::tcp::resolver::iterator iterator_type;

    boost::asio::ip::tcp::resolver resolver( service);
    query_type query( "", 80 );

    coro::future< error_type, iterator_type> future( self);
    resolver.async_resolve( query, coro::make_callback( future));
    assert( !future);

    // wait throws here --> /boost/coroutine/detail/context_base.hpp:213
    coro::wait( future);
    assert( future);

    error_type &error = future->get< 0>();
    if( error)
        std::cout << "error resolving " << error.message() << std::endl;
    else
    {
        iterator_type ip = future->get< 1>();
        while( ip != iterator_type())
            std::cout << (ip++)->endpoint() << std::endl;
    }
}

int main()
{
    service_type service;
    foo_coroutine_type cfoo( boost::bind( &foo, _1, boost::ref( service)));
    cfoo();
    // service.post( boost::bind( &foo_coroutine_type::operator(), &cfoo));
    service.run();
}

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2008/07/140175.php
CC-MAIN-2020-29
en
refinedweb
Portal:Toolforge/Admin/Kubernetes/Certificates

This page contains information on certificates (PKI, X.509, etc.) for the Toolforge Kubernetes cluster.

General considerations

Kubernetes includes an internal CA which is the main one we use for cluster operations. By default, kubernetes-issued certificates are valid for 1 year. After that period, they should be renewed. The internal kubernetes CA, generated at deployment time by kubeadm, expires after 10 years. The current CA is good until Nov 3 14:13:50 2029 GMT.

Worth noting that etcd servers don't use the kubernetes CA, but use the puppetmaster CA instead.

Most certs can be checked for expiration with sudo kubeadm alpha certs check-expiration on a control plane node.

Use cases and operations

Description of the different certificate types we have in the cluster.

external API access

We have certain entities contacting the kubernetes API from outside the cluster. The authorization/authentication access is managed using a kubernetes ServiceAccount and an x509 certificate. The x509 certificate encodes the ServiceAccount name in the Subject field. Some examples of this:

- tools-prometheus uses this external API access to scrape metrics.
- TODO: any other example?

operations

Certificates for this use case can be generated using a custom script we have: wmcs-k8s-get-cert. Usually, the generated cert will be copy-pasted into the private puppet repo to be used as a secret in a puppet module or profile. Renewing the certificate is just generating a new one and replacing the old one.

Example workflow for replacing the tools-prometheus k8s certificate:

user@tools-clushmaster-02:~$ clush WHATEVER_DISABLE_PUPPET_FEELWIDE # TODO

user@tools-k8s-control-3:~$ sudo -i wmcs-k8s-get-cert prometheus
/tmp/tmp.9k9N7ksn6K/server-cert.pem
/tmp/tmp.9k9N7ksn6K/server-key.pem
user@tools-k8s-control-3:~$ sudo cat /tmp/tmp.9k9N7ksn6K/server-cert.pem
-----BEGIN CERTIFICATE-----
MIIDYTCCA[...]
-----END CERTIFICATE-----
user@tools-k8s-control-3:~$ sudo cat /tmp/tmp.9k9N7ksn6K/server-key.pem
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBA[...]
-----END RSA PRIVATE KEY-----

root@tools-puppetmaster-02:/var/lib/git/labs/private# stg uncommit -t a706eb28
(uncommit the patch that modifies 'modules/secret/secrets/ssl/toolforge-k8s-prometheus.key')
root@tools-puppetmaster-02:/var/lib/git/labs/private# stg pop ; stg push
(until you are in the right uncommitted patch)
root@tools-puppetmaster-02:/var/lib/git/labs/private# nano modules/secret/secrets/ssl/toolforge-k8s-prometheus.key ; stg refresh
(copy-paste the private key here)
root@tools-puppetmaster-02:/var/lib/git/labs/private# stg push -a ; stg commit -a
(you are done!)

user@laptop:~/git/wmf/operations/puppet$ nano files/ssl/toolforge-k8s-prometheus.crt
(create a patch similar to)

user@tools-clushmaster-02:~$ clush WHATEVER_ENABLE_PUPPET_FEELWIDE # TODO
[...]

internal API access

Some of the utilities running inside the kubernetes cluster also require a certificate to access the API server and use a ServiceAccount. This certificate is usually crafted as a Kubernetes secret for the utility to use. Some examples of this:

- our custom webhook: ingress admission controller
- our custom webhook: registry admission controller
- the internal metrics server (i.e., what kubectl top uses)

operations

Certificates for this use case can be generated using a custom script we have: wmcs-k8s-secret-for-cert. After running the script, the secret should be ready to use. Renewing the certificate is just generating a new one (running the script again and making sure the pod uses it). If you want to make sure the old cert is no longer present, just delete it and run the script again.

Example session for the metrics-server:

root@tools-k8s-control-3:~# kubectl delete secrets -n metrics metrics-server-certs
secret "metrics-server-certs" deleted
root@tools-k8s-control-3:~# wmcs-k8s-secret-for-cert -n metrics -s metrics-server-certs -a metrics-server
secret/metrics-server-certs created
root@tools-k8s-control-3:~# kubectl get secrets -n metrics metrics-server-certs -o yaml | grep cert.pem | head -1 | awk -F' ' '{print $2}' | base64 -d | openssl x509 -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2f:65:a6:cf:2c:16:2f:39:6e:29:95:ee:35:01:b9:d7:75:a1:d2:50
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Jun 2 11:31:00 2020 GMT
            Not After : Jun 2 11:31:00 2021 GMT
        Subject: CN = metrics-server
[..]

node/kubelet certs

Kubelet has two certs:

- A client cert to communicate with the API server
- A serving certificate for the Kubelet API

At this time the serving certificate is a self-signed one managed by kubelet, which should not need manual rotation. Proper, CA-signed rotating certs are stabilizing as a feature set in Kubernetes 1.17, and we should probably switch to that for consistency and as a general improvement. The client cert of kubelet is signed by the cluster CA and expires in 1 year.

operations

All such client certs are rotated when upgrading Kubernetes, but they can be manually rotated with kubeadm as well. This should be as easy as running kubeadm alpha certs renew on a control plane node as root. It is possible to configure the kubelet to request upgraded certs on its own when they near expiration. So far, we have not set this flag in the config, expecting our upgrade cycle to be roughly 6 months. TODO: elaborate

tool certs

These certs are automatically generated by the maintain-kubeusers mechanism. When a new tool is created in Striker, the LDAP change is picked up by a polling loop in the maintain-kubeusers deployment, and the service will:

- Create the NFS folder for the tool if it isn't already there because of maintain-dbusers
- Create the necessary folders to set up the KUBECONFIG for the user.
- Create a tool namespace along with all necessary privileges, restrictions and quotas
- Generate a private key
- Request and approve the CSR for the cert to authenticate the new tool with the Kubernetes cluster
- Write out the cert to the appropriate files along with the KUBECONFIG
- Create a configmap named maintain-kubeusers in the tool namespace that gives the expiration date of the cert, used for automatically regenerating the cert before it expires
  - Deleting this configmap will cause the cert to be regenerated on the next iteration. This is the safest way to regenerate the certs manually.

Each cert includes a CN, which functions as the user name in Kubernetes, and can include groups as well ("O:" or organization entries). Tool certs currently have the CN of their tool name and one O of "toolforge".

This service runs in Kubernetes in a specialized namespace just for it, using a hand-made Docker image, as is documented in the README of the repo. The toolsbeta version runs the maintain-kubeusers:beta tag instead of the :latest tag to facilitate staging and testing live without hurting Toolforge proper. Deploying new code only requires deleting the currently-running pod after refreshing the required image tag.

operations

If someone has a need to rotate their tool user certs for some reason, run:

user@bastion $ sudo -i
root@bastion # become <tool-that-needs-help>
tools.toolname:~$ kubectl delete cm maintain-kubeusers

This will cause maintain-kubeusers to refresh their certs. If the certs are deleted, you will need to instead run kubectl delete cm maintain-kubeusers --namespace tool-$toolname as a cluster admin (such as root on a control plane node), since the tool won't be able to authenticate.

In case of a corrupt .kube/config file, the same trick applies, except that maintain-kubeusers will not read invalid YAML. Therefore, you will need to delete the tool's .kube/config and then, as a cluster admin, run kubectl delete cm maintain-kubeusers --namespace tool-$toolname. That will regenerate their credentials.

etcd certs

All etcd servers use puppetmaster-issued certificates (puppet node certificates). The etcd service will only allow communication from clients presenting a certificate signed by the same CA. This means kubernetes components that contact etcd should use puppet node certificates. In the puppet profile controlling this, we have a mechanism to refresh the certificate and restart the etcd daemon if the puppet node certificate changes (it is reissued or whatever).

See also

Some other interesting docs:
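Not part of the original page, but as a generic sketch: the expiry and trust chain of any of the certificates mentioned above (for example the puppet node certificates used by etcd) can also be inspected directly with plain openssl; the file paths here are placeholders:

$ openssl x509 -noout -subject -enddate -in /path/to/cert.pem    # who the cert is for and when it expires
$ openssl verify -CAfile /path/to/ca.pem /path/to/cert.pem       # check it was signed by the expected CA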
https://wikitech-static.wikimedia.org/w/index.php?title=Portal:Toolforge/Admin/Kubernetes/Certificates&diff=next&oldid=494416
CC-MAIN-2022-21
en
refinedweb
Introduction

The linked list is one of the most important concepts and data structures to learn while preparing for interviews. Having a good grasp of linked lists can be a huge plus point in a coding interview.

Problem Statement

In this problem, we are given two linked lists, A and B. We should merge the second list into the first one at alternate positions. The second linked list should become empty.

Problem Statement Understanding

Let's try to understand this problem with the help of examples. Suppose the given linked lists are:

We have to add the elements of the second list to the first list, at alternate positions.

First, we will add the first element of B after the first element of A.
Now, add the second element of B after the second element of old A.
Now, add the third element of B after the third element of old A.
Now, add the fourth element of B after the fourth element of old A.

B: Empty list

After adding the elements of the second list to the first list at alternate positions, the final list will be:

Input:
Output:
Explanation: The nodes of the second linked list are merged with the first linked list at alternate positions.

I think the problem statement is clear from the above example, so we should now think about how we can approach this problem. Try to come up with an approach; it may be brute force, but before jumping to the approach section, try to think about how you would approach it.

Note: We should keep in mind that we cannot create a new linked list. We have to alter the first list.

Approach

The main idea is to iterate using a loop while there are available positions in the first list and insert nodes of the second linked list by changing pointers. We should also note that the head of the first list will never change, but the head of the second list may change, so we have to use one pointer for the first list and two pointers for the second list. We can get a better idea by looking at the algorithm.

Algorithm

- Traverse FirstList while there are available positions in it.
- Loop over SecondList and insert nodes of SecondList into FirstList.
- Do the insertion by changing pointers:
  - Store the next pointers of both A and B in nextA and nextB, and the current pointers of both A and B in currA and currB.
  - Make currB the next node of currA, and make nextA the next node of currB. By doing this we insert a node of SecondList into FirstList:
    currB->next = nextA
    currA->next = currB
    currA = nextA
    currB = nextB
  - Move the pointers of FirstList and SecondList, and again perform the above insertion of list B nodes into list A at alternate positions.
Dry Run

Code Implementation

C++:

#include <iostream>
using namespace std;

struct LLNode {
    int data;
    struct LLNode* next;
};

void insertAtBeginning(struct LLNode** head, int dataToBeInserted)
{
    struct LLNode* curr = new LLNode;
    curr->data = dataToBeInserted;
    curr->next = NULL;
    if (*head == NULL)
        *head = curr;
    else {
        curr->next = *head;
        *head = curr;
    }
}

void display(struct LLNode** node)
{
    struct LLNode* temp = *node;
    if (temp == NULL) {
        cout << "Empty linked list" << endl;
        return;
    }
    while (temp != NULL) {
        if (temp->next != NULL)
            cout << temp->data << " ->";
        else
            cout << temp->data;
        temp = temp->next;
    }
    cout << endl;
}

void Merge(struct LLNode* A, struct LLNode** B)
{
    struct LLNode *currA = A, *currB = *B;
    struct LLNode *nextA = NULL, *nextB = NULL;
    while (currA != NULL && currB != NULL) {
        nextA = currA->next;
        nextB = currB->next;
        currB->next = nextA;
        currA->next = currB;
        currA = nextA;
        currB = nextB;
    }
    *B = currB;
}

int main()
{
    struct LLNode *FirstList = NULL, *SecondList = NULL;
    insertAtBeginning(&FirstList, 18);
    insertAtBeginning(&FirstList, 17);
    insertAtBeginning(&FirstList, 7);
    insertAtBeginning(&FirstList, 8);
    insertAtBeginning(&SecondList, 4);
    insertAtBeginning(&SecondList, 2);
    insertAtBeginning(&SecondList, 10);
    insertAtBeginning(&SecondList, 32);
    cout << "Linked List A: ";
    display(&FirstList);
    cout << "Linked List B: ";
    display(&SecondList);
    cout << "Merging List B into List A at alternate positions in A";
    Merge(FirstList, &SecondList);
    cout << "\nOutput Linked List A: ";
    display(&FirstList);
    cout << "Output Linked List B: ";
    display(&SecondList);
    return 0;
}

Java:

public class LinkedList {
    Node head;

    class Node {
        int data;
        Node next;

        Node(int d) { data = d; next = null; }
    }

    void push(int new_data)
    {
        Node new_node = new Node(new_data);
        new_node.next = head;
        head = new_node;
    }

    void merge(LinkedList q)
    {
        Node p_curr = head, q_curr = q.head;
        Node p_next, q_next;
        while (p_curr != null && q_curr != null) {
            p_next = p_curr.next;
            q_next = q_curr.next;
            q_curr.next = p_next;
            p_curr.next = q_curr;
            p_curr = p_next;
            q_curr = q_next;
        }
        q.head = q_curr;
    }

    void printList()
    {
        Node temp = head;
        while (temp != null) {
            System.out.print(temp.data + " ");
            temp = temp.next;
        }
        System.out.println();
    }

    public static void main(String args[])
    {
        LinkedList llist1 = new LinkedList();
        LinkedList llist2 = new LinkedList();
        llist1.push(18);
        llist1.push(17);
        llist1.push(7);
        llist1.push(8);
        System.out.println("Linked List A: ");
        llist1.printList();
        llist2.push(4);
        llist2.push(2);
        llist2.push(10);
        llist2.push(32);
        System.out.println("Linked List B: ");
        System.out.println("Merging List B into List A at alternate positions in A");
        llist1.merge(llist2);
        System.out.println("Output Linked List A: ");
        llist1.printList();
        System.out.println("Output Linked List B: ");
        llist2.printList();
    }
}

C:

#include <stdio.h>
#include <stdlib.h>

// A linked list node
struct Node {
    int data;
    struct Node *next;
};

/* Function to insert a node at the beginning */
void push(struct Node ** head_ref, int new_data)
{
    struct Node* new_node = (struct Node*) malloc(sizeof(struct Node));
    new_node->data = new_data;
    new_node->next = (*head_ref);
    (*head_ref) = new_node;
}

/* Utility function to print a singly linked list */
void printList(struct Node *head)
{
    struct Node *temp = head;
    while (temp != NULL) {
        printf("%d ", temp->data);
        temp = temp->next;
    }
    printf("\n");
}

// Main function that inserts nodes of linked list q into p at
// alternate positions. Since head of first list never changes
// and head of second list may change, we need single pointer
// for first list and double pointer for second list.
void merge(struct Node *p, struct Node **q)
{
    struct Node *p_curr = p, *q_curr = *q;
    struct Node *p_next, *q_next;

    // While there are available positions in p
    while (p_curr != NULL && q_curr != NULL) {
        // Save next pointers
        p_next = p_curr->next;
        q_next = q_curr->next;

        // Make q_curr as next of p_curr
        q_curr->next = p_next;  // Change next pointer of q_curr
        p_curr->next = q_curr;  // Change next pointer of p_curr

        // Update current pointers for next iteration
        p_curr = p_next;
        q_curr = q_next;
    }
    *q = q_curr;  // Update head pointer of second list
}

// Driver program to test above functions
int main()
{
    struct Node *p = NULL, *q = NULL;
    push(&p, 3);
    push(&p, 2);
    push(&p, 1);
    printf("First Linked List:\n");
    printList(p);

    push(&q, 8);
    push(&q, 7);
    push(&q, 6);
    push(&q, 5);
    push(&q, 4);
    printf("Second Linked List:\n");
    printList(q);

    merge(p, &q);
    printf("Modified First Linked List:\n");
    printList(p);
    printf("Modified Second Linked List:\n");
    printList(q);

    getchar();
    return 0;
}

Python:

class Node(object):
    def __init__(self, data:int):
        self.data = data
        self.next = None

class LinkedList(object):
    def __init__(self):
        self.head = None

    def push(self, new_data:int):
        new_node = Node(new_data)
        new_node.next = self.head
        self.head = new_node

    def printList(self):
        temp = self.head
        while temp != None:
            print(temp.data, end=" ")
            temp = temp.next

    def merge(self, p, q):
        p_curr = p.head
        q_curr = q.head
        while p_curr != None and q_curr != None:
            p_next = p_curr.next
            q_next = q_curr.next
            q_curr.next = p_next
            p_curr.next = q_curr
            p_curr = p_next
            q_curr = q_next
        q.head = q_curr

llist1 = LinkedList()
llist2 = LinkedList()
llist1.push(18)
llist1.push(17)
llist1.push(7)
llist1.push(8)
llist2.push(4)
llist2.push(2)
llist2.push(10)
llist2.push(32)
print("First Linked List:")
llist1.printList()
print("\nSecond Linked List:")
llist2.printList()
llist1.merge(p=llist1, q=llist2)
print("\nModified first linked list:")
llist1.printList()
print("\nModified second linked list:")
llist2.printList()

Output

Linked List A: 5 ->7 ->17 ->13
Linked List B: 12 ->10 ->2 ->4
Merging List B into List A at alternate positions in A
Output Linked List A: 5 ->12 ->7 ->10 ->17 ->2 ->13 ->4
Output Linked List B: Empty linked list

Space Complexity: O(1), as only temporary variables are being created.

So, in this article, we have tried to explain the most efficient approach to merge a linked list into another linked list at alternate positions. This approach requires no extra space, and that is what makes this question an important one for coding interviews. If you want to solve more questions on Linked List, which are curated by our expert mentors at PrepBytes, you can follow this link Linked List.
https://www.prepbytes.com/blog/linked-list/merge-a-linked-list-into-another-linked-list-at-alternate-positions/
CC-MAIN-2022-21
en
refinedweb
Introduction

The objective of this post is to explain how to control the allowed HTTP methods on the URLs specified for a Flask web server. Flask is a web micro framework for Python [1] which allows us to create and deploy simple web applications very easily. You can read an introduction on Flask on this previous post.

The code

First of all, we need to import the Flask class from the flask module, so all the functionality we need becomes available. We will also import the request object from the flask module, which we will use later to check the method of an incoming HTTP request. Then, we create an instance of the Flask class. In the constructor, we pass the name of the module of our application, using the __name__ global variable.

from flask import Flask
from flask import request

app = Flask(__name__)

Now, we will specify a route that only answers to HTTP GET requests. Although this is redundant since, by default, routes only answer to HTTP GET methods [1], we will do it to illustrate the functionality. So, we will define a handler function for a route listening on the /get URL that is only triggered for GET requests. We specify the methods allowed by providing the methods argument to the route decorator [1].

@app.route('/get', methods = ['GET'])
def getHandler():
    return 'GET handler'

To define a route that only listens to HTTP POST methods, we just need to change the methods argument to POST. So, we will now define a handling function for a route listening on the /post URL that is only triggered for POST requests.

@app.route('/post', methods = ['POST'])
def postHandler():
    return 'POST handler'

Note that the methods argument receives a list of values. So, we can specify multiple allowed methods. In this case, since we will only define one handler function, we can check the received HTTP method inside the code by using the imported request object [2]. The request object is an instance of a Request subclass [2].

So, we will just define a URL called /getpost which will return a different message depending on whether the request was a POST or a GET. In order for our route to support both methods, we just pass a list with the names of the two methods in the methods argument. Inside the code of the handling function, to check the method of the incoming HTTP request, we just use the method attribute of the request object. So, as shown below, we compare the value of this attribute with the name of the method in a conditional statement.

@app.route('/getpost', methods = ['POST', 'GET'])
def postGetHandler():
    if request.method == 'POST':
        return 'request via POST'
    else:
        return 'request via GET'

Finally, to tell our application to run, we just call the run method on the Flask object we instantiated before, passing as arguments the host and the port where it will be listening. We will be specifying port 8090, so you must use it to send the HTTP requests when testing the code.

app.run(host='0.0.0.0', port= 8090)

The final complete code can be seen below.
from flask import Flask
from flask import request

app = Flask(__name__)

@app.route('/get', methods = ['GET'])
def getHandler():
    return 'GET handler'

@app.route('/post', methods = ['POST'])
def postHandler():
    return 'POST handler'

@app.route('/getpost', methods = ['POST', 'GET'])
def postGetHandler():
    if request.method == 'POST':
        return 'request via POST'
    else:
        return 'request via GET'

app.run(host='0.0.0.0', port= 8090)

Testing the code

Since we are going to test different HTTP methods, the easiest way is to use a tool like Postman, which allows us to specify the HTTP method. This is a very powerful and versatile tool and I really recommend it for this kind of test with HTTP requests.

To test this, we can do all of the following HTTP requests on the local host: 127.0.0.1:8090. In Postman, you don't need to put the http:// before the IP:port. So, a request will have the following format: 127.0.0.1:8090/FlaskRouteURL.

So, start by running the Python script we previously defined. You can do it, for example, in IDLE, the Python IDE. Then, open Postman and make an HTTP GET request on the /get URL. You should receive the message we defined in the code, as seen in figure 1.

Figure 1 – Flask HTTP GET method allowed.

Now, if we change the method to POST and make the HTTP request on the same URL, we will get a "Method not allowed" message, as shown in figure 2.

Figure 2 – Flask HTTP POST method not allowed.

Now, if we change to the /post URL and make a POST request, it will send us the message defined in the code, as shown in figure 3.

Figure 3 – Flask HTTP POST method allowed.

If we now try a GET request, it will also output a "Method not allowed" message, as shown in figure 4.

Figure 4 – Flask HTTP GET method not allowed.

Now if we send an HTTP request with the GET method on the /getpost URL, we will receive the message indicating the request was via the GET method, as seen in figure 5.

Figure 5 – Flask HTTP GET on multiple method URL.

Finally, if we send an HTTP request with the POST method on that same URL, we will receive a message indicating the request was via the POST method, as seen in figure 6.

Figure 6 – Flask HTTP POST on multiple method URL.

References

[1]
[2]
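As a side note not covered in the original post, the same endpoints can also be exercised from the command line with curl instead of Postman; the -X flag selects the HTTP method:

curl -X GET http://127.0.0.1:8090/getpost
curl -X POST http://127.0.0.1:8090/getpost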
https://techtutorialsx.com/2016/12/24/flask-controlling-http-methods-allowed/
CC-MAIN-2017-34
en
refinedweb
Hi! I saw a post on this site that exhibited the same exact problem that I have for a homework assignment, and while it was solved, my issue with the homework assignment is different. The C++ program that I wrote asks the user to input a number, but once you enter it, every quoted message is printed to the screen at one time. Below I will post the instructions for the homework assignment and the program that I wrote, and I would really appreciate any help or advice that anyone is able to offer. Thank you.

The following table lists the freezing and boiling points of several substances. Write a program that asks the user to enter a temperature, and then shows all the substances that will freeze at that temperature and all that will boil at that temperature. For example, if the user enters -20 the program should report that water will freeze and oxygen will boil at that temperature.

Substance        Freezing point (F)   Boiling point (F)
Ethyl alcohol    -173                 172
Mercury          -38                  676
Oxygen           -362                 -306
Water            32                   212

#include <iostream>
using namespace std;

int main()
{
    int point; //Freezing and Boiling points
    cout << "Please enter a number for freezing and or boiling points(F):";
    cin >> point;

    cout << "Freezing Point (F)" << endl;
    if (point >= -173) cout << "Ethyl alcohol will freeze" << endl;
    if (point >= -38) cout << "Mercury will freeze" << endl;
    if (point >= -362) cout << "Oxygen will freeze" << endl;
    if (point >= 32) cout << "Water will freeze" << endl;

    cout << "\nBoiling Point (F)" << endl;
    if (point <= 172) cout << "Ethyl alcohol will boil" << endl;
    if (point <= 676) cout << "Mercury will boil" << endl;
    if (point <= -306) cout << "Oxygen will boil" << endl;
    if (point <= 212) cout << "Water will boil" << endl;

    cout << "How'd I'd do Rich?" << endl << endl << "Program end";
    return 0;
}
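No reply is included in this excerpt; purely as an illustration (not from the thread), the usual fix is to flip the comparisons: a substance freezes at or below its freezing point and boils at or above its boiling point, which is why nearly every message fires at once with the conditions as written. A minimal sketch:

#include <iostream>
using namespace std;

int main()
{
    int point;
    cout << "Please enter a temperature (F): ";
    cin >> point;

    // A substance freezes at or below its freezing point...
    if (point <= -173) cout << "Ethyl alcohol will freeze" << endl;
    if (point <= -38)  cout << "Mercury will freeze" << endl;
    if (point <= -362) cout << "Oxygen will freeze" << endl;
    if (point <= 32)   cout << "Water will freeze" << endl;

    // ...and boils at or above its boiling point.
    if (point >= 172)  cout << "Ethyl alcohol will boil" << endl;
    if (point >= 676)  cout << "Mercury will boil" << endl;
    if (point >= -306) cout << "Oxygen will boil" << endl;
    if (point >= 212)  cout << "Water will boil" << endl;
    return 0;
}

With -20 as input this prints exactly the expected two lines: water freezes and oxygen boils.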
https://www.daniweb.com/programming/software-development/threads/326427/help-please
CC-MAIN-2017-34
en
refinedweb
Aoa, Hello All, I am having a problem with this question:

Write a program that takes integer input from the user and stores it into an array dynamically allocated each time a new element is added. Your program should prompt the user to enter integers until he enters -1, which means end of input. After taking all the numbers, print them in the reverse order as entered by the user.

The code which I have written is attached below. It is printing junk values. Need help :)
Regards

#include <iostream>
using namespace std;

void question1();

int main()
{
    question1();
}

void question1()
{
    int i = 0;
    int* ptr = nullptr;
    int* temp = nullptr;
    bool boolean = true;
    while (boolean != false)
    {
        cout << "Enter number: " << endl;
        int* ptr = new int[i + 1];
        cin >> ptr[i];
        if (ptr[i] != -1)
        {
            temp = new int[i + 2];
            for (int k = 0; k < i; k++)
            { temp[k] = ptr[k]; }
            i++;
        }
        else
        {
            for (int j = i - 1; j >= 0; j--)
            {
                cout << temp[j] << " ";
            }
            boolean = false;
        }
    }
    delete[] ptr;
    delete[] temp;
}
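An answer isn't part of this excerpt; purely as an illustration, the core bug is that int* ptr = new int[i+1]; inside the loop shadows the outer ptr, so every iteration allocates a fresh, uninitialized array and the previously entered values are lost — hence the junk. A hedged sketch of the usual fix — copy the old elements into a bigger allocation, free the old block, then reassign:

#include <iostream>
using namespace std;

int main()
{
    int size = 0;
    int* data = nullptr;                   // grows by one element per input
    while (true)
    {
        int value;
        cout << "Enter number: ";
        cin >> value;
        if (value == -1)
            break;                         // -1 means end of input

        int* bigger = new int[size + 1];   // room for one more element
        for (int k = 0; k < size; k++)     // keep the old values
            bigger[k] = data[k];
        bigger[size] = value;
        delete[] data;                     // free the old block
        data = bigger;
        size++;
    }
    for (int j = size - 1; j >= 0; j--)    // print in reverse order of entry
        cout << data[j] << " ";
    cout << endl;
    delete[] data;
    return 0;
}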
https://www.daniweb.com/programming/software-development/threads/413165/dynamic-array-problem-making-two-dynamic-arrays-and-copying-one-in-another
CC-MAIN-2017-34
en
refinedweb
Hi folks, just a query about checking chars. I have a char[] variable. I ask the user a question and the user types a response and presses enter. I need to check the variable to see if it said particular things and, more importantly, if it said nothing.

#include <iostream>
#include <fstream>
using namespace std;

char _cFileName[100];

int main()
{
    cout << "Enter the name of the file to open or press enter for HELP: ";
    cin >> _cFileName;

    if (_cFileName == 'HELP')
    {
        cout << "DISPLAY HELP FILE" << endl;
        system("PAUSE");
        return 0;
    }
    if (_cFileName == '')
    {
        cout << "DISPLAY HELP FILE" << endl;
        system("PAUSE");
        return 0;
    }

    ifstream infile("data.txt");
    cout << "Your file is " << sizeof(infile) << " bytes." << endl;
    system("PAUSE");
    return 0;
}

Apparently using either "HELP" or 'HELP' or HELP when using the if statement does not work. So I am just wondering where I am going wrong. Perhaps chars are not the way to go when doing this?

Also, as an aside: I have not had a chance to test this yet, but does sizeof() work on streams? Any help is much appreciated.
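No answer is included in this excerpt; as an illustrative note only, _cFileName == 'HELP' compares a pointer against a (multi-character) char literal rather than comparing text, which is why none of the variants work. A common fix is to read into std::string, whose == does compare contents, and getline also makes the "just pressed enter" case detectable:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string fileName;
    cout << "Enter the name of the file to open or press enter for HELP: ";
    getline(cin, fileName);                    // empty input stays empty

    if (fileName.empty() || fileName == "HELP") // string == compares contents
    {
        cout << "DISPLAY HELP FILE" << endl;
        return 0;
    }
    // ... open fileName here ...
}

As for the aside: sizeof(infile) yields the compile-time size of the stream object itself, not the number of bytes in the file.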
https://www.daniweb.com/programming/software-development/threads/427826/a-quick-question-about-checking-chars
CC-MAIN-2017-34
en
refinedweb
I've updated from an earlier Flex 4 SDK (back from December) to the latest, and my application is suddenly getting compiler errors like this:

Error: Could not resolve <mx:Canvas> to a component implementation.

Likewise for some of the other classes. Any help on why my code is now suffering this would be greatly appreciated.

We changed the mx namespace to /mx from /halo.
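Not from the thread, but for illustration: in later Flex 4 SDKs the mx components resolve under the library://ns.adobe.com/flex/mx namespace URI (replacing the older halo URI), so an application root declared along these lines typically makes <mx:Canvas> resolve again:

<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx">
    <mx:Canvas/>
</s:Application>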
https://forums.adobe.com/thread/693133
CC-MAIN-2018-30
en
refinedweb
The question I'm doing is: write a Java program that allows the user to enter a set of numbers and then displays the sum and the average of the values. It should begin by asking the user how many values they will enter. You should use two separate methods to compute the sum and the average. (Note that the average is the sum divided by the number of values, so the average method should call the sum method.)

This is what I have so far:

import javax.swing.JOptionPane;

public class maths_comp
{
    public static void main(String[] args)
    {
        //Declare variables
        String input;
        double iterations;
        double Sum;

        input = JOptionPane.showInputDialog("Enter the amount of numbers to be entered:");
        iterations = Double.parseDouble(input);

        JOptionPane.showMessageDialog(null, "The Sum of the " + iterations
            + " numbers entered is: " + sum(iterations)
            + "\nThe Average of the numbers entered is: " + average(iterations));
    }

    public static double sum(double iter)
    {
        String SumInput;
        double NewNum;
        int count = 1;
        double Total = 0;

        while (count <= iter)
        {
            SumInput = JOptionPane.showInputDialog("Enter number " + count + ":");
            NewNum = Double.parseDouble(SumInput);
            Total = Total + NewNum;
            count = count + 1;
        }
        return Total;
    }

    public static double average(double iter)
    {
        double AvgNum;
        AvgNum = sum(iter) / iter;
        return AvgNum;
    }
}

The problem I'm having is that my program asks for the values again when calculating the average... How do I get it to ask the user only once?
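The thread ends with the question; as an illustration only (not a posted answer), one common fix that still satisfies "the average method should call the sum method" is to collect the numbers once into an array, then let both methods work on that array instead of re-prompting:

import javax.swing.JOptionPane;

public class MathsComp
{
    public static void main(String[] args)
    {
        int count = Integer.parseInt(
            JOptionPane.showInputDialog("Enter the amount of numbers to be entered:"));

        double[] values = new double[count];
        for (int i = 0; i < count; i++)      // prompt exactly once per value
            values[i] = Double.parseDouble(
                JOptionPane.showInputDialog("Enter number " + (i + 1) + ":"));

        JOptionPane.showMessageDialog(null,
            "Sum: " + sum(values) + "\nAverage: " + average(values));
    }

    public static double sum(double[] values)
    {
        double total = 0;
        for (double v : values)
            total += v;
        return total;
    }

    // average still calls sum, as the assignment requires,
    // but no further input is requested
    public static double average(double[] values)
    {
        return sum(values) / values.length;
    }
}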
https://www.daniweb.com/programming/software-development/threads/400649/problems-with-methods
CC-MAIN-2018-30
en
refinedweb
Hi, I know how to read a specific line from a text file, but what if we want to delete lines by specifying the line number? Here is the code, but it's not working. I want to delete lines 1 and 2 of the text file.

import java.io.*;

public class ReadSpecificLine
{
    public static void main(String[] args)
    {
        String line = "";
        int lineNo;
        try
        {
            FileReader fr = new FileReader("C://Temp/File.txt");
            BufferedReader br = new BufferedReader(fr);
            for (lineNo = 1; lineNo < 3; lineNo++)
            {
                while (lineNo = 1&&2, br.readLine() != null);
            }
            br.readLine();
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
        System.out.println("Line: " + line);
    }
}

Thank you
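No answer appears in this excerpt; purely as an illustrative sketch (the file path is reused from the question), the usual approach is to read the file, keep every line except the ones to delete, and write the result back. readAllLines/write are java.nio.file conveniences from later Java versions than this 2011 thread:

import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class DeleteLines
{
    public static void main(String[] args) throws IOException
    {
        Path path = Paths.get("C:/Temp/File.txt");
        List<String> lines = Files.readAllLines(path);

        // keep everything except lines 1 and 2 (1-based numbering)
        List<String> kept = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++)
            if (i + 1 != 1 && i + 1 != 2)
                kept.add(lines.get(i));

        Files.write(path, kept);   // overwrite the file with the kept lines
    }
}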
https://www.daniweb.com/programming/software-development/threads/341888/delete-specific-line-in-file
CC-MAIN-2018-30
en
refinedweb
If you have been programming in Python (in an object oriented way, of course) for some time, I'm sure you have come across methods that have self as their first parameter. It may seem odd, especially to programmers coming from other languages, that this is done explicitly every single time we define a method. As The Zen of Python goes, "Explicit is better than implicit". So, why do we need to do this?

Let's take a simple example to begin with. We have a Point class which defines a method distance to calculate the distance from origin.

class Point(object):
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def distance(self):
        """Find distance from origin"""
        return (self.x**2 + self.y**2) ** 0.5

Let us now instantiate this class and find the distance.

>>> p1 = Point(6,8)
>>> p1.distance()
10.0

In the above example, __init__() defines three parameters but we just passed two (6 and 8). Similarly distance() requires one but zero arguments were passed. Why is Python not complaining about this argument number mismatch?

Let me first clarify that Point.distance and p1.distance in the above example are a bit different.

>>> type(Point.distance)
<class 'function'>
>>> type(p1.distance)
<class 'method'>

We can see that the first one is a function and the second, a method. A peculiar thing about methods (in Python) is that the object itself is passed on as the first argument to the corresponding function. In the case of the above example, the method call p1.distance() is actually equivalent to Point.distance(p1).

Generally, when we call a method with some arguments, the corresponding class function is called by placing the method's object before the first argument. So, anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit). This is the reason the first parameter of a function in a class must be the object itself. Writing this parameter as self is merely a convention. It is not a keyword and has no special meaning in Python. We could use other names (like this) but I strongly suggest you not to. Using names other than self is frowned upon by most developers and degrades the readability of the code ("Readability counts").

By now you are clear that the object (instance) itself is passed along as the first argument, automatically. This implicit behavior can be avoided by making a method static. Consider the following simple example.

class A(object):
    @staticmethod
    def stat_meth():
        print("Look no self was passed")

Here, @staticmethod is a function decorator which makes stat_meth() static. Let us instantiate this class and call the method.

>>> a = A()
>>> a.stat_meth()
Look no self was passed

From the above example, we are clear that the implicit behavior of passing the object as the first argument was avoided by using a static method. All in all, static methods behave like our plain old functions.

>>> type(A.stat_meth)
<class 'function'>
>>> type(a.stat_meth)
<class 'function'>

The explicit self is not unique to Python. This idea was borrowed from Modula-3. Following is a use case where it becomes helpful.

There is no explicit variable declaration in Python. Variables spring into action on the first assignment. The use of self makes it easier to distinguish instance attributes (and methods) from local variables. In the first example, self.x is an instance attribute whereas x is a local variable. They are not the same and lie in different namespaces.

Many have proposed to make self a keyword in Python, like this in C++ and Java. This would eliminate the redundant use of explicit self from the formal parameter list in methods. While this idea seems promising, it's not going to happen. At least not in the near future. The main reason is backward compatibility. Here is a blog from the creator of Python himself explaining why the explicit self has to stay.

One important conclusion that can be drawn from the information so far is that __init__() is not a constructor. Many naive Python programmers get confused with it since __init__() gets called when we create an object. A closer inspection will reveal that the first parameter in __init__() is the object itself (the object already exists). The function __init__() is called immediately after the object is created and is used to initialize it.

Technically speaking, a constructor is a method which creates the object itself. In Python, this method is __new__(). A common signature of this method is

__new__(cls, *args, **kwargs)

When __new__() is called, the class itself is passed as the first argument automatically. This is what the cls in the above signature is for. Again, like self, cls is just a naming convention. Furthermore, *args and **kwargs are used to take an arbitrary number of arguments during method calls in Python.

Some important things to remember when implementing __new__() are:

- __new__() is always called before __init__().
- Always return a valid object from __new__(). Not mandatory, but that's the whole point.

Let's take a look at an example to be crystal clear.

class Point(object):
    def __new__(cls, *args, **kwargs):
        print("From new")
        print(cls)
        print(args)
        print(kwargs)

        # create our object and return it
        obj = super().__new__(cls)
        return obj

    def __init__(self, x=0, y=0):
        print("From init")
        self.x = x
        self.y = y

Now, let's instantiate it.

>>> p2 = Point(3,4)
From new
<class '__main__.Point'>
(3, 4)
{}
From init

This example illustrates that __new__() is called before __init__(). We can also see that the parameter cls in __new__() is the class itself (Point). Finally, the object is created by calling the __new__() method on the object base class. In Python, object is the base class from which all other classes are derived. In the above example, we have done this using super().

You might have seen __init__() very often but the use of __new__() is rare. This is because most of the time you don't need to override it. Generally, __init__() is used to initialize a newly created object while __new__() is used to control the way an object is created. We can also use __new__() to initialize attributes of an object, but logically it should be inside __init__().

One practical use of __new__(), however, could be to restrict the number of objects created from a class. Suppose we wanted a class SqPoint for creating instances to represent the four vertices of a square. We can inherit from our previous class Point (first example in this article) and use __new__() to implement this restriction. Here is an example to restrict a class to have only four instances.

class SqPoint(Point):
    MAX_Inst = 4
    Inst_created = 0

    def __new__(cls, *args, **kwargs):
        if (cls.Inst_created >= cls.MAX_Inst):
            raise ValueError("Cannot create more objects")
        cls.Inst_created += 1
        return super().__new__(cls)

A sample run.

>>> p1 = SqPoint(0,0)
>>> p2 = SqPoint(1,0)
>>> p3 = SqPoint(1,1)
>>> p4 = SqPoint(0,1)
>>>
>>> p5 = SqPoint(2,2)
Traceback (most recent call last):
...
ValueError: Cannot create more objects
https://www.programiz.com/article/python-self-why
CC-MAIN-2018-30
en
refinedweb
Release Notes Release Early, Release Often — Eric S. Raymond, The Cathedral and the Bazaar. Versioning Minor version numbers (0.0.x) are used for changes that are API compatible. You should be able to upgrade between minor point releases without any other code changes. Medium version numbers (0.x.0) may include API changes, in line with the deprecation policy. You should read the release notes carefully before upgrading between medium point releases. Major version numbers (x.0.0) are reserved for substantial project milestones. Deprecation policy REST framework PendingDeprecationWarningwarnings if you use the feature that are due to be deprecated. These warnings are silent by default, but can be explicitly enabled when you're ready to start migrating any required changes. For example if you start running your tests using python -Wd manage.py test, you'll be warned of any API changes you need to make. Version 1.2 would escalate these warnings to DeprecationWarning, which is loud by default. Version 1.3 would remove the deprecated bits of API entirely. Note that in line with Django's policy, any parts of the framework not mentioned in the documentation should generally be considered private API, and may be subject to change. Upgrading To upgrade Django REST framework to the latest version, use pip: pip install -U djangorestframework You can determine your currently installed version using pip freeze: pip freeze | grep djangorestframework 2.4.x series 2.4.4 Date: 3rd November 2014. - Security fix: Escape URLs when replacing format=query parameter, as used in dropdown on GETbutton in browsable API to allow explicit selection of JSON vs HTML output. - Maintain ordering of URLs in API root view for DefaultRouter. - Fix follow=Truein APIRequestFactory - Resolve issue with invalid read_only=True, required=Truefields being automatically generated by ModelSerializerin some cases. - Resolve issue with OPTIONSrequests returning incorrect information for views using get_serializer_classto dynamically determine serializer based on request method. 2.4.3 Date: 19th September 2014. - Support translatable view docstrings being displayed in the browsable API. - Support encoded filename*in raw file uploads with FileUploadParser. - Allow routers to support viewsets that don't include any list routes or that don't include any detail routes. - Don't render an empty login control in browsable API if loginview is not included. - CSRF exemption performed in .as_view()to prevent accidental omission if overriding .dispatch(). - Login on browsable API now displays validation errors. - Bugfix: Fix migration in authtokenapplication. - Bugfix: Allow selection of integer keys in nested choices. - Bugfix: Return Noneinstead of 'None'in CharFieldwith allow_none=True. - Bugfix: Ensure custom model fields map to equivelent serializer fields more reliably. - Bugfix: DjangoFilterBackendno longer quietly changes queryset ordering. 2.4.2 Date: 3rd September 2014. - Bugfix: Fix broken pagination for 2.4.x series. 2.4.1 Date: 1st September 2014. - Bugfix: Fix broken login template for browsable API. 2.4.0 Date: 29th August 2014. Django version requirements: The lowest supported version of Django is now 1.4.2. South version requirements: This note applies to any users using the optional authtoken application, which includes an associated database migration. You must now either upgrade your south package to version 1.0, or instead use the built-in migration support available with Django 1.7. 
- Added compatibility with Django 1.7's database migration support. - New test runner, using py.test. - Deprecated .modelview attribute in favor of explicit .querysetand .serializer_classattributes. The DEFAULT_MODEL_SERIALIZER_CLASSsetting is also deprecated. @detail_routeand @list_routedecorators replace @actionand @link. - Support customizable view name and description functions, using the VIEW_NAME_FUNCTIONand VIEW_DESCRIPTION_FUNCTIONsettings. - Added NUM_PROXIESsetting for smarter client IP identification. - Added MAX_PAGINATE_BYsetting and max_paginate_bygeneric view attribute. - Added Retry-Afterheader to throttled responses, as per RFC 6585. This should now be used in preference to the custom X-Trottle-Wait-Secondsheader which will be fully deprecated in 3.0. - Added cacheattribute to throttles to allow overriding of default cache. - Added lookup_value_regexattribute to routers, to allow the URL argument matching to be constrainted by the user. - Added allow_noneoption to CharField. - Support Django's standard status_codeclass attribute on responses. - More intuitive behavior on the test client, as client.logout()now also removes any credentials that have been set. - Bugfix: ?page_size=0query parameter now falls back to default page size for view, instead of always turning pagination off. - Bugfix: Always uppercase X-Http-Method-Overridemethods. - Bugfix: Copy filter_backendslist before returning it, in order to prevent view code from mutating the class attribute itself. - Bugfix: Set the .actionattribute on viewsets when introspected by OPTIONSfor testing permissions on the view. - Bugfix: Ensure ValueErrorraised during deserialization results in a error list rather than a single error. This is now consistent with other validation errors. - Bugfix: Fix cache_formattypo on throttle classes, was "throtte_%(scope)s_%(ident)s". Note that this will invalidate existing throttle caches. 2.3.x series 2.3.14 Date: 12th June 2014 - Security fix: Escape request path when it is include as part of the login and logout links in the browsable API. help_textand verbose_nameautomatically set for related fields on ModelSerializer. - Fix nested serializers linked through a backward foreign key relation. - Fix bad links for the BrowsableAPIRendererwith YAMLRenderer. - Add UnicodeYAMLRendererthat extends YAMLRendererwith unicode. - Fix parse_headerargument convertion. - Fix mediatype detection under Python 3. - Web browseable API now offers blank option on dropdown when the field is not required. APIExceptionrepresentation improved for logging purposes. - Allow source="*" within nested serializers. - Better support for custom oauth2 provider backends. - Fix field validation if it's optional and has no value. - Add SEARCH_PARAMand ORDERING_PARAM. - Fix APIRequestFactoryto support arguments within the url string for GET. - Allow three transport modes for access tokens when accessing a protected resource. - Fix QueryDictencoding on request objects. - Ensure throttle keys do not contain spaces, as those are invalid if using memcached. - Support blank_display_valueon ChoiceField. 2.3.13 Date: 6th March 2014 - Django 1.7 Support. - Fix defaultargument when used with serializer relation fields. - Display the media type of the content that is being displayed in the browsable API, rather than 'text/html'. - Bugfix for urlizetemplate failure when URL regex is matched, but value does not urlparse. - Use urandomfor token generation. - Only use Vary: Acceptwhen more than one renderer exists. 
2.3.12
Date: 15th January 2014
- Security fix: OrderingFilter now only allows ordering on readable serializer fields, or on fields explicitly specified using ordering_fields. This prevents users being able to order by fields that are not visible in the API, and exploiting the ordering of sensitive data such as password hashes. (A sketch combining this with write_only fields follows the 2.3.1 notes below.)
- Bugfix: write_only=True fields now display in the browsable API.
2.3.11
Date: 14th January 2014
- Added write_only serializer field argument.
- Added write_only_fields option to ModelSerializer classes.
- JSON renderer now deals with objects that implement a dict-like interface.
- Fix compatibility with newer versions of django-oauth-plus.
- Bugfix: Refine behavior that calls model manager all() across nested serializer relationships, preventing erroneous behavior with some non-ORM objects, and preventing unnecessary queryset re-evaluations.
- Bugfix: Allow defaults on BooleanFields to be properly honored when values are not supplied.
- Bugfix: Prevent double-escaping of non-latin1 URL query params when appending format=json params.
2.3.10
Date: 6th December 2013
- Add in choices information for ChoiceFields in response to OPTIONS requests.
- Added pre_delete() and post_delete() method hooks.
- Added status code category helper functions.
- Bugfix: Partial updates which erroneously set a related field to None now correctly fail validation instead of raising an exception.
- Bugfix: Responses without any content no longer include an HTTP 'Content-Type' header.
- Bugfix: Correctly handle validation errors in the PUT-as-create case, responding with 400.
2.3.9
Date: 15th November 2013
- Fix Django 1.6 exception API compatibility issue caused by ValidationError.
- Include errors in HTML forms in browsable API.
- Added JSON renderer support for numpy scalars.
- Added transform_<fieldname> hooks on serializers for easily modifying field output.
- Added get_context hook in BrowsableAPIRenderer.
- Allow serializers to be passed files but no data.
- HTMLFormRenderer now renders serializers directly to HTML without needing to create an intermediate form object.
- Added get_filter_backends hook.
- Added queryset aggregates to allowed fields in OrderingFilter.
- Bugfix: Fix decimal support with YAMLRenderer.
- Bugfix: Fix submission of unicode in browsable API through the raw data form.
2.3.8
Date: 11th September 2013
- Added DjangoObjectPermissions, and DjangoObjectPermissionsFilter.
- Support customizable exception handling, using the EXCEPTION_HANDLER setting.
- Support customizable view name and description functions, using the VIEW_NAME_FUNCTION and VIEW_DESCRIPTION_FUNCTION settings.
- Added MAX_PAGINATE_BY setting and max_paginate_by generic view attribute.
- Added cache attribute to throttles to allow overriding of default cache.
- 'Raw data' tab in browsable API now contains pre-populated data.
- 'Raw data' and 'HTML form' tab preference in browsable API now saved between page views.
- Bugfix: required=True argument fixed for boolean serializer fields.
- Bugfix: client.force_authenticate(None) should also clear session info if it exists.
- Bugfix: Client sending empty string instead of file now clears FileField.
- Bugfix: Empty values on ChoiceFields with required=False now consistently return None.
- Bugfix: Clients setting page_size=0 now simply returns the default page size, instead of disabling pagination. [*]
[*] Note that the change in page_size=0 behaviour fixes what is considered to be a bug in how clients can affect the pagination size.
However if you were relying on this behavior you will need to add the following mixin to your list views in order to preserve the existing behavior.

    class DisablePaginationMixin(object):
        def get_paginate_by(self, queryset=None):
            # Treat an explicit ?page_size=0 as "pagination off", as in 2.3.7
            # and earlier. Using .get() avoids a KeyError when the query
            # parameter is absent.
            if self.request.QUERY_PARAMS.get(self.paginate_by_param) == '0':
                return None
            return super(DisablePaginationMixin, self).get_paginate_by(queryset)

2.3.7
Date: 16th August 2013
- Added APITestClient, APIRequestFactory and APITestCase etc...
- Refactor SessionAuthentication to allow easier override for CSRF exemption.
- Remove 'Hold down "Control" ...' message from help_text widget messaging when not appropriate.
- Added admin configuration for auth tokens.
- Bugfix: AnonRateThrottle fixed to not throttle authenticated users.
- Bugfix: Don't set X-Throttle-Wait-Seconds when the throttle does not have a wait value.
- Bugfix: Fixed PATCH button title in browsable API.
- Bugfix: Fix issue with OAuth2 provider naive datetimes.
2.3.6
Date: 27th June 2013
- Added trailing_slash option to routers.
- Include support for HttpStreamingResponse.
- Support wider range of default serializer validation when used with custom model fields.
- UTF-8 Support for browsable API descriptions.
- OAuth2 provider uses timezone aware datetimes when supported.
- Bugfix: Return error correctly when OAuth non-existent consumer occurs.
- Bugfix: Allow FileUploadParser to correctly handle the filename if provided as a URL kwarg.
- Bugfix: Fix ScopedRateThrottle.
2.3.5
Date: 3rd June 2013
- Added get_url hook to HyperlinkedIdentityField.
- Serializer field default argument may be a callable.
- @action decorator now accepts a methods argument.
- Bugfix: request.user should still be accessible in renderer context if authentication fails.
- Bugfix: The lookup_field option on HyperlinkedIdentityField should apply by default to the url field on the serializer.
- Bugfix: HyperlinkedIdentityField should continue to support pk_url_kwarg, slug_url_kwarg, slug_field, in a pending deprecation state.
- Bugfix: Ensure we always return 404 instead of 500 if a lookup field cannot be converted to the correct lookup type. (Eg non-numeric AutoInteger pk lookup)
2.3.4
Date: 24th May 2013
- Serializer fields now support label and help_text.
- Added UnicodeJSONRenderer.
- OPTIONS requests now return metadata about fields for PUT requests.
- Bugfix: charset now properly included in Content-Type of responses.
- Bugfix: Blank choice now added in browsable API on nullable relationships.
- Bugfix: Many to many relationships with through tables are now read-only.
- Bugfix: Serializer fields now respect model field args such as max_length.
- Bugfix: SlugField now performs slug validation.
- Bugfix: Lazy-translatable strings now properly serialized.
- Bugfix: Browsable API now supports bootswatch styles properly.
- Bugfix: HyperlinkedIdentityField now uses the lookup_field kwarg.
Note: Responses now correctly include an appropriate charset on the Content-Type header. For example: application/json; charset=utf-8. If you have tests that check the content type of responses, you may need to update these accordingly.
2.3.3
Date: 16th May 2013
- Added SearchFilter
- Added OrderingFilter
- Added GenericViewSet
- Bugfix: Multiple @action and @link methods now allowed on viewsets.
- Bugfix: Fix API Root view issue with DjangoModelPermissions
2.3.2
Date: 8th May 2013
- Bugfix: Fix TIME_FORMAT, DATETIME_FORMAT and DATE_FORMAT settings.
- Bugfix: Fix DjangoFilterBackend issue, failing when used on a view with a queryset attribute.
2.3.1
Date: 7th May 2013
- Bugfix: Fix breadcrumb rendering issue.
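As referenced in the 2.3.12 and 2.3.11 notes above, a sketch combining a write_only serializer field with an explicit ordering_fields whitelist. Django's auth User model is used purely for illustration:

    from django.contrib.auth.models import User
    from rest_framework import filters, generics, serializers

    class UserSerializer(serializers.ModelSerializer):
        # Accepted on input, never rendered in API output (2.3.11).
        password = serializers.CharField(write_only=True)

        class Meta:
            model = User
            fields = ('id', 'username', 'password')

    class UserList(generics.ListAPIView):
        queryset = User.objects.all()
        serializer_class = UserSerializer
        filter_backends = (filters.OrderingFilter,)
        # Only these fields may be used with ?ordering=..., so sensitive
        # columns such as the password hash cannot be ordered against (2.3.12).
        ordering_fields = ('username',)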
2.3.0
Date: 7th May 2013
- ViewSets and Routers.
- ModelSerializers support reverse relations in 'fields' option.
- HyperlinkedModelSerializers support 'id' field in 'fields' option.
- Cleaner generic views.
- Support for multiple filter classes.
- FileUploadParser support for raw file uploads.
- DecimalField support.
- Made Login template easier to restyle.
- Bugfix: Fix issue with depth>1 on ModelSerializer.
Note: See the 2.3 announcement for full details.
2.2.x series
2.2.7
Date: 17th April 2013
- Loud failure when a view does not return a Response or HttpResponse.
- Bugfix: Fix for Django 1.3 compatibility.
- Bugfix: Allow overridden get_object() to work correctly.
2.2.6
Date: 4th April 2013
- OAuth2 authentication no longer requires unnecessary URL parameters in addition to the token.
- URL hyperlinking in browsable API now handles more cases correctly.
- Long HTTP headers in browsable API are broken into multiple lines when possible.
- Bugfix: Fix regression with DjangoFilterBackend not working correctly with single object views.
- Bugfix: OAuth should fail hard when an invalid token is used.
- Bugfix: Fix serializer potentially returning a None object for models that define __bool__ or __len__.
2.2.5
Date: 26th March 2013
- Serializer support for bulk create and bulk update operations.
- Regression fix: Date and time fields return date/time objects by default. Fixes regressions caused by 2.2.2. See #743 for more details.
- Bugfix: Fix 500 error if OAuth is not attempted with the OAuthAuthentication class installed.
- Serializer.save() now supports arbitrary keyword args which are passed through to the object .save() method. Mixins use force_insert and force_update where appropriate, resulting in one less database query.
2.2.4
Date: 13th March 2013
- OAuth 2 support.
- OAuth 1.0a support.
- Support X-HTTP-Method-Override header.
- Filtering backends are now applied to the querysets for object lookups as well as lists. (Eg you can use a filtering backend to control which objects should 404)
- Deal with error data nicely when deserializing lists of objects.
- Extra override hook to configure DjangoModelPermissions for unauthenticated users.
- Bugfix: Fix regression which caused an extra database query on paginated list views.
- Bugfix: Fix pk relationship bug for some types of 1-to-1 relations.
- Bugfix: Workaround for Django bug causing a case where Authtoken could be registered for cascade delete from User even if not installed.
2.2.3
Date: 7th March 2013
- Bugfix: Fix None values for DateField, DateTimeField and TimeField.
2.2.2
Date: 6th March 2013
- Support for custom input and output formats for DateField, DateTimeField and TimeField.
- Cleanup: Request authentication is no longer lazily evaluated, instead authentication is always run, which results in more consistent, obvious behavior. Eg. Supplying bad auth credentials will now always return an error response, even if no permissions are set on the view.
- Bugfix for serializer data being uncacheable with pickle protocol 0.
- Bugfixes for model field validation edge-cases.
- Bugfix for authtoken migration while using a custom user model and south.
2.2.1
Date: 22nd Feb 2013
- Security fix: Use the defusedxml package to address XML parsing vulnerabilities.
- Raw data tab added to browsable API. (Eg. Allow for JSON input.)
- Added TimeField.
- Serializer fields can be mapped to any method that takes no args, or only takes kwargs which have defaults.
- Unicode support for view names/descriptions in browsable API.
- Bugfix: request.DATA should return an empty QueryDict with no data, not None.
- Bugfix: Remove unneeded field validation, which caused extra queries.
Security note: Following the disclosure of security vulnerabilities in Python's XML parsing libraries, use of the XMLParser class now requires the defusedxml package to be installed. The security vulnerabilities only affect APIs which use the XMLParser class, by enabling it in any views, or by having it set in the DEFAULT_PARSER_CLASSES setting. Note that the XMLParser class is not enabled by default, so this change should affect a minority of users.
2.2.0
Date: 13th Feb 2013
- Python 3 support.
- Added a post_save() hook to the generic views.
- Allow serializers to handle dicts as well as objects.
- Deprecate ManyRelatedField() syntax in favor of RelatedField(many=True). (See the migration sketch at the end of these release notes.)
- Deprecate null=True on relations in favor of required=False.
- Deprecate blank=True on CharFields, just use required=False.
- Deprecate optional obj argument in permissions checks in favor of has_object_permission.
- Deprecate implicit hyperlinked relations behavior.
- Bugfix: Fix broken DjangoModelPermissions.
- Bugfix: Allow serializer output to be cached.
- Bugfix: Fix styling on browsable API login.
- Bugfix: Fix issue with deserializing empty to-many relations.
- Bugfix: Ensure model field validation is still applied for ModelSerializer subclasses with a custom .restore_object() method.
Note: See the 2.2 announcement for full details.
2.1.x series
2.1.17
Date: 26th Jan 2013
- Support proper 401 Unauthorized responses where appropriate, instead of always using 403 Forbidden.
- Support json encoding of timedelta objects.
- format_suffix_patterns() now supports include style URL patterns.
- Bugfix: Fix issues with custom pagination serializers.
- Bugfix: Nested serializers now accept the source='*' argument.
- Bugfix: Return proper validation errors when incorrect types are supplied for relational fields.
- Bugfix: Support nullable FKs with SlugRelatedField.
- Bugfix: Don't call custom validation methods if the field has an error.
Note: If the primary authentication class is TokenAuthentication or BasicAuthentication, a view will now correctly return 401 responses to unauthenticated access, with an appropriate WWW-Authenticate header, instead of 403 responses.
2.1.16
Date: 14th Jan 2013
- Deprecate django.utils.simplejson in favor of Python 2.6's built-in json module.
- Bugfix: auto_now, auto_now_add and other editable=False fields now default to read-only.
- Bugfix: PK fields now only default to read-only if they are an AutoField or if editable=False.
- Bugfix: Validation errors instead of exceptions when serializers receive incorrect types.
- Bugfix: Validation errors instead of exceptions when related fields receive incorrect types.
- Bugfix: Handle ObjectDoesNotExist exception when serializing null reverse one-to-one
Note: Prior to 2.1.16, Decimals would render in JSON using floating point if simplejson was installed, but otherwise render using string notation. Now that use of simplejson has been deprecated, Decimals will consistently render using string notation. See #582 for more details.
2.1.15
Date: 3rd Jan 2013
- Added PATCH support.
- Added RetrieveUpdateAPIView.
- Remove unused internal save_m2m flag on ModelSerializer.save().
- Tweak behavior of hyperlinked fields with an explicit format suffix.
- Relation changes are now persisted in .save() instead of in .restore_object().
- Bugfix: Fix issue with FileField raising an exception instead of a validation error when files=None.
- Bugfix: Partial updates should not set default values if the field is not included.
2.1.14
Date: 31st Dec 2012
- Bugfix: ModelSerializers now include reverse FK fields on creation.
- Bugfix: Model fields with blank=True are now required=False by default.
- Bugfix: Nested serializers now support nullable relationships.
Note: From 2.1.14 onwards, relational fields move out of the fields.py module and into the new relations.py module, in order to separate them from regular data type fields, such as CharField and IntegerField. This change will not affect user code, so long as it's following the recommended import style of from rest_framework import serializers and referring to fields using the style serializers.PrimaryKeyRelatedField.
2.1.13
Date: 28th Dec 2012
- Support configurable STATICFILES_STORAGE storage.
- Bugfix: Related fields now respect the required flag, and may be required=False.
2.1.12
Date: 21st Dec 2012
- Bugfix: Fix bug that could occur using ChoiceField.
- Bugfix: Fix exception in browsable API on DELETE.
- Bugfix: Fix issue where pk was being set to a string if set by URL kwarg.
2.1.11
Date: 17th Dec 2012
- Bugfix: Fix issue with M2M fields in browsable API.
2.1.10
Date: 17th Dec 2012
- Bugfix: Ensure read-only fields don't have model validation applied.
- Bugfix: Fix hyperlinked fields in paginated results.
2.1.9
Date: 11th Dec 2012
- Bugfix: Fix broken nested serialization.
- Bugfix: Fix Meta.fields only working as tuple not as list.
- Bugfix: Edge case if unnecessarily specifying required=False on a read only field.
2.1.8
Date: 8th Dec 2012
- Fix for creating nullable Foreign Keys with '' as well as None.
- Added null=<bool> related field option.
2.1.7
Date: 7th Dec 2012
- Serializers now properly support nullable Foreign Keys.
- Serializer validation now includes model field validation, such as uniqueness constraints.
- Support 'true' and 'false' string values for BooleanField.
- Added pickle support for serialized data.
- Support source='dotted.notation' style for nested serializers.
- Make Request.user settable.
- Bugfix: Fix RegexField to work with BrowsableAPIRenderer.
2.1.6
Date: 23rd Nov 2012
- Bugfix: Unfix DjangoModelPermissions. (I am a doofus.)
2.1.5
Date: 23rd Nov 2012
- Bugfix: Fix DjangoModelPermissions.
2.1.4
Date: 22nd Nov 2012
- Support for partial updates with serializers. (Sketched, along with SerializerMethodField, after the 2.1.1 notes below.)
- Added RegexField.
- Added SerializerMethodField.
- Serializer performance improvements.
- Added obtain_token_view to get tokens when using TokenAuthentication.
- Bugfix: Django 1.5 configurable user support for TokenAuthentication.
2.1.3
Date: 16th Nov 2012
- Added FileField and ImageField. For use with MultiPartParser.
- Added URLField and SlugField.
- Support for read_only_fields on ModelSerializer classes.
- Support for clients overriding the pagination page sizes. Use the PAGINATE_BY_PARAM setting or set the paginate_by_param attribute on a generic view.
- 201 Responses now return a 'Location' header.
- Bugfix: Serializer fields now respect max_length.
2.1.2
Date: 9th Nov 2012
- Filtering support.
- Bugfix: Support creation of objects with reverse M2M relations.
2.1.1
Date: 7th Nov 2012
- Support use of HTML exception templates. Eg. 403.html
- Hyperlinked fields take optional slug_field, slug_url_kwarg and pk_url_kwarg arguments.
- Bugfix: Deal with optional trailing slashes properly when generating breadcrumbs.
- Bugfix: Make textareas the same width as other fields in browsable API.
- Private API change: .get_serializer now uses the same instance and data ordering as serializer initialization.
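As referenced in the 2.1.4 notes above, a sketch of SerializerMethodField (which in the 2.x API takes the name of the method to call as an argument) together with a partial update. The field name and lookup value are hypothetical, and Django's auth User model is used purely for illustration:

    from django.contrib.auth.models import User
    from django.utils import timezone
    from rest_framework import serializers

    class UserSerializer(serializers.ModelSerializer):
        # 2.x style: the method name is passed explicitly.
        days_since_joined = serializers.SerializerMethodField('get_days_since_joined')

        class Meta:
            model = User
            fields = ('id', 'username', 'days_since_joined')

        def get_days_since_joined(self, obj):
            return (timezone.now() - obj.date_joined).days

    # Partial update (2.1.4): only the supplied fields are validated and saved.
    user = User.objects.get(username='alice')
    serializer = UserSerializer(user, data={'username': 'alice2'}, partial=True)
    if serializer.is_valid():
        serializer.save()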
2.1.0
Date: 5th Nov 2012
- Serializer instance and data keyword args have their position swapped. (See the migration sketch at the end of these release notes.)
- queryset argument is now optional on writable model fields.
- Hyperlinked related fields optionally take slug_field and slug_url_kwarg arguments.
- Support Django's cache framework.
- Minor field improvements. (Don't stringify dicts, more robust many-pk fields.)
- Bugfix: Support choice field in Browsable API.
- Bugfix: Related fields with read_only=True do not require a queryset argument.
API-incompatible changes: Please read this thread regarding the instance and data keyword args before updating to 2.1.0.
2.0.x series
2.0.2
Date: 2nd Nov 2012
- Fix issues with pk related fields in the browsable API.
2.0.1
Date: 1st Nov 2012
- Add support for relational fields in the browsable API.
- Added SlugRelatedField and ManySlugRelatedField.
- If PUT creates an instance return '201 Created', instead of '200 OK'.
2.0.0
Date: 30th Oct 2012
- Fix all of the things. (Well, almost.)
- For more information please see the 2.0 announcement.
0.4.x series
0.4.0
- Supports Django 1.5.
- Fixes issues with 'HEAD' method.
- Allow views to specify template used by TemplateRenderer
- More consistent error responses
- Some serializer fixes
- Fix internet explorer ajax behavior
- Minor xml and yaml fixes
- Improve setup (e.g. use staticfiles, not the defunct ADMIN_MEDIA_PREFIX)
- Sensible absolute URL generation, not using hacky set_script_prefix
0.3.x series
0.3.3
- Added DjangoModelPermissions class to support django.contrib.auth style permissions.
- Use staticfiles for css files.
- Easier to override. Won't conflict with customized admin styles (e.g. grappelli)
- Templates are now nicely namespaced.
- Allows easier overriding.
- Drop implied 'pk' filter if last arg in urlconf is unnamed.
- Too magical. Explicit is better than implicit.
- Saner template variable auto-escaping.
- Tidier setup.py
- Updated for URLObject 2.0
- Bugfixes:
- Bug with PerUserThrottling when user contains unicode chars.
0.3.2
- Bugfixes:
- Fix 403 for POST and PUT from the UI with UserLoggedInAuthentication (#115)
- serialize_model method in serializer.py may cause wrong value (#73)
- Fix Error when clicking OPTIONS button (#146)
- And many other fixes
- Remove short status codes
- Zen of Python: "There should be one-- and preferably only one --obvious way to do it."
- get_name, get_description become methods on the view - makes them overridable.
- Improved model mixin API - Hooks for build_query, get_instance_data, get_model, get_queryset, get_ordering
0.3.1
- [not documented]
0.3.0
- JSONP Support
- Bugfixes, including support for latest markdown release
0.2.x series
0.2.4
- Fix broken IsAdminUser permission.
- OPTIONS support.
- XMLParser.
- Drop mentions of Blog, BitBucket.
0.2.3
- Fix some throttling bugs. X-Throttle header updated.
- Resource becomes decoupled into View and Resource; your views should now inherit from View, not Resource.
- The handler functions on views, .get(), .put(), .post() etc., no longer have the content and auth args. Use self.CONTENT inside a view to access the deserialized, validated content. Use self.user inside a view to access the authenticated user.
- allowed_methods and anon_allowed_methods are now defunct. If a method is defined, it's available.
- The permissions attribute on a View is now used to provide generic permissions checking. Use permission classes such as FullAnonAccess, IsAuthenticated or IsUserOrIsAnonReadOnly to set the permissions.
- The authenticators class becomes authentication. Class names change to Authentication.
- The emitters class becomes renderers. Class names change to Renderers.
- ResponseException becomes ErrorResponse.
- The mixin classes have been nicely refactored; the basic mixins are now RequestMixin, ResponseMixin, AuthMixin, and ResourceMixin. You can reuse these mixin classes individually without using the View class.
0.1.x series
0.1.1
- Final build before pulling in all the refactoring changes for 0.2, in case anyone needs to hang on to 0.1.
0.1.0
- Initial release.
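Finally, here is the migration sketch referenced from the 2.2.0 and 2.1.0 notes above, showing the replacement declarations side by side. The serializer and field names are hypothetical:

    from rest_framework import serializers

    class BlogPostSerializer(serializers.Serializer):
        # Old (pre-2.2.0): comments = serializers.ManyRelatedField()
        # New: pass many=True to the singular field class instead.
        comments = serializers.PrimaryKeyRelatedField(many=True, read_only=True)

        # Old: null=True on relations, blank=True on CharFields.
        # New: use required=False in both cases.
        subtitle = serializers.CharField(required=False)

    # And the 2.1.0 argument ordering: the instance comes first, with data
    # passed as a keyword argument.
    # serializer = BlogPostSerializer(blog_post, data=request.DATA)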
http://www.tomchristie.com/rest-framework-2-docs/topics/release-notes
CC-MAIN-2018-30
en
refinedweb
Introduction
In my last Blog I showed you how to get an authenticated user session using a trusted connection. In this part I want to show you how to gather MDM system information from a SAP Portal system object. After that I will introduce the basic concept behind working with MDM and the Java API 2, showing you how the system works (RecordID, Lookup Values, Display Fields…).
If you build a solution based on MDM you will sooner or later face a two or three tier MDM system landscape, meaning that there will be a development system (D-System), there might be a quality assurance system (Q-System), and there will be a productive system (P-System). This scenario can include multiple MDM Servers, or maybe only different Repositories on the same server, and in addition maybe D-, Q- and P-Clients. So now you face the problem of how to address the different repositories from your code without having to take care of addressing the right one dependent on which landscape you are in. In case you are using a SAP Portal I want to point to a very useful thread I have found on SDN: "Hard-code credentials – any other solution exists?" This forum post shows you how to work with the SAP Portal system objects and how to retrieve information out of them. I have used this concept to address the problem just described, together with the trusted connection user session authentication, and it worked fine for me.
Let us continue with the next topic…
MDM Java API 2: introducing the concept
First of all I want to give you a brief introduction to the concept of MDM to better understand my future coding. As a coder you need a much better understanding of the concept behind MDM than anyone else. GUI (Graphical User Interface) users do not need to understand the technical details behind the data storage and operations the way Java programmers do. So at the beginning the most important task is to understand how MDM stores data and connects the data with each other. Let's start with the first graphic:
Figure 1: Table concept in MDM
The first graphic shows the table concept of MDM. The central table, called the "Main table", is the centre of the model. From the main table there are references going towards any kind of sub tables. In the Main table there are fields. Those fields can hold values which are stored directly in the table, or hold a reference to a record of a sub table. Sub tables can store data values or hold references to other sub tables. To illustrate the possible layout of a main table take a look at figure 2.
Figure 2: Possible layout in main table (MDM Console view)
As you can see, the main table stores e.g. text values directly as well as references to other sub tables (Lookups of different kinds). To find out where the lookups are pointing to, you can open up the MDM Console and click on an entry in the list of fields in the main table to see its details (Figure 3).
Figure 3: Main table field details (Bottom part of MDM Console)
If we look at the details we have to notice two important things. First, figure 3 shows us a field detail named "CODE". This code is very important for us because it is the name we will use in our code to address this field in the table. Second, we have to notice that the field is of "Type" Lookup [Flat]. This tells us that the value we will find in this field will be of type RecordID. A RecordID is the ID of the record in the sub table (e.g. Sales Product Key – Table [Detail: Lookup Table]).
This means that we will not be able to access the value directly by accessing the field in the main table. We will only get the reference to the sub table which holds the actual value desired. In my future Blogs I will give more details on the concepts behind MDM and give examples of other techniques used. So enough of MDM concepts; let's get to the Java API and some examples.
Searching in MDM
Searching in MDM will be one of the most commonly used pieces of functionality there is. Searching in a repository has some prerequisites: first we have to have a connection to the MDM Server, and second we have to have an authenticated user session to a repository. In my Blogs published before I showed you how to set up those prerequisites. In the class provided I have combined all the necessary steps to get the connection and the user session. So now let's get to the code.

package com.sap.sdn.examples;

import java.util.Locale;
import com.sap.mdm.commands.AuthenticateUserSessionCommand;
import com.sap.mdm.commands.CommandException;
import com.sap.mdm.commands.CreateUserSessionCommand;
import com.sap.mdm.commands.SetUnicodeNormalizationCommand;
import com.sap.mdm.commands.TrustedUserSessionCommand;
import com.sap.mdm.data.RecordResultSet;
import com.sap.mdm.data.RegionProperties;
import com.sap.mdm.data.ResultDefinition;
import com.sap.mdm.data.commands.RetrieveLimitedRecordsCommand;
import com.sap.mdm.ids.FieldId;
import com.sap.mdm.ids.TableId;
import com.sap.mdm.net.ConnectionAccessor;
import com.sap.mdm.net.ConnectionException;
import com.sap.mdm.net.SimpleConnectionFactory;
import com.sap.mdm.search.FieldSearchDimension;
import com.sap.mdm.search.Search;
import com.sap.mdm.search.TextSearchConstraint;
import com.sap.mdm.server.DBMSType;
import com.sap.mdm.server.RepositoryIdentifier;

public class SearchExamples {

    // Instance variables needed for processing
    private ConnectionAccessor mySimpleConnection;
    // Name of the server that MDM runs on
    private String serverName = "IBSOLUTI-D790B6";
    // Name of the repository shown in the MDM Console
    private String RepositoryNameAsString = "SDN_Repository";
    // Name of the DB server; this could be an IP address only
    private String DBServerNameAsString = "IBSOLUTI-D790B6\\SQLEXPRESS";
    // Define the database type (MS SQL Server)
    private DBMSType DBMSTypeUsed = DBMSType.MS_SQL;
    // Create a new data region
    private RegionProperties dataRegion = new RegionProperties();
    // Session which will be used for searching
    private String userSession;
    // Default user name
    private String userName = "Admin";
    // Password is empty on default setup
    private String userPassword = "";
    // Result we will get from MDM
    public RecordResultSet Result;

    /**
     * Constructor for class
     */
    public SearchExamples() {
        // Set the data region
        dataRegion.setRegionCode("engUSA");
        // Set the locale on the data region
        dataRegion.setLocale(new Locale("en", "US"));
        // Set the name of the data region
        dataRegion.setName("US");
        // Get a connection to the server
        this.getConnection();
        // Authenticate a user session
        try {
            this.getAuthenticatedUserSession();
        } catch (ConnectionException e) {
            // Do something with exception
            e.printStackTrace();
        } catch (CommandException e) {
            // Do something with exception
            e.printStackTrace();
        }
        // Get resulting records
        Result = this.SearchTypes();
    }

    // ResultDefinition for the Main Table
    private ResultDefinition rdMain;

    /**
     * Method that will search for all records in the main table of a certain type.
     *
     * @return RecordResultSet that holds all resulting records from the search
     */
    public RecordResultSet
SearchTypes() {
        /**
         * 1. First we create the Result Definition. This result definition will
         * tell the search which fields are of interest to us. The list could
         * include all fields of the table or only the ones we are interested
         * in.
         */
        // Define which table should be represented by this ResultDefinition
        // In my repository this is the table MAINTABLE
        rdMain = new ResultDefinition(new TableId(1));
        // Add the desired FieldId's to the result definition
        // In my repository this is the field PRODUCT_NAME
        rdMain.addSelectField(new FieldId(2));
        // In my repository this is the field TYP
        rdMain.addSelectField(new FieldId(27));

        /**
         * 2. Create the needed search parameters.
         * Define what to search for and where.
         */
        // Create the field search dimension [Where to search!?]
        FieldSearchDimension fsdMaintableType = new FieldSearchDimension(new FieldId(27));
        // Create the text search constraint [What to search for?! (Every record that contains ROOT)]
        TextSearchConstraint tscTypeRoot = new TextSearchConstraint("ROOT", TextSearchConstraint.CONTAINS);

        /**
         * 3. Create the search object with the given search parameters.
         */
        // Create the search
        Search seSearchTypeRoot = new Search(new TableId(1));
        // Add the parameters to the search
        seSearchTypeRoot.addSearchItem(fsdMaintableType, tscTypeRoot);

        /**
         * 4. Create the command to search with and retrieve the result.
         */
        // Build the command
        RetrieveLimitedRecordsCommand rlrcGetRecordsOfTypeRoot = new RetrieveLimitedRecordsCommand(mySimpleConnection);
        // Set the search to use for the command
        rlrcGetRecordsOfTypeRoot.setSearch(seSearchTypeRoot);
        // Set the session to use for the command
        rlrcGetRecordsOfTypeRoot.setSession(this.userSession);
        // Set the result definition to use
        rlrcGetRecordsOfTypeRoot.setResultDefinition(rdMain);
        // Try to execute the command
        try {
            rlrcGetRecordsOfTypeRoot.execute();
        } catch (CommandException e) {
            // Do something with the exception
            e.printStackTrace();
        }
        // Return the result
        return rlrcGetRecordsOfTypeRoot.getRecords();
    }

    /**
     * Create and authenticate a new user session to an MDM repository.
     * The connection, repository identification, region and credentials are
     * taken from the instance fields set up above (mySimpleConnection,
     * RepositoryNameAsString, DBServerNameAsString, DBMSTypeUsed, dataRegion,
     * userName and userPassword).
     *
     * @throws ConnectionException
     *             is propagated from the API
     * @throws CommandException
     *             is propagated from the API
     */
    public String getAuthenticatedUserSession() throws ConnectionException, CommandException {
        /*
         * We need a RepositoryIdentifier to connect to the desired repository.
         * Parameters for the constructor are: the Repository name as a string, as read
         * in the MDM Console in the "Name" field; the DB server name as a string, as
         * used while creating a repository; and the DBMS type - valid types
         * are: MSQL, ORCL, IDB2, IZOS, IIOS, MXDB
         */
        RepositoryIdentifier repId = new RepositoryIdentifier(
                RepositoryNameAsString, DBServerNameAsString, DBMSTypeUsed);
        // Create the command to get the session
        CreateUserSessionCommand createUserSessionCommand = new CreateUserSessionCommand(
                mySimpleConnection);
        // Set the identifier
        createUserSessionCommand.setRepositoryIdentifier(repId);
        // Set the region to use for the session - (language)
        createUserSessionCommand.setDataRegion(dataRegion);
        // Execute the command
        createUserSessionCommand.execute();
        // Get the session identifier
        this.userSession = createUserSessionCommand.getUserSession();
        // Authenticate the user session
        try {
            // Use command to authenticate the user session on a trusted connection
            TrustedUserSessionCommand tuscTrustedUser = new TrustedUserSessionCommand(
                    mySimpleConnection);
            // Set the user name to use
            tuscTrustedUser.setUserName(userName);
            tuscTrustedUser.setSession(this.userSession);
            tuscTrustedUser.execute();
            this.userSession = tuscTrustedUser.getSession();
        } catch (com.sap.mdm.commands.CommandException e) {
            /* In case the connection is not trusted */
            AuthenticateUserSessionCommand authenticateUserSessionCommand = new AuthenticateUserSessionCommand(
                    mySimpleConnection);
            authenticateUserSessionCommand.setSession(this.userSession);
            authenticateUserSessionCommand.setUserName(userName);
            authenticateUserSessionCommand.setUserPassword(userPassword);
            authenticateUserSessionCommand.execute();
        }
        // Normalize the Unicode representation used by the session.
        // Create the normalization command
        SetUnicodeNormalizationCommand setUnicodeNormalizationCommand = new SetUnicodeNormalizationCommand(
                mySimpleConnection);
        // Set the session to be used
        setUnicodeNormalizationCommand.setSession(this.userSession);
        // Set the normalization type
        setUnicodeNormalizationCommand
                .setNormalizationType(SetUnicodeNormalizationCommand.NORMALIZATION_COMPOSED);
        // Execute the command
        setUnicodeNormalizationCommand.execute();
        // Return the session identifier as a string value
        return this.userSession;
    }

    /*
     * The method obtains a ConnectionAccessor, which is needed every time you
     * want to execute a Command, such as a search or any other Command there
     * is. The connection is stored in the mySimpleConnection instance variable.
     */
    public void getConnection() {
        String sHostName = serverName;
        // We need a try/catch statement or a throws clause for the method,
        // because getInstance can fail
        try {
            /*
             * Retrieve the connection from the factory. The hostname can be the name of
             * the server if it is listening on the standard port 20005, or a
             * combination of Servername:Portnumber, eg. MDMSERVER:40000
             */
            mySimpleConnection = SimpleConnectionFactory.getInstance(sHostName);
        } catch (ConnectionException e) {
            // Do some exception handling
            e.printStackTrace();
        }
    }
}
To test the code we also need a simple test class that will instantiate the sample class and print out the number of retrieved records.

package com.sap.sdn.examples;

public class Test {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // Create instance of search class
        SearchExamples test = new SearchExamples();
        // Print out the number of found records
        System.out.println("Total of " + test.Result.getCount()
                + " Records found that match criteria TYPE=ROOT");
    }
}

I wrote a lot of comments in the code, but I will give you some more details on what is happening there. First of all there are some instance variables that hold the connection information to the MDM Server and the MDM Repository. The constructor will set up some variables which are needed for the search. A connection to the MDM server will be created. A session will be created and authenticated: if there is a trusted connection we will use it to authenticate, and if we only have a normal connection to the server, normal authentication will be used. The search will be triggered and the result will be stored in an instance variable.
The search needs some elements to work with. At the beginning, in the SearchTypes() method [Comment 1 in code], set up a ResultDefinition which tells the search what fields are of interest to the program and should be accessible in the result. If this definition does not include a desired field and you want to access it later in your code, an exception will be thrown.
[Comment 2 in code] Define a search dimension and a search constraint to tell the search where to look and what to look for. There are a lot of search constraints to work with.
[Comment 3 in code] Define the search itself and use the dimension and constraint to set it up.
[Comment 4 in code] Build the command that is needed to execute the search. For more details on commands, see the MDM API documentation.
So this is just a very simple example and I will give you more advanced code in future Blogs. If you have any questions on what the code does or need more detailed explanations please feel free to comment on this Blog. If this code helped you a little please feel free to comment as well.
So this will be the last Blog for this year since I am rebuilding my flat at the moment. The topic of the next Blog needs to be defined and I will update my agenda in my first Blog next year. I wish you a happy new year and, as you would say in German: "Einen guten Rutsch ins neue Jahr!".
Best regards,
Tobi

From one developer to another, congratulations on your blog, you are right on the money. After a couple of projects we forget how hard things were in the beginning. I congratulate your efforts. Looking forward to reading your next blog. Good luck with the flat. My recommendation for a subject is looking up taxonomy values. It's not as easy as with the first API, but I like the new way also. Heck, just not having an instance of CatalogData feels great!
Cheers Buddy,
Guy
https://blogs.sap.com/2008/01/02/mdm-java-api-2-an-introductive-series-part-iv/
CC-MAIN-2018-30
en
refinedweb
Some of the limitations of Web Dynpro ABAP are as follows:
SAP System Prior to SAP NetWeaver 7.0 (NW2004s)
Web Dynpro ABAP is not released for official use in systems prior to SAP NetWeaver 7.0 (NW2004s).
SAP NetWeaver 7.00 and 7.10
Web Dynpro ABAP is officially released starting SAP NetWeaver 7.0 (NW2004s). The following general limitations exist:
- No mobile device support - Mobile devices are not supported for/in Web Dynpro ABAP.
- Web browser related limitations - Web Dynpro ABAP does not provide a controlled behavior for the new window (Ctrl+N) functionality of Web browsers. This browser function should not be used as part of the application flow. For information on Firefox limitations see also SAP note 1089062.
- No SAP GUI support - The use of Web Dynpro ABAP applications inside SAP GUI is not supported due to technical restrictions which may inhibit the correct behavior of applications under some circumstances.
- No support for SAP GUI dynpros - Calling classic SAP GUI based dynpros from Web Dynpro ABAP is not supported, neither directly, e.g. via CALL SCREEN, nor indirectly via CALL TRANSACTION, SUBMIT REPORT etc. Note that from NetWeaver 7.0 SP12 onwards, if a Dynpro is called, there is no longer a program dump. Instead, an exception is raised in the associated internal mode. Note that in an input help, an error message is displayed accordingly, so that the application can be continued.
- BIApplicationFrame limitations - The BIApplicationFrame UI element only supports the execute and drilldown commands. The BIApplicationFrame for Web Dynpro ABAP does not support BI Web Applications for Java with NetWeaver 7.0.
- Popup-related limitations - For Web Dynpro popups it is not possible to set their size programmatically. A popup displayed on top of an Adobe Form may cause flickering. Under certain circumstances the popup may become hidden under the Adobe form. All UI elements that are derived from AbstractActiveComponent (Gantt, Network, OfficeControl, InteractiveForm) are not supported in popups.
- eCATT - eCATT is not supported for Web Dynpro ABAP.
- Value help for date fields - See SAP note 1056623.
- Internet Explorer 8 - Internet Explorer 8 is supported for Web Dynpro ABAP starting SP20.
- Firefox 2.0 - Firefox 2.0 is supported in SAP NetWeaver 7.00 for Web Dynpro ABAP starting SP14. The UI elements Gantt, Network, and OfficeControl are not supported in Firefox.
- Firefox 3.0 - Firefox 3.0 is supported in SAP NetWeaver 7.00 for Web Dynpro ABAP starting SP19.
- Firefox 3.5 - Firefox 3.5 is supported in SAP NetWeaver 7.00 for Web Dynpro ABAP starting SP21.
- Firefox 3.6 - Firefox 3.6 is supported in SAP NetWeaver 7.00 for Web Dynpro ABAP starting SP24.
- Firefox (all versions) is not supported in SAP NetWeaver 7.10.
- Flash Player - To use Adobe Flash Islands a Flash Player version 10 or higher is required.
- Safari - See Product Availability Matrix (PAM): service.sap.com/pam and SAP Note 1634749: "Safari browser for end user and administrators".
- Chrome - See Product Availability Matrix (PAM): service.sap.com/pam and SAP Note 1655306: "Google Chrome for end users and administrators".
- SAP Interactive Forms by Adobe - The Interactive Forms by Adobe is supported for the 32-bit version of browsers only. Supported Adobe Reader versions: the solution is generally available starting Adobe Reader version 7.0.9 (note that there are additional browser and platform restrictions, see below). SAP recommends using the newest available Adobe Reader version.
Note that the HTTPS protocol is supported by Adobe Readers as of Version 8.1 only. As of SAP NetWeaver 7.00 SPS10 it's strongly recommended to use ZCI forms with an XML interface only, instead of ACF based forms. Because additional central features have been added in subsequent SPs (input help and dropdown list boxes), it's generally recommended that the most up-to-date Support Package for Web Dynpro ABAP should be installed. The Web Dynpro Context which is bound to the property "dataSource" of the interactiveForm UI element must not contain Attributes of complex datatype, i.e. Attributes which are typed with structures, internal tables or reference types are not supported (sole exception: print forms having a DDIC interface). Furthermore, the Context node names and the Attribute names must not have a namespace prefix.
- Office Integration - The Microsoft Office Integration is supported for the 32-bit version of Microsoft Office in combination with the Internet Explorer 32-bit version. 64-bit Internet Explorer and 64-bit Microsoft Office are only supported together with installation of Microsoft Office Integration.
- UI element libraries - The following UI element libraries are no longer available/visible in the View Designer in the ABAP Development Workbench: EP_INTERNAL, MOBILE, PROPOSALS, SAP_HOME.
- UI elements - The following UI elements of the PATTERN library are not supported: ContextualPanel, NavigationList, FreeContextualPanel and ViewSwitch. The action onAction is not supported for TableColumnGroups. See also SAP note 1254508. The UI element Table is not supported within TablePopins or RowRepeaters.
SAP NetWeaver 7.01 and 7.11 (Enhancement Package 1)
The limitations for SAP NetWeaver 7.00 and SAP NetWeaver 7.10 are still valid, except:
- Web browser related limitations - Web Dynpro ABAP does not provide a controlled behavior for the new window (Ctrl+N) functionality of Web browsers. This browser function should not be used as part of the application flow.
- Firefox 2.0 - The limitation concerning Firefox 2.0 is no longer valid: All UI elements that are derived from AbstractActiveComponent (Gantt, Network, InteractiveForm) are now supported in Firefox.
- Firefox 3.0 - The limitation concerning Firefox 3.0 is no longer valid. Firefox 3.0 is supported for 7.01 starting SP4 and 7.11 starting SP2.
Additionally, there are the following new limitations for this release:
- Firefox 2.0 - All new GenericActiveComponent (GAC*) UI elements and Office Integration are not supported in Firefox.
- Popup-related limitations - AcfExecute, AcfUpDownload, FlashIsland, Office Integration, and all GenericActiveComponent (GAC*) UI elements are not supported in popups.
- Rendering Engine - Starting from 7.01 SP4 or 7.11 SP2 the old rendering engine "UR Classic" isn't supported anymore. The application parameter WDLIGHTSPEED should be deleted or set to X. In the application "wd_global_setting" mark the check box behind the label "Lightspeed Rendering (WDLIGHTSPEED):"
- UI element restrictions - ActiveX: UI elements based on ActiveX (Crystal integration, GenericActiveComponent) are supported for the 32-bit version of Internet Explorer only. The UI element ActiveX is only available for SAP-internal use. In addition, there are restrictions for SAP-internal use. Table Paginator: The paginator of the Web Dynpro ABAP UI Element table can no longer be switched on in SAP NetWeaver 7.01 for Web Dynpro ABAP. It has been replaced with the functionality of scrollbars. For more information, see SAP note 1407668.
InteractiveForm (SAP Interactive Forms by Adobe): For Internet Explorer 8 the Adobe Document Services (ADS) of SP5 (7.11: SP4) or later have to be used. Microsoft Windows 7 is supported starting with Adobe Reader version 9.2, Internet Explorer 8 or Firefox 3.5 and SAP NetWeaver 7.01 SP06 (ADS version 802.20090618120017.572641) / SAP NetWeaver 7.11 SP04 (ADS version 802.20090618120017.572641).
SAP NetWeaver 7.02 (Enhancement Package 2), 7.20 and 7.30
The limitations for 7.01 and 7.11 are still valid, except:
- Web browser related limitations - Web Dynpro ABAP does not provide a controlled behavior for the new window (Ctrl+N) functionality of Web browsers. This browser function should not be used as part of the application flow.
- Browser specific restrictions - Safari: The Safari browser (version 4, 5.0) is supported for MacOS.
- UI element restrictions - InteractiveForm (SAP Interactive Forms by Adobe): Microsoft Windows 7 is supported starting with Adobe Reader version 9.2, Internet Explorer 8 or Firefox 3.5 and SAP NetWeaver 7.02 SP02 (ADS version 802.20090911061436.597944) / SAP NetWeaver 7.20 SP02 (ADS version 823.20090916041231.601740). The InteractiveForm UI element is not supported for Safari 5.1.
- Rendering Engine - The old rendering engine "UR Classic" isn't supported and was removed in 7.02, 7.20 and above. The application parameter WDLIGHTSPEED should be deleted or set to X. In the application "wd_global_setting" mark the check box behind the label "Lightspeed Rendering (WDLIGHTSPEED):"
- eCATT is now supported for Web Dynpro ABAP. See also SAP note 948076.
Please comment if there are any erroneous points.
Thanks, Katrice

Hi Katrice, thanks for highlighting all these differences. I had faced problems explaining to clients the possible differences pre- and post-upgrade. This is really helpful. I would like to add one difference which I figured out post 7.0: the multi-value paste to an ALV cell from Excel works (also in select-options input).
Thanks, Tashi

Thanks for adding the information. Thanks, KH
https://blogs.sap.com/2014/05/11/limitations-for-web-dynpro-abap/
CC-MAIN-2018-30
en
refinedweb
Tulip GUI Python bindings
Project description
Module description
Graphs play an important role in many research areas, such as biology, microelectronics, social sciences, data mining, and computer science. Tulip [1] [2] is an Information Visualization framework dedicated to the analysis and visualization of such relational data. Written in C++, the framework enables the development of algorithms, visual encodings, interaction techniques, data models, and domain-specific visualizations.
The Tulip GUI library is available to the Python community through the Tulip-Python bindings [3], allowing users to create and manipulate Tulip views (typically Node Link diagrams) through the tulipgui module. It has to be used with the tulip module, which is dedicated to the creation, storage and manipulation of the graphs to visualize. The bindings have been developed using the SIP tool [4] from Riverbank Computing Limited, which makes it easy to create quality Python bindings for any C/C++ library. The main features provided by the bindings are the following:
- creation of interactive Tulip visualizations (Adjacency Matrix, Geographic, Histogram, Node Link Diagram, Parallel Coordinates, Pixel Oriented, Scatter Plot, Self Organizing Map, Spreadsheet)
- the ability to change the data source on opened visualizations
- the possibility to modify the rendering parameters for node link diagram visualizations
- the ability to save visualization snapshots to image files on disk (a hedged sketch of this follows the example script below)
Release notes
Some information regarding the Tulip-Python releases pushed on the Python Packaging Index:
- 5.1.0: based on Tulip 5.1.0 released on 07/11/2017
- 5.0.0: based on Tulip 5.0.0 released on 27/06/2017
- 4.10.0: based on Tulip 4.10.0 released on 08/12/2016
- 4.9.0: based on Tulip 4.9.0 released on 08/07/2016
- 4.8.1: based on Tulip 4.8.1 released on 16/02/2016
- 4.8.0: Initial release based on Tulip 4.8
Example
The following script imports the tree structure of the file system directory of the Python standard library, applies colors to nodes according to degrees, and computes a tree layout and quadratic Bézier shapes for edges. The imported graph and its visual encoding are then visualized by creating an interactive Node Link Diagram view. A window containing an OpenGL visualization of the graph will be created and displayed.
from tulip import tlp
from tulipgui import tlpgui

import os

# get the root directory of the Python Standard Libraries
pythonStdLibPath = os.path.dirname(os.__file__)

# call the 'File System Directory' import plugin from Tulip
# importing the tree structure of a file system
params = tlp.getDefaultPluginParameters('File System Directory')
params['directory color'] = tlp.Color.Blue
params['other color'] = tlp.Color.Red
params['directory'] = pythonStdLibPath
graph = tlp.importGraph('File System Directory', params)

# compute an anonymous graph double property that will store node degrees
degree = tlp.DoubleProperty(graph)
degreeParams = tlp.getDefaultPluginParameters('Degree')
graph.applyDoubleAlgorithm('Degree', degree, degreeParams)

# create a heat map color scale
heatMap = tlp.ColorScale([tlp.Color.Green, tlp.Color.Black, tlp.Color.Red])

# linearly map node degrees to colors using the 'Color Mapping' plugin from Tulip
# using the heat map color scale
colorMappingParams = tlp.getDefaultPluginParameters('Color Mapping', graph)
colorMappingParams['input property'] = degree
colorMappingParams['color scale'] = heatMap
graph.applyColorAlgorithm('Color Mapping', colorMappingParams)

# apply the 'Bubble Tree' graph layout plugin from Tulip
graph.applyLayoutAlgorithm('Bubble Tree')

# compute quadratic bezier shapes for edges
curveEdgeParams = tlp.getDefaultPluginParameters('Curve edges', graph)
curveEdgeParams['curve type'] = 'QuadraticDiscrete'
graph.applyAlgorithm('Curve edges', curveEdgeParams)

# create a node link diagram view of the graph,
# a window containing the Tulip OpenGL visualization
# will be created and displayed
nodeLinkView = tlpgui.createNodeLinkDiagramView(graph)

# set some rendering parameters for the graph
renderingParameters = nodeLinkView.getRenderingParameters()
renderingParameters.setViewArrow(True)
renderingParameters.setMinSizeOfLabel(8)
renderingParameters.setEdgeColorInterpolate(True)
nodeLinkView.setRenderingParameters(renderingParameters)
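The feature list above also mentions saving visualization snapshots to image files on disk. A minimal sketch of how that could look is given below; note that the method name and signature used here are an assumption rather than verified tulipgui API, so consult the tulipgui reference documentation before relying on it:

# Assumption: node link diagram views expose a snapshot call along these
# lines; verify the exact name and signature in the tulipgui documentation.
nodeLinkView.saveSnapshot('/tmp/python_stdlib_tree.png', 1024, 768)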
https://pypi.org/project/tulipgui-python/
CC-MAIN-2018-30
en
refinedweb
This page contains an archived post to the Jini Forum made prior to February 25, 2002. If you wish to participate in discussions, please visit the new Artima Forums.
help me..
Posted by hamid khan on May 10, 2001 at 8:52 AM
Assalamalaykum. I'm Hamid Khan and I want to know more about JTAPI (as I am a beginner). I know Java programming, but JTAPI is new for me. I am going to work on this technology very soon, so please (if you don't mind) guide me if you can; I'll be very thankful to you. What is the procedure like? What do I have to do to import this telephony package? I'm not finding the proper books for it. Please suggest some good books on it. I've read about the book Essential JTAPI by Spencer Roberts and Understanding Java Telephony on the net. How are these books? Should I buy these books or search for some other writers? Please reply. Thanks.
Khudahafiz,
Hamid

> import javax.telephony.*;
> import javax.telephony.events.*;
> import MyOutCallObserver;
>
> /*
>  * Places a telephone call from 560750 to 667461
>  */
> public class Outcall {
>
>   public static final void main(String args[]) {
>
>     /*
>      * Create a provider by first obtaining the default implementation of
>      * JTAPI and then the default provider of that implementation.
>      */
>     Provider myprovider = null;
>     try {
>       System.out.println("ok");
>       JtapiPeer peer = JtapiPeerFactory.getJtapiPeer(null);
>       System.out.println("ok");
>       myprovider = peer.getProvider(null);
>     } catch (Exception excp) {
>       System.out.println("Can't get Provider: " + excp.toString());
>       System.exit(0);
>     }
>
>     /*
>      * We need to get the appropriate objects associated with the
>      * originating side of the telephone call. We ask the Address for a list
>      * of Terminals on it and arbitrarily choose one.
>      */
>     Address origaddr = null;
>     Terminal origterm = null;
>     try {
>       origaddr = myprovider.getAddress("560750");
>
>       /* Just get some Terminal on this Address */
>       Terminal[] terminals = origaddr.getTerminals();
>       if (terminals == null) {
>         System.out.println("No Terminals on Address.");
>         System.exit(0);
>       }
>       origterm = terminals[0];
>     } catch (Exception excp) {
>       // Handle exceptions;
>     }
>
>     /*
>      * Create the telephone call object and add an observer.
>      */
>     Call mycall = null;
>     try {
>       mycall = myprovider.createCall();
>       mycall.addObserver(new MyOutCallObserver());
>     } catch (Exception excp) {
>       // Handle exceptions
>     }
>
>     /*
>      * Place the telephone call.
>      */
>     try {
>       Connection c[] = mycall.connect(origterm, origaddr, "667461");
>     } catch (Exception excp) {
>       // Handle all Exceptions
>     }
>   }
> }
https://www.artima.com/legacy/jini/vision/messages/135.html
CC-MAIN-2018-30
en
refinedweb
MiOS Bridge Binding
This binding exposes read and command access to Devices controlled by a MiOS Home Automation controller, such as a Vera unit. It exposes the ability to do the following things in the MiOS HA Controller:
- Devices - Read State Variables & Device Attributes, and invoke (single parameter) UPnP Commands to control the Device.
- Scenes - Read the current execution state of a Scene, and invoke those Scenes within the remote HA Controller.
- System - Read System-level Attributes.
It uses the remote control interfaces (aka "UI Simple" JSON Calls, and HTTP Long-polling) of the MiOS HA Controller to keep the bound openHAB Items in sync with their counterparts in the MiOS HA Controller. It uses the openHAB Transformation Service extensively to "map" the Data & Commands between the two systems. A set of example MAP transform files is provided and these can readily be augmented without needing to tweak the code. Original code was used from the XBMC Binding, and then heavily modified. Snippets were included from the HTTP Binding for the various datatype mapping functions.
- Configuration
- Item Commands (Reacting)
- MiOS Binding and MiOS Action Examples
Configuration
MiOS Unit Configuration
In order for the MiOS openHAB Binding to talk to your MiOS Unit, it needs configuration indicating where it lives. This information is specified within the services/mios.cfg file. Each MiOS Unit is identified by a Unit name, which is user-supplied. This name will be used throughout the subsequent setup steps, and permits you to connect to more than one MiOS Unit that you might have within your environment. The binding will only talk to MiOS Units living on the same LAN as openHAB and/or that are directly reachable from the LAN where openHAB is running. The hosted MiOS gateway services are not supported.
🚨🔧 The simplest configuration entry in services/mios.cfg contains a Unit name, house, and a hostname, 192.168.1.22, to use for the MiOS Unit connection:
house.host=192.168.1.22
If you have local DNS set up correctly, then use this form instead:
house.host=ha.myhouse.example.com
Optionally, you can specify the port and timeout to use. These default to 3480 and 60000 (ms) respectively. These have reasonable defaults, so you shouldn't need to make adjustments.
house.host=ha.myhouse.example.com
house.port=3480
house.timeout=30000
You can also declare multiple MiOS Units, as illustrated in this example.
houseUpstairs.host=ha-upstairs.myhouse.example.com
houseDownstairs.host=ha-downstairs.myhouse.example.com
🔦 The MiOS Unit name is case-sensitive, and may only contain AlphaNumeric characters. The leading character must be an [ASCII] alpha.
Back to Table of Contents
Transformations
Internally, the MiOS Binding uses the openHAB Transformation Service. The MiOS Binding supplies a number of pre-configured MAP Transformations for the common use-cases.
🚨🔧 These transformations must be copied from the source-code repository:
features/openhab-addons-external/src/main/resources/transform/mios*.map
and placed into your openHAB installation under the transform directory. If you have a Unix machine, the MAP files can also be downloaded using:
sudo apt-get install subversion
svn checkout
🔦 These transformations can be readily extended by the user, for any use-cases that aren't covered by those pre-configured & shipped with the Binding.
Back to Table of Contents
Item Configuration
The MiOS Binding provides a few sources of data from the target MiOS Unit.
These can be categorized into the following data values:

- MiOS Device UPnP State Variables
- MiOS Device Attributes
- MiOS Scene Attributes
- MiOS System Attributes

The examples below illustrate the form of each. The general form of these bindings is:

mios="unit:<unitName>,<miosThing>{,command:<commandTransform>}{,in:<inTransform>}{,out:<outTransform>}"

In many cases, only a subset of these parameters need to be specified/used, with internal defaults applied for the common use-cases. The sections below describe the types of things that can be bound, in addition to the transformations that are permitted, and any default transformations that may be applied for you.

Item Generation : MiOS Item Generator

🚨🔧 The MiOS Item Generator is a free-standing tool that generates an initial openHAB Items file for a MiOS Unit. After the initial generation the openHAB Items file can be customized, or can be regenerated, as Devices are added/removed from the MiOS Unit.

🔦 The Item Generator examples use a MiOS Unit name of "house". This name must match the MiOS Unit name declared in the MiOS Unit configuration. Any name can be used, as long as it is kept in sync across the configuration files.

Back to Table of Contents

Item : MiOS Device Binding - Values (Reading)

Device Bindings can be read-only, with data flowing from the MiOS Unit into openHAB. Device Bindings have the form:

mios="unit:<unitName>,device:<deviceId>/service/<serviceURN>/<serviceVariable>

or

mios="unit:<unitName>,device:<deviceId>/service/<serviceAlias>/<serviceVariable>

With examples like:

Number MiOSMemoryUsed "Used [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/urn:cd-jackson-com:serviceId:SystemMonitor/memoryUsed"}
Number MiOSMemoryAvailable "Available [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/urn:cd-jackson-com:serviceId:SystemMonitor/memoryAvailable"}
Number MiOSMemoryCached "Cached [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/urn:cd-jackson-com:serviceId:SystemMonitor/memoryCached"}
Number MiOSMemoryBuffers "Buffers [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/urn:cd-jackson-com:serviceId:SystemMonitor/memoryBuffers"}
String MiOSCMHLastRebootLinux "Reboot [%s]" (BindingDemo) {mios="unit:house,device:382/service/urn:cd-jackson-com:serviceId:SystemMonitor/cmhLastRebootTime"}
String MiOSMemoryUsedString "Memory Used [%s KB]" (BindingDemo) {mios="unit:house,device:382/service/urn:cd-jackson-com:serviceId:SystemMonitor/memoryUsed"}

or, since we've internally alias'd the UPnP Service Id that Chris used, you can also use:

Number MiOSMemoryUsed "Used [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/SystemMonitor/memoryUsed"}
Number MiOSMemoryAvailable "Available [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/SystemMonitor/memoryAvailable"}
Number MiOSMemoryCached "Cached [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/SystemMonitor/memoryCached"}
Number MiOSMemoryBuffers "Buffers [%.0f KB]" (BindingDemo) {mios="unit:house,device:382/service/SystemMonitor/memoryBuffers"}
String MiOSCMHLastRebootLinux "Reboot [%s]" (BindingDemo) {mios="unit:house,device:382/service/SystemMonitor/cmhLastRebootTime"}
String MiOSMemoryUsedString "Memory Used [%s KB]" (BindingDemo) {mios="unit:house,device:382/service/SystemMonitor/memoryUsed"}

Or you can replace the Weather information, from the openHAB demo.items file, with contents from the Weather Underground (WUI) Plugin from MiOS:
Number Weather_Temperature "Outside Temperature [%.1f °F]" <temperature> (Weather_Chart) {mios="unit:house,device:318/service/TemperatureSensor1/CurrentTemperature"}

or, you can track the status of a Light Switch or perhaps a Dimmer:

Number HallLightAsSwitch "On/Off [%1d]" (BindingDemo) {mios="unit:house,device:11/service/SwitchPower1/Status"}
Number HallLightAsDimmer "Level [%3d]" (BindingDemo) {mios="unit:house,device:11/service/Dimming1/LoadLevelStatus"}

The serviceAliases are built into the MiOS Binding and may be expanded over time, as feedback is received. Each Alias is case-sensitive, and there can be multiple Aliases for a single UPnP ServiceId. For example, as used throughout this document, SystemMonitor aliases urn:cd-jackson-com:serviceId:SystemMonitor, while SwitchPower1 and Dimming1 alias the standard urn:upnp-org:serviceId:SwitchPower1 and urn:upnp-org:serviceId:Dimming1 ServiceIds.

Back to Table of Contents

Item : MiOS Scene Binding - Values (Reading)

Scene Bindings are read-only, with data flowing from the MiOS Unit into openHAB. Scene Bindings have the form:

mios="unit:<unitName>,scene:<sceneId>/<attributeName>

With examples like:

Number SceneGarageOpenId (GScene) {mios="unit:house,scene:109/id"}
Number SceneGarageOpenStatus (GScene) {mios="unit:house,scene:109/status"}
String SceneGarageOpenActive (GScene) {mios="unit:house,scene:109/active"}

Back to Table of Contents

Item : MiOS System Binding

System Bindings are read-only, with data flowing from the MiOS Unit into openHAB. System Bindings have the form:

mios="unit:<unitName>,system:/<attributeName>

With examples like:

Number SystemZWaveStatus "[%d]" (GSystem) {mios="unit:house,system:/ZWaveStatus"}
String SystemLocalTime "[%s]" (GSystem) {mios="unit:house,system:/LocalTime"}
String SystemTimeStamp "[%s]" (GSystem) {mios="unit:house,system:/TimeStamp"}
String SystemUserDataDataVersion "[%s]" (GSystem) {mios="unit:house,system:/UserData_DataVersion"}
Number SystemDataVersion "[%d]" (GSystem) {mios="unit:house,system:/DataVersion"}
String SystemLoadTime "[%s]" (GSystem) {mios="unit:house,system:/LoadTime"}

Back to Table of Contents

Transformations

Sometimes the value presented by the binding isn't in the format that you require for your Item. For these cases, the binding provides access to the standard openHAB Transformation Service. To utilize the Transformation Service, you need to declare additional settings on your bindings. These take the form of the in: and out: declarations at the end of the binding. The in: declaration is used when values are received from the MiOS Unit, before the binding places the value into openHAB. The out: declaration is used when values are taken from the openHAB system to be delivered to the MiOS Unit (in Command execution, for example).

mios="unit:<unitName>,<miosThing>{,in:<inTransform>}{,out:<outTransform>}"

As you can see by the above declaration, the input and output transformations are optional. If they aren't declared, then an internal, automated, transformation will be attempted based upon the Type of the Item being bound and, in some cases, the type of MiOS Attribute and/or State Variable involved in the binding.

With examples like:

String SystemZWaveStatusString "ZWave Status String [%d]" (GSystem) {mios="unit:house,system:/ZWaveStatus,in:MAP(miosZWaveStatusIn.map)"}
Contact LivingRoomZoneTripped "Living Room (Zone 2) [%s]" <contact> (GContact,GWindow,GPersist) {mios="unit:house,device:117/service/SecuritySensor1/Tripped,in:MAP(miosContactIn.map)"}

and a map transform file like transform/miosZWaveStatusIn.map:

1=Cool Bananas
0=In the Dog house
-=Your guess is as good as mine!

and a map transform file like transform/miosSwitchIn.map:

1=OPEN
0=CLOSED

Then as data flows from the MiOS system, data for these items will be transformed into the new String format for display and/or rule purposes.
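No out: example ships in the snippet above, but an outbound map is simply the mirror image of an inbound one. A minimal sketch (the miosSwitchOut.map file name and this particular binding are illustrative, not part of the shipped MAP set):

ON=1
OFF=0

Switch HallLight "Hall Light" {mios="unit:house,device:11/service/SwitchPower1/Status,out:MAP(miosSwitchOut.map)"}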
To ease the setup process, the common MiOS entities have internal defaults for these parameters. This aids in keeping the binding string simple for typical use-case scenarios. Internal default transformations are defined for both Device bindings and Scene bindings.

For users wanting more advanced configurations, the openHAB Transformation Service provides a number of other transforms that may be of interest:

- JS(example.js) - run the JavaScript to perform the conversion.
- MAP(example.map) - Transform using the static, file-based, conversion.
- XSLT(example.xslt) - Transform using an XSLT transformation.
- EXEC(...) - Transform using the OS-level script.
- REGEX(...) - Transform using the supplied Regular Expression, using capture markers ( and ) around the value to be extracted.
- XPATH(...) - Transform using the supplied XPath Expression.

More reading on these is available in the openHAB Wiki.

Back to Table of Contents

Item Commands (Reacting)

By default, openHAB will send Commands to the Controls that have been outlined in the associated sitemaps/*.sitemap file. The Commands sent depend upon the type of Control that's been bound to the Item. Through observation, the following commands are commonly sent:

- Switch - ON, OFF (when bound to a Switch Item)
- Switch - TOGGLE (when bound to a Contact Item)
- Switch - ON (when autoupdate="false" is also present in the binding list)
- Slider - INCREASE, DECREASE, <PCTNumber>

MiOS Units don't natively handle these Commands, so a mapping step must occur before openHAB Commands can be executed by a MiOS Unit. Additionally, since MiOS Bindings are read-only by default, we must add a parameter to indicate we want data to flow back to the MiOS Unit. The command: Binding parameter is used to specify that we want data to flow back to the MiOS unit, as well as how to perform the required mapping. For most Items bound using the MiOS Binding, internal defaults will take care of the correct command:, in: and out: parameters. These need only be specified if you have something not handled by the internal defaults, or wish to override them with custom behavior.

Back to Table of Contents

Item : MiOS Device Binding - Commands (Reacting)

For MiOS Devices, this parameter can take one of several forms:

mios="unit:<unitName>,device:<deviceId>/service/<UPnPVariable>,command:{<CommandMap>}"

With definitions as:

<CommandMap> is <blank> OR;
<CommandMap> is <InlineCommandMap> OR;
<CommandMap> is <openHABTransform> ( <TransformParams> )
<InlineCommandMap> is <openHABCommandMap> { | <openHABCommandMap> }*
<openHABCommandMap> is <openHABCommand> { = <UPnPAction> }
<openHABCommand> is ON, OFF, INCREASE, DECREASE, etc., or the special value _defaultCommand
<UPnPVariable> is <ServiceName> / <ServiceVariable> OR;
<UPnPVariable> is <ServiceAlias> / <ServiceVariable>
<openHABTransform> is MAP, XSLT, EXEC, XPATH, etc.
<BoundValue> is ?, ??, ?++, ?--

Back to Table of Contents

Device Command Binding Examples (Parameterless)

In practice, when discrete commands are being sent by openHAB, the map is fairly simple. In the examples listed below, the *.map files are provided and can be downloaded per the Transformations setup descriptions.
Back to Table of Contents

A Switch…

You might start off with an inline definition of the mapping:

Switch FamilyTheatreLightsStatus "Family Theatre Lights" (GSwitch) {mios="unit:house,device:13/service/SwitchPower1/Status,command:ON=SwitchPower1/SetTarget(newTargetValue=1)|OFF=SwitchPower1/SetTarget(newTargetValue=0),in:MAP(miosSwitchIn.map)"}

And then reduce it to the internal default map, but specify that you only want to handle ON and OFF Commands:

Switch FamilyTheatreLightsStatus "Family Theatre Lights" (GSwitch) {mios="unit:house,device:13/service/SwitchPower1/Status,command:ON|OFF,in:MAP(miosSwitchIn.map)"}

or, more simply, use the internal defaults altogether:

Switch FamilyTheatreLightsStatus "Family Theatre Lights" (GSwitch) {mios="unit:house,device:13/service/SwitchPower1/Status"}

Back to Table of Contents

An Armed Sensor…

The simple version, using internal defaults for the SecuritySensor1/Armed service state of the Device:

Switch LivingRoomZoneArmed "Zone Armed [%s]" {mios="unit:house,device:117/service/SecuritySensor1/Armed"}

or the fully spelled out version:

Switch LivingRoomZoneArmed "Zone Armed [%s]" {mios="unit:house,device:117/service/SecuritySensor1/Armed,command:MAP(miosArmedCommand.map),in:MAP(miosSwitchIn.map)"}

Back to Table of Contents

A Lock…

The simple version, using internal defaults for the DoorLock1/Status service state of the Device:

Switch GarageDeadboltDStatus "Garage Deadbolt" (GLock,GSwitch) {mios="unit:house,device:189/service/DoorLock1/Status"}

or the full version:

Switch GarageDeadboltDStatus "Garage Deadbolt" (GLock,GSwitch) {mios="unit:house,device:189/service/DoorLock1/Status,command:MAP(miosLockCommand.map),in:MAP(miosSwitchIn.map)"}

Back to Table of Contents

Device Command Binding Examples (Parameterized)

For some Commands, in order to pass this information to the remote MiOS Unit, we need to know either the current value of the Item, or we need to know the current value of the Command. To do this, we introduce the <BoundValue> parameter that, when present in the mapped-command, will be expanded prior to being sent to the MiOS Unit:

- ?++ - Item Value + 10
- ?-- - Item Value - 10
- ? - Item Value
- ?? - Command Value

Additionally, since <PCTNumber> is just a value, it won't match any of the entries in our Mapping file, so we introduce a magic key, _defaultCommand. We first attempt to do a literal mapping and, if that doesn't find a match, we go look for this magic key and use its entry.

Back to Table of Contents

A Dimmer, Volume Control, Speed controlled Fan…

The simple version, using internal defaults for the Dimming1/LoadLevelStatus service state of the Device:

Dimmer MasterCeilingFanLoadLevelStatus "Master Ceiling Fan [%d]%" <slider> (GDimmer) {mios="unit:house,device:101/service/Dimming1/LoadLevelStatus"}

or the full version:

Dimmer MasterCeilingFanLoadLevelStatus "Master Ceiling Fan [%d]%" <slider> (GDimmer) {mios="unit:house,device:101/service/Dimming1/LoadLevelStatus,command:MAP(miosDimmerCommand.map)"}

Since Dimmer Items in openHAB can be sent INCREASE, DECREASE or <PCTNumber> as the command, the mapping file must account for both the static commands (INCREASE, DECREASE) as well as the possibility of a Command Value being sent.
The miosDimmerCommand.map file has a definition that handles this situation:

INCREASE=urn:upnp-org:serviceId:Dimming1/SetLoadLevelTarget(newLoadlevelTarget=?++)
DECREASE=urn:upnp-org:serviceId:Dimming1/SetLoadLevelTarget(newLoadlevelTarget=?--)
_defaultCommand=urn:upnp-org:serviceId:Dimming1/SetLoadLevelTarget(newLoadlevelTarget=??)

Back to Table of Contents

A Thermostat…

A Thermostat is composed of a number of pieces. Each piece must be first bound to openHAB Items, and then a number of mappings must be put in place. Since all the components of a Thermostat have reasonable internal defaults, we'll use the simpler form for our Item definitions in openHAB:

/* Thermostat Upstairs */
Number ThermostatUpstairsId "ID [%d]" {mios="unit:house,device:335/id"}
String ThermostatUpstairsDeviceStatus "Device Status [%s]" (GThermostatUpstairs) {mios="unit:house,device:335/status"}
Number ThermostatUpstairsCurrentTemperature "Upstairs Temperature [%.1f °F]" <temperature> (GThermostatUpstairs, GTemperature) {mios="unit:house,device:335/service/TemperatureSensor1/CurrentTemperature"}
Number ThermostatUpstairsHeatCurrentSetpoint "Heat Setpoint [%.1f °F]" <temperature> (GThermostatUpstairs) {mios="unit:house,device:335/service/TemperatureSetpoint1_Heat/CurrentSetpoint"}
Number ThermostatUpstairsCoolCurrentSetpoint "Cool Setpoint [%.1f °F]" <temperature> (GThermostatUpstairs) {mios="unit:house,device:335/service/TemperatureSetpoint1_Cool/CurrentSetpoint"}
String ThermostatUpstairsFanMode "Fan Mode" (GThermostatUpstairs) {mios="unit:house,device:335/service/HVAC_FanOperatingMode1/Mode"}
String ThermostatUpstairsFanStatus "Fan Status [%s]" (GThermostatUpstairs) {mios="unit:house,device:335/service/HVAC_FanOperatingMode1/FanStatus"}
String ThermostatUpstairsModeStatus "Mode Status" (GThermostatUpstairs) {mios="unit:house,device:335/service/HVAC_UserOperatingMode1/ModeStatus"}
String ThermostatUpstairsModeState "Mode State [%s]" (GThermostatUpstairs) {mios="unit:house,device:335/service/HVAC_OperatingState1/ModeState"}
Number ThermostatUpstairsBatteryLevel "Battery Level [%d] %" (GThermostatUpstairs) {mios="unit:house,device:335/service/HaDevice1/BatteryLevel"}
DateTime ThermostatUpstairsBatteryDate "Battery Date [%1$ta, %1$tm/%1$te %1$tR]" <calendar> (GThermostatUpstairs) {mios="unit:house,device:335/service/HaDevice1/BatteryDate"}
DateTime ThermostatUpstairsLastUpdate "Last Update [%1$ta, %1$tm/%1$te %1$tR]" <calendar> (GThermostatUpstairs) {mios="unit:house,device:335/service/HaDevice1/LastUpdate"}

and these need to be paired with similar entries in the sitemap file:

Text item=ThermostatUpstairsCurrentTemperature {
  Text item=ThermostatHumidityUpstairsCurrentLevel
  Setpoint item=ThermostatUpstairsHeatCurrentSetpoint minValue=40 maxValue=80
  Setpoint item=ThermostatUpstairsCoolCurrentSetpoint minValue=40 maxValue=80
  Switch item=ThermostatUpstairsFanMode mappings=[ContinuousOn="On", Auto="Auto"]
  Text item=ThermostatUpstairsFanStatus
  Switch item=ThermostatUpstairsModeStatus mappings=[HeatOn="Heat", CoolOn="Cool", AutoChangeOver="Auto", Off="Off"]
  Text item=ThermostatUpstairsModeState
  Text item=ThermostatUpstairsBatteryLevel
  Text item=ThermostatUpstairsBatteryDate
}

Back to Table of Contents

Item : MiOS Scene Binding - Commands (Reacting)

MiOS Scenes are parameterless. They can only be requested to execute, and they provide status updates as attribute values during their execution (status) or if they're currently active (active).
For MiOS Scenes, the command: parameter has a simpler form:

mios="unit:<unitName>,scene:<sceneId>{/<SceneAttribute>},command:{<CommandList>}{,in:<inTransform>}"

With definitions as:

<CommandList> is <blank> OR;
<CommandList> is <openHABCommand> { | <openHABCommand> }*
<openHABCommand> is ON, OFF, INCREASE, DECREASE, TOGGLE, etc.

Back to Table of Contents

Scene Command Binding Examples

In general Scenes tend to look like:

String SceneMasterClosetLights "Master Closet Lights Scene" <sofa> (GScene) {mios="unit:house,scene:109/status", autoupdate="false"}

Or if you want the Scene executed upon receipt of ON or TOGGLE Commands:

String SceneMasterClosetLights "Master Closet Lights Scene" <sofa> (GScene) {mios="unit:house,scene:109/status,command:ON|TOGGLE", autoupdate="false"}

🔦 Here we've added an additional configuration to the binding declaration, autoupdate="false", to ensure the Switch no longer has the ON and OFF States automatically managed. In openHAB, this declaration ensures that the UI rendition appears like a Button.

Back to Table of Contents

MiOS Binding and MiOS Action Examples

Users typically have configurations falling into one or more of the following categories, which will be used to outline any subsequent examples:

- Augmenting - openHAB Rules that "add" to existing MiOS Scenes.
- Co-existing - Replacing MiOS Scenes with openHAB Rules, but keeping the Devices.

Examples for Augmenting

Adding Notifications and Text-to-Speech (TTS) when a House Alarm is triggered

MiOS has a standardized definition that most Alarm Panel plugins adhere to (DSC, Ademco, GE Caddx, Paradox, etc). This exposes a standardized UPnP-style attribute, AlarmPartition2/Alarm, for the Alarm System being in active Alarm mode. It has the value None or Alarm. Here we check the specific transition between those two states, as we want to avoid being re-notified when the Uninitialized » Alarm state transition occurs, should openHAB restart.

Item declaration (house.items):

String AlarmArea1Alarm "Alarm Area 1 Alarm [%s]" (GAlarmArea1) {mios="unit:house,device:228/service/AlarmPartition2/Alarm"}
String AlarmArea1ArmMode "Alarm Area 1 Arm Mode [%s]" (GAlarmArea1) {mios="unit:house,device:228/service/AlarmPartition2/ArmMode"}
String AlarmArea1LastUser "Alarm Area 1 Last User [%s]" (GAlarmArea1) {mios="unit:house,device:228/service/AlarmPartition2/LastUser"}

Rule declaration (house-alarm.rules):

rule "Alarm Panel Breach"
when
  Item AlarmArea1Alarm changed to Active
then
  pushNotification("House-Alarm", "House in ALARM!! Notification")
  say("Alert: House in Alarm Notification")
end

rule "Alarm Panel Armed (Any)"
when
  Item AlarmArea1ArmMode changed from Disarmed to Armed
then
  say("Warning! House Armed Notification")
  // Perform deferred notifications, as the User.state may not have been processed yet.
  createTimer(now.plusSeconds(1)) [
    logDebug("house-alarm", "Alarm-Panel-Armed-Any Deferred notification")
    var user = AlarmArea1LastUser.state as StringType
    if (user == null) user = "user unknown"
    pushNotification("House-Armed", "House Armed Notification (" + user + ")")
  ]
end

rule "Alarm Panel Disarmed (Fully)"
when
  Item AlarmArea1ArmMode changed from Armed to Disarmed
then
  say("Warning! House Disarmed Notification")
  // Perform deferred notifications, as the User.state may not have been processed yet.
  createTimer(now.plusSeconds(1)) [
    logDebug("house-alarm", "Alarm-Panel-Disarmed-Fully Deferred notification")
    var user = AlarmArea1LastUser.state as StringType
    if (user == null) user = "user unknown"
    pushNotification("House-Disarmed", "House Disarmed Notification (" + user + ")")
  ]
end

Adding Notifications when Battery Powered devices are running low

MiOS Systems standardize Battery Level indications (0-100%) for all battery-powered devices (Alarm sensors, Z-Wave Door Locks, etc), and Nest uses a simple "ok" String to represent the Battery Status.

Item declaration (house.items):

Number GarageDeadboltBatteryLevel "Garage Deadbolt Battery Level [%d %%]" <energy> (GBattery,GPersist) {mios="unit:house,device:189/service/HaDevice1/BatteryLevel"}
Number HallCupboardZoneBatteryLevel "Hall Cupboard Battery Level [%d %%]" <energy> (GBattery,GPersist) {mios="unit:house,device:301/service/HaDevice1/BatteryLevel"}
Number EXTFrontMotionZoneBatteryLevel "EXT Front Motion Zone Battery Level [%d %%]" <energy> (GBattery,GPersist) {mios="unit:house,device:302/service/HaDevice1/BatteryLevel"}
Number EXTRearMotionZoneBatteryLevel "EXT Rear Motion Battery Level [%d %%]" <energy> (GBattery,GPersist) {mios="unit:house,device:396/service/HaDevice1/BatteryLevel"}
String NestSmokeGuestBedroom_battery_health "Guest Bedroom Smoke Battery Health [%s]" <energy> (GSmoke,GBattery,GPersist) {nest="<[smoke_co_alarms(Guest Bedroom).battery_health]"}

Rule declaration (house-battery.rules):

val Number LOW_BATTERY_THRESHOLD = 60 // for Z-Wave Battery devices
val String OK_BATTERY_STATE = 'ok' // for Nest Thermostat and Protect devices

rule "Low Battery Alert"
when
  Time cron "0 0 8,12,20 * * ?"
then
  GBattery?.members.filter(s|s.state instanceof DecimalType).forEach[item |
    var Number level = item.state as Number
    var String name = item.name
    if (level < LOW_BATTERY_THRESHOLD) {
      logInfo('Low-Battery-Alert', 'Bad: ' + name)
      pushNotification("Low-Battery-Alert", "House Low Battery Notification (" + name + ")")
    } else {
      logDebug('Low-Battery-Alert', 'Good: ' + name)
    }
  ]
  GBattery?.members.filter(s|s.state instanceof StringType).forEach[item |
    var String level = (item.state as StringType).toString
    var String name = item.name
    if (level != OK_BATTERY_STATE) {
      logInfo('Low-Battery-Alert', 'Bad: ' + name)
      pushNotification("Low-Battery-Alert", "House Low Battery Notification (" + name + ")")
    } else {
      logDebug('Low-Battery-Alert', 'Good: ' + name)
    }
  ]
end

Examples for Co-existing

When Motion detected turn Lights ON (OFF after 5 minutes)

This is typical of a declarative Scene in MiOS. In this case, the lights are left on for 5 minutes, and if new motion is detected in that time, another 5 minute clock is started. The logging can be removed as needed.
Item declaration (house.items):

Group GSwitch All
Switch MasterClosetLightsStatus "Master Closet Lights" (GSwitch) {mios="unit:house,device:391/service/SwitchPower1/Status"}
Switch MasterClosetFibaroLightStatus "Master Closet Fibaro Light" (GSwitch) {mios="unit:house,device:431/service/SwitchPower1/Status"}

Rule declaration (house-master.rules):

import org.openhab.model.script.actions.Timer
import java.util.concurrent.locks.ReentrantLock

val int MCL_DELAY_SECONDS = 300
var Timer mclTimer = null
var ReentrantLock mclLock = new ReentrantLock(false)

rule "Master Closet Motion"
when
  Item MasterClosetZoneTripped changed from CLOSED to OPEN
then
  logInfo("house-master", "Master-Closet-Motion Timer lights ON")
  sendCommand(MasterClosetLightsStatus, ON)
  sendCommand(MasterClosetFibaroLightStatus, ON)
  mclLock.lock
  if (mclTimer != null) {
    mclTimer.cancel
    logInfo("house-master", "Master-Closet-Motion Timer Cancel")
  }
  mclTimer = createTimer(now.plusSeconds(MCL_DELAY_SECONDS)) [
    logInfo("house-master", "Master-Closet-Motion Timer lights OFF")
    sendCommand(MasterClosetLightsStatus, OFF)
    sendCommand(MasterClosetFibaroLightStatus, OFF)
  ]
  mclLock.unlock
end

When Motion detected turn Lights ON (if nighttime) and OFF after 5 minutes

Item declaration (sunrise.items):

DateTime ClockDaylightStart "Daylight Start [%1$tH:%1$tM]" <calendar> {astro="planet=sun, type=daylight, property=start, offset=-30"}
DateTime ClockDaylightEnd "Daylight End [%1$tH:%1$tM]" <calendar> {astro="planet=sun, type=daylight, property=end, offset=+30"}

Item declaration (house.items):

Switch KitchenSinkLightStatus "Kitchen Sink Light" (GSwitch) {mios="unit:house,device:99/service/SwitchPower1/Status"}
Switch KitchenPantryLightStatus "Kitchen Pantry Light" (GSwitch) {mios="unit:house,device:425/service/SwitchPower1/Status"}
Switch PowerHotWaterPumpStatus "Power Hot Water Pump" (GSwitch) {mios="unit:house,device:303/service/SwitchPower1/Status"}
Switch KitchenPantryZoneArmed "Zone Armed [%s]" {mios="unit:house,device:426/service/SecuritySensor1/Armed"}

Rule declaration (house-kitchen.rules):

import org.openhab.model.script.actions.Timer
import java.util.concurrent.locks.ReentrantLock

val int K_DELAY_SECONDS = 240
var Timer kTimer = null
var ReentrantLock kLock = new ReentrantLock(false)

rule "Kitchen Motion"
when
  Item KitchenMotionZoneTripped changed from CLOSED to OPEN
then
  logInfo("house-kitchen", "Kitchen-Motion Timer ON")
  // Ignore this Rule if the Motion sensor is bypassed.
  if (KitchenMotionZoneArmed.state != ON) {
    logInfo("house-kitchen", "Kitchen-Motion Not Armed, skipping")
    return void
  }
  val DateTime daylightStart = new DateTime((ClockDaylightStart.state as DateTimeType).getCalendar)
  val DateTime daylightEnd = new DateTime((ClockDaylightEnd.state as DateTimeType).getCalendar)
  var boolean night = daylightStart.isAfterNow || daylightEnd.isBeforeNow
  if (night) {
    logInfo("house-kitchen", "Kitchen-Motion Night Time")
    sendCommand(KitchenSinkLightStatus, ON)
    sendCommand(KitchenPantryLightStatus, ON)
  }
  logInfo("house-kitchen", "Kitchen-Motion Any Time")
  sendCommand(PowerHotWaterPumpStatus, ON)
  kLock.lock
  if (kTimer != null) {
    kTimer.cancel
    logInfo("house-kitchen", "Kitchen-Motion Timer Cancel")
  }
  kTimer = createTimer(now.plusSeconds(K_DELAY_SECONDS)) [
    logInfo("house-kitchen", "Kitchen-Motion Timer OFF")
    sendCommand(KitchenSinkLightStatus, OFF)
    sendCommand(KitchenPantryLightStatus, OFF)
    sendCommand(PowerHotWaterPumpStatus, OFF)
  ]
  kLock.unlock
end

When opening/closing Windows keep Nest Away state in sync to save energy.
Explicitly check for the OPEN » CLOSED state transition, to avoid issues when openHAB restarts (Uninitialized » OPEN) or when duplicate values come in from the MiOS System (OPEN » OPEN).

Item declaration (house.items):

Group GPersist (All)
Group GWindow "All Windows [%d]" <contact> (GContact)
Contact LivingRoomZoneTripped "Living Room (Zone 2) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:117/service/SecuritySensor1/Tripped"}
Contact KitchenZoneTripped "Kitchen (Zone 3) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:118/service/SecuritySensor1/Tripped"}
Contact FamilyRoomZoneTripped "Family Room (Zone 5) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:120/service/SecuritySensor1/Tripped"}
Contact MasterBedroomZoneTripped "Master Bedroom (Zone 8) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:122/service/SecuritySensor1/Tripped"}
Contact Bedroom3ZoneTripped "Bedroom #3 (Zone 9) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:123/service/SecuritySensor1/Tripped"}
Contact Bedroom2ZoneTripped "Bedroom #2 (Zone 10) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:124/service/SecuritySensor1/Tripped"}
Contact GuestBathZoneTripped "Guest Bathroom (Zone 11) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:125/service/SecuritySensor1/Tripped"}
Contact StairsWindowsZoneTripped "Stairs Windows (Zone 12) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:126/service/SecuritySensor1/Tripped"}
Contact MasterBath1ZoneTripped "Master Bath (Zone 19) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:133/service/SecuritySensor1/Tripped"}
Contact MasterBath2ZoneTripped "Master Bath (Zone 20) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:134/service/SecuritySensor1/Tripped"}
Contact MasterBath3ZoneTripped "Master Bath (Zone 21) [MAP(en.map):%s]" <contact> (GWindow,GPersist) {mios="unit:house,device:135/service/SecuritySensor1/Tripped"}

Rule declaration (house.rules):

rule "Windows Closed (all)"
when
  Item Bedroom2ZoneTripped changed from OPEN to CLOSED or
  Item Bedroom3ZoneTripped changed from OPEN to CLOSED or
  Item FamilyRoomZoneTripped changed from OPEN to CLOSED or
  Item GuestBathZoneTripped changed from OPEN to CLOSED or
  Item KitchenZoneTripped changed from OPEN to CLOSED or
  Item LivingRoomZoneTripped changed from OPEN to CLOSED or
  Item MasterBath1ZoneTripped changed from OPEN to CLOSED or
  Item MasterBath2ZoneTripped changed from OPEN to CLOSED or
  Item MasterBath3ZoneTripped changed from OPEN to CLOSED or
  Item MasterBedroomZoneTripped changed from OPEN to CLOSED or
  Item StairsWindowsZoneTripped changed from OPEN to CLOSED
then
  if (GWindow.members.filter(s|s.state==OPEN).size == 0) {
    say("Attention: All Windows closed.")
    sendCommand(Nest_away, "home")
  }
end

rule "Windows Opened (any)"
when
  Item Bedroom2ZoneTripped changed from CLOSED to OPEN or
  Item Bedroom3ZoneTripped changed from CLOSED to OPEN or
  Item FamilyRoomZoneTripped changed from CLOSED to OPEN or
  Item GuestBathZoneTripped changed from CLOSED to OPEN or
  Item KitchenZoneTripped changed from CLOSED to OPEN or
  Item LivingRoomZoneTripped changed from CLOSED to OPEN or
  Item MasterBath1ZoneTripped changed from CLOSED to OPEN or
  Item MasterBath2ZoneTripped changed from CLOSED to OPEN or
  Item MasterBath3ZoneTripped changed from CLOSED to OPEN or
  Item MasterBedroomZoneTripped changed from CLOSED to OPEN or
  Item StairsWindowsZoneTripped changed from CLOSED to OPEN
then
  if (GWindow.members.filter(s|s.state==OPEN).size == 1) {
    say("Attention: First Window opened.")
    sendCommand(Nest_away, "away")
  }
end

Examples for Replacing

Publishing data to SmartEnergyGroups.com (SEG)

Here's how to replace the MiOS-side SEG publishing with openHAB functionality:

Item declaration (house.items):

Group GPersist (All)
Group GMonitor (All)
Group GMonitorTemperature (GMonitor)
Group GMonitorHumidity (GMonitor)
Group GMonitorPower (GMonitor)
Group GMonitorEnergy (GMonitor)
Number WeatherTemperatureCurrentTemperature "Outside [%.1f °F]" <temperature> (GPersist,GMonitorTemperature) {mios="unit:house,device:318/service/TemperatureSensor1/CurrentTemperature"}
Number WeatherLowTemperatureCurrentTemperature "Outside Low [%.1f °F]" <temperature> (GPersist,GMonitorTemperature) {mios="unit:house,device:319/service/TemperatureSensor1/CurrentTemperature"}
Number WeatherHighTemperatureCurrentTemperature "Outside High [%.1f °F]" <temperature> (GPersist,GMonitorTemperature) {mios="unit:house,device:320/service/TemperatureSensor1/CurrentTemperature"}
Number WeatherHumidityCurrentLevel "Outside Humidity [%d %%]" (GPersist,GMonitorHumidity) {mios="unit:house,device:321/service/HumiditySensor1/CurrentLevel"}
Number NestTStatUpstairs_humidity "Humidity [%d %%]" (GPersist,GMonitorHumidity) {nest="<[thermostats(Upstairs).humidity]"}
Number NestTStatUpstairs_ambient_temperature_f "Upstairs [%.1f °F]" <temperature> (GPersist,GMonitorTemperature) {nest="<[thermostats(Upstairs).ambient_temperature_f]"}
Number NestTStatDownstairs_humidity "Humidity [%d %%]" (GPersist,GMonitorHumidity) {nest="<[thermostats(Downstairs).humidity]"}
Number NestTStatDownstairs_ambient_temperature_f "Downstairs [%.1f °F]" <temperature> (GPersist,GMonitorTemperature) {nest="<[thermostats(Downstairs).ambient_temperature_f]"}

Persistence declaration (rrd4j.persist):

Strategies {
  // for rrd charts, we need a cron strategy
  everyMinute : "0 * * * * ?"
  everyDay : "0 0 23 * * ?"
}

Items {
  SystemDataVersion, SystemUserDataDataVersion, SystemTimeStamp, SystemLocalTime, SystemLoadTime : strategy = everyDay
  GPersist* : strategy = everyChange, everyMinute, restoreOnStartup
  GTemperature* : strategy = everyMinute, restoreOnStartup
}

Rule declaration (seg.rules):

import java.util.Locale

rule "Log Data to SmartEnergyGroups (SEG)"
when
  Time cron "0 0/2 * * * ?"
  or Item NestTStatUpstairs_ambient_temperature_f changed
  or Item NestTStatDownstairs_ambient_temperature_f changed
  or Item WeatherTemperatureCurrentTemperature changed
  or Item WeatherLowTemperatureCurrentTemperature changed
  or Item WeatherHighTemperatureCurrentTemperature changed
  or Item NestTStatUpstairs_humidity changed
  or Item NestTStatDownstairs_humidity changed
  or Item WeatherHumidityCurrentLevel changed
then
  val String SEG_SITE = "<yourSiteKeyHere>"
  val String SEG_URL = ""
  val String NODE_NAME = "openHAB"
  val Locale LOCALE = Locale::getDefault
  var String segData = ""

  GMonitorTemperature?.members.forEach(item |
    segData = segData + String::format(LOCALE, "(t_%s %s)", item.name, (item.state as Number).toString)
  )
  GMonitorHumidity?.members.forEach(item |
    segData = segData + String::format(LOCALE, "(h_%s %s)", item.name, (item.state as Number).toString)
  )
  GMonitorPower?.members.forEach(item |
    segData = segData + String::format(LOCALE, "(p_%s %s)", item.name, (item.state as Number).toString)
  )
  GMonitorEnergy?.members.forEach(item |
    segData = segData + String::format(LOCALE, "(e_%s %s)", item.name, (item.state as Number).toString)
  )

  segData = String::format("(site %s (node %s ? %s))", SEG_SITE, NODE_NAME, segData)
  sendHttpPostRequest(SEG_URL, "application/x-www-form-urlencoded", segData)
end
https://docs.openhab.org/v2.1/addons/bindings/mios1/readme.html
CC-MAIN-2018-30
en
refinedweb
Defines label settings for Doughnut series.

Namespace: DevExpress.XtraCharts
Assembly: DevExpress.XtraCharts.v19.2.dll

C#:
public class DoughnutSeriesLabel : PieSeriesLabel

VB.NET:
Public Class DoughnutSeriesLabel
    Inherits PieSeriesLabel

The DoughnutSeriesLabel class provides label functionality for series of the Doughnut view type. The DoughnutSeriesLabel class inherits properties and methods from the base PieSeriesLabel class, which define the common settings for the labels of pie and doughnut series. An instance of the DoughnutSeriesLabel class can be obtained via the SeriesBase.Label property of a series whose view type is DoughnutSeriesView.
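A minimal usage sketch, following the note above about obtaining the label via SeriesBase.Label (the series name and text pattern are illustrative, not from the reference page):

// Create a series rendered with the Doughnut view type.
Series series = new Series("Sales by Region", ViewType.Doughnut);

// SeriesBase.Label is typed as the base label class; cast it to
// reach the Doughnut-specific label settings.
DoughnutSeriesLabel label = series.Label as DoughnutSeriesLabel;
if (label != null)
{
    // Illustrative: show the argument and the value as a percentage.
    label.TextPattern = "{A}: {VP:P0}";
}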
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.DoughnutSeriesLabel
CC-MAIN-2020-10
en
refinedweb
Namespace: DevExpress.XtraCharts
Assembly: DevExpress.XtraCharts.v19.2.dll

C#:
public class SideBySideBar3DSeriesView : Bar3DSeriesView, ISideBySideBarSeriesView, IBarSeriesView

VB.NET:
Public Class SideBySideBar3DSeriesView
    Inherits Bar3DSeriesView
    Implements ISideBySideBarSeriesView, IBarSeriesView

The SideBySideBar3DSeriesView class provides the functionality of a series view of the 3D Bar type within a chart control. In addition to the common bar view settings inherited from the base Bar3DSeriesView class, the SideBySideBar3DSeriesView class declares the SideBySideBar3DSeriesView.BarDistanceFixed and SideBySideBar3DSeriesView.BarDistance properties, which control the space between adjacent bars. Note that a particular view type can be defined for a series via its SeriesBase.View property. For more information on series views of the simple bar type, please see the Side-by-Side Bar Chart topic.
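A sketch along the same lines, using the SeriesBase.View cast mentioned above (the property values are illustrative; consult the class reference for the exact spacing semantics):

// Create a series rendered with the side-by-side 3D bar view.
Series series = new Series("Units Sold", ViewType.Bar3D);

SideBySideBar3DSeriesView view = series.View as SideBySideBar3DSeriesView;
if (view != null)
{
    // Both properties control the space between adjacent bars,
    // per the description above.
    view.BarDistance = 0.5;
    view.BarDistanceFixed = 20;
}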
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.SideBySideBar3DSeriesView
CC-MAIN-2020-10
en
refinedweb
Lesson 33 - E-shop in ASP.NET Core MVC - Editing Orders 2 - Customer

In the previous lesson, E-shop in ASP.NET Core MVC - Editing Orders, we started editing the order and implemented a part of the payment details editing. Today, we're going to continue with the ASP.NET Core tutorial by editing customer details. We'll complete the project in the next lesson.

Updating Customer Details

We'll use our already completed _PersonPartial view to update the customer details. We'll render the entire form in a dialog box with the help of the jQuery UI dialog. We'll display the changes we made using AJAX again.

Editing the ViewModel

In order to render the _PersonPartial view, we also need to pass a BasePersonViewModel to the page as part of the model. So let's add it to our OrderEditViewModel:

public class OrderEditViewModel : InvoiceViewModel
{
    public BasePersonViewModel PersonVM { get; set; }
}

Controller

In the controller, in the Edit() action, we also need to initialize the newly added PersonVM property. Since we basically use the same code in the Edit() action in AccountController for filling the PersonEditViewModel instance, I recommend moderate refactoring: move the common code, for example, to a new BasePersonViewModel constructor, and use it in both controllers. We'll display the form in a dialog box and update the data in the background using AJAX.
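A minimal sketch of what the Edit() action could look like after that refactoring (the OrdersController context, the _orderManager/GetById() lookup, and the BasePersonViewModel(customer) constructor are assumptions for illustration; they are not code from the lesson):

public IActionResult Edit(int id)
{
    // Hypothetical lookup of the order being edited.
    Order order = _orderManager.GetById(id);
    if (order == null)
        return NotFound();

    var viewModel = new OrderEditViewModel
    {
        // The shared constructor fills the person fields from the customer,
        // mirroring what AccountController does for PersonEditViewModel.
        PersonVM = new BasePersonViewModel(order.Customer)
    };

    return View(viewModel);
}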
https://www.ict.social/csharp/asp-net/core/e-shop/e-shop-in-aspnet-core-mvc-editing-orders-2-customer
CC-MAIN-2020-10
en
refinedweb
Custom panel plugin not being found when running rviz2

I have tried to create a custom ros2 package that will be a panel plugin for rviz2, by following information from "User's Guide to plugin development" as well as "Creating a ROS 2 package", but when running rviz2 it is not finding my custom plugin. I expect to find the custom panel in the rviz GUI under Panels --> Add New Panel, but there is nothing there. How can I ensure that the plugin is found by rviz when it is running? Any suggestions would be helpful!

The package is successfully built with colcon, and when running ros2 pkg list I have verified that ros2 finds the custom package, my_panel_plugin, but I haven't found any other way of troubleshooting what the issue could be. I also tried running rviz with the debug flag, ros2 run rviz2 rviz2 __log_level:=debug, but my only conclusion from that is that my custom plugin is not being found by the plugin loader in the same way as e.g. the rviz_default_plugins. According to the documentation linked above, I need to invoke the PLUGINLIB_EXPORT_CLASS macro for the plugin loader to find the plugin. I believe I have done this correctly, but see the related files for reference: CMakeLists.txt, package.xml and plugin_description.xml.

ROS environment variables:

ROS_PYTHON_VERSION=3
ROS_VERSION_NAME=dashing
ROS_DISTRO=dashing
ROS_VERSION=2

File structure:

rviz_plugins
└── my_panel_plugin
    ├── CMakeLists.txt
    ├── my_panel_plugin.cpp
    ├── my_panel_plugin.h
    ├── package.xml
    └── plugin_description.xml

I've added the following at the end of my_panel_plugin.cpp:

#include <pluginlib/class_list_macros.hpp>
PLUGINLIB_EXPORT_CLASS(sf::rviz_plugins::my_panel_plugin, rviz_common::Panel)

CMakeLists.txt

cmake_minimum_required(VERSION 3.5)
project(my_panel_plugin)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# find dependencies
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(rviz_common REQUIRED)
find_package(Qt5 COMPONENTS Widgets REQUIRED)
find_package(pluginlib REQUIRED)

include_directories(include)

# Qt5 boilerplate options from
set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(CMAKE_AUTOMOC ON)

add_library(${PROJECT_NAME} SHARED
  my_panel_plugin.cpp
)

# Link ament packages
ament_target_dependencies(${PROJECT_NAME} rclcpp rviz_common)
# Link non ament packages
target_link_libraries(${PROJECT_NAME} Qt5::Widgets)

# prevent pluginlib from using boost
target_compile_definitions(my_panel_plugin PUBLIC "PLUGINLIB__DISABLE_BOOST_FUNCTIONS")

install(TARGETS my_panel_plugin
  DESTINATION lib/${PROJECT_NAME}
)

pluginlib_export_plugin_description_file(my_panel_plugin plugin_description.xml)

# replaces catkin_package(LIBRARIES ${PROJECT_NAME})
ament_export_libraries(${PROJECT_NAME})

ament_package()

package.xml

<package format="2">
  <name>my_panel_plugin</name>
  <version>0.0.1</version>
  <description>This package adds a plugin to rviz2</description>
  <maintainer email="you@example.com">Your Name</maintainer>
  <license>Proprietary</license>

  <buildtool_depend>ament_cmake</buildtool_depend>

  <depend>rclcpp</depend>
  <depend>rviz_common</depend>
  <depend>libqt5-widgets</depend>
  <depend>rviz2</depend>

  <export>
    <rviz plugin="plugin_description.xml"/>
    <build_type>ament_cmake</build_type>
  </export>
</package>

plugin_description.xml

<library path="lib/libmy_panel_plugin">
  <class name="my_panel_plugin/My_panel_plugin" type="sf::rviz_plugins::My_panel_plugin" base_class_type="rviz_common::Panel">
    <description>
      My_panel_plugin
    </description>
  </class>
</library>
I'm also trying to develop a custom Panel and could not get it to be added to the list populated by "Add New Panel." Same with a custom Display. I was using the debian ros-eloquent-desktop install. I switched to a source install to look at tackling the issue you pointed out (make sure to re-source, and that /opt/ros/eloquent isn't still in your AMENT_PREFIX_PATH, and/or just sudo apt remove ros-eloquent-*), and now my package's custom panel and display are showing up in the dialogs as expected.

tuser, are you using the debians or a source install?

I'm using debian packages, so trying to install from source could be worth trying. Do you have any idea why using a source install solved your issue? I should probably also upgrade to using eloquent.

Unfortunately I do not have an idea of why they would behave differently. Looking at the apt show version for my debian install and the eloquent branches for rviz2 and pluginlib, little has been committed to these two libraries. So it could be a packaging problem, or there is another package with significant changes from the debian release I did not consider.

I will proceed with an alternative solution not using rviz plugins for now. But thanks for the help; if I have time to revisit this I will try your suggestion.
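For anyone checking the environment point mentioned above, a quick shell sketch to verify the overlay (the workspace path is illustrative):

# Confirm which prefixes the plugin loader will search.
echo $AMENT_PREFIX_PATH

# Re-source your workspace overlay (path is an example)
# and check that it now appears in the search path.
source ~/ros2_ws/install/setup.bash
echo $AMENT_PREFIX_PATH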
https://answers.ros.org/question/341214/custom-panel-plugin-not-being-found-when-running-rviz2/
CC-MAIN-2020-10
en
refinedweb
Kentico CMS Quick Tip: Minimal JSON Web APIs with IHttpHandler and .ashx Files

Sean G. Wright

Kentico Portal Engine CMS + API

When we need to make calls to our Kentico CMS Portal Engine web application from the browser over XHR, or from another web service, we need an API to communicate with. There are several ways to accomplish this, each with pros and cons, depending on our requirements 🤔. Here's another great blog post about this topic from Kristian Bortnik.

Web API 2 - The Sports Car 🏎

Kentico's documentation explains the steps for integrating Web API 2 into the CMS. The approach is great if you need a large and robust custom API surface standing in front of the CMS - and it's an approach I've used many, many times 👍. However, the setup isn't trivial and the solution effectively runs the Web API OWIN-based application within the CMS - which leads to some sharp edges 🔪.

Kentico REST API - Public Transportation 🚍

Kentico has a REST API built-in, which can be used to query and modify all kinds of data within the application 🧐. It provides security through a Basic Authentication HTTP header, but authenticates against the normal User accounts created within Kentico. The REST API exposes the Kentico Page data and *Info objects directly, effectively creating a projection of the database over HTTP. Given the above, the caveat of this built-in solution is that it's verbose, not customizable, and a leaky abstraction 😞.

IHttpHandler - The Commuter Car 🚗

For those simple scenarios where we only need a handful of endpoints, exposing a limited set of data, curated and filtered for us, we would like a way to build an API... without all the API. A nice solution to this problem is the ASP.NET IHttpHandler, which can be exposed through an .ashx file in our CMS project. You can read more about how IHttpHandler works in Microsoft's documentation. IHttpHandler gives us extremely low-level access to an incoming HTTP Request and the outgoing HTTP Response. There's no WebForms code here, just the raw request and response 😮. This is perfect for our use-case, since we don't want to render HTML through a complex web of page lifecycle events and user controls 👏. Let's take a look at some code for a concrete example of how all of this works.

I see the IHttpHandler solution as ideal for small client-side integrations and also JSON-based server-to-server integrations with existing, in-production, Kentico CMS sites that can't risk big architectural changes. I've done both, but we'll look at the client-side solution today.

Example: An E-Commerce Store with Dynamic Prices

Imagine we have a Business-to-Business (B2B) e-commerce application where the prices and inventory need to be pulled live from a back-end warehousing or ERP system (not Kentico). We don't want to delay the loading of a product details page each time a visitor requests it because we need to fetch the pricing - that would hurt SEO and the User Experience ☹! Instead, we want to cache the product details page and then, via JavaScript, request the price independently 😁. So, we need a simple API endpoint that can forward this request on to the back-end system.

Creating the .ashx File

Let's open our Kentico CMS solution, expand the project, and then the CMSPages folder.
Right-click on the CMSPages folder and select "Add" -> "Generic Handler". We're going to name this handler ProductApi, and Visual Studio will add the .ashx extension for us.

IIS already knows how to process requests for .ashx files, so even if you haven't worked with this file type before, there's no extra work necessary to use it in your application 🤓.

What we end up with is a class named ProductApi that looks like the following:

public class ProductApi : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

Handling the Request

Now we need to handle the incoming request from the browser. We have the raw HttpContext to work with here. This is good, because we don't want the typical Web Forms infrastructure to get in our way, but it also means a bit more work for us 😑. The ProcessRequest method is where we will do all of our work with the HTTP Request. Let's assume that the browser is going to send an XHR request, with a skuid specified as a query string parameter, and it will expect a JSON response in return.

Here's ProcessRequest with some validation and error handling:

public void ProcessRequest(HttpContext context)
{
    // Set this first
    context.Response.ContentType = "application/json";

    string method = context.Request.HttpMethod;

    if (!string.Equals(method, "GET", StringComparison.OrdinalIgnoreCase))
    {
        context.Response.StatusCode = 400;
        return;
    }

    string skuIdParam = context.Request.QueryString.Get("skuid");
    int skuId = ValidationHelper.GetInteger(skuIdParam, 0);

    if (skuId == 0)
    {
        context.Response.StatusCode = 400;
        return;
    }

    SKUInfo sku = SKUInfoProvider.GetSKUInfo(skuId);

    if (sku is null)
    {
        context.Response.StatusCode = 404;
        return;
    }

    // continue processing ...

Creating the Response

Now that we have handled all the potential issues, we can get our back-end system identifier out of the sku object and request the most up-to-date values. For our example we will pretend the response comes back in the following shape:

public class ProductStats
{
    public decimal Price { get; set; }
    public int Inventory { get; set; }
}

Let's get hand-wavy 👋 here and assume we were successful in getting values back from our back-end system and we now want to send them back to the browser. These last few steps are pretty simple:

- Get the ProductStats response from the back-end system
- Use Newtonsoft.Json to serialize the C# object into a JSON string
- Write the JSON string as the HTTP response
    // continue processing ...

    ProductStats response = // response from our back-end system

    string responseText = JsonConvert.SerializeObject(
        response,
        serializationSettings);

    context.Response.Write(responseText);
}

You might have noticed the serializationSettings parameter above. That can be customized to your preferences and use-case, but it allows you to define how Newtonsoft.Json produces JSON from your C#.

I typically store this in a static readonly field in my IHttpHandler, and these are the settings I tend to use 😎:

private static readonly JsonSerializerSettings serializationSettings = new JsonSerializerSettings
{
    Formatting = Formatting.None,
    ContractResolver = new CamelCasePropertyNamesContractResolver(),
    // UTC Date serialization configuration
    DateFormatHandling = DateFormatHandling.IsoDateFormat,
    DateParseHandling = DateParseHandling.DateTimeOffset,
    DateTimeZoneHandling = DateTimeZoneHandling.Utc,
    DateFormatString = "yyyy-MM-ddTHH:mm:ss.fffK",
};

Using Our API

So what does using this "API" look like? Well, we can request the "API" in the browser by navigating to /CMSPages/ProductApi.ashx?skuid=10.

But what about from JavaScript? Well, that's just as easy 😀! I'm going to assume we're supporting modern browsers, so legacy fallbacks are up to the reader 😏:

(async () => {
    const params = new URLSearchParams({ skuid: 10 });

    const response = await fetch(`/CMSPages/ProductApi.ashx?${params}`);

    const { price, inventory } = await response.json();

    console.log('Price', price);
    console.log('Inventory', inventory);
})()

Who would have thought we could create a completely custom JSON based integration API in just a couple of minutes 🤗!?

Bonus: All the Context

I would also like to note that since the HTTP request is going to the same domain that the JavaScript loaded under, there are no annoying cookie or CORS restrictions 🧐. All cookies for the current domain are sent back to the server with every HTTP request, even XHR requests to our .ashx file. This means that the normal Kentico *Context classes that give us access to the ambient request data, like the current authenticated user (MembershipContext) and the current shopping cart (ShoppingCartContext), are all still available in our ProductApi class ⚡.

If we want to respond with additional discounts for Customers in different groups, or send the current user's ID with the SKU Id to our back-end system to get product recommendations, we can do that too 😄! How about displaying a shipping time estimate based on information gathered from the Browser Geolocation API and the items in the shopping cart? Yup, we could do that 😃.

Wrap Up

While Kentico's Web API 2 integration approach and the built-in REST API provide a lot of functionality, they don't quite fit the requirements for a small, custom, minimalist endpoint exposed by the CMS for XHR requests from the browser. Fortunately, IHttpHandlers and .ashx files give us a quick and dirty way to stand up an endpoint using low-level ASP.NET features, without losing out on the functionality of the CMS 👍.

If you try this approach out, let me know what you think! Thanks for reading 🙏! If you are looking for additional Kentico content, check out the Kentico tag on DEV, or my Kentico Quick Tips series.
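As a sketch of that ambient-context idea (MembershipContext and ShoppingCartContext are the standard Kentico APIs named above; the personalization use is made up for illustration):

// Inside ProcessRequest, after loading the SKU.
var currentUser = MembershipContext.AuthenticatedUser;
var cart = ShoppingCartContext.CurrentShoppingCart;

// Hypothetical example: pass user/cart details along with the SKU to the
// back-end call so it can apply customer-group pricing or shipping estimates.
int userId = currentUser != null ? currentUser.UserID : 0;
int cartItemCount = cart != null ? cart.CartItems.Count : 0;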
https://dev.to/wiredviews/kentico-cms-quick-tip-minimal-json-web-apis-with-ihttphandler-and-ashx-files-5dho
CC-MAIN-2020-10
en
refinedweb
Firebase Authentication in Flutter - Production Patterns

This tutorial will cover the implementation and architecture for Firebase Authentication. We use Firebase Authentication in production to keep the code maintainable and easy to manage. We cover the basic login and sign up functionality.

Today we'll be going over the production practices I follow when implementing email authentication using Firebase in Flutter. We'll be building a social media app called compound. It's called compound because that's the middle word of the book in front of me on my desk, "The Compound Effect". Even if you don't want to build a social media app, I'll be teaching you the principles you need to apply to a firebase project to build literally any app you want.

The Architecture

If you don't know, I use an MVVM-style architecture with Provider for my UI / business logic separation and get_it as a service locator. I've found this to be the most consistent and easy to understand architecture that I've used in production. It keeps implementations short and specific. In short, the architecture specifies that each view or basic widget can have its own ViewModel that contains the logic specific to that piece of UI. The ViewModel will make use of services to achieve what the user is requesting through their interactions. Services is where all the actual work happens. ViewModels make use of the services but don't contain any hard functionality outside of conditionals and calling services.

So, to get to the task at hand. We'll have an Authentication service that we'll use to sign in or sign up with, and that will store an instance of the current firebase user for us to use when required. We will have two views, Login and SignUp view, which will make use of the two functions on the service. The entire backend of the application will be built using Firebase, so make sure to go to your console and login with a gmail account.

Setup Firebase Project

Open up the firebase console and click on "Add Project". Call it "compound", go next, select your account and then create. This will take a few seconds to setup. When it's complete, click on continue and you'll land on the overview page. Click on the Android Icon (or iOS) and add your package name; I'll set mine to com.filledstacks.compound. I'll set the nickname to "Compound". Register the app and then download the google-services.json file.

If you have your own project or want to use my starting code, which you can download here, open up the code and place the google-services.json file in the android/app folder. Then open the build.gradle file in the android/app folder and change the applicationId to match the one you entered for your Firebase project.

Open up the pubspec.yaml and add the firebase_auth plugin.

firebase_auth: ^0.15.3

Then we have to enable the google services. Open the build.gradle file in the android folder and add the google services dependency.

dependencies {
  // existing dependencies
  classpath 'com.android.tools.build:gradle:3.5.0'
  classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"

  // Add the google services classpath
  classpath 'com.google.gms:google-services:4.3.0'
}

Open up the android/app/build.gradle file and apply the google services plugin. Add the following line at the bottom of the file.

// ADD THIS AT THE BOTTOM
apply plugin: 'com.google.gms.google-services'

That's it for the Android setup. Let's continue with the Firebase project. Once you've created the app you can go next and skip the firebase comms check that they do.
On the left hand side, click on the Authentication Icon, the third icon from the top (might change). Click on the Setup sign in methods button, then click on email / password and enable it. That's it for the project setup; we'll get back to the Firebase console in the next episode.

Authentication Implementation

The starting code that I provided has a few things setup already. This is to make sure we keep the app to the point and only show the firebase parts. We'll be creating the Authentication service and then using it in the viewmodels, which are completely empty. The responsibility of the AuthenticationService in this case is to wrap the Firebase Authentication functionality for us. It will send the info we entered, and then tell us if it's successful or not. If it fails we return an error message to show the user.

Under the services folder create a new file called authentication_service.dart.

import 'package:flutter/foundation.dart';

class AuthenticationService {
  Future loginWithEmail({@required String email, @required String password}) {
    // TODO: implement loginWithEmail
    return null;
  }

  Future signUpWithEmail({@required String email, @required String password}) {
    // TODO: implement signUpWithEmail
    return null;
  }
}

We'll start off keeping a reference to the FirebaseAuth instance locally. Then we'll perform signInWithEmailAndPassword and store the result in a variable. If there are no errors, we'll check that the user on the result is not null and return that value. If it fails we return the message from the error.

final FirebaseAuth _firebaseAuth = FirebaseAuth.instance;

Future loginWithEmail({
  @required String email,
  @required String password,
}) async {
  try {
    var authResult = await _firebaseAuth.signInWithEmailAndPassword(
        email: email, password: password);
    return authResult.user != null;
  } catch (e) {
    return e.message;
  }
}

Sign up looks very similar. createUserWithEmailAndPassword also returns an AuthResult, whose user property tells us whether the account was created.

Future signUpWithEmail({
  @required String email,
  @required String password,
}) async {
  try {
    var authResult = await _firebaseAuth.createUserWithEmailAndPassword(
        email: email, password: password);
    return authResult.user != null;
  } catch (e) {
    return e.message;
  }
}

That's it for the AuthenticationService. Open up the locator.dart file and register the service as a lazy singleton. All that means is that there will only ever be 1 authentication service in existence, and we'll lazily create it once it has been requested the first time.

void setupLocator() {
  locator.registerLazySingleton(() => NavigationService());
  locator.registerLazySingleton(() => DialogService());
  locator.registerLazySingleton(() => AuthenticationService());
}

We'll start with sign up so that we can then perform a login afterwards. Open up the main.dart file and make sure home is set to SignUpView. Then open up the signup_view_model.dart file. We'll start by retrieving the AuthenticationService, NavigationService and DialogService from the locator. Then we'll create a function called signUp that takes the email and password. In this function we'll set the view to busy before requesting, then do the sign up. Then check the result: if it's a bool and it's true, we navigate to the HomeView; if it's false, we'll show a general dialog; if it's a string, we'll show the content as a dialog.
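The architecture section above mentions that the service "will store an instance of the current firebase user". The tutorial code doesn't show that part yet, but a minimal sketch could look like this (assuming the firebase_auth 0.15.x API, where AuthResult exposes a FirebaseUser):

class AuthenticationService {
  final FirebaseAuth _firebaseAuth = FirebaseAuth.instance;

  // Keep the signed-in user around for later use (e.g. profile lookups).
  FirebaseUser currentUser;

  Future loginWithEmail({
    @required String email,
    @required String password,
  }) async {
    try {
      var authResult = await _firebaseAuth.signInWithEmailAndPassword(
          email: email, password: password);
      currentUser = authResult.user;
      return currentUser != null;
    } catch (e) {
      return e.message;
    }
  }
}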
class SignUpViewModel extends BaseModel {
  final AuthenticationService _authenticationService =
      locator<AuthenticationService>();
  final DialogService _dialogService = locator<DialogService>();
  final NavigationService _navigationService = locator<NavigationService>();

  Future signUp({@required String email, @required String password}) async {
    setBusy(true);

    var result = await _authenticationService.signUpWithEmail(
        email: email, password: password);

    setBusy(false);

    if (result is bool) {
      if (result) {
        _navigationService.navigateTo(HomeViewRoute);
      } else {
        await _dialogService.showDialog(
          title: 'Sign Up Failure',
          description: 'General sign up failure. Please try again later',
        );
      }
    } else {
      await _dialogService.showDialog(
        title: 'Sign Up Failure',
        description: result,
      );
    }
  }
}

Update the BusyButton to take in the busy property from the model, and in the onPressed function call model.signUp.

BusyButton(
  title: 'Sign Up',
  busy: model.busy,
  onPressed: () {
    model.signUp(
      email: emailController.text,
      password: passwordController.text,
    );
  },
)

If you run the app now, enter some details and sign up, you'll see it navigate to the HomeView. If you want to see the error dialog, enter a password with less than 6 characters and you'll see the dialog pop up. Also, if you've already signed up you can try signing up with the same email again and you'll get a friendly error message :)

Login Logic

The login logic is literally exactly the same as the sign up logic. Being able to refactor for shared code is a good skill to have, so I'll leave that up to you as an exercise (a rough sketch of one possible refactor follows this section). For now we'll write non-DRY code by simply repeating the pattern. Open up the login_view_model.dart file.

class LoginViewModel extends BaseModel {
  final AuthenticationService _authenticationService =
      locator<AuthenticationService>();
  final DialogService _dialogService = locator<DialogService>();
  final NavigationService _navigationService = locator<NavigationService>();

  Future login({@required String email, @required String password}) async {
    setBusy(true);

    var result = await _authenticationService.loginWithEmail(
        email: email, password: password);

    setBusy(false);

    if (result is bool) {
      if (result) {
        _navigationService.navigateTo(HomeViewRoute);
      } else {
        await _dialogService.showDialog(
          title: 'Login Failure',
          description: 'Couldn\'t login at this moment. Please try again later',
        );
      }
    } else {
      await _dialogService.showDialog(
        title: 'Login Failure',
        description: result,
      );
    }
  }
}

Open the login view. Pass the busy value to the BusyButton and in the onPressed function call the login function.

BusyButton(
  title: 'Login',
  busy: model.busy,
  onPressed: () {
    model.login(
      email: emailController.text,
      password: passwordController.text,
    );
  },
)

Open up the main.dart file and change home to LoginView. If you re-run the code now you'll land on the LoginView. Enter the details you entered earlier, click login and you're done :). This is just the start of the app; we'll add the functionality a normal app would have throughout the rest of the series. In the next tutorial we'll make sure that once we're signed in we go straight to the HomeView. We'll also create a user profile, make sure it's always available when the app is open, and add roles (for later use ;) ). I'd also like to ask you to start sharing these tutorials more — I'm still seeing some unmaintainable code when new clients come to me. We have to spread the architecture and code quality love around and make that the core focus when building apps. Until next time, Dane Mackier.
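As a rough illustration of the refactor exercise mentioned above — a minimal sketch only, with the hypothetical helper name _runAuthRequest; it assumes the same BaseModel, dialog and navigation services from the starter code:

// A possible shared helper for both viewmodels (hypothetical name).
Future _runAuthRequest(
  Future Function() authRequest, // e.g. () => _authenticationService.loginWithEmail(...)
  String failureTitle,
) async {
  setBusy(true);
  var result = await authRequest();
  setBusy(false);

  if (result is bool && result) {
    _navigationService.navigateTo(HomeViewRoute);
  } else {
    // result is either `false` or an error message string
    await _dialogService.showDialog(
      title: failureTitle,
      description: result is String ? result : 'Please try again later',
    );
  }
}

Each viewmodel's login or signUp method would then be a one-liner that passes the matching service call and dialog title.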
Apple Sign In with Flutter & Firebase Authentication

In this tutorial we'll see how to add Apple Sign In to our Flutter apps from scratch, and give your iOS users a convenient way of signing into your app. Apple Sign In is a new authentication method that is available on iOS 13 and above. It is very convenient, as your iOS users already have an Apple ID, and can use it to sign in with your app. So just as you would offer Google Sign In on Android, it makes sense to offer Apple Sign In on iOS.

We will use the apple_sign_in Flutter plugin available on pub.dev. Note: this plugin supports iOS only, and you can only use this on devices running iOS 13 and above.

After creating a new Flutter project, add the following dependencies to your pubspec.yaml file:

dependencies:
  firebase_auth: ^0.15.3
  apple_sign_in: ^0.1.0
  provider: ^4.0.1

Note: we will use Provider for dependency injection, but you can use something else if you prefer.

Next, we need to add Firebase to our Flutter app. Follow this guide for how to do this. After we have followed the required steps, the GoogleService-Info.plist file should be added to our Xcode project. While in Xcode 11, select the Signing & Capabilities tab, and add "Sign In With Apple" as a new capability. Once this is done, make sure to select a team in the Code Signing section. This will generate and configure an App ID in the "Certificates, Identifiers & Profiles" section of the Apple Developer portal. If you don't do this, sign-in won't work.

As a last step, we need to enable Apple Sign In in Firebase. This can be done under Authentication -> Sign-in method. This completes the setup for Apple Sign In, and we can dive into the code.

Checking if Apple Sign In is available

Before we add the UI code, let's write a simple class to check if Apple Sign In is available:

import 'package:apple_sign_in/apple_sign_in.dart';

class AppleSignInAvailable {
  AppleSignInAvailable(this.isAvailable);
  final bool isAvailable;

  static Future<AppleSignInAvailable> check() async {
    return AppleSignInAvailable(await AppleSignIn.isAvailable());
  }
}

Then, in our main.dart file, let's modify the entry point:

void main() async {
  // Fix for: Unhandled Exception: ServicesBinding.defaultBinaryMessenger was
  // accessed before the binding was initialized.
  WidgetsFlutterBinding.ensureInitialized();
  final appleSignInAvailable = await AppleSignInAvailable.check();
  runApp(Provider<AppleSignInAvailable>.value(
    value: appleSignInAvailable,
    child: MyApp(),
  ));
}

The first line prevents an exception that occurs if we attempt to send messages across the platform channels before the binding is initialized. Then, we check if Apple Sign In is available by using the class we just created. And we use Provider to make this available as a value to all widgets in our app.

Note: this check is done upfront so that appleSignInAvailable is available synchronously to the entire widget tree. This avoids using a FutureBuilder in widgets that need to perform this check.

Adding the UI code
Instead of the default counter app, we want to show a Sign In Page with a button:

import 'package:apple_sign_in/apple_sign_in.dart';
import 'package:apple_sign_in_firebase_flutter/apple_sign_in_available.dart';
import 'package:apple_sign_in_firebase_flutter/auth_service.dart';
import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

class SignInPage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final appleSignInAvailable =
        Provider.of<AppleSignInAvailable>(context, listen: false);
    return Scaffold(
      appBar: AppBar(
        title: Text('Sign In'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(6.0),
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            if (appleSignInAvailable.isAvailable)
              AppleSignInButton(
                style: ButtonStyle.black, // style as needed
                type: ButtonType.signIn, // style as needed
                onPressed: () {},
              ),
          ],
        ),
      ),
    );
  }
}

Note: we use a collection-if to only show the AppleSignInButton if Apple Sign In is available. See this video for UI-as-code operators in Dart.

Back in our main.dart file, we can update our root widget to use the SignInPage as the home page:

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Apple Sign In with Firebase',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        primarySwatch: Colors.indigo,
      ),
      home: SignInPage(),
    );
  }
}

At this stage, we can run the app on an iOS 13 simulator and get the following:

Adding the authentication code

Here is the full authentication service that we will use to sign in with Apple (explained below):

import 'package:apple_sign_in/apple_sign_in.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/services.dart';

class AuthService {
  final _firebaseAuth = FirebaseAuth.instance;

  Future<FirebaseUser> signInWithApple({List<Scope> scopes = const []}) async {
    // 1. perform the sign-in request
    final result = await AppleSignIn.performRequests(
        [AppleIdRequest(requestedScopes: scopes)]);
    // 2. check the result
    switch (result.status) {
      case AuthorizationStatus.authorized:
        final appleIdCredential = result.credential;
        final oAuthProvider = OAuthProvider(providerId: 'apple.com');
        final credential = oAuthProvider.getCredential(
          idToken: String.fromCharCodes(appleIdCredential.identityToken),
          accessToken:
              String.fromCharCodes(appleIdCredential.authorizationCode),
        );
        final authResult = await _firebaseAuth.signInWithCredential(credential);
        final firebaseUser = authResult.user;
        if (scopes.contains(Scope.fullName)) {
          final updateUser = UserUpdateInfo();
          updateUser.displayName =
              '${appleIdCredential.fullName.givenName} ${appleIdCredential.fullName.familyName}';
          await firebaseUser.updateProfile(updateUser);
        }
        return firebaseUser;
      case AuthorizationStatus.error:
        print(result.error.toString());
        throw PlatformException(
          code: 'ERROR_AUTHORIZATION_DENIED',
          message: result.error.toString(),
        );
      case AuthorizationStatus.cancelled:
        throw PlatformException(
          code: 'ERROR_ABORTED_BY_USER',
          message: 'Sign in aborted by user',
        );
    }
    return null;
  }
}

First, we pass a List<Scope> argument to our method. Scopes are the kinds of contact information that can be requested from the user (e.g. fullName). Then, we make a call to AppleSignIn.performRequests and await the result. Finally, we parse the result with a switch statement. The three possible cases are authorized, error and cancelled.

Authorized

If the request was authorized, we create an OAuthProvider credential with the identityToken and authorizationCode we received.
We then pass this to _firebaseAuth.signInWithCredential(), and get an AuthResult that we can use to extract the FirebaseUser. And if we requested the full name, we can update the profile information of the FirebaseUser object with the fullName from the Apple ID credential.

Error or Cancelled

If authentication failed or was cancelled by the user, we throw a PlatformException that can be handled at the calling site.

Now that our auth service is ready, we can add it to our app via Provider like so:

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Provider<AuthService>(
      create: (_) => AuthService(),
      child: MaterialApp(
        title: 'Apple Sign In with Firebase',
        debugShowCheckedModeBanner: false,
        theme: ThemeData(
          primarySwatch: Colors.indigo,
        ),
        home: SignInPage(),
      ),
    );
  }
}

Then, in our SignInPage, we can add a method to sign in and handle any errors. Note that we pass the requested scopes to match the signInWithApple({List<Scope> scopes}) signature defined above:

Future<void> _signInWithApple(BuildContext context) async {
  try {
    final authService = Provider.of<AuthService>(context, listen: false);
    final user = await authService.signInWithApple(
        scopes: [Scope.email, Scope.fullName]);
    print('uid: ${user.uid}');
  } catch (e) {
    // TODO: Show alert here
    print(e);
  }
}

Finally, we remember to call this in the callback of the AppleSignInButton:

AppleSignInButton(
  style: ButtonStyle.black,
  type: ButtonType.signIn,
  onPressed: () => _signInWithApple(context),
)

Testing things

Our implementation is complete, and we can run the app. If we press the sign-in button and an Apple ID is not configured on our simulator or device, we will get the following: After signing in with our Apple ID, we can try again, and we will get this: After continuing, we are prompted to enter the password for our Apple ID (or use Touch ID / Face ID if enabled on the device). If we have requested full name and email access, the user will have a chance to edit the name, and choose to share or hide the email: After confirming this, the sign-in is complete and the app is authenticated with Firebase.

Note: if the sign-in screen is not dismissed after authenticating, it's likely because you forgot to set the team in the code signing options in Xcode.

The next logical step is to move away from the SignInPage and show the home page instead. This can be done by adding a widget that listens to the onAuthStateChanged stream of FirebaseAuth. Congratulations, you have now enabled Apple Sign In in your Flutter app! Your iOS users are grateful. 🙏 Full source code is here on GitHub. Happy coding!
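As a rough sketch of that last step — assuming firebase_auth 0.15.x, where onAuthStateChanged is a Stream<FirebaseUser> — a minimal landing widget could look like this (LandingPage and HomePage are hypothetical names):

import 'package:firebase_auth/firebase_auth.dart';
import 'package:flutter/material.dart';

class LandingPage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return StreamBuilder<FirebaseUser>(
      stream: FirebaseAuth.instance.onAuthStateChanged,
      builder: (context, snapshot) {
        if (snapshot.connectionState == ConnectionState.active) {
          final user = snapshot.data;
          // Signed out -> show the sign-in page, signed in -> home page
          return user == null ? SignInPage() : HomePage();
        }
        // Still waiting for the first auth event
        return Scaffold(body: Center(child: CircularProgressIndicator()));
      },
    );
  }
}

You would then set home: LandingPage() in MyApp, so that sign-in and sign-out transitions happen automatically.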
https://morioh.com/p/95a0edfaf41d
Can't recall currently what exactly was needed, but there is stuff under the en-US culture that is necessary for Commerce to work properly. I can take a deeper look a bit later.

I'm seeing two different parts in this: (1) Switching the default language to en-AU (or another language). Admittedly, I have not tried this yet, but there's nothing that stands out to me that should initially cause any issues with this. You just need to make sure the default language in Commerce Manager is configured to be en-AU. (2) Removing the language code from the URL. This is done by adding the language to the host name in EPiServerFramework.config (or applying the culture in Admin Mode -> Site Information). We've recently seen an issue with this (yesterday, actually), but this could just be related to our particular solution. We are currently looking into this issue.

I think the issues you are running into, specifically when testing on Enoteca, are because everything was built in the en-US language branch from the beginning. It's possible that if you create a new solution from scratch, and focus on working only in the en-AU language branch, you might not have as many issues. In EPiServer 7, from what I'm seeing, the default language branch of "en" for the Root Page and the Recycle Bin doesn't really matter too much. It's all dependent on what language branch you create the Start Page under. Lastly, if all else fails, I'm sure there's a SQL script that can be run to solve this. But I'd say that should be a last-ditch effort.

I thought I'd just add some more information about the issue we're experiencing that Chris mentioned, because it sounds like part of the issue for you is the same as we're experiencing. When setting the default language on the site to get rid of the language code, we can't use the URL resolver to fetch a friendly URL for the start page. So we get the exception below whenever the page we're trying to get the virtual path for is the start page of the site. Since edit mode also tries to resolve the start page URL when rendering the site tree and whatnot, it also breaks down. I tested this before and after switching master language branches, since the site where I'm having the issue is using en-US. Same result.

The error we get is the following:

[NullReferenceException: Object reference not set to an instance of an object.]
EPiServer.Business.Commerce.Providers.CatalogEntryPageRoute.OnVirtualPathCreated(Object sender, UrlBuilderEventArgs e) +66
EPiServer.Web.Routing.ContentRoute.GetVirtualPath(RequestContext requestContext, RouteValueDictionary values) +442
EPiServer.Web.Routing.UrlResolver.GetVirtualPath(ContentReference contentLink, String language, RouteValueDictionary routeValues, RequestContext requestContext) +504
EPiServer.Web.Routing.UrlResolver.GetVirtualPath(ContentReference contentLink, String language) +25

From what I've gathered so far, when the event VirtualPathCreated is triggered, the UrlBuilder object sent in is null — or its Path property is — because we get a null reference exception in CatalogEntryPageRoute.OnVirtualPathCreated. Without being able to properly debug the code executed in EPiServer.Business.Commerce.Providers.CatalogEntryPageRoute and stepping through the events leading up to that, it's hard to say exactly what's wrong. All other pages load their resolved URLs fine; the only thing I can think of is that since the start page won't have any URL segments, the UrlBuilder object doesn't have the path set. The CMS handles this, but the add-on code for Commerce doesn't have a null check for it.
Thanks for all the help guys! Chris, regarding the two parts you mentioned: I have no issue with the first point. We have successfully set up a site in en-AU and have it all working correctly with Commerce in en-AU also. It is the second point you mentioned where my problem lies. I've actually set up a site from scratch where all it has is the start page and one other content page. When I fire up the start page I am experiencing the exact error that Jonas described above. This error occurs after you set the language code by going to Admin -> Site Information. What I have noticed is that this error does not occur when you're running just the CMS. It is only when you apply Commerce that these strange errors start to appear. Jonas, have you managed to find a way around this? I've been advised to log a support ticket for this issue so will keep you all updated. Thanks, Damien

It is the mismatched culture issue between ECF and CMS. Commerce 1 R3 does not support non-specific cultures. The same issues like:

Hi All, I logged a support case for this and there is a hotfix available that fixes the issue. Basically you'll get an updated version of EPiServer.Business.Commerce.dll. Once I replaced the dll in my site everything worked fine. Just contact support to get the hotfix. Thanks, Damien

Regarding the NullReferenceException in OnVirtualPathCreated, the problem is that they forgot to check if the output from their RemoveTrailingSlash function returns null. This is the solution I came up with. Create a local CatalogEntryPageRoute class that inherits from EPi's class; the only difference is that we check the var text for null:

public class CatalogEntryPageRoute : EPiServer.Business.Commerce.Providers.CatalogEntryPageRoute
{
    public CatalogEntryPageRoute(string url, string physicalFile) : base(url, physicalFile)
    {
    }

    public override void OnVirtualPathCreated(object sender, UrlBuilderEventArgs e)
    {
        string text = VirtualPathUtility.RemoveTrailingSlash(e.UrlBuilder.Path);
        if (!string.IsNullOrWhiteSpace(text) && text.EndsWith(".aspx"))
        {
            e.UrlBuilder.Path = text;
            Global.UrlRewriteProvider.ConvertToInternal(e.UrlBuilder);
            Global.UrlRewriteProvider.ConvertToExternal(e.UrlBuilder, null, System.Text.Encoding.UTF8);
        }
    }
}

Create an initialization module, and hook up the above route class to the default route instead of EPi's:

[ModuleDependency(typeof(InitializeCommerceManagerModule))]
public class MyDataInitialization : IConfigurableModule
{
    public void Initialize(InitializationEngine context)
    {
        var entryViewUrl = UrlService.GetUrl(Constants.EntryViewCommandName);
        var defaultRoute = new CatalogEntryPageRoute("{seourl}.aspx", entryViewUrl);

        foreach (Delegate d in ContentRoute.CreatedVirtualPath.GetInvocationList())
        {
            ContentRoute.CreatedVirtualPath -= (EventHandler<UrlBuilderEventArgs>)d;
        }

        ContentRoute.CreatedVirtualPath = (EventHandler<UrlBuilderEventArgs>)Delegate.Combine(
            ContentRoute.CreatedVirtualPath,
            new EventHandler<UrlBuilderEventArgs>(defaultRoute.OnVirtualPathCreated));
    }
}

Migrating yet another site from CMS 6 -> 7 -> 7.5 and once again using Mattias Bomelin's solution while in 7.0, thanks!

Hi All, I'm wondering if someone can help me out. I'm on a project using EPiServer 7 + Commerce 1R3. The client will only be using a single language (en-AU). So I'm trying to remove the language code from the URLs. I have spent a lot of time trying to do this with no luck. I've even tried to get it to work using the Enoteca sample website.
These are the things I have tried so far: I have also tried disabling all languages in "Manage Website Languages" except for "en-AU". But this also results in a "Something Went Wrong" message. I'm not too sure whether this is because when you install a default instance of CMS + Commerce it installs some content under the "en" language code (i.e. root, recycle bin etc.). The commerce content provider is inserted under the "en-US" language code. Then when I start building my site I'm building it in the "en-AU" language code. So it seems like I need all three languages enabled. If anyone knows how to solve this it would be greatly appreciated! Thanks in advance Damo
https://world.episerver.com/forum/developer-forum/Episerver-Commerce/Thread-Container/2013/8/EPiServer-7--Commerce-1R3-Single-Language-Issue/
NAME

sia_ses_init, sia_ses_authent, sia_ses_suauthent, sia_ses_reauthent, sia_ses_estab, sia_ses_launch, sia_ses_release - SIA session routines (Security Integration Architecture)

SYNOPSIS

#include <sia.h>
#include <siad.h>

int sia_ses_init(
    SIAENTITY **entityhdl,
    int argc,
    char **argv,
    char *hostname,
    char *username,
    char *ttyname,
    int can_collect_input,
    char *gssapi );

int sia_ses_authent(
    int (*collect)(),
    char *passkey,
    SIAENTITY *entityhdl );

int sia_ses_suauthent(
    int (*collect)(),
    SIAENTITY *entityhdl );

int sia_ses_reauthent(
    int (*collect)(),
    SIAENTITY *entityhdl );

int sia_ses_estab(
    int (*collect)(),
    SIAENTITY *entityhdl );

int sia_ses_launch(
    int (*collect)(),
    SIAENTITY *entityhdl );

int sia_ses_release(
    SIAENTITY **entityhdl );

LIBRARY

Standard C library (libc.so and libc.a)

PARAMETERS

can_collect_input
    Indicates whether or not the session routines may gather input from the user. Further information on SIA collection routines is available from the interface specifications in /usr/include/{sia,siad}.h.

entityhdl
    Points to the SIAENTITY structure that was allocated and set up by the previous sia_ses_init() call. Values in the SIAENTITY structure may be changed by the sia_* routines.

passkey
    Provides a precollected password to the authentication routine. Set this parameter to NULL if no password has been precollected. This parameter is read only.

DESCRIPTION

sia_ses_init()
    The sia_ses_init() routine initializes SIA sessions. The routine allocates an entity handle structure and initializes various values in that structure. It must be called before any of the other SIA session processing routines.

sia_ses_reauthent()
    The sia_ses_reauthent() routine is used to revalidate a user's password. It is associated with applications that require that the user be reauthenticated. Such applications are the typical terminal or session locking applications. This call must be preceded by a call to sia_ses_init() and followed by a call to sia_ses_release().

sia_ses_release()
    The sia_ses_release() routine is called at the end of the session processing to release any resources associated with the session startup processing, including the SIAENTITY structure. After calling the sia_ses_release() routine, do the setuid and then exec the program to start the actual new process running as the session user ID.

sia_ses_authent()
    The sia_ses_authent() routine is called to authenticate an entity. Since this routine may require parameter collection, a collect routine pointer is provided by the calling application. It is also possible that the password has been pre-collected by the application (such as ftp). The passkey parameter allows the application to provide a password to the security mechanisms. Providing a passkey is not sufficient to keep the underlying mechanisms from trying to prompt for additional information. The sia_ses_init() routine must be called before calling this routine.

sia_ses_suauthent()
    The sia_ses_suauthent() routine processes the su command. Since the processing of the su command is viewed as special and may require an alternative configuration from the normal sia_ses_authent() routine, it has been made a separate SIA capability. Like the sia_ses_authent() routine, sia_ses_suauthent() is preceded by a call to sia_ses_init() and followed by a call to sia_ses_release().

sia_ses_estab()
    The sia_ses_estab() routine is called to establish context for a session that is already checked or authenticated. This routine checks system or mechanism wide parameters such as licensing or resource limitations.
    The sia_ses_estab() routine also collects the complete set of information or context required to launch a session. However, for a login model the environment processing (clearenv() and setenv()) must still be done. Copy any HOME or SHELL strings from the SIAENTITY structure, because the final call to sia_ses_release() will free the entire SIAENTITY structure. If the sia_ses_estab() routine fails, sia_ses_release() is automatically called.

sia_ses_launch()
    The sia_ses_launch() routine is called to do the final processing of a session before the actual start of the session by the application. This processing usually consists of logging or auditing the session startup and any tty conditioning which may be required. Not all security mechanisms may require processing at this time. Generally, the local mechanism is required to do the launch processing. If the sia_ses_launch() routine fails, sia_ses_release() is automatically called. On the return from sia_ses_launch(), the effective user ID (EUID) has been set to the UID of the user for this session. Generally, a setreuid(geteuid(),geteuid()) follows this return, setting both the real user ID (RUID) and the effective user ID (EUID) to the effective user ID (EUID). The remaining processing is utility dependent. All the user's group memberships are set using initgroups().

RETURN VALUES

The sia_ses_*() routines return SIASUCCESS when they are successful and SIAFAIL when they are not successful.

ERRORS

The errno value is not (normally) set explicitly by sia_* routines. The errno values are those returned from the dynamic loader interface, from dependent (siad_*) routines, or from malloc. Possible errors include resource constraints (no memory) and various authentication failures.

FILES

/etc/passwd
/etc/group
/etc/sia/matrix.conf

SEE ALSO

initgroups(3), siad_ses_init(3), matrix.conf(4)
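A minimal sketch of the typical call sequence, based only on the signatures above. The error handling and the use of a precollected password instead of a collection routine are illustrative assumptions; a real utility would supply a collection routine, set up the environment, and do the setreuid() and exec described in DESCRIPTION.

#include <sia.h>
#include <siad.h>

/* Sketch: authenticate "user" on "tty" and prepare a session. */
int start_session(int argc, char **argv, char *host,
                  char *user, char *tty, char *password)
{
    SIAENTITY *ent = NULL;

    /* 1. Allocate and initialize the entity handle (no input collection). */
    if (sia_ses_init(&ent, argc, argv, host, user, tty, 0, NULL) != SIASUCCESS)
        return -1;

    /* 2. Authenticate; the precollected password is passed as passkey. */
    if (sia_ses_authent(NULL, password, ent) != SIASUCCESS) {
        sia_ses_release(&ent);
        return -1;
    }

    /* 3. Establish session context (license/resource checks).
       On failure this calls sia_ses_release() itself. */
    if (sia_ses_estab(NULL, ent) != SIASUCCESS)
        return -1;

    /* 4. Final launch processing (logging, tty conditioning).
       On failure this also calls sia_ses_release() itself. */
    if (sia_ses_launch(NULL, ent) != SIASUCCESS)
        return -1;

    /* EUID is now the session user's UID; a real login would now call
       setreuid(geteuid(), geteuid()), set HOME/SHELL copied from the
       entity earlier, release, and exec the user's shell. */
    return sia_ses_release(&ent) == SIASUCCESS ? 0 : -1;
}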
http://nixdoc.net/man-pages/Tru64/man3/sia_ses_reauthent.3.html
OBMolTorsionIter Class Reference

Iterate over all torsions in an OBMol. More...

#include <openbabel/obiter.h>

Detailed Description

Iterate over all torsions in an OBMol.

- Since: version 2.1

To facilitate iteration through all torsions in a molecule, without resorting to atom indexes (which will change in the future), a variety of iterator methods are provided. This has been made significantly easier by a series of macros in the obiter.h header file:

#define FOR_TORSIONS_OF_MOL(t,m) for( OBMolTorsionIter t(m); t; t++ )

Here is an example:

#include <openbabel/obiter.h>
#include <openbabel/mol.h>

OBMol mol;
OBAtom *a, *b, *c, *d;
double tor;

FOR_TORSIONS_OF_MOL(t, mol)
{
  // The variable t behaves like OBTorsion* when used with -> and *,
  // but needs to be explicitly converted when appearing as a parameter
  // in a function call - use &*t
  a = mol.GetAtom((*t)[0] + 1); // indices in the vector start from 0!!!
  b = mol.GetAtom((*t)[1] + 1);
  c = mol.GetAtom((*t)[2] + 1);
  d = mol.GetAtom((*t)[3] + 1);
  tor = mol.GetTorsion(a->GetIdx(), b->GetIdx(), c->GetIdx(), d->GetIdx());
}

Constructor & Destructor Documentation

Member Function Documentation

- Returns: Whether the iterator can still advance (i.e., visit more torsions)

Preincrement -- advance to the next torsion and return.

- Returns: A vector of four atom indexes specifying the torsion
- See also: OBAtom::GetIdx()

The documentation for this class was generated from the following files:
http://openbabel.org/api/2.3/classOpenBabel_1_1OBMolTorsionIter.shtml
New Room (SOLVED)

How can I create a new room (a new instance) depending on a condition? I've seen, for example, that if the maximum number of players is reached, a room is automatically created. But I would like to create the room according to a pre-established condition. When I connect to the room, I saw that I can send options from client to server that can be interpreted in requestJoin or in the onInit function. Depending on those options, I would like to create a new room. How can I do that? Thanks

Hey @rscata, the method you're looking for is requestJoin(options). To create a new room, this method should return false for every room already spawned. The new room will then be created if requestJoin(options) returns true on it. More info: Cheers!

I tried this method, but if it returns false, I get the message "server error: join_request_fail" and it does not create a new room.

@rscata can you provide an example of when you want to create a new room? Maybe you should register a new room name instead - having a different condition for requestJoin(). Cheers!

Let's say I have a poker game that contains some game rooms. At this point I want to offer two possibilities: - To join one of the rooms that is already created and has not reached the max player limit. - To create a new room and join it.

@rscata I see. Thanks for the explanation. I've tested the code below and it works as you described.

Server-side

maxClients = 4;

onInit (options) {
  // identify when a new room is being requested
  this.create = options.create || false;
}

requestJoin (options) {
  if (options.create) {
    // creating a new room
    let allowed = (options.create == this.create);

    // this room is not being "created" anymore.
    this.create = false;

    return allowed;

  } else {
    // joining an existing room
    return this.clients.length > 0;
  }
}

Client-side

// creating a new room
let room = client.join("poker", { create: true })
room.onJoin.add(() => { /* created! */ })

// joining an existing room
let room = client.join("poker")
room.onJoin.add(() => { /* joined! */ })
room.onError.add(() => { /* no rooms available? */ })

I feel it should be easier to achieve this, though. For next versions of Colyseus, I think it's important to know when a room is being created or is an existing one during onInit / requestJoin (like we're doing ourselves with this.create).

Let me know if you have any question. Cheers!

Thank you for the explanation. This solution works great, but I still have a problem when I'm trying to join a room by roomId. I read about that in the documentation.

// connect the client directly into a specific room id
let room = client.join('H1gbitW3Pf');

And I got this error: server error: Error: no available handler for "H1gbitW3Pf"

@rscata thanks for reporting. This was actually a bug. It has been fixed now in version 0.8.12. Release:

Thanks for the support. If we still have some problems or have questions, can we contact you?

Hey @rscata, I've just released a new version (colyseus@0.8.13, see release) which adds the isNew parameter on requestJoin. With this version you'd be able to achieve the same result with this implementation:

// ...
maxClients = 4;

requestJoin (options, isNew) {
  return (options.create && isNew) || this.clients.length > 0;
}
// ...
https://discuss.colyseus.io/topic/36/new-room-solved
Testing Asynchronous Code with FakeAsync in Angular
By Josh Morony

When creating automated unit tests in Ionic and Angular applications we would typically follow a process like this:

1. Set up the testing environment
2. Run some code
3. Make assertions as to what should have happened

This process is also commonly referred to as AAA (Arrange, Act, Assert). I don’t plan to provide an introduction to unit testing in this tutorial (this serves as a good starting point for testing Ionic applications), but a typical test might look something like this:

import { CUSTOM_ELEMENTS_SCHEMA } from "@angular/core";
import { TestBed, ComponentFixture, async } from "@angular/core/testing";
import { AppComponent } from "./app.component";

describe("Component: Root Component", () => {
  let comp: AppComponent;
  let fixture: ComponentFixture<AppComponent>;

  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [AppComponent],
      schemas: [CUSTOM_ELEMENTS_SCHEMA],
      providers: []
    }).compileComponents();
  }));

  beforeEach(() => {
    fixture = TestBed.createComponent(AppComponent);
    comp = fixture.componentInstance;
  });

  afterEach(() => {
    fixture.destroy();
    comp = null;
  });

  it("some thing should happen when we do some thing", () => {
    someObject.doSomething();
    expect(someThing).toBe(thisValue);
  });
});

The testing environment is arranged using Angular’s TestBed so that we can test this component in isolation, and we make sure to reset the component before each test (and destroy it afterward). Then we act inside the actual test:

someObject.doSomething();

this gets our application into the state that we want to test, and then we make our assertion:

expect(someThing).toBe(thisValue);

At this point we expect that someThing will be thisValue, and if it is not the test should fail. These are just some made up values to illustrate a point. This works fine in a case like this where everything is executed synchronously, but what happens if our test contains some asynchronous code? Something like this:

it('some thing should happen when we do some thing', () => {
  let flag = false;

  let testPromise = new Promise((resolve) => {
    // do some stuff
  });

  testPromise.then((result) => {
    flag = true;
  });

  expect(flag).toBe(true);
});

If you’re familiar with asynchronous code (and if you’re not, you should watch this) then you would know that since flag = true is being called from within a promise handler (i.e. it is asynchronous) the expect statement is going to run before flag = true does. This means that our test will fail because it hasn’t had time to finish executing properly. The flow of our test would look something like this:

1. Set flag to false
2. Create promise
3. Set up handler for promise
4. Expect that flag is true
5. Run promise handler, which sets flag to true

This is a problem; what we really want is:

1. Set flag to false
2. Create promise
3. Set up handler for promise
4. Run promise handler, which sets flag to true
5. Expect that flag is true

One quick solution to this problem would be to simply add the expect statement inside of the promise callback, and that’s probably how we would deal with a similar situation in our actual application, but it is not ideal for tests. Victor Savkin goes into more detail about why this is the case, as well as a lot more detail in general about this topic, in his article Controlling Time with Zone.js and FakeAsync. The better solution to this problem is to use the fakeAsync helper that Angular provides, which essentially gives you an easy way to run asynchronous code before your assertions.
Introducing FakeAsync, flushMicrotasks, and tick

Depending on your familiarity level with Angular, you may or may not have heard of Zone.js. Zone.js is included in Angular and patches the browser so that we can detect when asynchronous code like Promises and setTimeout complete. This is used by Angular to trigger its change detection, which is a weird, complex, and wonderful process I tried to cover somewhat in this article. It’s important that Angular knows when asynchronous operations complete, because perhaps one of those operations might change some property binding that now needs to be reflected in a template. So, the zone that Angular uses allows it to detect when asynchronous functions complete.

FakeAsync is similar in concept, except that it kind of “catches” any asynchronous operations. Any asynchronous code that is triggered is added to an array, but it is never executed… until we tell it to be executed. Our tests can then wait until those operations have completed before we make our assertions.

When a test is running within a fakeAsync zone, we can use two functions called flushMicrotasks and tick. The tick function will advance time by a specified number of milliseconds, so tick(100) would execute any asynchronous tasks that would occur within 100ms. The flushMicrotasks function will clear any “microtasks” that are currently in the queue. I’m not going to attempt to explain what a microtask is here, so I would highly recommend giving Tasks, microtasks, queues, and schedules a read. In short, a microtask is created when we perform asynchronous tasks like setting up a handler for a promise. However, not all asynchronous code is added as microtasks, some things like setTimeout are added as normal tasks or macrotasks. This is an important difference because flushMicrotasks will not execute timers like setTimeout.

To use fakeAsync, flushMicrotasks, and tick in your tests, all you need to do is import them:

import {
  TestBed,
  ComponentFixture,
  inject,
  async,
  fakeAsync,
  tick,
  flushMicrotasks
} from '@angular/core/testing';

and then wrap your tests with fakeAsync:

it('should test some asynchronous code', fakeAsync(() => {

}));

this will cause your tests to be executed in the fakeAsync zone. Now inside of those tests you can call flushMicrotasks to run any pending microtasks, or you can call tick() with a specific number of milliseconds to execute any asynchronous code that would occur within that timeframe.

Examples of Testing Asynchronous Code in Ionic and Angular

If you’ve read this far, hopefully, the general concept makes at least some sense. Basically, we wrap the test in fakeAsync and then we call either flushMicrotasks() or tick whenever we want to run some asynchronous code before making an assertion in the test. There are a few subtleties that can trip you up, though, so I want to go through a few examples and discuss the results. Seeing a few tests in action should also help solidify the concept. Let’s start with a basic example that is available in the Angular documentation:

it('should test some asynchronous code', fakeAsync(() => {
  let flag = false;

  setTimeout(() => {
    flag = true;
  }, 100);

  expect(flag).toBe(false); // PASSES
  tick(50);
  expect(flag).toBe(false); // PASSES
  tick(50);
  expect(flag).toBe(true); // PASSES
}));

The flag is initially false, but we have a setTimeout (a macrotask, not a microtask) that changes the flag after 100ms.
Time is progressed by 50ms and we expect the flag to still be false; it is then progressed by another 50ms and we expect the flag to be true. This test will pass all of these expectations, because when we expect that the flag is true, 100ms of time has passed and the setTimeout has had time to execute its code. If we were to do this instead:

it('should test some asynchronous code', fakeAsync(() => {
  let flag = false;

  setTimeout(() => {
    flag = true;
  }, 100);

  expect(flag).toBe(false);
  flushMicrotasks();
  expect(flag).toBe(true); // FAILS
}));

The test would fail because a setTimeout is not a microtask, and flushMicrotasks() will not cause it to execute. Let’s take a look at an example with a promise:

it('should test some asynchronous code', fakeAsync(() => {
  let flag = false;

  Promise.resolve(true).then((result) => {
    flag = true;
  });

  flushMicrotasks();
  expect(flag).toBe(true); // PASSES
}));

This time we switch the flag to true inside of a promise handler. Since the promise handler is a microtask, it will be executed when we call flushMicrotasks and so our test will pass. We could also use tick instead of flushMicrotasks and it would still work. What about two promises?

it('should test some asynchronous code', fakeAsync(() => {
  let flagOne = false;
  let flagTwo = false;

  Promise.resolve(true).then((result) => {
    flagOne = true;
  });

  Promise.resolve(true).then((result) => {
    flagTwo = true;
  });

  flushMicrotasks();
  expect(flagOne).toBe(true); // PASSES
  expect(flagTwo).toBe(true); // PASSES
}));

This test has two flags, and it switches their values inside of two separate promises. We call flushMicrotasks (but we could also call tick) and then expect them both to be true. This test will also work, because flushMicrotasks will clear all of the microtasks that are currently in the queue. What about Observables?

import { from } from "rxjs";

//...snip

it("should test some asynchronous code", fakeAsync(() => {
  let testObservable = from(Promise.resolve(true));
  let flag = false;

  testObservable.subscribe(result => {
    flag = true;
  });

  flushMicrotasks();
  expect(flag).toBe(true); // PASSES
}));

Yep! That will also work. But now let’s take a look at a more complicated example that won’t work:

it("should test some asynchronous code", fakeAsync(() => {
  let flag: any = false;

  let testPromise = new Promise(resolve => {
    setTimeout(() => {
      resolve(true);
    }, 3000);
  });

  testPromise.then(result => {
    flag = result;
  });

  expect(flag).toBe(false); // PASSES
  flushMicrotasks();
  expect(flag).toBe(true); // FAILS
}));

We are switching the value of flag inside of a promise again, so you might think that this would work if we called flushMicrotasks. But, inside of the promise we are triggering a new macrotask and the promise will not resolve until the setTimeout triggers, which happens after 3000ms. In order for this code to work, we would need to use the tick function instead:

it('should test some asynchronous code', fakeAsync(() => {
  let flag: any = false;

  let testPromise = new Promise((resolve) => {
    setTimeout(() => {
      resolve(true);
    }, 3000);
  });

  testPromise.then((result) => {
    flag = result;
  });

  expect(flag).toBe(false); // PASSES
  tick(3000);
  expect(flag).toBe(true); // PASSES
}));

Summary

As complex and confusing as this may seem at first, it is actually quite an elegant way to deal with a complicated problem. By using fakeAsync we can ensure that all of our asynchronous code has run before we make assertions in our tests, and we even have fine tuned control over how we want to advance time throughout the test.
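One more case worth knowing about, not covered above: periodic timers. A tick will run pending setInterval callbacks, but a fakeAsync test must not end with periodic timers still queued — Angular's testing package provides discardPeriodicTasks for that. A small sketch of my own, in the same style as the examples above:

import { fakeAsync, tick, discardPeriodicTasks } from '@angular/core/testing';

it('should test periodic asynchronous code', fakeAsync(() => {
  let count = 0;

  setInterval(() => {
    count++;
  }, 100);

  tick(250); // the interval fires at 100ms and 200ms
  expect(count).toBe(2); // PASSES

  // clear the still-pending interval so the test can end cleanly
  discardPeriodicTasks();
}));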
https://www.joshmorony.com/testing-asynchronous-code-with-fakeasync-in-angular/
Using Redux with React Hooks in a React Native app
Aman Mittal — January 27, 2020

With React Hooks' growing usage, the ability to handle a component's state and side effects is now a common pattern in the functional component. React Redux offers a set of Hook APIs as an alternative to the omnipresent connect() High Order Component.

In this tutorial, let us continue to build a simple React Native app where a user can save their notes, and let us use the Redux Hooks API to manage state. This post is a continuation of the previous post linked here. If you are familiar with the basics of React Hooks and how to implement them with a basic navigation setup, you can skip the previous post and continue from this one.

Table of Contents
- Installing redux
- Adding action types and creators
- Add a reducer
- Configuring a redux store
- Accessing global state
- Dispatching actions
- Running the app
- Conclusion

Installing redux

If you have cloned the repo from the previous example, make sure that the dependencies in the package.json file look like below:

"dependencies": {
  "@react-native-community/masked-view": "0.1.5",
  "expo": "~36.0.0",
  "react": "~16.9.0",
  "react-dom": "~16.9.0",
  "react-native": "",
  "react-native-gesture-handler": "~1.5.0",
  "react-native-paper": "3.4.0",
  "react-native-reanimated": "~1.4.0",
  "react-native-safe-area-context": "0.6.0",
  "react-native-screens": "2.0.0-alpha.12",
  "react-navigation": "4.0.10",
  "react-navigation-stack": "2.0.10",
  "react-redux": "7.1.3",
  "redux": "4.0.5"
}

Next, install the following dependencies from a terminal window to integrate and use Redux to manage the state.

yarn add redux react-redux lodash.remove

The directory structure that I am going to follow to manage Redux related files is going to be based on the pragmatic approach called ducks. Here is the link to a great post on using the ducks pattern in Redux and React apps. That post can help you understand the pattern and why there can be a requirement for it. What the ducks pattern allows you to have are modular reducers in the app itself. You do not have to create different files for actions, types, and action creators. Instead, you can define them all in one modular file; however, if there is a need to create more than one reducer, you can define multiple reducer files.

Adding action types and creators

Actions describe the events that are triggered in the app, for example on a button press, by timers, or by network requests. To begin, inside the src/ directory, create a subdirectory called redux. Inside it, create a new file called notesApp.js. So far, the application has the ability to let the user add notes. In the newly created file, let us begin by defining two action types and their creators. The second action type is going to allow the user to remove an item from the ViewNotes screen.

// Action Types
export const ADD_NOTE = 'ADD_NOTE'
export const DELETE_NOTE = 'DELETE_NOTE'

Next, let us define action creators for each action type. The first one is going to trigger when saving the note. The second creator is going to trigger when deleting the note.

// Action Creators
let noteID = 0

export function addnote(note) {
  return {
    type: ADD_NOTE,
    id: noteID++,
    note
  }
}

export function deletenote(id) {
  return {
    type: DELETE_NOTE,
    payload: id
  }
}

Add a reducer

A reducer is a pure function that takes the current state and an action, and must return the default state when no case matches. The initial state is going to be an empty array. Add the following after you have defined the action creators. Also, make sure to import the remove utility from the lodash.remove npm package (installed at the start of this post) at the top of the notesApp.js file.
// import the dependency
import remove from 'lodash.remove'

// reducer
const initialState = []

function notesReducer(state = initialState, action) {
  switch (action.type) {
    case ADD_NOTE:
      return [
        ...state,
        {
          id: action.id,
          note: action.note
        }
      ]
    case DELETE_NOTE:
      const deletedNewArray = remove(state, obj => {
        return obj.id != action.payload
      })
      return deletedNewArray
    default:
      return state
  }
}

export default notesReducer

Configuring a redux store

A store is an object that brings actions and reducers together. Inside src/redux, create a new file called store.js with the following snippet:

import { createStore } from 'redux'
import notesReducer from './notesApp'

const store = createStore(notesReducer)

export default store

To bind this Redux store in the React Native app, open the entry point file App.js and import the store as well as the High Order Component Provider from the react-redux npm package. This HOC helps you to pass the store down to the rest of the components of the current app.

import React from 'react'
import { Provider as PaperProvider } from 'react-native-paper'
import AppNavigator from './src/navigation'
import { Provider as StoreProvider } from 'react-redux'
import store from './src/redux/store'

// modify the App component
export default function App() {
  return (
    <StoreProvider store={store}>
      <PaperProvider>
        <AppNavigator />
      </PaperProvider>
    </StoreProvider>
  )
}

That's it! The Redux store is now configured and ready to use.

Accessing global state

To access state when managing it with Redux, the useSelector hook is provided. It is similar to the mapStateToProps argument that is passed inside connect(). It allows you to extract data from the Redux store state using a selector function. The major difference between the hook and the argument is that the selector may return any value as a result, not just an object.

Open the ViewNotes.js file and import this hook from react-redux.

// ...after rest of the imports
import { useSelector } from 'react-redux'

Next, instead of storing the notes array using the useState hook, replace it with the following inside the ViewNotes functional component.

const notes = useSelector(state => state)

Dispatching actions

The useDispatch() hook completely refers to the dispatch function from the Redux store. This hook is used only when there is a need to dispatch an action. Import it from react-redux, and also import the action creators addnote and deletenote from the file redux/notesApp.js.

import { useSelector, useDispatch } from 'react-redux'

To dispatch an action, define the following statement after the useSelector hook.

const dispatch = useDispatch()

Next, define two functions, addNote and deleteNote, to dispatch these actions.

const addNote = note => dispatch(addnote(note))
const deleteNote = id => dispatch(deletenote(id))

Since the naming convention is exactly the same for the addNote action as the helper function from the previous post, there is no need to make any changes inside the return statement of the functional component for this. However, the deleteNote action is new. To delete a note from the rendered list, add an onPress prop to the List.Item UI component from react-native-paper. This adds the functionality of deleting an item from the list when the user touches that item. Here is the code snippet of the List.Item component with the changes. Also, make sure to modify the values of the title and description props.
Running the app So far so good. Now, let us run the application. From the terminal window execute the command expo start or yarn start and make sure the Expo client is running on a simulator or a real device. You are going to be welcomed by the following home screen that currently has no notes to display. Here is the complete demo that showcases both adding a note and deleting a note functionality. For you reference, there are no changes made inside the AddNotes.js file and it still uses the useState to manage the component's state. There are quite a few changes made to ViewNotes.js file so here is the complete snippet of code: // ViewNotes.jsimport React from 'react'import { StyleSheet, View, FlatList } from 'react-native'import { Text, FAB, List } from 'react-native-paper'import { useSelector, useDispatch } from 'react-redux'import { addnote, deletenote } from '../redux/notesApp'import Header from '../components/Header'function ViewNotes({ navigation }) {// const [notes, setNotes] = useState([])// const addNote = note => {// note.id = notes.length + 1// setNotes([...notes, note])// }const notes = useSelector(state => state)const dispatch = useDispatch()const addNote = note => dispatch(addnote(note))const deleteNote = id => dispatch(deletenote(id))return (<><Header titleText="Simple Note Taker" /><View style={styles.container}>{notes.length === 0 ? (<View style={styles.titleContainer}><Text style={styles.title}>You do not have any notes</Text></View>) : (<FlatListdata={notes}renderItem={({ item }) => (<List.Itemtitle={item.note.noteTitle}description={item.note.noteValue}descriptionNumberOfLines={1}titleStyle={styles.listTitle}onPress={() => deleteNote(item.id)}/>)}keyExtractor={item => item.id.toString()}/>)}<FABstyle={styles.fab}smallicon="plus"label="Add new note"onPress={() =>navigation.navigate('AddNotes', {addNote})}/></View></>)}const styles = StyleSheet.create({container: {flex: 1,backgroundColor: '#fff',paddingHorizontal: 10,paddingVertical: 20},titleContainer: {alignItems: 'center',justifyContent: 'center',flex: 1},title: {fontSize: 20},fab: {position: 'absolute',margin: 20,right: 0,bottom: 10},listTitle: {fontSize: 20}})export default ViewNotes Conclusion With the addition to hooks such as useSelector and useDispatch not only reduces the need to write plentiful boilerplate code but also gives you the advantage to use functional components. For advanced usage of Hooks with Redux, you can check out the official documentation here. You can find the complete code for this tutorial in the Github repo here. Originally published at Heartbeat.fritz.ai
https://amanhimself.dev/redux-with-react-native-hooks/
UAVCAN CLI control on RPi over SSH

I would like to use the CLI to send a new RPM to the motor. Are there any examples of a command line? Thanks. Andrew.

Example for v0: Example for v1 (please remove the unnecessary functionality and add a publisher for regulated.zubax.actuator.esc.RatiometricSetpoint.0.1 or its RPM alternative):

Thanks. Does it need a Python script? Can't I set it directly from the command line (CLI)? Did I correctly interpret the answer?

I am not aware of any standalone CLI tools that can be used for that. You may be interested in the UAVCAN GUI tool – it's not a CLI application, but it does have the functionality you seem to be looking for.

I do not have a display on the RPi. :). I wanted to check the Myxa controller quickly. I have it connected to the RPi. Python 3 it is, then. I'll look at the candump output and make a CLI command for myself. To work with Myxa over UAVCAN I need to know the DSDL dictionary. Where can I find the entire dictionary for Myxa? Looking through all the DSDL keys in Kucher is not very convenient.

There are no dictionaries in UAVCAN, there are types and namespaces. Zubax Myxa relies only on the standard namespace uavcan. If you have questions about Myxa, it's best to visit the Zubax support forum.

I do not understand the meaning of the CLI here. Can you describe at least one example with the CLI from real life?

For studying UAVCAN, I have two RPis and two SLCAN adapters. I want to connect them. In v1.0 I do not find an RPM type. RatiometricSetpoint is [-512, 511], which is very little. My motor runs at -45000…45000 RPM.

The RPM type for v1 is regulated.zubax.actuator.esc.AngVelSetpoint. The examples I linked above should help you get started.

"Velocity of the motor in rad/s." We do not use rad/s. Why is it in the zubax category? It looks very biased and not abstract. I could place the parameters in my own category. Will you add my company to your git?

I think RPM and … is not for this topic "CLI".
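Since the example links above were not preserved, here is a rough sketch of what a v0 setpoint publisher looks like with the Python uavcan library (the same stack the UAVCAN GUI tool builds on). The port name, node ID, and setpoint value are assumptions to adapt to your setup, and note that RawCommand is a ratiometric setpoint, not RPM:

import uavcan

# Attach to the CAN interface (e.g. an SLCAN adapter on /dev/ttyACM0)
node = uavcan.make_node('/dev/ttyACM0', node_id=10, bitrate=1000000)

def publish_setpoint():
    # Broadcast a ratiometric setpoint for ESC index 0 (cmd range is -8192..8191)
    message = uavcan.equipment.esc.RawCommand(cmd=[4096])
    node.broadcast(message)

# Publish periodically, as ESCs typically time out without fresh setpoints
node.periodic(0.05, publish_setpoint)
node.spin()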
https://forum.uavcan.org/t/uavcan-cli-control-on-rpi-over-ssh/719/10
RLCard: A Toolkit for Reinforcement Learning in Card Games

RLCard is developed by DATA Lab at Texas A&M University.

- Official Website:
- Paper:

Installation

Make sure that you have Python 3.5+ and pip installed. We recommend installing rlcard with pip as follows:

git clone
cd rlcard
pip install -e .

Or you can directly install the package with pip install rlcard

Examples

Please refer to examples/. A short example is below.

import rlcard
from rlcard.agents.random_agent import RandomAgent

env = rlcard.make('blackjack')
env.set_agents([RandomAgent()])
trajectories, payoffs = env.run()

We also recommend the following toy examples.

- Playing with random agents
- Deep-Q learning on Blackjack
- Running multiple processes
- Having fun with pretrained Leduc model
- Leduc Hold'em as single-agent environment
- Training CFR on Leduc Hold'em

Demo

Run examples/leduc_holdem_human.py to play with the pre-trained Leduc Hold'em model:

>> Leduc Hold'em pre-trained model

>> Start a new game!
>> Agent 1 chooses raise

=============== Community Card ===============
┌─────────┐
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
│░░░░░░░░░│
└─────────┘
===============   Your Hand    ===============
┌─────────┐
│J        │
│         │
│         │
│    ♥    │
│         │
│         │
│        J│
└─────────┘
===============     Chips      ===============
Yours:   +
Agent 1: +++
=========== Actions You Can Choose ===========
0: call, 1: raise, 2: fold

>> You choose action (integer):

Documents

Please refer to the Documents for general introductions. API documents are available at our website.

Available Environments

We provide a complexity estimation for the games on several aspects. InfoSet Number: the number of information sets; Avg. InfoSet Size: the average number of states in a single information set; Action Size: the size of the action space. Name: the name that should be passed to rlcard.make to create the game environment.

Evaluation

The performance is measured by winning rates through tournaments. Example outputs are as follows (see the sketch after the citation note below for a minimal tournament loop):

Cite this work
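Returning to the Evaluation section above: a minimal tournament loop, using only the API from the Examples section, might look like the sketch below. The environment name 'leduc-holdem' and the win-rate definition (positive payoff for player 0) are my assumptions:

import rlcard
from rlcard.agents.random_agent import RandomAgent

env = rlcard.make('leduc-holdem')
env.set_agents([RandomAgent(), RandomAgent()])

episodes = 1000
wins = 0
for _ in range(episodes):
    trajectories, payoffs = env.run()
    if payoffs[0] > 0:  # player 0 ended the episode with a positive payoff
        wins += 1

print('Player 0 win rate over {} episodes: {:.2f}'.format(episodes, wins / episodes))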
https://pythonawesome.com/a-toolkit-for-reinforcement-learning-in-card-games/
Auto-conf meeting logs

Contents
- 18 June 2007
- 16 June 2007

18 June 2007

This is a discussion I had with Scott Lewis (slewis2) on org.eclipse.ecf.discovery in general, and its overlap with our org.eclipse.iscovery. Bottom line: it overlaps quite a bit and we should consider implementing its interfaces.

<rcjsuen> Oge could you discuss with Scott?
<onnadi3> Sure thing
<onnadi3> slewis2: I'm trying to understand how ecf.discovery works. Could you please tell me if my understanding is correct?
<slewis2> have you taken a look at test code?
<onnadi3> The one included with jmdns?
<slewis2> no...in plugin org.eclipse.ecf.tests.discovery
<slewis2> in tests module of repo
<onnadi3> I don't think so
<onnadi3> Gimme a minute :-)
<slewis2> ok...we can chat too...proceed
<onnadi3> It seems like a service provider e.g. a Web server communicates its presence by generating an IServiceEvent
<onnadi3> Other bundles that require the Web server, register as IServiceTypeListeners
<onnadi3> Once they get the event, they can extract the IServiceInfo and find out the port and inetaddress of said server
<onnadi3> Is that how ecf.discovery works?
<slewis2> essentially that's right...although the generating of an IServiceEvent is done implicitly via registerService()
<slewis2> RE: receiving the event and IServiceInfo...yes...if the port and inetaddress is what's needed...typically there will be more/other service-specific info...as properties
<onnadi3> what class contains registerService()?
<slewis2> you mean impl?
<slewis2> (or interface)?
<onnadi3> Well, I'm not sure. How are services registered?
<slewis2> the IDiscoveryContainerAdapter has the interface (discovery API)
<slewis2> services are registered via IDiscoveryContainerAdapter.registerService(IServiceInfo serviceInfo)
<slewis2> the jmdns implementation of the IDiscoveryContainerAdapter is in org.eclipse.ecf.provider.discovery plugin, o.e.e.p.discovery.container.JMDNSDiscoveryContainer class
<onnadi3> Sorry, I'm just trying to locate the class
<slewis2> np...which one?
<slewis2> I can find URLs to src if you like
<onnadi3> I'm looking for org.eclipse.ecf.provider.discovery.container.JMDNSDiscoveryContainer
<onnadi3> A URL would be great, thanks
<slewis2> .eclipse.ecf/plugins/org.eclipse.ecf.provider.jmdns/src/org/eclipse/ecf/provider/jmdns/container/?root=Technology_Project
<onnadi3> Ok. So in the case of a hypothetical Web server, what would it register its service to?
<slewis2> It would a) create an appropriate instance of IServiceInfo, then call IDiscoveryContainerAdapter.registerService(svcInfo)
<slewis2> then listeners would/will be asynchronously notified
<slewis2> the conventions on type names are different for different providers (e.g. jmdns has _http._tcp for example) but the API leaves the service type as typeless (String) so other providers can use other conventions
<slewis2> and even zeroconf's is just convention
<onnadi3> Hmm, so if this Web server didn't exist on the local machine, I would write a plugin that polled the server every so often and then registered or unregistered the WebServerServiceInfo as necessary, right?
<slewis2> you could do that...if you wanted to act as a discovery 'proxy' for the web server...but it would be better to just add service registration to the web server
<slewis2> if you can, obviously
<onnadi3> Alright! I think I see it now.
ecf.discovery is kinna like a 'protocol' that servers and clients can use to keep aware of each other <slewis2> it's not a real protocol in the traditional sense, as there's nothing implied about what goes on wire between processes, but it is a high-level API for processes to keep aware of each other...note there's nothing inherently asymmetric ('server' or 'client') about it...any process can use as 'server' or 'client' <slewis2> API implies asynch notification between processes, at level of 'service type' <onnadi3> Excellent. I think this is what I, and rcjsuen, and lemmy are looking for <onnadi3> Thank you so much for patiently explaining everything to me <slewis2> great. Note that although the current impl is based upon jmdns, that we would like other impls done...that use other protocols <slewis2> and if generalizations are needed/necessary for other 'styles' of discovery then we can/will add them <onnadi3> Well, we're working on the discovery of services that exist on the local machine, so I think we'll be able to implement ecf.discovery without using a protocol like SLP, jmDNS etc. <slewis2> also note: there's a IDiscoveryService interface that is intended to be an OSGi service <slewis2> onnadi3: so you are discovering/searching for OSGi services or other manner of service? <onnadi3> Not really. More like, we're discovering binaries in the filesystem or local ports running daemons <slewis2> I see <onnadi3> So am I right in saying that we can implement ecf.discovery without using other protocols, or IDiscoveryService? <slewis2> yeah, sure...you can create your own impl of ecf.discovery that does whatever you want <onnadi3> sweet <slewis2> it may be appropriate to include such a plugin in ECF (as another provider of discovery) if desired/possible * michaelr has joined #eclipse-soc <slewis2> but just use jmdns as an example...and look to implement IContainer and IDiscoveryContainerAdapte r <slewis2> then if you like you can reuse/use the DiscoveryView in org.eclipse.ecf.discovery.ui...and your discovered services should show up there <onnadi3> IContainer? I don't see it in the ecf.discovery source tree <slewis2> It's in the org.eclipse.ecf plugin (core) <slewis2> discovery depends upon it <onnadi3> I see it <slewis2> for ID interfaces, IContainer, ContainerFacto ry and some util classes <slewis2> IContainer.getAdapter(IDiscoveryContainerAdap ter.class) is how you get an IDiscoveryContainerAdapt er instance (or via OSGi service) <onnadi3> We'll probably be using OSGi services, so I guess we won't need to implement the IContainer <slewis2> in jmdns/plugin.xml you will see that it defines a namespace and a container factory...you will probably need to do this too <slewis2> ...or OSGi services...sure <onnadi3> I can't find jmdns/plugin.xml in here: <slewis2> see org.eclipse.ecf.tests.discovery.Activator .start for the access to discovery as OSGi service (note that IDiscoveryService is sub-interface of IDiscoveryContainerAdapter) <slewis2> yeah, it's not in jmdns project...its here... <slewis2>. eclipse.ecf/plugins/org.eclipse.ecf.provider.jmdns/?r oot=Technology_Project <slewis2> we simply use jmdns as a library for jmdns impl of ecf...it has no knowledge of ECF or Eclipse concepts (ID, IContainer, adapters, etc) <onnadi3> OK. I have one last question. I noticed in the IDiscoveryContainerAdapter Javadoc, it says that gets are synchronous and requests are asynchronous. What is the difference between get and request and what does it mean for them to be synch vs. asynch? 
<slewis2> this is a good thing, as it makes it quite easy to take other discovery protocols and 'wrap them' with an ECF-aware bundle <slewis2> requests don't block (and don't have return values) <slewis2> gets do block (how long determined by timeout), and return IServiceInfo (or others) 16 June 2007 Action items: 1. We should all complete the commiter registration process so we can begin moving our code to dev.eclipse.org 2. Look at org.eclipse.ecf.discovery to see if we can reuse their code <onnadi3> Good morning all <lemmy> Lets time today's meeting for one hour? should be enough, shouldn't it? <lemmy> Hi onnadi3 <onnadi3> lemmy: sure thing <lemmy> btw. we could change our meeting time now that i'm back in cest. <lemmy> I'm fine with the current time though, but i'd not mind moving it to a later spot so you have time for a decent breakfast? <onnadi3> Eh, I'm now used to waking up freakishly early :-) <lemmy> ok, so meeting time will remain as is. <onnadi3> Howdy, rcjsuen <rcjsuen> ogeHi <lemmy> 1. soc.eclipse.org as SCM <lemmy> Remy, I guess the process has been started and the foundation will contact me and Ogechi. <rcjsuen> Yes <onnadi3> I've been contacted. I can start filling out the paperwork after the meeting <lemmy> They have contacted you already? that is fast. <onnadi3> Yeah, someone called emo-records@eclipse.org <rcjsuen> More like, that's scary. <rcjsuen> that might be an automated email <rcjsuen> lemmy: did you get anything in your dev.eclipse.or gemail? <rcjsuen> Or did i fill that in wrong <lemmy> remy, nothing so far <rcjsuen> nope, i used dev.eclipse.org at lemmster de <rcjsuen> should be okay <rcjsuen> oh wel <lemmy> lets wait a day for the email to come and take actions then. <rcjsuen> yeah <rcjsuen> weekends and all <rcjsuen> give them a business day or two i guess <onnadi3> Should we move to the next item? <rcjsuen> yes, let's <lemmy> I've changed my Bugzilla id and now I can't log into the wiki anymore. <rcjsuen> i guess we're all looking at. org/index.php/Auto-conf_meeting_agenda ? <rcjsuen> although SCM was last instead of #1 <onnadi3> yup, I'm looking at it <lemmy> 2. Status common-collections <onnadi3> Well, I eventually didn't need the MultiMap class so I did not use commons-collections <lemmy> perfect :) <rcjsuen> well then <lemmy> why isn't it necessary anymore? some design change? <onnadi3> Yeah, kinna. I originally thought that IFinder.find() would return a MultiMap of all the services it was capable of finding <onnadi3> Hence it would need a MultiMap <onnadi3> But then I realized that we wouldn't want *all* the services discovered if only 1 was needed <onnadi3> And so the MultiMap class went out the door <lemmy> 3. dynamic discovery and undiscovery of a service <lemmy> a) during runtime a service might appear and disappear again. Hence we need a way to handle this scenario. <lemmy> I believe this is a rather important scenario which should be covered by our design. <onnadi3> Could we perhaps have a field in ServiceID that denotes when a service is 'volatile'? <onnadi3> In that case, things like services at ports or sockets could be marked volatile and their discovery results wouldn't be cached <lemmy> do you have an example for a non volatile service ? <onnadi3> Well, binaries could be non-volatile <lemmy> caching and volatile are different things. <onnadi3> What I mean is that the results from discovering 'volatile' services won't be cached <onnadi3> But those from non-volatile will <lemmy> i might make sense to cache those results too. 
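A minimal sketch of the registration flow slewis2 describes above. Only ContainerFactory, IContainer, getAdapter(), and registerService() come from the chat; the container type name and the ServiceInfo construction are assumptions.

```java
import org.eclipse.ecf.core.ContainerFactory;
import org.eclipse.ecf.core.IContainer;
import org.eclipse.ecf.discovery.IDiscoveryContainerAdapter;
import org.eclipse.ecf.discovery.IServiceInfo;

public class WebServerPublisher {

    public void publish() throws Exception {
        // Hypothetical container type name; the real name is defined in the
        // jmdns provider's plugin.xml, as slewis2 notes.
        IContainer container =
                ContainerFactory.getDefault().createContainer("ecf.discovery.jmdns");

        // IContainer.getAdapter(IDiscoveryContainerAdapter.class) is the way
        // the chat describes to get the discovery API (or via an OSGi service).
        IDiscoveryContainerAdapter discovery = (IDiscoveryContainerAdapter)
                container.getAdapter(IDiscoveryContainerAdapter.class);

        // Registration implicitly generates the IServiceEvent; listeners for
        // this service type are then notified asynchronously.
        discovery.registerService(createWebServerServiceInfo());
    }

    private IServiceInfo createWebServerServiceInfo() {
        // Hypothetical helper: build an IServiceInfo for an _http._tcp service
        // carrying the server's host, port, and extra properties. The concrete
        // IServiceInfo implementation class is provider-specific and omitted.
        return null;
    }
}
```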
16 June 2007

Action items:
1. We should all complete the committer registration process so we can begin moving our code to dev.eclipse.org
2. Look at org.eclipse.ecf.discovery to see if we can reuse their code

<onnadi3> Good morning all
<lemmy> Lets time today's meeting for one hour? should be enough, shouldn't it?
<lemmy> Hi onnadi3
<onnadi3> lemmy: sure thing
<lemmy> btw. we could change our meeting time now that i'm back in cest.
<lemmy> I'm fine with the current time though, but i'd not mind moving it to a later spot so you have time for a decent breakfast?
<onnadi3> Eh, I'm now used to waking up freakishly early :-)
<lemmy> ok, so meeting time will remain as is.
<onnadi3> Howdy, rcjsuen
<rcjsuen> oge: Hi
<lemmy> 1. soc.eclipse.org as SCM
<lemmy> Remy, I guess the process has been started and the foundation will contact me and Ogechi.
<rcjsuen> Yes
<onnadi3> I've been contacted. I can start filling out the paperwork after the meeting
<lemmy> They have contacted you already? that is fast.
<onnadi3> Yeah, someone called emo-records@eclipse.org
<rcjsuen> More like, that's scary.
<rcjsuen> that might be an automated email
<rcjsuen> lemmy: did you get anything in your dev.eclipse.org email?
<rcjsuen> Or did i fill that in wrong
<lemmy> remy, nothing so far
<rcjsuen> nope, i used dev.eclipse.org at lemmster de
<rcjsuen> should be okay
<rcjsuen> oh wel
<lemmy> lets wait a day for the email to come and take actions then.
<rcjsuen> yeah
<rcjsuen> weekends and all
<rcjsuen> give them a business day or two i guess
<onnadi3> Should we move to the next item?
<rcjsuen> yes, let's
<lemmy> I've changed my Bugzilla id and now I can't log into the wiki anymore.
<rcjsuen> i guess we're all looking at .org/index.php/Auto-conf_meeting_agenda ?
<rcjsuen> although SCM was last instead of #1
<onnadi3> yup, I'm looking at it
<lemmy> 2. Status common-collections
<onnadi3> Well, I eventually didn't need the MultiMap class so I did not use commons-collections
<lemmy> perfect :)
<rcjsuen> well then
<lemmy> why isn't it necessary anymore? some design change?
<onnadi3> Yeah, kinna. I originally thought that IFinder.find() would return a MultiMap of all the services it was capable of finding
<onnadi3> Hence it would need a MultiMap
<onnadi3> But then I realized that we wouldn't want *all* the services discovered if only 1 was needed
<onnadi3> And so the MultiMap class went out the door
<lemmy> 3. dynamic discovery and undiscovery of a service
<lemmy> a) during runtime a service might appear and disappear again. Hence we need a way to handle this scenario.
<lemmy> I believe this is a rather important scenario which should be covered by our design.
<onnadi3> Could we perhaps have a field in ServiceID that denotes when a service is 'volatile'?
<onnadi3> In that case, things like services at ports or sockets could be marked volatile and their discovery results wouldn't be cached
<lemmy> do you have an example for a non volatile service ?
<onnadi3> Well, binaries could be non-volatile
<lemmy> caching and volatile are different things.
<onnadi3> What I mean is that the results from discovering 'volatile' services won't be cached
<onnadi3> But those from non-volatile will
<lemmy> it might make sense to cache those results too.
<lemmy> at least it should be possible with the cache implementation.
<rcjsuen> some kinda switch to enable/disable i guess
<lemmy> and binaries might be volatile as well.
<lemmy> imagine a binary on a removable disk.
<onnadi3> Ah
<lemmy> i suggest to assume all services to be volatile.
<rcjsuen> someone might suddenly uninstall gcc!
<onnadi3> How does Eclipse handle the locations of JREs?
<lemmy> that's an even better scenario.
<rcjsuen> You mean when someone uninstalls a JRE? Hm, I never thought about that.
<lemmy> onnadi3: i'm not sure if jdt checks for existence and handles this case gracefully.
<rcjsuen> I guess you want to look at the implementations of org.eclipse.jdt.launching.IVMInstall.
<lemmy> but you will probably get some sort of error message. either low level or high level.
<onnadi3> Alrighty. I was just thinking that whatever is good enough for JDT is good enough for us :-)
<lemmy> we need to distinguish between two different undiscovery events. one that is actively triggered by some finder (running asynchronously in a Thread) and a simple rediscovery.
<lemmy> we won't have asynchronous Threads for binaries I guess, but we might have them for network services. For example jSLP works exactly like this.
<lemmy> This Thread would fire an undiscovery event.
<rcjsuen> so simple rediscovery you mean as in just invoking find() again?
<lemmy> which brings us to the question if consumers can register for undiscovery events.
<onnadi3> What do you mean by undiscovery?
<lemmy> rcjsuen: exactly
<lemmy> undiscovery == service is gone
<onnadi3> Hmm, I don't think people will/should be able to register for undiscovery events since we're not guaranteeing that they even get discovery events. They have to call Discovery to find if a service is there
<lemmy> For most of the stuff I'm talking about you will find code in org.eclipse.ecf.discovery.
<lemmy> And I suggest that we all have a deeper look into it. It seems as we might be able to reuse/extend this codebase.
<lemmy> Basically it is a framework for discovery of remote services. But I don't see a reason why it couldn't be used for local discovery too.
<lemmy> Btw. Scott Lewis is the project lead for ECF and also a mentor.
<onnadi3> what's his IRC handle?
<rcjsuen> slewis2
<lemmy> Actually he is the one who brought me up to the idea.
<lemmy> It even offers some basic UI to show discovery results.
* rcjsuen coughs.
<onnadi3> cool. So we all look at ecf.discovery and decide whether or not to rearchitect our own Discovery?
<lemmy> ?
<rcjsuen> I mean, that code ain't great
<onnadi3> rcjsuen: :-)
<lemmy> rcjsuen: the ui code?
<rcjsuen> from what i've seen before
<lemmy> rcjsuen: the ui code?
<rcjsuen> yeah, that
<lemmy> OK, I don't really mind if the UI code is scrap or not. As long as the framework behind has potential. :)
<rcjsuen> true
<rcjsuen> Well, I haven't looked at the discovery APIs much.
<rcjsuen> Not my area in ECF
<onnadi3> So we all look at ecf.discovery and decide whether or not to rearchitect our own Discovery?
<lemmy> My vision is to hook into the ECF discovery framework and focus on implementing the finders. This might even answer the question of project housing for discovery.
* zx|work has quit IRC (Read error: 110 (Connection timed out))
<lemmy> onnadi3: Yep, we should definitely do that. Even if we later see, that we cannot use it.
<onnadi3> Hope we can use it. Less work for all of us
<rcjsuen> Quite so.
<lemmy> Do we want to have another meeting in two/three days to discuss our findings?
<onnadi3> Sure
<rcjsuen> Lemme check my Lotus Notes calendar. >_>
<lemmy> Monday 7pm CEST/1pm EDT
<lemmy> ?
<rcjsuen> I might have a team meeting then, dunno what our MBA student wants.
<lemmy> we could do it an hour later or two.
<lemmy> You're free at that time?
<rcjsuen> These meetings take a while
<lemmy> a while as in the whole day?
<rcjsuen> what time frame are you two free in
* zx|work has joined #eclipse-soc
<onnadi3> All week except Tuesday 10 - 11
<lemmy> usually I work until 6pm. I could do it directly from work so we could start an hour earlier.
<rcjsuen> I don't know when he's gonna schedule it
<rcjsuen> is the issue
<lemmy> lets discuss the meeting time Monday once you know the meeting time.
<rcjsuen> lemmy: if you can do 6pm that'd be great cuz then i can just do the lunch thing
<lemmy> should be possible.
<rcjsuen> I won't be in Thursday's SOC meeting btw
<rcjsuen> lemmy: but okay, i'll talk to him on Monday to see when we're having our stat meeting
<lemmy> great, anything else to talk about?
<onnadi3> Yup, I've got two more items
<lemmy> go ahead :)
<onnadi3> > 5. Decide on how to specify service versions.
<onnadi3> and 6. Should we redesign the Discovery UI (not GUI)?
<onnadi3> I guess No. 6 can wait until after we check out ecf.discovery
<lemmy> service versions is an interesting issue. I'm not sure whether ECF provides for this or not.
<rcjsuen> Doesn't look like ECF's Discovery APIs have a version concept in IServiceID or in IServiceInfo.
<rcjsuen> I guess unless it's in its properties.
<lemmy> Maybe we should extend the service version thingy to a broader view of service properties
<onnadi3> lemmy: interesting
<lemmy> SLP allows to define properties per service which can be queried by an ldap style query. (an example filter follows this log)
* wizEpit has quit IRC (Read error: 104 (Connection reset by peer))
<rcjsuen> Well, having properties certainly helps us scale in the event of unforeseen...properties.
<rcjsuen> Makes it more extensible too in a way
<onnadi3> If we allowed arbitrary properties, then it would be up to the service consumer to know how to identify all those features in the binary/port they're looking for
<onnadi3> And so a project like CDT would, in addition to defining a GCCFinder, would also define a GCCIdentifier that, given a GCC binary, could tell what properties it has
<lemmy> i'd expect to have an interface like: discovery gimme service x with property A and property B not property C
<lemmy> GCCIdentifier sounds like a GCCValidator.
<onnadi3> Sure :-) I guess my point is that we'd need a way of extracting the properties from the binary/port/socket
<lemmy> well, this would be the responsibility of the finder.
<rcjsuen> yeah
<onnadi3> that's right
* nickboldt has joined #eclipse-soc
<lemmy> morning nick
<nickboldt> hey there
<rcjsuen> hi nick
<lemmy> rcjsuen/onnadi: If you look into the examples at, you'll see what I mean by a query.
<lemmy> Also how they handle service properties.
<onnadi3> I looked it up on Wikipedia :-)
<rcjsuen> wow, very small
<lemmy> Btw. I'll probably spend some official work hours getting ecf.discovery running with jSLP.
<onnadi3> lemmy: you have or you will?
<lemmy> will hopefully
<lemmy> I mean it makes sense in the project i work in.
<lemmy> and my boss has already unofficially agreed to contribute the impl afterwards.
<lemmy> Anyway, I guess we are done?
<lemmy> I gotta catch a train.
<onnadi3> I guess so
<onnadi3> Bottom line, learn ecf.discovery
<rcjsuen> lemmy: okay, ttyl Markus
<lemmy> bye, have a nice day.
<rcjsuen> bye
<onnadi3> bye, lemms :-)
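For reference, the "ldap style query" lemmy mentions above is SLP's RFC 1960 filter syntax. His "service x with property A and property B but not property C" request would look roughly like the following; the attribute names are made up for illustration:

```
(&(propertyA=valueA)(propertyB=valueB)(!(propertyC=*)))
```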
9 June 2007

We decided to do the following:
- Use the Apache Commons Collections MultiMap implementation. We would need to get rid of java.beans.* since it won't compile with CDC 1.0
- Make SimpleConsumer use the console, not a GUI
- Create an immutable class, DiscoveryResults, that is the return type of IFinder.find()
- Maintain a list of high-level requirements of this project

<rcjsuen> onnadi3: Hi
<rcjsuen> onnadi3: I realized why your SimpleFinder compiles for you but not for me.
<onnadi3> lemmy: Good Morning. Why is that?
<rcjsuen> onnadi3: You are using a compiler compliance level of 5.0 or 6.0
<rcjsuen> The code will not compile below
<rcjsuen> My workspace default is 1.4 so it was using a 1.4 setting.
<rcjsuen> I set the project to 5.0 and it compiled.
<rcjsuen> See for yourself.
<onnadi3> OK. I remember fixing the problem somehow but I'm not sure what I did. I'm pulling up Eclipse now
<lemmy> Hi
<onnadi3> lemmy: Wie geht es Ihnen? :-)
<lemmy> Super
<lemmy> Tomorrow I'll leave India and travel back to Germany. :)
<onnadi3> Nice. You excited?
<lemmy> The previous three months have been a nice experience, but I'm happy to go back.
<onnadi3> Ah. So I guess this is also the end of your crunch time, eh
<rcjsuen> onnadi3: Did you look at your blog's replies?
<lemmy> Yep
<onnadi3> just checking em now
<onnadi3> Oops. Ouch
<onnadi3> Sheesh! Why didn't I notice that??
<onnadi3> oh well :-)
<rcjsuen> The ol' debugger would've done it.
<lemmy> The third comment is the most interesting one. But I'm not sure about the JDK dependencies of common-collections.
<onnadi3> Well, the commons collection is the Apache Jar that I mentioned in my earlier e-mail
<onnadi3> You said we should avoid the licensing issues, right?
<rcjsuen> That was me.
<lemmy> Apache license is compatible. Orbit might even contain it already.
<rcjsuen> I just don't want to pull in a jar if I'm just using one class.
<onnadi3> lemmy: oops. Sorry :-)
<lemmy> What's on the agenda for today?
<lemmy> I'm a little bit in a hurry, cause I still need to pack my bags...
<onnadi3> Well, the only problem I'm having now is figuring out how to query Discovery for the services published to it
<lemmy> It appears as if commons-collection isn't in Orbit yet, but many other commons-* are.
<lemmy> +1 for using commons-collections if foundation 1.0 works for it.
<rcjsuen> yes, I don't see it in Orbit
<rcjsuen> It might be something odd like J2SE-1.2.
<onnadi3> So I'd just download the jar and import from there, right? I don't have to use org.orbit?
<ijuma> i'd guess 1.2 minimum too
<rcjsuen> I think httpclient is j2se-1.2
<ijuma> rcjsuen: why is 1.2 odd?
<lemmy> Create a new plug-in for that JAR. There is even a wizard for such a task.
<rcjsuen> ijuma: I didn't think people cared about j2se-1.2 ;)
<rcjsuen> then again, until i hopped on the eRCP bandwagon, i didn't think anyone cared about anything below j2se-1.4
<onnadi3> lemmy: why do I have to create a new plugin for the Commons jar? i can just include the jar in my current plug-in project, can't I?
<ijuma> rcjsuen: well, because it's where the collections framework was introduced, it's likely that commons-collection would require that much at least
<ijuma> but possibly 1.4 too
<ijuma> i know it's not 1.5
<ijuma> (yet)
<rcjsuen> ah
<lemmy> onnadi3: creating a separate plug-in for a jar has the advantage that other plug-ins can reference the very same jar too.
<onnadi3> lemmy: Ah. That's clever.
<lemmy> And in this specific case, commons-collection might get included in the Orbit project. Discovery would then reference it there.
<onnadi3> So even though I'd include the jar in a plug-in, Discovery will still reference it as a package exported by a plugin, not as a plugin itself, right?
<lemmy> It's a lessons learned thing. Many many plug-ins used to include log4j. In the end this caused some headaches.
<ijuma> rcjsuen: but to answer your question, Spring 2.0 still works with 1.3. 2.1 has just moved to 1.4 in the last development milestone. So yeah, people in the enterprise are also _slow_ ;)
<rcjsuen> yeah
<rcjsuen> I realized a lot of ppl are still on 1.3
<onnadi3> lemmy: So, do you have any ideas for how I can get the services stored in Discovery via a call to a Consumer class?
<lemmy> I don't see the difference in your question. Discovery depends on the collections plug-in. The collection plug-in exports a couple of packages which then can be used by Discovery.
<rcjsuen> I imported commons collection's source to my workspace
<rcjsuen> It won't work on CDC 1.0 because of java.beans.*
<rcjsuen> lemmy: yeah i told him that
<lemmy> OK
<rcjsuen> it's To Be Addressed
<lemmy> rcjsuen: Can you tell which part of collections requires java.beans.*?
<rcjsuen> BeanMap
<rcjsuen> if I delete it it's okay
<lemmy> If that is the only reason, we could modify the JAR and file an enhancement request at apache.org
<rcjsuen> The binary appears to be 571,259 bytes.
<rcjsuen> This will likely not fly in an embedded context.
<lemmy> Size doesn't matter. ;)
<onnadi3> :-)
<lemmy> rcjsuen: Should be easy to optimize size-wise for the embedded use case.
<rcjsuen> well, just saying, Equinox won't like this
<lemmy> In general I prefer to reuse existing code over implementing from scratch.
<lemmy> Fight NIH. :)
<rcjsuen> NIH?
<lemmy> Not Invented Here
<rcjsuen> o
<rcjsuen> well, let's address onnadi3's question
<onnadi3> lemmy: So, do you have any ideas for how I can get the services stored in Discovery via a call to a Consumer class?
<lemmy> Yep
<onnadi3> Yay!
<lemmy> So you're asking how the interface might look like?
<rcjsuen> I think he's asking about how consumer plug-in accesses discovery plug-in.
<onnadi3> Well, no. The interface is simply a menu item that you click and it should display a popup that lists all the found services so far
<onnadi3> rcjsuen: Yup!
<lemmy> onnadi3: Why do you "waste" time implementing a UI?
<onnadi3> It took 0 time. It was what was produced by the Eclipse Wizard
<lemmy> Testing is much better done with JUnit.
<onnadi3> lemmy: Well, I wanted to complete the example
<lemmy> Btw. I'm missing JUnit test cases in general.
<onnadi3> Since I've written a SimpleFinder, I also want a SimpleConsumer
<lemmy> I'd suggest to design the consumer as a JUnit plug-in project.
<onnadi3> The only thing I think I can test right now is the MultiMap class (which will be going soon anyway). The rest of the code is boilerplate which I don't think we agreed on how to test
<lemmy> Where is the implementation for MultiMap?
<onnadi3> Does a JUnit plugin project mean simply a blank project with JUnit testcases in it?
<onnadi3> Sorry, the MultiMap code is not in the repo yet because I couldn't get the test for it to work
<onnadi3> I guess I should put all compiled code in the repo, huh...
<rcjsuen> My boss was just telling us to commit our project's code yesterday ;)
<rcjsuen> I sidestepped by saying he just told us the package name to use today (which was different from what I was using) so I had to refactor and test properly ;p
<onnadi3> :-)
<lemmy> Might be that I'm not used to CVS anymore, but I'm fine with compiling but untested code in the repo.
<lemmy> Simply to make sure the code is in a safe place.
<onnadi3> Alrighty. Sorry about that. I'll have it up today
<rcjsuen> just commit whenever you make a little change
<rcjsuen> will save you time when you screw things up
<onnadi3> Gotcha. And then i can actually discuss it with people
<rcjsuen> I can speak from experience.
<rcjsuen> onnadi3: yes, just create a org.eclipse.discovery.tests plug-in project
<rcjsuen> then put your test cases there
<rcjsuen> maybe some "fake code" if necessary
<lemmy> I'm just picky, but in IFinder it reads "find() heuristically locates...". Why include "heuristically" in the interface at all? Seems more like an implementation detail to me. :)
<onnadi3> Alrighty!
<onnadi3> lemmy: good point on that
<onnadi3> I'd still like to have an examples package where I can show how a consumer would utilize discovery to get services from a Finder
<rcjsuen> yes, working examples are critical
<lemmy> Even though I agree on the necessity for examples, I still think UI is a waste. Somebody wanting to use discovery will not care about the UI part. It will even make the example harder to understand.
<rcjsuen> well, the sample unit test should be exemplary enough in that context
* kreismeister (n=Miranda@p57AB7ABB.dip.t-dialin.net) has joined #eclipse-soc
<onnadi3> lemmy: Well, I kinna thought that a real world Consumer would have a UI
<rcjsuen> yes, but anyone that's going to use your plug-in already knows how to write a UI
<onnadi3> And also, I wasn't sure how to control a plug-in from the console
<lemmy> onnadi3: They probably will, but they want to learn how to use Discovery and not how to wire it up to the UI layer. :)
<lemmy> onnadi3: Run Eclipse/OSGi with "-console". "help" will then show you what you can do.
<rcjsuen> lemmy: for the issue of accessing the services
<onnadi3> Alrighty
<rcjsuen> lemmy: you're more familiar with OSGi services
<lemmy> Which issues? I must have missed them. :o
<onnadi3> lemmy: So, do you have any ideas for how I can get the services stored in Discovery via a call to a Consumer class?
<rcjsuen> that
<lemmy> Does services in this context really mean OSGi services?
<onnadi3> oh no no no
<onnadi3> It means the services that Finders find
<rcjsuen> well then, never mind
<lemmy> Btw. I would let find() return something like a DiscoveryResult rather than a Map. Internally it could still use a Map though, but later you might want to add some methods to it.
<lemmy> ...which would be possible with a Map, but would look funny.
<onnadi3> okey-doke
<onnadi3> (BTW, you can find the MultiMap code I wrote here)
<lemmy> Also it shouldn't be possible for a consumer to modify the DiscoveryResults (Map).
<lemmy> Since you might return a reference to the internal discovery cache.
<onnadi3> Ah. An immutable structure
<rcjsuen> yes, immutable is good
<lemmy> Maps.unmodifiableMap(aMap).
<lemmy> IIRC
<lemmy> But if you use a DiscoveryResult, you don't need to expose the Map at all.
<onnadi3> Alright, I can do that
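A minimal sketch of the immutable DiscoveryResult lemmy proposes here. The class name comes from the chat; the constructor and accessor shapes are assumptions, written against J2SE 1.4 (matching the CDC/Foundation discussion), so no generics:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public final class DiscoveryResult {

    private final Map services; // e.g. service id -> discovered location(s)

    public DiscoveryResult(Map services) {
        // Defensive copy plus unmodifiable view: consumers can never mutate
        // the internal discovery cache. (lemmy half-remembers this as
        // Maps.unmodifiableMap; it is Collections.unmodifiableMap.)
        this.services = Collections.unmodifiableMap(new HashMap(services));
    }

    // Domain-specific methods can be added here later, which is the whole
    // point of returning an object instead of a bare Map.
    public boolean contains(String serviceId) {
        return services.containsKey(serviceId);
    }

    public Object get(String serviceId) {
        return services.get(serviceId);
    }
}
```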
<lemmy> onnadi3: May I remind you of this "must", "should" and "nice to have" list that we agreed upon in last week's meeting? :)
<lemmy> Requirements tracking is somewhat important to me.
<lemmy> Later we might want to create enhancement requests (bugs.eclipse.org) out of it. :)
<onnadi3> Yes, that's a good point. So the DiscoveryResults thing would go in the "must" box, right?
<lemmy> Why not. :)
<lemmy> Though we should also keep track of higher level requirements.
<lemmy> Mostly all the ideas which came up last week.
<onnadi3> OK
<onnadi3> So, something like this: "IFinder.find() must return a class (say, DiscoveryResult) that contains a Map or whatever data structure is used internally."?
<lemmy> MultiMap does not compile for me and I'd move it to .internal. too.
<lemmy> Exactly
<rcjsuen> lemmy: for the requirements page?
<lemmy> rcjsuen: ?
<rcjsuen> onnadi3: is that for the requirements page?
<onnadi3> Yeah, e.g.
<rcjsuen> That'd be a bit too low level in my opinion. You should scratch out the "Map or whatever data structure" bits.
<rcjsuen> Just say it contains the results and leave it at that.
<lemmy> Might be the case that it doesn't need a Map later on anymore.
<onnadi3> rcjsuen: Hmm, but then Map would fit the requirement...
<lemmy> Leave the internal implementation to the code. Only document external interfaces. :)
<rcjsuen> yes, don't get into the implementation details if possible
<onnadi3> OK
<rcjsuen> as long as I can get the results
<rcjsuen> i don't care if you're using a List of Lists or not
<onnadi3> :-)
<lemmy> onnadi3: You can't add domain specific methods to a Map which you can do with a domain specific object. That's my whole point for DiscoveryResult.
<onnadi3> lemmy: I understand
<onnadi3> Now could we please go back to the issue of getting the results from Discovery. It seems that without that final example, I won't have made any real progress
<lemmy> Sure
<lemmy> Offer an EP that returns the DiscoveryResult
<lemmy> Or start with an OSGi Service.
<onnadi3> The consumer offers an EP?
<lemmy> Nope, Discovery does.
<onnadi3> I thought EPs take classes of a certain type. Can they publish, too?
<lemmy> Let me sketch it out for an OSGi service: Discovery registers an IDiscoveryService which offers a method discover(ServiceIdentifier)
<lemmy> A consumer would then obtain the service and call the IDiscoveryService.discover(aSpecificServiceId) method.
<lemmy> Since we don't have ServiceIdentifiers atm, the identifier could simply be a String for the moment.
<lemmy> Basically this String will be somehow mapped to the key in the Map I assume.
<lemmy> Reasonable so far?
<onnadi3> So, in the code I have, the FinderTracker would also register an IDiscoveryService?
<onnadi3> Why does it have to be an interface seeing as there's only 1 Discovery?
<lemmy> An interface is useful because consumer plug-ins might want to include this very interface in their plug-in. If an OSGi service is optional and at runtime not present, the consumer code would break because it might access the interface.
<lemmy> If the interface is included in the consumer plug-in, it will not break with a NoClassDefFound.
<onnadi3> oh!!
<onnadi3> How long have you been coding OSGi?
<lemmy> Not that long. Just for a couple of months.
<onnadi3> Oh man...
<onnadi3> Alright. So Discovery has an extension pt. that accepts IDiscoveryServices
<onnadi3> So when Discovery is given a new extension, it changes a field in the IDiscoveryService to the results of calling discover(serviceID)
<lemmy> I'd suggest to skip the EP for the moment and solely use OSGi services. An EP can be designed later with the knowledge gained while working with OSGi services.
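The OSGi wiring lemmy sketches can be written down roughly as follows. IDiscoveryService and discover(String) are the hypothetical names from the chat (not ECF's interface of the same name), and DiscoveryServiceImpl is an assumed implementation class:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// The interface a consumer plug-in can also embed, avoiding a
// NoClassDefFoundError when the service is absent at runtime.
interface IDiscoveryService {
    Object discover(String serviceId); // a String id "for the moment"
}

// Hypothetical implementation living in the Discovery bundle.
class DiscoveryServiceImpl implements IDiscoveryService {
    public Object discover(String serviceId) {
        return null; // would look the id up in the internal cache/finders
    }
}

// Discovery bundle: publish the service on start.
class DiscoveryActivator implements BundleActivator {
    public void start(BundleContext context) {
        context.registerService(IDiscoveryService.class.getName(),
                new DiscoveryServiceImpl(), null);
    }
    public void stop(BundleContext context) {
        // Registrations are cleaned up automatically when the bundle stops.
    }
}

// Consumer bundle: look the service up and call it. Production code
// would use a ServiceTracker and handle the dynamics properly.
class Consumer {
    Object discoverJdks(BundleContext context) {
        ServiceReference ref =
                context.getServiceReference(IDiscoveryService.class.getName());
        if (ref == null) {
            return null; // Discovery is not installed or not started
        }
        IDiscoveryService discovery = (IDiscoveryService) context.getService(ref);
        return discovery.discover("jdk");
    }
}
```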
<lemmy> I don't get this.
<lemmy> Remy?
<onnadi3> lemmy: what don't you understand?
<rcjsuen> No idea.
<lemmy> I gotta go now. :( Can we continue Monday afternoon?
<onnadi3> Sure. But if you have time, please send me an email about how to get data from Discovery via a separate OSGi bundle
<onnadi3> I still don't understand how to do it
<rcjsuen> Maybe this will be helpful
<rcjsuen> Was good enough for me for the most part.
<lemmy> Talk to you guys on Monday. :)
<rcjsuen> bye
<onnadi3> lemmy: have a safe trip
<lemmy> Thanks

2 June 2007

At this weekly meeting we tried to decide on exactly what the term "service" means, but we decided to save the discussion for a later date. We also decided on a preliminary architecture for the Discovery plug-in: Discovery will be an OSGi bundle that connects with other service finders through the OSGi services API, but connects with service consumers through Eclipse Extension Points.

<onnadi3> thanks
<lemmy> I'm not sure, did I send my proposed meeting agenda out?
<onnadi3> I didn't get yours...I sent one out, though
<lemmy> I thought that one was in response to mine.
<lemmy> it's out
<onnadi3> Could you please send it again? I can't find it in my inbox
<lemmy> I've sent it a minute ago
<onnadi3> got it
<lemmy> it overlaps with your email
<onnadi3> No problem. Yours is bigger so let's go over the technical aspects of yours, get to p. number 2 in mine, and then move on to the organizational issues
<onnadi3> So, first point: "What exactly is a "service" anyway? (Better term...)"
<lemmy> a minute... I'm in a voip call currently.
<onnadi3> Oh...sorry
<lemmy> no problem
<lemmy> remy: you're with us?
* soulreaper has quit (Read error: 104 (Connection reset by peer))
<rcjsuen> yes
<lemmy> great
<lemmy> onnadi3: first point is about deciding if a service needs an identifier. for that we should know what a "service" might be
<lemmy> Suppose plug-in A wants to discover a JDK and calls discovery. plug-in B wants to discover a JDK too, but how would discovery know that it can return the results from the previous run.
<onnadi3> lemmy: Services can be binaries, or ports where a daemon is listening
<lemmy> that sounds pretty much like a technical description of a service.
<lemmy> s/that/this
<onnadi3> Hmmm, if we assume for the moment that plug-in A does not care about version, then the name of the binary can be its identifier
<lemmy> Will this work for a daemon running on a remote host?
<lemmy> You don't have a name and only a simple port number.
<onnadi3> Hmm, that was why I tried using URIs in my sample implementation. That way, we're not restricted to only local binaries or ports
<lemmy> What about URIs and versions?
<onnadi3> That's a bit trickier
<onnadi3> IIRC, GNU Autoconf somehow doesn't bother about version numbers
<onnadi3> It only tests for features
<onnadi3> And that's kind of what I would like to do also since it will simplify matters a lot
<rcjsuen> No, autoconf can check versions
<lemmy> Also if I create a JAVA5 project, I want Eclipse to suggest only >=JAVA5 JDKs
<onnadi3> true true
<lemmy> or foundation 1.0. ;)
<onnadi3> :-) or that
<lemmy> Btw. foundation 1.0 makes it even more complicated because it represents a group of possible jdks.
<rcjsuen> Well, there are no URIs in F1.0 anyway.
<onnadi3> Yeah. But we can still use logical URIs
<onnadi3> But I guess the main problem is...how to detect version?
<lemmy> detecting a version is really specific to a binary.
<lemmy> I'd assume it is something specified by a consumer.
<onnadi3> Do you know how Autoconf does it? I'm rebrowsing their docs right now
<onnadi3> but...how would that help GNU Autoconf find the bin? I'd hope they didn't call <binary> --version
<rcjsuen> Well, not everything has a binary, so that wouldn't make sense.
<onnadi3> Yeah. And i think if you want to add a test for sthg it doesn't know about, you have to write the test yourself
<rcjsuen> Define "doesn't know about"
<onnadi3> Well, let's say configure only knew how to detect JDKs, then if you wanted it to detect GCC, you'd have to write your own test for GCC
<onnadi3> I think
<rcjsuen> hm
<lemmy> configure shouldn't know how to detect jdks. It should know how to discover a specific resource with given attributes.
<lemmy> a consumer passes those attributes to configure to utilize the discovery facilities.
<lemmy> discovery facilities could be a registry finder, a fs finder, a port checker...
<lemmy> the consumer would know how to use these finders to find what he needs.
<lemmy> my point about a service identifier is to have a globally valid identifier for a given service/resource.
<lemmy> however there is a problem. if a second consumer tries to discover a certain resource that has already been identified and discovery would just return already known results, it might miss resources which would have been discovered because of much better heuristics by the second consumer.
<onnadi3> Will it be possible to assign a unique identifier to every program?
<lemmy> Would be great if we would already have such identifiers.
<lemmy> Or at least a schema by which we could generate reproducible identifiers.
<rcjsuen> lemmy: Hm, that's an interesting scenario
<lemmy> rcjsuen: a simple and stupid solution would be a "force" flag to always rediscover even with previous results.
<lemmy> Having identifiers also allows for consumers to not provide discovery rules. They would reuse already existing discovery rules without shipping them.
<lemmy> shipping/including...
<onnadi3> So I guess the main problem now is how to incorporate version information into the "service" name in a standard way
<onnadi3> Most (though not all) programs I know use an increasing numerical scheme for versioning so maybe all we have to do is concat the binary's name with its version
<onnadi3> Then we can tell if one version is greater or less than another
<lemmy> Or have an identifier class with names and versions. ;)
<lemmy> However, implementation doesn't really matter, does it?
<lemmy> yet
<onnadi3> Well...I was trying to make it general enough to also encompass ports (which do not have version information)
<lemmy> The port doesn't, but the service behind it. :)
<onnadi3> Wait. That's not true. There might be different versions of processes running at the same port
<onnadi3> :-)
<lemmy> For my web project I need Tomcat 5 and not 4. :)
<onnadi3> Hmmm, so I guess it all boils down to finding binaries
<onnadi3> Whether hidden behind an fs, or a port
<lemmy> Though we are talking pretty much about semantics of discovery results. Might it be more reasonable to provide a broad collection of finders and the consumer provides the semantics instead?
<rcjsuen> I don't think that's unreasonable.
<onnadi3> OK. So as you said earlier, discovery provides libraries for searching the fs, ports, registry etc. for whatever it is a consumer is looking for
<lemmy> Well, those are just my thoughts. :)
<onnadi3> They're good thoughts. I like the idea
<onnadi3> We just have to solve the little details now
<lemmy> normally the details bite us in the a**
<onnadi3> We got all summer to get used to it :-)
<lemmy> so we will file having service identifiers as should but currently as too hard to implement and move on to the next thing?
<lemmy> like we've done with validation.
<onnadi3> Well, i was thinking of defining service as sthg like "any resource that can be identified by a URI"
<onnadi3> (implementation of URI may vary)
<lemmy> I'm a little bit short on time currently, but maybe we can open a list on the wiki with "must", "should" and "nice to have" to keep track of the features?
<rcjsuen> Yes, probably better to move on. We don't want this meeting to go on forever on this thing
<onnadi3> Alrighty
<onnadi3> Next point, "Extension Points vs. OSGi (declarative?) services"
<onnadi3> Do y'all think it is worth it to write custom tools for dealing with OSGi services?
<lemmy> first of all, we don't need to decide "either/or". We can use both.
<onnadi3> Well, I thought the only place we need them is for connecting customer plugins with Discovery
<onnadi3> And once we're using XPs, there won't be any need for OSGi services
<lemmy> XPs?
<rcjsuen> eXtension Points
<lemmy> What's wrong with EPs? ;)
<lemmy> XP is already used by extreme programming. :)
<lemmy> ...in my head that is.
<rcjsuen> lemmy: same
<lemmy> onnadi3: writing a service doesn't really require tooling (maybe for declarative services it does though).
<onnadi3> Yeah. For declarative services...so you don't think we'll need dynamic discovery of system resources aka Services?
<lemmy> My only point is, that EPs have more dependencies whereas OSGi services are pretty low level.
<onnadi3> What kind of dependencies?
<onnadi3> (I'm not attached to EPs. I just want whatever is simplest)
<lemmy> Using services for low level API might therefore be useful. For high level EPs are a better fit.
<rcjsuen> onnadi3: Well, a lot more people are familiar with extension points anyway.
<lemmy> I'm just aiming at this provisioning angle...
<lemmy> In provisioning you might not have access to org.eclipse.*
<rcjsuen> looks like org.eclipse.equinox.common, runtime_registry_compatibility.jar, org.eclipse.equinox.registry
<rcjsuen> org.eclipse.osgi.services, org.eclipse.osgi
<lemmy> org.eclipse.osgi? That's new to me
<lemmy> Is it the plug-in or package name?
<rcjsuen> well, bundle symbolic name is org.eclipse.osgi
<rcjsuen> but sure there are org.osgi.* stuff and also org.eclipse.osgi.* stuff
<lemmy> I'd suggest to hook the finders to the base discovery via services and offer services and EPs to consumers.
<lemmy> This would certainly require another plug-in on top of base discovery that encapsulates EPs.
<lemmy>
<onnadi3> what do you mean encapsulate EPs? Wouldn't I just provide the Extension Points and then consumers can write extensions for them?
<lemmy> bear with my terrible first grade painting skills. :)
<lemmy> "EP connector" is necessary so that base can be used in a pure OSGi environment.
<rcjsuen> I see your point.
<lemmy> Maybe this is overengineering.
<lemmy> however foundation 1.0 already leads the path in this direction.
<onnadi3> lemmy: Could you please explain what provisioning does? I didn't really understand their Wiki page
<lemmy> I don't think their use cases have been fixed yet.
<lemmy> Simplified, it deals with bringing Eclipse to the customer.
<lemmy> Eclipse in this context doesn't mean the SDK.
<lemmy> But software components (== bundles == plug-ins) in general.
<lemmy> One of the goals of provisioning is to redo the UpdateManager.
<lemmy> And to get rid of "features". ;o
<onnadi3> So it's just a reimplementation of parts of Eclipse??
<rcjsuen> Except something-not-named-features-but-acts-just-like-features will probably remain.
<lemmy> *g*
<onnadi3> Well, if provisioning is important enough then we might as well use OSGi services and EPs
<lemmy> It is up to us how important we rate them. I tend to rate fundamental facilities very high though.
<lemmy> But this isn't carved in stone
<lemmy> onnadi3: I feel like you dislike services? ;)
<lemmy> s/I feel/It feels
<onnadi3> No, I don't. They seem a bit more straightforward than EPs
<onnadi3> You simply call a method to register your bundle. that's it
<onnadi3> But in sum, I'm worried because I don't understand what Provisioning is good for but it sounds like it is worth the effort to use services and EPs
* rcjsuen__ (n=rcjsuen@CPE0080c6eae091-CM001947577b96.cpe.net.cable.rogers.com) has joined #eclipse-soc
<lemmy> We might want to bring Pascal into the discussion.
<rcjsuen__> My connection problems are kicking in.
* benny`work (n=benny@eclipse/developer/Technology/bennywork) has joined #eclipse-soc
<onnadi3> rcjsuen, rcjsuen__: There're two of you :-)
<onnadi3> Anyway, I think we've kinna beaten this topic to death. We should probably go with OSGi and EPs for now, and then if necessary, revisit the issue
<lemmy> I agree, and at first the effort for correcting our decision isn't too high.
<lemmy> Speaking about IRC connection problems, I could offer both of you an account for my IRC proxy. This should solve your firewall problems too, since it runs on 57000
<onnadi3> that would be nice
<lemmy> It keeps a buffer of the last n lines so you don't miss anything while you're disconnected.
<lemmy> onnadi3: OK, I'll set it up then.
<lemmy> Lets stick to the organizational topics, they seem to be easy. :)
<onnadi3> And on that note, we can move to "- Rules Engine - DSL vs. Rule Engines vs. anonymous Classes - JSR94 ()"
<lemmy> onnadi3: what about syndicating your blog on the planet?
<onnadi3> I already posted a bug report about it but I don't think anyone's looked at it yet :-(
<rcjsuen__> onnadi3: Number?
<lemmy> Great, so it should appear soon.
<rcjsuen__> lemmy: did you talk to Gunnar/Chris directly or something?
<lemmy> By meeting minutes I mean that we should try to summarize our meetings and post those to the mailing list/wiki. A simple chat log is way too verbose for outsiders to read.
<lemmy> Chris is currently on the road which might be the reason for the delay.
<lemmy> If it isn't syndicated on Monday, we should ping them directly.
<rcjsuen__> I was talking to Chris last night on Sametime
<rcjsuen__> I think he was in the airport maybe or something
<lemmy> rcjsuen: so you will take care of that issue? :)
<rcjsuen__> lemmy: if onnadi3 gives me the bug #
<lemmy> I don't have Sametime. ;)
<lemmy> rcjsuen: I'm sure this can be arranged.
<rcjsuen__> or i guess technically i don't need the bug # i can just give him the blog link
<onnadi3> I'm searching for it....
<rcjsuen__> but better that way since he can then resolve that bug
<rcjsuen__> lemmy: i will ST him next time i see him and assuming he's not busy
<rcjsuen__> he's a busy guy after all o.O
<lemmy> who isn't these days.
<onnadi3> Uh...do y'all know how to search for only bugs posted to PlanetEclipse?
<lemmy> I have a suggestion as a first milestone due next week: Implement a simple fs finder connected to org.eclipse.discovery via a service. An exemplary consumer utilizes this finder to discover JDKs. :) (a sketch of such a finder follows this log)
<lemmy> Next week meaning 06/09.
<lemmy> Including the CVS cleanups.
<onnadi3> I can do that.
<lemmy> Might make sense to implement the consumer in form of some JUnit tests.
<onnadi3> I'm not sure about the consumer part...since that will require extension points and I don't know when exactly my book will arrive next week
<lemmy> So we get around the Drools discussion this time. ;)
<lemmy> onnadi3: Use a service or just simple method calls for the moment.
<onnadi3> okey-doke
<onnadi3> Could you please give me an example of what the tests would look like?
<benny`work> hi guys
<onnadi3> the junit tests
<lemmy> After all org.eclipse.discovery is supposed to provide EP _and_ services.
<onnadi3> benny`work: hallo
<benny`work> hi onnadi3
<lemmy> JUnit might be tricky since you don't know the fs content where the tests get executed.
<lemmy> Hi Benny
<benny`work> hi markus
<lemmy> You don't really know how many JDKs to expect.
<lemmy> Might be possible to mock the content of the fs somehow.
<onnadi3> I think I can do it as org.eclipse.discovery.examples instead
<lemmy> renaming/moving is never a problem. We have great tooling at our disposal. ;)
<lemmy> for the meeting minutes I suggest to rotate the burden. who wants to take care of this meeting? :o
<onnadi3> rcjsuen: I refiled the bug. It's #190652
<onnadi3> lemmy: I can do it all. Old meeting logs are here
<onnadi3> So should we move on to the DSL vs. Drools or figure out TagSea first?
<lemmy> dunno, maybe we can keep drools for next week.
* _rcjsuen (n=rcjsuen@CPE0080c6eae091-CM001947577b96.cpe.net.cable.rogers.com) has joined #eclipse-soc
<lemmy> Btw. by meeting notes I'm talking about a summary instead of just a chat log.
<onnadi3> oh. ah.
>_rcjsuen< don't you wanna use the proxy I've offered you?
<_rcjsuen> Well, that was painful.
<onnadi3> Three rcjsuens!!!
* rcjsuen has quit (Read error: 110 (Connection timed out))
>_rcjsuen< don't you wanna use the proxy I've offered you?
<onnadi3> Alright. We could rotate the responsibility but I don't mind doing the chat logs and the summaries
<lemmy> Perfect :)
<onnadi3> I'm getting paid big bucks to do it, anyway B-)
<lemmy> TagSEA is just an idea how to make the code easy to understand for Remy and me. Check out the website and decide for yourself if it makes sense after all. I haven't used it yet.
<_rcjsuen> I've looked at TagSEA.
<_rcjsuen> Never used it, but have seen a 5-10 minute demo.
<onnadi3> It seems to me like we should start with Javadoc and move to TagSea if we find deficiencies. I'm not sure what TagSea will really accomplish for us
* onnadi3 has quit (Read error: 104 (Connection reset by peer))
* rcjsuen__ has quit (Read error: 110 (Connection timed out))
* onnadi3 (n=chatzill@mississippi-lnx.cc.gatech.edu) has joined #eclipse-soc
<onnadi3> lemmy: Could you please save your chat log? I think you're the only one with the complete thing now
<lemmy> Sure, I'll send it to you
<onnadi3> danke
<lemmy> bitte
>_rcjsuen< Are you going to use the bouncer? Otherwise I would reuse your account.
* onnadi3_ (n=chatzill@luke.cc.gatech.edu) has joined #eclipse-soc
<onnadi3_> rcjsuen, lemmy: Sorry guys
<onnadi3_> Have you guys made a decision on TagSea?
<lemmy> the decision is yours to make. :)
<onnadi3_> cool. I vote that we use Javadoc and if you guys find my commenting illegible, then perhaps we can move on to TagSea...
<lemmy> fine
* rcjsuen__ (n=rcjsuen@CPE0080c6eae091-CM001947577b96.cpe.net.cable.rogers.com) has joined #eclipse-soc
<onnadi3_> Next up, "- QA/Testing (Unit/FIT)"
<onnadi3_> What does FIT mean?
<lemmy> functional tests
<onnadi3_> howzat different from unit tests?
<lemmy> It is. You might wanna read up on that one. :)
<lemmy> Lets reschedule this one for next week too. My head is empty atm.
<onnadi3_> Alright
<lemmy> I'll go for some R&R soon. Maybe have a nice cocktail too. ;)
<onnadi3_> lemmy: Could you just explain what "- IRC presence (bouncer/proxy)" is so we know whether to shelve it too?
<lemmy> It's this IRC proxy thingy we've already talked about.
<onnadi3_> (Ah...that's why you want to move everything to next week ;) )
<onnadi3_> oh OK
<lemmy> One reason, also because my battery is nearly depleted.
<lemmy> Good night everybody
<onnadi3_> okey-dokey. Could you please send the updated logs
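A sketch of the first-milestone fs finder lemmy proposes above (referenced in the log): walk candidate directories and report JDK/JRE installs by looking for the java launcher. The class name and heuristic are illustrative, not agreed API; written against J2SE 1.4, matching the Foundation discussion:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative milestone sketch: find JDK installs on disk.
public class FsJdkFinder {

    public List find(File root) {
        List found = new ArrayList();
        scan(root, found);
        return found;
    }

    private void scan(File dir, List found) {
        File[] children = dir.listFiles();
        if (children == null) {
            return; // unreadable directory, or not a directory at all
        }
        for (int i = 0; i < children.length; i++) {
            File child = children[i];
            if (child.isDirectory()) {
                // A directory with bin/java (or bin/java.exe on Windows)
                // looks like a JDK/JRE home.
                if (new File(child, "bin/java").isFile()
                        || new File(child, "bin/java.exe").isFile()) {
                    found.add(child);
                } else {
                    scan(child, found);
                }
            }
        }
    }
}
```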
https://wiki.eclipse.org/index.php?title=Auto-conf_meeting_logs&oldid=40317
CC-MAIN-2020-10
en
refinedweb
Date: Sat, 17 Mar 2007 18:38:12 -0500
From: darkness39@yahoo.com
Newsgroups: misc.invest.financial-plan
Subject: Re: Help for a 40 year old....

On Mar 17, 6:52 pm, black wrote:
> OK...I admit that I am not a money wizard!! There I said it.
> I have ideas but can never seem to implement them. Therefore, we have
> a good amount of money in the bank but its not earning us anything.

Mutual Funds for Dummies and Financial Planning for Dummies are both helpful.

My general thoughts are:

- Work out how much you need for expenses, etc. It is normally prudent to keep at least 6 months of living expenses (including mortgage) in a 'cash' account (MM funds, CDs, etc.).
- Make sure you have enough term life insurance and disability insurance to cover you in situations of death or serious illness (including benefits provided by your employer).

Assuming you have a normal retirement profile (ie seeking to retire between 60 and 65, no special health or family issues) and you feel you are making adequate college provisions for children etc., then:

- Subject to that, maximise your 401k contributions, at least to the point of getting any company matching, investing, if possible, in a low-cost index fund. Ideally, a low-cost index fund and a low-cost international index fund (70-75% in the former, 25-30% in the latter). You don't want to make investments in a lump sum; you want to make investments steadily over time, on a month-by-month basis. If there is a range of index funds on offer, what you want is the fund that 1) matches or tracks the widest index possible (ie if possible more than the Standard and Poor's 500 index) and 2) has the lowest possible fees and Total Expense Ratio.
- IRAs are a separate issue; as I am not an American-based investor, I leave that to others to consider.

Remember paying down your mortgage is also an investment, and one which gives a certain rate of return (the interest rate of your mortgage, adjusted for the tax benefits and the tax you pay on investment income).
http://www.info-mortgage-loans.com/usenet/posts/18359-22754.misc.invest.financial-plan.shtml
crawl-002
en
refinedweb
The core idea of REST is resource-centric design: each resource is identified by its own URL. For example, a bookstore might define one URL for the list of books it sells and another for the details of a specific book, such as the book with the ISBN 0321396855. This is in contrast to action-centric applications, which typically have long, cryptic URLs describing actions to perform. Query parameters are used to filter the results. Using the same bookstore example, specifying the subject parameter restricts the book list to books about a specific subject. For example, a request whose subject parameter is set to Eclipse returns a list of books about the Eclipse platform.

Dr. Roy Fielding coined the term REST in his Ph.D. dissertation, where he referred to "hypermedia as the engine of application state." This means that a resource is expected to contain hyperlinks. These hyperlinks are the method by which a transition can take place that changes the resource state or transfers to another resource. While hyperlinks are commonplace in (X)HTML applications meant to be used by humans, they have not typically been used in XML, which is meant to be consumed by machines. To learn more about REST, see the Resources section at the end of this article.

A WSDL description contains all the details of a Web service, including:
- The service's URL.
- The communication mechanisms it understands.
- What operations it can perform.
- The structure of its messages.

Clients can use these details to interact with a service. WSDL is an XML language to formally describe a Web service. Consider a Web service's WSDL description its API contract with clients. The WSDL description specifies the address, allowable communication mechanisms, interface, and message types of a Web service. In short, a WSDL description provides all the information a client needs to use a Web service.

WSDL's usefulness extends beyond its use as an API contract. Being a formal definition, WSDL can be consumed by Web services tools to perform actions, such as:
- Generate client and service stubs in various languages.
- Publish a Web service.
- Dynamically test a Web service.

The majority of Web services tools include support for WSDL 1.1, and support for WSDL 2.0 is growing. The Apache Web services project contains two subprojects that currently support WSDL 2.0. Woden is a Java™-based WSDL 2.0 validating parser. The project also contains an XSL Transformation (XSLT) WSDL 2.0 pretty printer that provides a more human-readable form of a WSDL document. Axis2, also from Apache, is a popular Web service runtime engine capable of generating Java client and server stubs from a WSDL 2.0 document.

Describe a REST Web service with WSDL 2.0

The remainder of this article takes you through the steps needed to create a WSDL 2.0 description for a REST Web service using the following simple example scenario. You run a bookstore, which has a suitably creative URL. You've previously created two REST Web services:
- The book list service retrieves the list of the books that you sell in your store.
- The book details service retrieves the details about a specific book.

The information is returned in XML documents. Take a look at the details about the services.

The book list service is addressed by the store's books URL. This list of books you sell is quite large, as you're an established retailer. So you've provided the following list of query parameters that clients can use to filter the results:
- Author
- Language
- Publisher
- Subject
- Title

For example, a query that filters by subject returns the list of computer books about Eclipse, as shown in Listing 1.

Listing 1. Response from the book list service
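A plausible shape for that response, sketched from the bookList/book structure the schema section below defines. The host and namespace are placeholders; the title and ISBN are the ones this article names:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bookList xmlns="http://www.bookstore.com/xsd/books">
  <!-- One book element per matching book; url links to the book
       details service for that ISBN (placeholder host). -->
  <book title="Eclipse Web Tools Platform"
        url="http://www.bookstore.com/books/0321396855"/>
</bookList>
```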
The URL of the book details service embeds the book's ISBN, where ISBN_NUMBER should be replaced with the ISBN for a specific book. For example, the URL for ISBN 0321396855 returns the details of my book, Eclipse Web Tools Platform, as shown in Listing 2 (see Resources for a link to the book's Web site).

Listing 2. Response from the book details service

In the following sections, you learn how to create a WSDL 2.0 description of the book list service using the details outlined above. The book details service, while useful in the scenario, doesn't have a structurally different WSDL description, so it's not covered in the article. However, the book list and book details WSDL 2.0 descriptions and the sample documents from Listing 1 and Listing 2 are available in the Downloads section at the end of this article.

Before getting to the WSDL description of the book list service, let's take a look at the WSDL 2.0 basics. WSDL 2.0 is an XML language with the core namespace http://www.w3.org/ns/wsdl. The root element of a WSDL 2.0 document is the description element. There are four child elements of description that together encapsulate all of the details about a Web service:
- types
- interface
- binding
- service

A skeleton WSDL 2.0 document is shown in Listing 3.

Listing 3. Skeleton WSDL 2.0 document

The types element contains all of the XML schema element and type definitions that describe the Web service's messages. WSDL 2.0 is open for the use of other type systems but practically is only used with XML schema. The interface element defines the Web service operations, including the specific input, output, and fault messages that are passed, and the order in which they are passed. The binding element defines how a client can communicate with the Web service. In the case of REST Web services, a binding specifies that clients can communicate using HTTP. The service element associates an address for the Web service with a specific interface and binding.

WSDL 2.0 defines two other namespaces of interest with respect to REST Web services:
- The HTTP namespace, which includes the HTTP binding elements.
- The WSDL extensions namespace, which includes definitions of three attributes: two used to associate a hyperlink in an XML document with a Web service description, and the third to describe a Web service operation as safe.

All of the WSDL elements discussed in this section are shown in more detail in the following sections.

Address the book list service

As a reminder, the book list service is addressed by the store's books URL. To address the service, you use the WSDL service element, which requires at least one endpoint child element. The endpoint element's address attribute is used to specify the service's URL, as shown in Listing 4. The endpoint element is also used to associate a binding with the service with the binding attribute. The service element in turn associates an interface with the service with the interface attribute. You create the interface and binding in the following sections, so you can leave these attribute values blank for now.

Listing 4. The book list service definition
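A sketch of what that service definition looks like. The service and endpoint names, the target namespace, and the address are placeholders; the interface and binding references stay blank for now, as described above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<description xmlns="http://www.w3.org/ns/wsdl"
             targetNamespace="http://www.bookstore.com/booklist.wsdl"
             xmlns:tns="http://www.bookstore.com/booklist.wsdl">
  <service name="BookListService" interface="">
    <!-- address carries the service URL; the interface and binding
         references are filled in once those elements exist. -->
    <endpoint name="BookListEndpoint"
              binding=""
              address="http://www.bookstore.com/books/"/>
  </service>
</description>
```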
If an interface is associated with the binding, the binding element can optionally declare a child operation element that mirrors the interface operation element. You need to create a stub operation element and fill in this reference after creating the interface. There are four HTTP communication verbs: - GET - POST - PUT - DELETE The book list service is a read request and, therefore, communicates with HTTP GET. Set the GET verb on the operation element using the HTTP method attribute from the WSDL 2.0 HTTP namespace. To use this attribute, you first need to declare the namespace on the description element. The book list service's HTTP binding declaration can be seen in Listing 5. Update the endpoint element's binding reference to reference the tns:BookListHTTPBinding now that you've declared the binding. Listing 5. The book list binding definition Define the book list service operation So far you've learned how to address and communicate with the book list Web service. Next you specify the book list service operation, which describes what the book list service does. The interface element and its child operation element are used to define a service's operations. In the case of the book list service, you define a single operation, getBookList, that responds to requests with a book list. Next, specify three attributes on the operation element: - pattern: Used to specify the message exchange pattern (MEP) for the operation. The MEP defines the sequence of messages in the operation and their direction. In this case, specify the value http://www.w3.org/ns/wsdl/in-out to indicate that the service receives one input message—the request for the book list—and sends one output message—the book list. To support this MEP, specify input and output child elements of the operation element. These elements are used to reference the XML schema elements that define the message structures, which are created in the next section. - style: Used to specify additional information about an operation. Specify the IRI style value, http://www.w3.org/ns/wsdl/style/iri, which places restrictions on the input element content, such as requiring that it only use XML schema elements. - wsdlx:safe: From the WSDL extensions namespace, this attribute declares that this operation is safe. This type of operation doesn't modify the resource and can therefore be called many times with the same results. To make use of this element, declare the WSDL extensions namespace on the description element. You can find the predefined MEPs, styles, and the safe attribute definition in "WSDL 2.0 Part 2: Adjuncts" (see Resources for a link). You can update the service and binding elements' interface references and the binding operation element's interface operation reference now that you've declared the interface and operation. Listing 6. The book list interface definition Define the book list service operation messages The book list Web service has two messages: an input message and an output message. You need to describe specific message structures so that clients know what message to send to the service and what message to expect from the service. WSDL 2.0 supports multiple type systems for describing the message content, but XML schema is the only one in use. This section doesn't cover the details of XML schema. XML schema is used in many other applications, like WSDL 1.1, and there are many good articles about it. This section highlights how to use XML schema for the book list REST Web service and how to use additional attributes defined by WSDL 2.0 to annotate a schema attribute.
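Before getting into the schema itself, here is a sketch pulling together the interface and HTTP binding described in the preceding sections. This is a hedged reconstruction rather than the article's actual listing; the interface name and the msg prefix (bound to the message schema's namespace) are assumptions, while whttp and wsdlx must be declared on the description element as the text notes:

<interface name="BookListInterface">
  <operation name="getBookList"
      pattern="http://www.w3.org/ns/wsdl/in-out"
      style="http://www.w3.org/ns/wsdl/style/iri"
      wsdlx:safe="true">
    <input element="msg:getBookList"/>
    <output element="msg:bookList"/>
  </operation>
</interface>

<binding name="BookListHTTPBinding" interface="tns:BookListInterface"
    type="http://www.w3.org/ns/wsdl/http">
  <operation ref="tns:getBookList" whttp:method="GET"/>
</binding>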
To create the two messages for the book list REST Web service, you need to create two global elements: - getBookList represents the input message. It contains a sequence of elements, including each of the query parameters you allow on the service, namely author, title, publisher, subject, and language. The content of the getBookList element is restricted to elements only, because you specified the IRI style for the interface operation. - bookList represents the output message. It contains a sequence of book elements. Each book element contains title and url attributes. The title attribute should be self-explanatory. The url attribute is a link to a book details REST Web service, which returns the details for the specific book. Your definition of the url attribute includes two attributes from the WSDL extensions namespace. The attributes wsdlx:interface and wsdlx:binding identify the specific WSDL 2.0 interface and binding for the service. Tools can use this semantic information to automatically discover the service. To make use of these attributes, specify the WSDL extensions namespace on the schema element. Also, include the book details service namespace from its WSDL 2.0 description to reference the interface and binding for the service. The XML schema for the book list service is shown in Listing 7. You can get the book details service description in the Downloads section. Listing 7. XML schema for the book list service To reference the input and output elements declared in the XML schema, you have to import the schema into your WSDL document. To import a schema, you use a schema import element in the types section, as shown in Listing 8. You also need to add references to the getBookList and bookList elements in the interface operation's input and output elements, and add the book list schema's namespace declarations to the description element. Listing 8 shows the complete WSDL 2.0 description of the book list REST Web service. Get the description of this service and the book details REST Web service in the Downloads section. Listing 8. WSDL 2.0 description of the book list REST Web service
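The listing itself ships with the article's downloads. In rough outline, the finished description has the following shape (again a sketch: the bookstore URIs, schema location, and the wsdl-extensions namespace URI here are assumptions; check "WSDL 2.0 Part 2: Adjuncts" in Resources for the normative values):

<description xmlns="http://www.w3.org/ns/wsdl"
    targetNamespace="http://www.bookstore.example/booklist"
    xmlns:tns="http://www.bookstore.example/booklist"
    xmlns:msg="http://www.bookstore.example/booklist/schema"
    xmlns:whttp="http://www.w3.org/ns/wsdl/http"
    xmlns:wsdlx="http://www.w3.org/ns/wsdl-extensions">

  <types>
    <xs:import xmlns:xs="http://www.w3.org/2001/XMLSchema"
        namespace="http://www.bookstore.example/booklist/schema"
        schemaLocation="bookList.xsd"/>
  </types>

  <interface name="BookListInterface">
    <operation name="getBookList"
        pattern="http://www.w3.org/ns/wsdl/in-out"
        style="http://www.w3.org/ns/wsdl/style/iri"
        wsdlx:safe="true">
      <input element="msg:getBookList"/>
      <output element="msg:bookList"/>
    </operation>
  </interface>

  <binding name="BookListHTTPBinding" interface="tns:BookListInterface"
      type="http://www.w3.org/ns/wsdl/http">
    <operation ref="tns:getBookList" whttp:method="GET"/>
  </binding>

  <service name="BookListService" interface="tns:BookListInterface">
    <endpoint name="BookListHTTPEndpoint"
        binding="tns:BookListHTTPBinding"
        address="http://www.bookstore.example/books/"/>
  </service>
</description>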
- The "WSDL 2.0 Part 2: Adjuncts" recommendation provides the specifics of HTTP, SOAP 1.2, and MEP usage with WSDL 2.0. - Read Lawrence Mandel's book Eclipse Web Tools Platform: Developing Java Web Applications . - The SOA and Web services zone on IBM® Apache Woden for a Java-based WSDL 2.0 validating parser. - Download Apache Axis2 for a Java-based Web services run time capable of generating and running Web services based on WSDL 2.0. - Innovate your next development project with IBM trial software, available for download or on DVD. Discuss - Participate in the discussion forum. - Discuss REST and Web services on the Best Practices for SOA and Web Services and Web Site Development with Open Source Software forums. - Get involved in the developerWorks community by participating in developerWorks blogs. Lawrence Mandel, a software developer at the IBM Toronto Lab, is currently working on a reporting solution for IBM Rational. He leads the Apache Woden project, which is developing a WSDL 2.0 validating parser and related tools. Lawrence is coauthor of the book Eclipse Web Tools Platform: Developing Java Web Applications.
http://www.ibm.com/developerworks/webservices/library/ws-restwsdl/
crawl-002
en
refinedweb
Book Excerpt Index Note: The JDBC 2.0 API includes many new features in the java.sql package as well as the new Standard Extension package, javax.sql. This new JDBC API moves Java applications into the world of heavy-duty database computing. New features in the java.sql package include support for SQL3 data types, scrollable result sets, programmatic updates, and batch updates. The new JDBC Standard Extension API, an integral part of Enterprise JavaBeans (EJB) technology, allows you to write distributed transactions that use connection pooling, and it also makes it possible to connect to virtually any tabular data source, including files and spreadsheets. Readers who are new to JDBC might want to refer to the JDBC Basics chapter in the online version of The Java Tutorial Continued. For further training on JDBC, take a look at Chapter 3, which focuses on this API's new features and how to set up distributed application connections. Rowsets may have many different implementations to fill different needs. These implementations fall into two broad categories, rowsets that are connected and those that are disconnected. A disconnected rowset gets a connection to a data source in order to fill itself with data or to propagate changes in data back to the data source, but most of the time it does not have a connection open. It needs to maintain metadata about the columns it contains and information about its internal state. It also needs a facility for making connections, for executing commands, and for reading and writing data to and from the data source. A connected rowset, by contrast, opens a connection and keeps it open for as long as the rowset is in use. Although anyone can implement a rowset, most implementations will probably be provided by vendors offering RowSet classes designed for fairly specific purposes. To make writing an implementation easier, the Java Software division of Sun Microsystems, Inc., plans to provide reference implementations for three different styles of rowsets in the future. The following list of planned implementations gives you an idea of some of the possibilities: - CachedRowSet: a disconnected rowset that caches its rows in memory, making it suitable for passing data to thin clients. - JDBCRowSet: a connected rowset that serves mainly as a thin wrapper around a ResultSet object, making a JDBC driver look like a JavaBeans component. - WebRowSet: a rowset that can read and write itself as an XML document, making it suitable for use over the Web. As the conceptual description of rowsets pointed out, what you can do with a rowset depends on how it has been implemented. It can also depend on which properties have been set. The example rowsets used in this chapter are based on the CachedRowSet implementation, but because they are used for different purposes, one has several properties set whereas the other has none. Among other things, this tutorial will show you which properties to use and when to use them. Getting back to our owner of The Coffee Break chain, he has had one of his developers write an application that lets him project the effects of changing different coffee prices. To create this application, the developer hooked together various JavaBeans components, setting their properties to customize them for his application. The first JavaBeans component, called Projector, was one that the owner bought from an economic forecasting firm. This Bean takes all kinds of factors into account to project future revenues. Given the price and past sales performance of a coffee, it predicts the revenue the coffee is likely to generate and displays the results as a bar chart. The second JavaBeans component is a CachedRowSet.
The owner wants to be able to look at different coffee pricing scenarios using his laptop, so the application is set up such that it creates a rowset that can be copied to the laptop's disc. The owner can later fire up the application on his laptop so that he can make updates to the rowset to test out various pricing strategies. The third Bean is a form for displaying and updating ResultSet objects. The form can be used for displaying and updating our CachedRowSet because CachedRowSet is simply a specialized implementation of ResultSet. The application has a graphical user interface that includes buttons for opening and closing the application. These buttons are themselves JavaBeans components that the programmer assembled to make the GUI for his application. While he is at work, the owner can click on the form's New Data button to get a rowset filled with data. This is the work that requires the rowset to get a connection to the data source, execute its query, get a result set, and populate itself with the result set data. When this work is done, the rowset disconnects itself. The owner can now click on the Close button to save the disconnected rowset to his laptop's disc. At home or on a plane, the owner can open the application on his laptop and click the Open button to copy the rowset from disc and start making updates using the form. The form displays the rowset, and he simply uses arrow keys or tabs to highlight the piece of data he wants to update. He uses the editing component of the form to type in new values, and the Projector Bean shows the effects of the new values in its bar chart. When he gets back to headquarters, the owner can copy his updated rowset to his office computer if he wants to propagate the updates back to the database. The following sections walk through what the application programmer does as part of the implementation. To put this all together, the application programmer will probably use a visual Bean development tool, which means that he will use very little RowSet API directly. Of course, the owner will use the application without writing any RowSet code himself. The upshot of all of this is that generally tools will generate the RowSet code you see in this tutorial. Also, remember that the code shown here is for illustrative purposes only because it uses the CachedRowSet class, for which there is no implementation currently available. Although the JDBC Standard Extension specification gives a preliminary outline of its functionality, some details in its implementation may be different when it is completed. Because a programmer will generally use a Bean visual development tool to create a RowSet object and set its properties, the example code fragments shown here would most likely be executed by a Bean development tool. The main purpose of this section is to show you when and why you would want to set certain properties. The code for creating a CachedRowSet object simply uses the default constructor.

CachedRowSet crset = new CachedRowSet();

Now the programmer can set the CachedRowSet object's properties to suit the owner's needs. The RowSet interface, which the CachedRowSet class implements, contains get/set methods for retrieving and setting properties. The following example uses several properties and explains why they are needed. The owner wants the convenience of being able to make updates by scrolling to the rows he wants to update, so the property for the type needs to be set to scrollable.
It may be that the CachedRowSet class will be implemented so that it is by default TYPE_SCROLL_INSENSITIVE, in which case the programmer would not need to set the rowset's type property. It does no harm to set it, however. The default for the concurrency property is ResultSet.CONCUR_READ_ONLY, so it needs to be set to CONCUR_UPDATABLE for our entrepreneur's use. The following lines of code make the CachedRowSet object crset scrollable and updatable.

crset.setType(ResultSet.TYPE_SCROLL_INSENSITIVE);
crset.setConcurrency(ResultSet.CONCUR_UPDATABLE);

The owner will want to make updates to the table COFFEES, so the programmer sets the rowset's command string with the query SELECT * FROM COFFEES. When the method execute is called, this command will be executed, and the rowset will be populated with the data in the table COFFEES. The owner can then use the rowset to make his updates. In order to execute its command, the rowset will need to make a connection with the database COFFEEBREAK, so the programmer also needs to set the properties required for that. If the DriverManager were being used to make a connection, he would set the properties for a JDBC URL, a user name, and a password. However, he wants to use the preferred means of getting a connection, which is to use a DataSource object, so he will set the properties for the data source name, the owner's user name, and the owner's password. (Making a connection with a DataSource object is covered in the advanced tutorial.) Here is the code a tool would generate to set the command string, the data source name, the user name, and the password properties for the CachedRowSet object crset.

crset.setCommand("SELECT * FROM COFFEES");
crset.setDataSourceName("jdbc/coffeesDB");
crset.setUsername("juanvaldez");
crset.setPassword("espresso");

Note that the String object set for the data source name is the logical name that the system administrator (or someone acting in that capacity) registered with a JNDI naming service as the logical name for the COFFEEBREAK database. A programmer just needs to get the logical name, in this case jdbc/coffeesDB, from the system administrator and use it to set the data source property. When the rowset makes a connection, it will use the information in its properties, so the programmer or tool will not need to do anything except execute the command string, which you will see later. Internally the rowset gives the JNDI naming service the string the programmer set for the data source name property. Because jdbc/coffeesDB was previously bound to a DataSource object representing the database COFFEEBREAK, the naming service will return a DataSource object that the rowset can use to get a connection to COFFEEBREAK. The programmer sets one more property, the transaction isolation level, which determines the transaction isolation level given to the connection that the rowset establishes. The owner does not want to read any data that has not been committed, so the programmer chooses the level TRANSACTION_READ_COMMITTED. The following line of code sets the rowset's property so that "dirty reads" will not be allowed.

crset.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

The other properties are all optional for the owner, so the programmer does not set any others.
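Taken together, the configuration steps above amount to something like the following sketch. (Hedged: CachedRowSet is the planned class this chapter describes, so the shipped API may differ in details, and the helper method is invented for illustration.)

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative helper that bundles the tool-generated configuration steps.
static CachedRowSet configureCoffeesRowSet() throws SQLException {
    CachedRowSet crset = new CachedRowSet();

    // Scrollable and updatable, as the owner requires.
    crset.setType(ResultSet.TYPE_SCROLL_INSENSITIVE);
    crset.setConcurrency(ResultSet.CONCUR_UPDATABLE);

    // What to execute and how to connect (via a JNDI data source name).
    crset.setCommand("SELECT * FROM COFFEES");
    crset.setDataSourceName("jdbc/coffeesDB");
    crset.setUsername("juanvaldez");
    crset.setPassword("espresso");

    // No dirty reads on the connection the rowset opens.
    crset.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

    return crset; // execute(), covered shortly, connects and populates it
}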
For example, he does not need to set a type map because there are no custom mappings in the table COFFEES. If the owner has the programmer change the command string so that it gets data from a table that has user-defined types with custom mappings, then the type map property will need to be set. Being a JavaBeans component, a RowSet object has the ability to participate in event notification. In the application we are considering, the Projector Bean needs to be notified when the rowset is updated, so it needs to be registered with the rowset as a listener. The developer who wrote the Projector Bean will already have implemented the three RowSetListener methods rowChanged, rowSetChanged, and cursorMoved. The following line of code registers projector, the bar chart component, as a listener for crset.

crset.addRowSetListener(projector);

Now that projector is registered as a listener with the rowset, it will be notified every time an event occurs. So far the programmer has created a CachedRowSet object and set its properties. Now all he has to do in order to get a scrollable and updatable rowset is to call the method execute on the rowset. As a result of this call, the rowset connects to its data source, executes the command SELECT * FROM COFFEES, populates itself with the resulting data, and disconnects, all behind the scenes. The invocation that accomplishes all of this is the following single line of code.

crset.execute();

This produces a CachedRowSet object that contains the same data as the ResultSet object generated by the query SELECT * FROM COFFEES. In other words, they both contain the data in the table COFFEES. Scrolling in a rowset is exactly the same as scrolling in a result set. For example, the following code fragment iterates through the entire RowSet object crset, printing the name of every coffee in the table COFFEES.

crset.execute();
while (crset.next()) {
    System.out.println(crset.getString("COF_NAME"));
}

With a non-scrollable rowset or result set, you are limited to iterating through the data once and in a forward direction. With scrolling, you can move the cursor in any direction and can go to a row as many times as you like. If you want a review of how to move the cursor, see the advanced tutorial section "Moving the Cursor in Scrollable Result Sets" on page 109. The owner of The Coffee Break wanted a scrolling rowset so that he could easily make updates to a particular row. The following section illustrates moving the cursor to update a row. Updating a CachedRowSet object is similar to updating a ResultSet object. The updateXXX methods and the methods insertRow and deleteRow are inherited from ResultSet and are used in the same way. For example, the owner has brought up the rowset, which contains the current data in the table COFFEES, on his laptop computer. He wants to change the price for French_Roast_Decaf, which is in the fifth row, so he moves the cursor there. The GUI tool displaying the rowset will generate the following line of code to move the cursor to the fifth row.

crset.absolute(5);

The Projector Bean will be notified that the cursor has moved but will do nothing about it. The owner now moves the cursor to the price, which is the third column, and changes the column's value to 10.49. In response, the GUI tool generates the following update statement.

crset.updateFloat(3, 10.49f);

Next the owner clicks on the ROW DONE button to indicate that he is finished making updates to the current row. This causes the GUI tool to generate the following line of code.
crset.updateRow();

The method rowChanged is called on the Projector Bean to notify it that a row in the rowset has changed. The Projector Bean determines whether the price and/or number of pounds sold has changed and, if so, plugs the most current price and number of pounds sold into its projection calculations. After it arrives at new projected values for revenue from sales of the affected coffee, it updates the bar chart to reflect the new values. Now the owner moves to the previous row, which is the fourth row, changes the price to 9.49 and the sales amount to 500, and clicks the ROW DONE button. Note that the fourth column in the rowset contains the number of pounds sold in the last week. The GUI tool generates the following code.

crset.previous(); // or crset.absolute(4);
crset.updateFloat(3, 9.49f);
crset.updateInt(4, 500);
crset.updateRow();

So far the owner has updated the fourth and fifth rows in the rowset, but he has not updated the values in the database. If this had been a ResultSet object, both the result set and the database would have been updated with the call to the method updateRow. However, because this is a disconnected rowset, the method CachedRowSet.acceptChanges has to be called for the database to be updated. The owner will click the UPDATE DATABASE button if he wants to propagate his changes back to the database. The GUI tool will generate the following line of code.

crset.acceptChanges();

The application is implemented so that the acceptChanges method is not actually invoked until the owner returns to work and copies the updated rowset to his office computer. On the office machine, the rowset can create a connection for writing updated values back to the database. In addition to updating the database with the new values in rows four and five of the rowset, the acceptChanges method will set the values that the rowset keeps as its "original" values. Original values are the values the rowset had just before the current set of updates. Before writing new values to the database, the rowset's writer component works behind the scenes to compare the rowset's original values with those in the database. If no one has changed values in the table, the rowset's original values and the values in the database should be the same. If there is no conflict, that is, the rowset's original values and the database's values match, the writer may choose to write the updated values to the database, depending on how it is implemented. The current values that the writer enters will be used as the original values when a new set of updates is made. For example, the price in the fifth row was 9.99 before it was updated to 10.49. The rowset's original price of 9.99 should match the price for French_Roast_Decaf coffee in the database. If it does, the writer can update the database price to 10.49 and change the rowset's original price to 10.49. The next time the price for French_Roast_Decaf is changed, the writer will compare the original value (10.49) with the current value in the database. In this example scenario you have seen how a rowset can be used to pass a set of rows to a thin client, in this case a laptop computer. You have also seen how a rowset can provide scrolling and updatability, which the JDBC driver used at the owner's office does not support.
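To round out the laptop scenario, here is a sketch of the obligations the Projector Bean takes on as a listener. The class name and method bodies are hypothetical; the three callback methods themselves are the ones javax.sql.RowSetListener defines.

import javax.sql.RowSetEvent;
import javax.sql.RowSetListener;

// Hypothetical listener along the lines of the Projector Bean described above.
public class ProjectorListener implements RowSetListener {

    // Called when a row is inserted, updated, or deleted (e.g., after updateRow).
    public void rowChanged(RowSetEvent event) {
        // Recompute the revenue projection and redraw the bar chart here.
    }

    // Called when the rowset's entire contents change, e.g., after execute().
    public void rowSetChanged(RowSetEvent event) {
        // Rebuild the chart from the new data here.
    }

    // Called whenever the cursor moves; the Projector Bean ignores this event.
    public void cursorMoved(RowSetEvent event) {
    }
}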
The next part of this chapter will show you how a rowset might be used in an EJB application. For this example, we assume that you are familiar with the concepts discussed in "Basic Tutorial" and "Advanced Tutorial," especially the sections on using the JDBC Standard Extension API. This EJB example gives only a high-level explanation of the EJB classes and interfaces used in it; if you want a more thorough explanation, you should see the EJB specification. Let's assume that the owner of The Coffee Break has set up an EJB application to make it easier for his managers to order coffee for their stores. The managers can bring up an order form that has two buttons: one button for viewing a table with the coffees and prices currently available and another button for placing an order. The developer who designed the form used JavaBeans components to create the buttons, layout, and other components of the form. The application developer will use Enterprise JavaBeans components to make the buttons do their work. The EJB component (enterprise Bean) will be deployed in a container provided by an EJB server. The container manages the life cycle of its enterprise Beans and also manages the boundaries of the transactions in which they participate. The EJB implementation also includes a DataSource class that works with an XADataSource class to provide distributed transactions. An EJB application is always a distributed application, an application that distributes its work among different machines. An EJB application uses the three-tier model. The first tier is the client, which is typically a web browser. In our example, the client is a form running on The Coffee Break's intranet. The second tier, or middle tier, is made up of the EJB server and the JDBC driver. The third tier is one or more database servers. The method getCoffees is one of the three methods that is implemented by our enterprise Bean, the class CoffeesBean. Let's look at the implementation of this method, which creates and populates a rowset, and then look at how its invocation and execution are spread out over the three tiers. This method will be explained in more detail later in this chapter.

public RowSet getCoffees() throws SQLException {
    Connection con = null;
    try {
        con = ds.getConnection("managerID", "mgrPassword");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT COF_NAME, PRICE FROM COFFEES");
        CachedRowSet crset = new CachedRowSet();
        crset.populate(rs);
        rs.close();
        stmt.close();
        return crset;
    } finally {
        if (con != null) con.close();
    }
}

Note that a distributed application is not restricted to three tiers: It may have two tiers, a client and server. Also note that a distributed application is different from a distributed transaction. A distributed transaction, often referred to as a global transaction, is a transaction that involves two or more DBMS servers. A global transaction will always occur within the context of a distributed application because by definition it requires at least a client and two servers. As you have just seen, the rowset used in our EJB example is a CachedRowSet object that is created and populated on the middle tier server. This disconnected rowset is then sent to a thin client. All of this is also true of the rowset in the laptop example, but there are some differences between the two rowsets.
The main difference is that the rowset used in the order form for The Coffee Break is not updatable by the client; it is simply a list of coffees and their prices that the manager can look at. Therefore, the rowset does not need its concurrency property set. In fact, the rowset used in the order form does not need any properties at all set. The method getCoffees gets a DataSource object and then uses it to get a connection, so the rowset does not need to perform these tasks. This means that the rowset does not use a data source name, user name, or password and thus does not need the properties for them set. The order form rowset also needs no command property because the getCoffees implementation executes the query to get coffee names and prices. Recall that by contrast, the rowset in the laptop example created a connection, executed its command string, and populated itself with data by having its execute method invoked. The only CachedRowSet method used in the EJB example is populate, which just reads data from the ResultSet object passed to it and inserts the data into the rowset. In the EJB framework, one or more enterprise Beans can be deployed in a container, which manages the Beans. The container controls the life cycle of a Bean, and it also controls the boundaries of distributed transactions. Every enterprise Bean has a transaction attribute to tell the container how it should be managed with regard to distributed transactions. The developer of the enterprise Bean in our example has given the Bean the transaction attribute TX_REQUIRED, which means that the Bean's methods must be executed in the scope of a global transaction. If the component that invokes one of the enterprise Bean's methods is already associated with a global transaction, the enterprise Bean method will be associated with that transaction. If not, the container must start a new distributed transaction and execute the enterprise Bean method in the scope of that transaction. When the method has completed, the container will commit the transaction. The fact that the Bean's container manages the start and end of transactions has implications for the Bean's behavior. First, the Bean should not call the methods commit or rollback. Second, the Bean should not change the default setting for a connection's auto-commit mode. Because the DataSource object is implemented to work with distributed transactions, any connection it produces has its auto-commit mode disabled. This prevents the connection from automatically committing a transaction, which would get in the way of the container's management of the transaction. Thus, the Bean should leave the connection's auto-commit mode disabled in addition to not calling the methods commit or rollback. The EJB component in this example is a stateless SessionBean object, which is the simplest kind of enterprise Bean. Being a session Bean means that it is an extension of the client that creates it, typically reading and updating data in a database on behalf of the client. A session Bean is created when a client begins its session and is closed when the client ends its session. Being stateless means that the Bean does not need to retain any information it might get from a client from one method invocation to the next. Therefore, any Bean instance can be used for any client. For example, the enterprise Bean we will use has three methods.
One creates a CoffeesBean object, another one retrieves a table of coffees and prices, and a third places a manager's order. In general, because our enterprise Bean is a SessionBean object, it is created when a manager opens The Coffee Break order form and is closed when he/she quits it. It is stateless because it does not have to remember coffee prices or what the client ordered. An EJB application has four parts, which are described briefly in the following list. The enterprise Bean developer writes the first three, and anyone, including the enterprise Bean developer, may supply the fourth. The sections following this one show the code for each interface or class. In our example, the remote interface is the interface Coffees, which declares the methods getCoffees and placeOrder. The container generates an implementation of this interface that delegates to the class CoffeesBean. CoffeesBean, supplied by the developer, actually defines what the methods do. It is the third item in this list. Instances of the interface Coffees are EJBObjects. In our example, the home interface is the interface CoffeesHome, which is registered with a JNDI naming service. It declares the method create and creates Coffees objects. The container implements this interface so that the method create delegates its work to the method ejbCreate, which is implemented by the CoffeesBean class. Instances of this class are enterprise Beans. In our example this class is CoffeesBean, which implements the methods ejbCreate, getCoffees, and placeOrder. In our example, the client class is CoffeesClient. This class typically includes GUI components. For our example, if it were fully implemented, the CoffeesClient class would include buttons for invoking the methods getCoffees and placeOrder. It would also include a text editor for typing in the parameters for the method placeOrder. This class could have many different implementations, with or without GUI components, and it could be written by the enterprise Bean developer or anyone else. Now let's look at some sample code for an EJB application. Note that we kept this example very simple in order to concentrate on the basic concepts. The interface Coffees declares the methods that managers of The Coffee Break coffee houses can invoke. In other words, this interface contains the methods that a remote client can invoke. This interface, which extends EJBObject, declares the methods getCoffees and placeOrder. It imports four packages because it uses elements from each one. Both methods can throw a RemoteException as well as an SQLException because they use methods from the package java.rmi, the package for remote method invocation on Java objects. The following code defines the interface Coffees.

import java.rmi.*;
import java.sql.*;
import javax.sql.*;
import javax.ejb.*;

public interface Coffees extends EJBObject {
    public RowSet getCoffees() throws RemoteException, SQLException;
    public void placeOrder(String cofName, int quantity, String MgrId)
        throws RemoteException, SQLException;
}

The home interface CoffeesHome is a factory for Coffees objects. It declares only the single method create, which creates Coffees objects, thus making CoffeesHome the simplest possible form of the home interface. The method create may throw a RemoteException, from the java.rmi package, or a CreateException, from the javax.ejb package.
import java.rmi.*;
import javax.ejb.*;

public interface CoffeesHome extends javax.ejb.EJBHome {
    public Coffees create() throws RemoteException, CreateException;
}

So far you have seen two interfaces with one thing in common: These interfaces contain the methods that will be called by the client class. The two methods in Coffees are called in response to button clicks from a manager. The client calls the CoffeesHome.create method to get a Coffees object it can use for invoking the methods defined on Coffees. The first thing the CoffeesClient class does is to retrieve a CoffeesHome object that has been registered with a JNDI naming service. The CoffeesHome object has been bound to the logical name ejb/Coffees, so when ejb/Coffees is given to the method lookup, it returns a CoffeesHome object. Because the instance of CoffeesHome is returned as an RMI PortableRemoteObject, it has to be narrowed to a CoffeesHome object before being assigned to the variable chome. The method CoffeesHome.create can then be called on chome to create the Coffees object coffees. Once the client has a Coffees object, it can call the methods Coffees.getCoffees and Coffees.placeOrder on it. The methods invoked by a CoffeesClient object are implemented in the class CoffeesBean, which you will see next.

import java.sql.*;
import javax.sql.*;
import javax.naming.*;
import javax.ejb.*;
import javax.rmi.*;

class CoffeesClient {
    public static void main(String[] args) {
        try {
            Context ctx = new InitialContext();
            Object obj = ctx.lookup("ejb/Coffees");
            CoffeesHome chome = (CoffeesHome) PortableRemoteObject.narrow(
                obj, CoffeesHome.class);
            Coffees coffees = chome.create();
            RowSet rset = coffees.getCoffees();
            // display the coffees for sale
            // get user input from GUI
            coffees.placeOrder("Colombian", 3, "12345");
            // repeat until user quits
        } catch (Exception e) {
            System.out.print(e.getClass().getName() + ":");
            System.out.println(e.getMessage());
        }
    }
}

The final part of our EJB component is the class CoffeesBean, which implements the methods that are declared in the interfaces Coffees and CoffeesHome and that are invoked in the class CoffeesClient. Note that it implements the SessionBean interface, but because it is a stateless SessionBean object, the implementations of the methods ejbRemove, ejbPassivate, and ejbActivate are empty. These methods apply to a SessionBean object with conversational state, but not to a stateless SessionBean object such as an instance of CoffeesBean. We will examine the code more closely after you have looked at it.
import java.sql.*;
import javax.sql.*;
import javax.naming.*;
import javax.ejb.*;

public class CoffeesBean implements SessionBean {

    public CoffeesBean() {}

    public void ejbCreate() throws CreateException {
        try {
            ctx = new InitialContext();
            ds = (DataSource)ctx.lookup("jdbc/CoffeesDB");
        } catch (Exception e) {
            throw new CreateException();
        }
    }

    public RowSet getCoffees() throws SQLException {
        Connection con = null;
        try {
            con = ds.getConnection("managerID", "mgrPassword");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(
                "SELECT COF_NAME, PRICE FROM COFFEES");
            CachedRowSet rset = new CachedRowSet();
            rset.populate(rs);
            rs.close();
            stmt.close();
            return rset;
        } finally {
            if (con != null) con.close();
        }
    }

    // The body of placeOrder is reconstructed here from the fragments
    // walked through below; only its closing catch/finally survived intact.
    public void placeOrder(String cofName, int quantity, String mgrId)
            throws SQLException {
        Connection con = null;
        try {
            con = ds.getConnection("managerID", "mgrPassword");
            PreparedStatement pstmt = con.prepareStatement(
                "INSERT INTO ORDERS VALUES (?, ?, ?)");
            pstmt.setString(1, cofName);
            pstmt.setInt(2, quantity);
            pstmt.setString(3, mgrId);
            pstmt.executeUpdate();
            pstmt.close();
        } catch (SQLException e) {
            throw e;
        } finally {
            if (con != null) con.close();
        }
    }

    //
    // Methods inherited from SessionBean
    //
    public void setSessionContext(SessionContext sc) {
        this.sc = sc;
    }
    public void ejbRemove() {}
    public void ejbPassivate() {}
    public void ejbActivate() {}

    private SessionContext sc = null;
    private Context ctx = null;
    private DataSource ds = null;
}

CoffeesBean can be divided into the following steps:

public CoffeesBean() {}

public void ejbCreate() throws CreateException {
    try {
        ctx = new InitialContext();
        ds = (DataSource)ctx.lookup("jdbc/CoffeesDB");
    } catch (Exception e) {
        throw new CreateException();
    }
}

The Context object ctx and the DataSource object ds are private fields originally set to null. This method retrieves an instance of the DataSource class that is associated with the logical name jdbc/CoffeesDB and assigns it to ds. The DataSource object ds can be used to create connections to the database COFFEEBREAK. This work is done once when the Bean is created to avoid doing it over and over each time the methods getCoffees and placeOrder are called.

public RowSet getCoffees() throws SQLException {
    Connection con = null;
    try {
        con = ds.getConnection("managerID", "mgrPassword");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT COF_NAME, PRICE FROM COFFEES");

As the signature indicates, this method returns a RowSet object. It uses the DataSource object that the method ejbCreate obtained from the JNDI naming service to create a connection to the database that ds represents. Supplying a user name and password to the method DataSource.getConnection produces the Connection object con. This is a connection to the database COFFEEBREAK because when the system administrator deployed the DataSource object used to make the connection, she gave it the properties for the COFFEEBREAK database. The code then creates a Statement object and uses it to execute a query. The query produces a ResultSet object that has the name and price for every coffee in the table COFFEES. This is the data that the client has requested.

CachedRowSet rset = new CachedRowSet();
rset.populate(rs);

The preceding code creates the CachedRowSet object rset and populates it with the data that is in rs. This code assumes that the class CachedRowSet has been defined and that it provides the method populate, which reads data from a ResultSet object and inserts it into a RowSet object. Now let's look at the rest of the implementation of the getCoffees method.
        rs.close();
        stmt.close();
        return rset;
    } finally {
        if (con != null) con.close();
    }
}

The method getCoffees returns the newly populated RowSet object once the connection is made and the rowset is successfully filled with data; if anything goes wrong along the way, an SQLException is thrown instead of a value being returned. There are two points to be made about these lines of code. First, the method contains a finally block that assures that even if there is an exception thrown, if the connection is not null, it will be closed and thereby recycled. Because the EJB server and JDBC driver being used implement connection pooling, a valid connection will automatically be put back into the pool of available connections when it is closed. The second point is that the code does not enable the auto-commit mode, nor does it call the methods commit or rollback. The reason is that this enterprise Bean is operating within the scope of a distributed transaction, so the container will commit or roll back all transactions. The closing lines of the placeOrder method follow the same pattern:

    } catch (SQLException e) {
        throw e;
    } finally {
        if (con != null) con.close();
    }
}

The method placeOrder gets values for the three input parameters and then sets them. After the manager clicks the PLACE ORDER button on the order form, he gets three blank spaces into which to type the coffee name, the number of pounds, and his ManagerID. For example, the manager might have typed "Colombian", 50, and "12345" in the blanks on his form. The server would get the following line of code:

coffees.placeOrder("Colombian", 50, "12345");

The placeOrder method would produce the following code:

pstmt.setString(1, "Colombian");
pstmt.setInt(2, 50);
pstmt.setString(3, "12345");

The following update statement would effectively be sent to the DBMS server to be executed.

INSERT INTO ORDERS VALUES ('Colombian', 50, '12345')

This would put a new row into the ORDERS table to record the manager's order. As with the method getCoffees, the placeOrder method has a finally block to make sure that a valid connection is closed even if there is an exception thrown. This means that if the connection is valid, it will be returned to the connection pool to be reused. The rest of the implementation deals with methods inherited from the SessionBean interface. The methods ejbRemove, ejbPassivate, and ejbActivate apply to SessionBean objects with state. Because CoffeesBean is a stateless SessionBean object, the implementations for these methods are empty. Congratulations! You have finished the tutorials on the complete JDBC API. You can create tables, update tables, retrieve and process data from result sets, use prepared statements and stored procedures, and use transactions. You can also use the more advanced functionality, including SQL3 data types, batch updates, programmatic updates, custom mapping, making a connection with a DataSource object, connection pooling, distributed transactions, and rowsets. You have also seen how the JDBC API works with EJB technology and gotten a high-level summary of the four parts of an EJB application. The reference chapters give more examples and more in-depth explanations of the features you have learned to use in these tutorials. Remember to take advantage of the glossary and the index as aids for getting information quickly. Maydene Fisher has extensive experience as a technical writer specializing in the documentation of object-oriented programming languages.
http://java.sun.com/developer/Books/JDBCTutorial/chapter5.html
crawl-002
en
refinedweb
The internal keyword is an access modifier for types and type members: internal types or members are accessible only within files in the same assembly. A common use of internal access is in component-based development because it enables a group of components to cooperate in a private manner without being exposed to the rest of the application code. For example, a framework for building graphical user interfaces could provide Control and Form classes that cooperate by using members with internal access. Since these members are internal, they are not exposed to code that is using the framework. It is an error to reference a type or a member with internal access outside the assembly within which it was defined. An internal virtual method can be overridden in some languages, such as textual Microsoft intermediate language (MSIL) using Ilasm.exe, even though it cannot be overridden by using C#. For more information, see the following sections in the C# Language Specification: 3.5.1 Declared Accessibility 3.5.4 Accessibility Constraints 10.3.5 Access Modifiers 10.3.8.2 Declared Accessibility If I declare two classes in the same assembly but in different namespaces, does the internal modifier still allow me to access internal members of one class from the other?
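The answer is yes: internal accessibility is scoped to the assembly, and namespaces play no part in it. A short sketch (the class and namespace names are made up):

// Both types compiled into the same assembly.
namespace Widgets
{
    public class Control
    {
        // Visible anywhere inside this assembly, regardless of namespace.
        internal string State = "ready";
    }
}

namespace Reports
{
    public class Form
    {
        public string ReadState()
        {
            // Legal: same assembly, different namespace.
            var control = new Widgets.Control();
            return control.State;
        }
    }
}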
http://msdn.microsoft.com/en-us/library/7c5ka91b.aspx
crawl-002
en
refinedweb
“Geneva” is Microsoft’s open platform for user access that helps companies simplify access to applications and other systems with an interoperable claims-based model. “Geneva” includes three components for enabling claims-based access. The “Geneva” platform improves developer efficiency and application security by externalizing identity activities from inside applications to a robust external service. Using the Geneva Framework, an ASP or WCF developer can outsource authentication to a security token server (STS), such as Geneva Server, where options to address evolving security or deployment requirements are readily available. An STS provides applications tokens that include claims, identity data about users useful for authorization, without requiring the application to look up those values. Beta 2 of these components is now available for public evaluation. Begin your evaluation of Microsoft Code Name “Geneva” beta 2 today! The Identity Developer Training Kit offers a comprehensive set of technical content including hands-on labs and references that are designed to help you learn how to use Microsoft's identity products and services. "Geneva" is an open platform that provides simplified user access and single sign-on for on-premises and cloud-based applications in the enterprise, across organizations, and on the Web. Find more information. Find all the latest product information about Microsoft Code Name “Geneva” beta 2. Find a Web forum that addresses your questions on Microsoft Code Name “Geneva” beta 2. Plan your organization's deployment of "Geneva" Server or Windows CardSpace “Geneva” to help make single sign-on (SSO) access or managed Information Card access possible. Learn how to set up and deploy “Geneva” servers, “Geneva” server proxies or client computers running Windows CardSpace “Geneva” in your production environment. The "Geneva" team Connect site has additional tools, samples and documentation, such as the Microsoft Online Services Federation Utility CTP. These guides walk you through setup of a small test lab environment that you can use to evaluate the next generation of Microsoft federated identity technologies, code named "Geneva". Fabrikam Shipping is a semi-realistic sample web application that demonstrates how to implement common tasks and features in web applications. It combines the techniques presented separately in other technology learning material such as the Geneva Framework SDK and the Identity Developer Training Kit. Microsoft Information Technology (Microsoft IT) deployed a Volume Licensing Authentication/Authorization system (VLAS) based on the Microsoft Code Name "Geneva" Framework, a claims-aware application; this paper details the benefits of using the Geneva Framework, including how the Volume Licensing application is architected. Read how Sun and Microsoft are utilizing the SAML federation standard in both the Sun OpenSSO Enterprise federation solution and the forthcoming Microsoft “Geneva” Server federation solution. Learn about the need for standards-based identity federation, and the solutions that improve the interoperability of mixed-technology directory environments. By David Chappell. This whitepaper introduces you to using S.DS.AD, in the .NET Framework 2.0, to perform Active Directory and AD LDS (formerly ADAM) management tasks. S.DS.P, in the .NET Framework 2.0, provides raw LDAP access, meaning that it is designed specifically to reach beyond Active Directory and AD LDS (formerly ADAM) to other LDAP compliant directories.
Therefore, if you plan to use .NET managed code against other LDAP directories, a great place to focus is on S.DS.P. A code sample accompanies the paper. System.DirectoryServices.AccountManagement is a namespace in the Microsoft .NET Framework 3.5 that provides uniform access and manipulation of security principals across multiple principal stores. S.DS.AM manages directory objects independent of the System.DirectoryServices namespace. In this MSDN Magazine article learn how to use the new System.DirectoryServices.AccountManagement namespace designed specifically for managing security principals. A code sample accompanies the article. Learn how to set up AD FS in a test lab environment. This guide walks you through set-up of a claims-aware application and a Windows NT token–based application on an AD FS-enabled Web server. Visit the SDK to learn about the AD FS API namespaces. Learn how to design and implement authentication and authorization in WCF through end-to-end application scenarios. Improve the security of your WCF services through prescriptive guidance including guidelines, Q&A, practices at a glance, and step-by-step how to guides. Windows CardSpace is the Microsoft .NET Framework 3.0 component that provides the consistent user experience required by the identity metasystem.
http://msdn.microsoft.com/en-us/security/aa570351.aspx
crawl-002
en
refinedweb
Represents a reader that provides fast, non-cached, forward-only access to XML data.

<PermissionSetAttribute(SecurityAction.InheritanceDemand, Name := "FullTrust")> _
Public Class XmlTextReader _
    Inherits XmlReader _
    Implements IXmlLineInfo, IXmlNamespaceResolver

Dim instance As XmlTextReader

[PermissionSetAttribute(SecurityAction.InheritanceDemand, Name = "FullTrust")]
public class XmlTextReader : XmlReader, IXmlLineInfo, IXmlNamespaceResolver

[PermissionSetAttribute(SecurityAction::InheritanceDemand, Name = L"FullTrust")]
public ref class XmlTextReader : public XmlReader, IXmlLineInfo, IXmlNamespaceResolver

public class XmlTextReader extends XmlReader implements IXmlLineInfo, IXmlNamespaceResolver

In the .NET Framework version 2.0 release, the recommended practice is to create XmlReader instances using the XmlReader.Create method. This allows you to take full advantage of the new features introduced in this release. For more information, see Creating XML Readers. XmlTextReader has the following characteristics: - It enforces the rules of well-formed XML; XmlTextReader does not provide data validation. - It checks that DocumentType nodes are well-formed; XmlTextReader checks the DTD for well-formedness, but does not validate using the DTD. - For nodes where NodeType is XmlNodeType.EntityReference, a single empty EntityReference node is returned (that is, the Value property is String.Empty). This class has an inheritance demand. Full trust is required to inherit from XmlTextReader. See Inheritance Demands for more information.
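A minimal usage sketch (the file name is a placeholder; for new code, the XmlReader.Create factory mentioned above is the recommended entry point):

using System;
using System.Xml;

class Demo
{
    static void Main()
    {
        // Forward-only, non-cached pull parsing over a local file.
        using (XmlTextReader reader = new XmlTextReader("books.xml"))
        {
            while (reader.Read())
            {
                // Print the name of every element encountered.
                if (reader.NodeType == XmlNodeType.Element)
                    Console.WriteLine(reader.Name);
            }
        }
    }
}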
http://msdn.microsoft.com/en-us/system.xml.xmltextreader.aspx
crawl-002
en
refinedweb
Kumar Vora (dev_info@adobe.com), Vice President of Product Management, Adobe 13 Oct 2004 This article describes a joint solution by Adobe® and IBM® that enables enterprises to communicate more effectively, automate document-based processes, access systems offline, share information securely, and exchange data and information with people and systems. Optimized for IBM WebSphere®, the Adobe Intelligent Document Platform creates Intelligent Documents that can be used throughout the IBM middleware application stack to extend business processes, provide better interaction between systems, and improve communications with partners and customers. Introduction The Adobe Intelligent Document Platform is built on the Java™ 2 Platform, Enterprise Edition (J2EE™) architecture. Adobe and IBM have teamed to optimize this platform on IBM WebSphere. An important advantage of combining XML and PDF is the ability to map different data formats from different applications to one unified form. For example, if a transaction involves three different systems for credit checks, order fulfillment, and order shipping, it is impractical to make users fill out three separate forms just to accommodate the system architecture. The Intelligent Document Platform makes it possible to integrate information from different form fields into different XML documents for back-end systems. You can also create different forms using the same data format. This makes it convenient for enterprises to present information differently, or in different forms, or in a personalized form that shows only the information relevant to the user. Enterprise architects no longer need to break apart an official XML schema into special definitions just to deploy different forms. The PDF and XML architectures cleanly separate the form template and form data, as shown in Figure 1. They combine the separate template and data at run time to dynamically build the form representation with which the user interacts. You can save Intelligent Documents in PDF or in an XML Data Package (XDP) to be processed as XML. An XDP file is simply an XML file that packages a PDF file in XML, along with XML form and template data. Since an XDP file is an XML file, standard XML tools, system interfaces, and Web services can work with it, making the XML data directly accessible. The Adobe XML architecture also supports user data in an arbitrary XML format. For example, if your organization uses a standard schema for purchase orders, account information, or project schedules, your PDF forms can directly use XML data structured according to this schema. Figure 2 shows the advantage of this approach. For example, once you have a schema for a purchase order, interface designers can create form layouts for human interactions while application developers can use the same schema to build Web services interfaces for back-end system integration. For example, users can download the form, fill it out, and submit it via e-mail, or alternatively, click a Submit button on the form to send the data to a Web services interface via a Simple Object Access Protocol (SOAP) message. For an organization that has implemented enterprise resource planning (ERP), the application can directly connect to the service interface, generate the appropriate XML data, and invoke the service. The end result is the same: the order processing system gets a completed XML document that conforms to the desired schema.
When you need to accommodate human interaction in an automated process, this approach reduces the cost of requirements analysis, interface development, and back-end integration. People can participate in processes while they are offline, because XML data travels within the PDF document. PDF files include a variety of security options - from access restrictions to electronic signatures - that help protect sensitive company information. In addition, the XML data can be locked by the PDF container to create documents of record and to comply with standards for electronic document archiving (as recently mandated by the U.S. National Archives and Records Administration). The association of XML data to its display format is critical to maintaining an audit trail, fighting fraud, and protecting the abuse of raw data stored in databases, which is especially critical in highly regulated industries. Components of the Intelligent Document Solution The quality of information in core business systems improves when the systems receive data directly from customers via Intelligent Documents. Manual workarounds are reduced or eliminated because Intelligent Documents are connected to back-office applications. The ability to embed business logic into documents empowers enterprises to control security at the document level. As a result, an Intelligent Document can be the record of a transaction from the time it is initiated until it is archived. Enterprises can deploy the Adobe LiveCycle™ line of J2EE-based enterprise servers and design tools to run the Adobe Intelligent Document Platform as part of its J2EE architecture. The LiveCycle products used in the joint Adobe and IBM solution include Adobe LiveCycle Designer, Adobe LiveCycle Forms, and Adobe LiveCycle Reader Extensions, all of which appear in the walkthrough later in this article. The Adobe Intelligent Document Platform provides Adobe Document Services - the underlying technologies that offer the ability to manage the creation and integration of intelligent documents. Adobe Document Services manage the complete document lifecycle, unlocking the power and extending the reach of data - by leveraging native XML and PDF to integrate documents into the IT environment - without sacrificing document integrity or open standards. Adobe Document Services includes several services (see Figure 3). The IBM WebSphere software platform provides the ideal foundation for the Adobe Intelligent Document Platform. WebSphere provides the infrastructure to connect people, core systems, and the information in those systems. Adobe Intelligent Documents enable more people to interact with business information and processes. Extending the Intelligent Document Solution to J2EE and WebSphere The joint solution provided by Adobe and IBM includes Adobe Document Services, delivered by Adobe LiveCycle modules optimized for the IBM WebSphere software platform and WebSphere Application Server, and easy-to-use tools and resources to help WebSphere users develop and deploy forms-based processes as an integrated part of their Web application and business integration infrastructures. Compatible with J2EE and WebSphere Studio Application Developer, the solution offers a single development environment - versus multiple islands of tools - to lower development costs and increase developer productivity. The optimizations make it possible for enterprises to maximize the performance, features, and availability of Adobe Document Services by using the highly reliable and scalable WebSphere Application Server to host them.
IBM and Adobe have integrated Adobe Document Services with IBM DB2® Content Manager, IBM WebSphere Business Integrator, IBM WebSphere Portal, and IBM middleware (see Figure 4) to provide an intelligent and integrated form development, deployment, and process management solution. The IBM and Adobe solution offers a single, unified system to manage all types of business content and documents - including forms - and related business processes. With process-level integration, enterprises can exchange data across applications to fulfill a business process. Adobe Document Services running on WebSphere enable the enterprise to use Intelligent Documents with business processes to integrate a variety of disparate business systems so that information in forms can be efficiently shared and reused.

WebSphere offers integrated support for key Web services open standards, such as SOAP, Universal Description, Discovery and Integration (UDDI), and Web Services Description Language (WSDL), making it a leading production-ready Web application server for the deployment of enterprise Web services solutions. Adobe Document Services supports these open standards and the powerful interoperability between Web services and J2EE applications offered by WebSphere, which can enable key solution offerings for collaboration, B2B, portal serving, content management, commerce, and pervasive computing.

WebSphere also offers superior connectivity with the J2EE Connector Architecture (JCA). Enterprises can use this connectivity with Adobe Document Services to integrate Intelligent Documents with SAP, PeopleSoft, Oracle ERP Financials, J.D. Edwards, IBM CICS®, IBM IMS™, and IBM Host On-Demand applications, through a corresponding set of IBM adapters.

In addition to optimizing Document Services for WebSphere, Adobe and IBM have also integrated Adobe LiveCycle Designer with WebSphere Studio, providing a single, familiar development environment for integrating documents and mission-critical business processes. WebSphere Studio provides development tools for building and deploying specialized application environments (such as portals) and custom Web applications that include the ability to allow users to access, fill out, and submit forms over the Web. WebSphere Studio accelerates J2EE development with templates, wizards, and a comprehensive visual XML development environment for building document type definitions (DTDs), XML schemas, XML, and Extensible Stylesheet Language (XSL) files. It also supports integration of relational data and XML.

Two Steps to Integrate Documents with Processes

Adobe Document Services can expedite the delivery and use of Intelligent Documents in traditional forms-based processes such as loan applications for banking customers, or claims processing for insurance companies.

Designing Forms to Use as Intelligent Documents

Enterprises use Adobe LiveCycle Designer to design and convert forms into PDF-based Intelligent Documents that can be integrated with business processes using WebSphere Studio. Organizations that custom-design application front ends no longer need to perform costly Java development to design forms; instead, they can focus efforts on higher-value tasks. You can use LiveCycle Designer to combine an XML schema and the Adobe PDF presentation layer into an XML Data Package (XDP), which creates intelligent, actionable forms. Adobe provides a plug-in for WebSphere Studio to enable recognition of Adobe forms as valid components within a Java development project.
Java developers can automatically launch LiveCycle Designer to edit forms in context, as shown in Figure 5. Developers can include URL or JSP references to forms in custom Web applications, using the form as a medium for people to enter information and interact with the application. Form designers can also use Adobe LiveCycle Reader Extensions to add tokens to PDF forms that, when downloaded over the Web, automatically turn on advanced features within Adobe Reader, such as digital signatures and offline filling. These features are assigned to that particular document only.

Once the template design is complete and all visual and interactive elements have been added, the Intelligent Document is ready for deployment as either a downloadable PDF file that can be filled in offline with Adobe Reader, or an HTML page readable in any browser.

Generating Intelligent Documents Integrated with Processes

After creating intelligent forms with Adobe LiveCycle Designer, an enterprise can easily deploy those forms to custom Web applications and portal environments using IBM WebSphere Studio Application Developer. Adobe technologies run as services on the application server. Depending on the steps required to process a request, Adobe LiveCycle Forms uses different services. The services provide the functionality for client-side and server-side execution of documents that are rendered into PDF or HTML. Using the configuration and administration tools, administrators and developers can configure and administer the services.

When end users request a document (or click a button or image on a form), the request initiates a series of specific processes and interactions among the Web application, Adobe LiveCycle Forms, and the browser. After receiving the document, end users can interact with it online. When end users are finished with the document, they submit the information captured.

At run time, Adobe Document Services, running on WebSphere Application Server, merge multiple file formats and XML data with templates to generate Intelligent Documents. For example, the Java code in Figure 6 (RenderForm.java) renders a PDF form using XML data.

import com.adobe.formServer.ejb.*;
import com.adobe.formServer.interfaces.IOutputContext;
import java.io.*;
import java.util.*;
import javax.naming.*;
import javax.rmi.*;

public class RenderForm {
    public static void main( String[] args ) throws Exception {
        // Connect to the Form Server EJB on WebSphere via JNDI.
        Hashtable jndiEnvironment = new Hashtable();
        jndiEnvironment.put( Context.INITIAL_CONTEXT_FACTORY,
            "com.sun.jndi.cosnaming.CNCtxFactory" );
        jndiEnvironment.put( Context.PROVIDER_URL, "iiop://localhost:2809" );
        Context ctx = new InitialContext( jndiEnvironment );

        IFormServerHome fsHome = (IFormServerHome) PortableRemoteObject.narrow(
            ctx.lookup(
                "cell/nodes/og-p650/servers/server1/ejb/com/adobe/formServer/FormServer" ),
            IFormServerHome.class );
        IFormServer fs = fsHome.create();

        // Render the XDP template as a PDF form, merging in the XML data island.
        IOutputContext oc = fs.renderForm(
            new File( "Form.xdp" ).toURI().getPath(),
            "PDFForm",
            ( "<DataIsland>" +
              "<NumberField>50</NumberField>" +
              "<StringField>Fifty</StringField>" +
              "</DataIsland>" ).getBytes(),
            "",   // options unused
            "",   // user agent unused
            "",   // application root unused
            "",   // target URL unused
            "" ); // base URL unused

        // Write the rendered form to the output stream.
        System.out.write( oc.getOutputContent() );
        System.out.close();
    }
}

Enterprises can choose different ways to deploy Intelligent Documents for filling out forms and interacting with processes.
Developers can use forms created in LiveCycle Designer with Web services developed in WebSphere Studio to link form-filling activities with back-end transactions (such as data lookups). Portal applications developed with WebSphere Portal, and custom applications built with WebSphere Studio, can link to forms that are either Web pages filled in online or downloadable PDF files that can be filled in offline with Adobe Reader.

By combining system-level security capabilities in WebSphere with document-level security capabilities from Adobe, organizations can extend the protection of confidential or sensitive information outside the protected network. The solution extends an enterprise's internal authentication and authorization model to provide additional document-level security for information collected from or delivered to users over the Internet.

Conclusion

With the availability of the Adobe Intelligent Document Platform optimized for IBM WebSphere, documents are integrated with the very IT infrastructure and enterprise applications that streamline and automate business processes. This common technology platform allows organizations to achieve improved efficiency, better communications with constituents, and more responsiveness. Integration between components of the Adobe Intelligent Document Platform and WebSphere helps IBM customers extend the value of their IBM middleware investments.

About the author

Kumar Vora serves as Adobe's vice president of the server technologies organization, which focuses on the company's server and Internet businesses. He provides strategic leadership and oversees Adobe's Internet and OEM printing solutions, Adobe Studio, Adobe server products, and channel partner relationships. Vora came to Adobe from Oblix, an eBusiness security infrastructure company he co-founded in 1996, where he served as vice president of technology. At Oblix, Kumar was responsible for all aspects of product development, from the design to the delivery of scalable, multi-platform, enterprise-class server products. Prior to Oblix, Kumar held software engineering and management positions at Hewlett-Packard Company, Apple Computer, and Silicon Graphics Computers, Inc. for a variety of networking and Internet-related products. He earned a bachelor's degree in computer science and engineering from the Indian Institute of Technology, Bombay, and a master's degree in computer science from the University of Wisconsin.
http://www.ibm.com/developerworks/websphere/techjournal/0410_vora/0410_vora.html
crawl-002
en
refinedweb
User:Gentgeen

From Wikibooks, the open-content textbooks collection

John's a major contributor to the Cookbook, but don't be surprised to see him at the Chemistry or Biology shelves, either. I've got an open feature request to include Cookbook namespace modules in the count of total modules in the project. If you'd like to support it, please go and vote for it.

He also messes around at Wikipedia as w:User:Gentgeen (where he's also an admin), and just started using Wikimedia Commons as commons:User:Gentgeen.

- My style sheet and JavaScript
- My personal sandbox
- Some images that I like
http://en.wikibooks.org/wiki/User:Gentgeen
crawl-002
en
refinedweb
Visual Basic (declaration):

Public Property KeyPreview As Boolean

Visual Basic (usage):

Dim instance As Form
Dim value As Boolean

value = instance.KeyPreview
instance.KeyPreview = value

C#:

public bool KeyPreview { get; set; }

C++:

public:
property bool KeyPreview {
    bool get ();
    void set (bool value);
}

J#:

/** @property */
public boolean get_KeyPreview ()

/** @property */
public void set_KeyPreview (boolean value)

JScript:

public function get KeyPreview () : boolean
public function set KeyPreview (value : boolean)

When this property is set to true, the form will receive all KeyPress, KeyDown, and KeyUp events. After the form's event handlers have completed processing the keystroke, the keystroke is then assigned to the control with focus. For example, if the KeyPreview property is set to true and the currently selected control is a TextBox, after the keystroke is handled by the event handlers of the form the TextBox control will receive the key that was pressed. To handle keyboard events only at the form level and not allow controls to receive keyboard events, set the KeyPressEventArgs.Handled property in your form's KeyPress event handler to true.

You can use this property to process most keystrokes in your application and either handle the keystroke or call the appropriate control to handle the keystroke. For example, when an application uses function keys, you might want to process the keystrokes at the form level rather than writing code for each control that might receive keystroke events. If a form has no visible or enabled controls, it automatically receives all keyboard events.

A control on a form may be programmed to cancel any keystrokes it receives. Since the control never sends these keystrokes to the form, the form will never see them regardless of the setting of KeyPreview.

The following code example demonstrates setting a form's KeyPreview property to true and handling the key events at the form level. To run the example, paste the following code in a blank form.

using namespace System::Windows::Forms;

// This button is a simple extension of the button class that overrides
// the ProcessMnemonic method. If the mnemonic is correctly entered,
// the message box will appear and the click event will be raised.
// This method makes sure the control is selectable and the
// mnemonic is correct before displaying the message box
// and triggering the click event.
public ref class MyMnemonicButton: public Button
{
protected:
    bool ProcessMnemonic( char inputChar )
    {
        if ( CanSelect && IsMnemonic( inputChar, this->Text ) )
        {
            MessageBox::Show( "You've raised the click event "
                "using the mnemonic." );
            this->PerformClick();
            return true;
        }
        return false;
    }
};

// Declare the controls contained on the form.
public ref class Form1: public System::Windows::Forms::Form
{
private:
    MyMnemonicButton^ button1;

public private:
    System::Windows::Forms::ListBox^ ListBox1;

public:
    Form1() : Form()
    {
        // Set KeyPreview object to true to allow the form to process
        // the key before the control with focus processes it.
        this->KeyPreview = true;

        // Add a MyMnemonicButton.
        button1 = gcnew MyMnemonicButton;
        button1->Text = "&Click";
        button1->Location = System::Drawing::Point( 100, 120 );
        this->Controls->Add( button1 );

        // Initialize a ListBox control and the form itself.
        this->ListBox1 = gcnew System::Windows::Forms::ListBox;
        this->SuspendLayout();
        this->ListBox1->Location = System::Drawing::Point( 8, 8 );
        this->ListBox1->Name = "ListBox1";
        this->ListBox1->Size = System::Drawing::Size( 120, 95 );
        this->ListBox1->TabIndex = 0;
        this->ListBox1->Text = "Press a key";
        this->ClientSize = System::Drawing::Size( 292, 266 );
        this->Controls->Add( this->ListBox1 );
        this->Name = "Form1";
        this->Text = "Form1";
        this->ResumeLayout( false );

        // Associate the event-handling method with the
        // KeyDown event.
        this->KeyDown += gcnew KeyEventHandler( this, &Form1::Form1_KeyDown );
    }

private:
    // The form will handle all key events before the control with
    // focus handles them. Show the keys pressed by adding the
    // KeyCode object to ListBox1. Ensure the processing is passed
    // to the control with focus by setting the KeyEventArgs.Handled
    // property to false.
    void Form1_KeyDown( Object^ /*sender*/, KeyEventArgs^ e )
    {
        ListBox1->Items->Add( e->KeyCode );
        e->Handled = false;
    }
};

[System::STAThreadAttribute]
int main()
{
    Application::Run( gcnew Form1 );
}
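For comparison, here is a minimal C# sketch of the same form-level pattern. It is not part of the original reference page - just an illustration of the behavior described above, using only standard Windows Forms members (KeyPreview, the KeyPress event, and KeyPressEventArgs.Handled); the class and handler names are hypothetical.

using System;
using System.Windows.Forms;

public class PreviewForm : Form
{
    public PreviewForm()
    {
        // Let the form see each keystroke before the focused TextBox does.
        this.KeyPreview = true;
        this.Controls.Add(new TextBox());
        this.KeyPress += new KeyPressEventHandler(PreviewForm_KeyPress);
    }

    private void PreviewForm_KeyPress(object sender, KeyPressEventArgs e)
    {
        // Handle digits at the form level; setting Handled to true
        // stops them from ever reaching the focused control.
        if (char.IsDigit(e.KeyChar))
        {
            e.Handled = true;
        }
    }

    [STAThread]
    public static void Main()
    {
        Application.Run(new PreviewForm());
    }
}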
http://msdn.microsoft.com/en-us/library/system.windows.forms.form.keypreview(VS.80).aspx
crawl-002
en
refinedweb
Re: why not?what is there too lose?
- From: "Jerry Okamura" <okamuraj005@xxxxxxxxxxxxx>
- Date: Sat, 1 Dec 2007 15:49:55 -1000

Here is my prediction. Oil is a finite resource. Someday, don't know when, demand for oil WILL start to outstrip supply. Is that going to result in some sort of catastrophe? Most likely not. What will happen will most likely happen gradually, assuming of course governments do not stick their noses in the process, which is also not likely...politicians ALWAYS think they are smarter than anyone else. But for the sake of discussion, let us say that they stay the heck out of trying to "manipulate the market". What will happen? Will we have an economic collapse? Not very likely. What is more likely is that as demand for oil outstrips the supply of oil, the price of oil will start to rise (supply and demand at work). As the price of oil rises, alternatives which are now barely cost-effective will become the cost-effective alternative. Wind, solar, thermal, and nuclear power will become the energy sources of choice for electricity. As gasoline prices rise, alternatives to gasoline-powered cars will become the automobiles of choice. Technologies which are now not cost-effective, like getting oil from coal tar and oil shale, will become more cost-effective. Drilling for oil in deeper and deeper waters will become cost-effective as the price of oil continues to rise. Known oil reserves which are now off limits, like ANWR and off our coastline, will be exploited, and anyone who gets in the way of such exploration will be brushed aside. And as the price of oil continues to rise (demand still outstripping supply), the price will continue to rise to the point that even a person in denial cannot avoid what is happening. And what happens "if" technology cannot fill the void? At that point, governments all over the world will pour R&D money into developing new technologies in order to try to avoid an economic catastrophe. In time, rising prices and new alternatives will solve the problem.... It requires very little government interference, and government interference may actually make the solution even harder to achieve.

"V" <vfr44@xxxxxxx> wrote in message news:70cc9e8a-043a-4ca9-a1eb-3f7f1dcd9e8c@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

On Nov 27, 2:39 pm, emuamericaneagle <davse...@xxxxxxx> wrote:

With the rising costs of health care, why not try out a universal system? I can see the fear in trying a universal system if we were the first to try it, but there are several countries functioning with a universal system. At least with a universal system everybody will have some type of coverage and access. Health care premiums alone right now can break your bank; then on top of that people have co-pays, deductibles and coinsurances! Then your insurance can turn around and deny something simply because they say it was a pre-existing condition or not medically necessary. It is horrible that people who become diagnosed with something like cancer might think "man, how am I going to pay for this," instead of "what are my chances of beating this." We should only have to worry about keeping ourselves healthy or getting healthy and not the cost of it all. Hopefully the future holds some type of universal system, because if the cost of health insurance continues to rise then there will be even more of us uninsured.

Healthcare???? You think it is political biz as usual in the US? If the brainiacs cannot 'kinda' replace crude with a sustainable alternative, we are headed for disaster.
So Dem or Rep...any politician in charge had better come to terms with how things really are and not live in dreamland...we are running out of time. It seems everyone wishes to hide their heads in the sand. And in the big picture, we can't fix the problem, we can only postpone the inevitable. But buying a little more time would make things much more livable in the not so distant future than the current path we are headed in. Ten billion people can't burn the trees! (Ten billion people is a conservative estimate of world population in the not so distant future. We are at 7 billion people now.)

The World Coal Institute estimates world energy reserves as follows: "At current production levels coal will be available for at least the next 155 years compared to 41 years for oil and 65 years for gas." Even though this was written a few years ago and it is based on 'current production and consumption,' it gives the same haunting message to the generations to come. We may not exactly see the end of our free-flowing energy as we know it - but some of our descendants will in the not so distant future. This is the legacy they will inherit from us.

But before the energy dries up completely, massive changes in our world will have taken place. You see, no other animal destroys its environment except mankind. We are the only ones that do not accept and live within our comfortable means. We not only debt with our finances, we debt with our environment. What we are borrowing in terms of petroleum, coal and natural gas takes millions of years for nature to make. Yet we are using it all up in just a few hundred years...we can never pay it back.

I think our country's future will be 'America...a Democratic, Communist Nation Under God.' The 'Under God' part would 'hopefully' separate us from the atheist-based communist nations that have been run as dictatorships. Without energy our country is open for takeover...no jets...no tanks...no transport on the ground or in the air. Luckily we will still have nuclear-powered submarines and aircraft carriers as long as the uranium holds out. But the jets on the flattop all use jet fuel. All the supplies for those subs and carriers are petroleum dependent. So long before the crude dries up, the government must 'secure a supply' of crude for its own needs. Other countries, such as Russia, still have a good supply of crude...that's why we elect politicians to deal with these troubles.

When it comes to the future, I see people living in miniature houses (the lucky ones that survive, that is; after all, most of the population died off long ago from starvation, freezing to death or from the riots) with roofs shingled completely with solar material. They drive up to their house on an electric scooter that is recharged from their solar roof. If they are higher up the totem pole they may have a solar golf cart. But in either case, luck must still be on their side, for without the sun shining to charge it, their transportation sits idle. (Not much lead left to build big batteries...China gobbled it all up, so we have to make do with very small storage cells.) They work for the government, and in exchange the government feeds and clothes them from its warehouses. You see, we have become a sort of 'Communist Democracy,' for without that bold leap and a desire 'to put our country first,' Russia or China would have stepped in to acquire some new real estate. The warehouses are fed from government-owned, coal-fired steam locomotives. Diesel dried up long ago, so it was either wood or coal to fuel the trains.
It did not take our government long to realize this. The electric plants only had to shut down sporadically for 8 months or so until they could build the first of a large fleet of steam locomotives. This was a 'slight' government oversight. They never figured that the coal-fired power plants were fed with 'diesel-powered' locomotives. They kept concentrating on the prediction that we had hundreds of years of coal left, but were oblivious as to how that coal is delivered to the power plant. But all these changes have some bright spots in them, as the coal producers were able to hire many more workers to manually mine coal while the diesel-powered mining equipment sits idle from lack of diesel fuel.

Now, some of the states or bigger cities had the foresight to build one or two electric rail trolleys for public transport. Your only problem is getting to the main road to catch the trolley, and then it is a straight ride to the government warehouse.

What happened to private industry and money? Money is nothing more than stored energy. But since the crude dried up, the 'real energy' behind the money has vanished...and so did private industry. What about the coal mines? All government owned. If you want to eat, you work...it is that simple. So, what is money good for nowadays...to wipe your ass? Not really, the government-supplied toilet paper works better than that. Martha Stewart syndrome died out long ago; now people are happy to eat rice and beans and get a clean glass of water to drink. After all, the government can't afford to fool around decorating everyone's house; they can hardly produce enough food to keep a fraction of the population alive. Yes, tractors, reapers and farming are very crude intensive...but no one bothered to think about that as they continued to squander the world's petroleum resources.

On a positive note, since most of the population died off from 'natural causes,' the government does not have to worry about passing 'population control' any longer. They tried to get that universally opposed program passed for many years, but the public just would not go for it...too un-American...goes against our religious upbringings...too controversial and all of the rest. We can still hear the cries now...Communist!...Atheist!...Baby Killer....Hitler....Impeach the President!!!!

Such objections are only subjective and prejudicial states of mind. As such, all problems related to 'controversial subjects' such as this are problems created in the mind...the mind of ego-based, prejudicial man. If you find yourself being distracted with such thoughts as 'too controversial,' just ask yourself if the proposed controversy is true, false, or "I don't know." This introspective method may help you become truth-based and not ego-based. You will have made a 'choice divorced of need'...you won't 'need your ego' to support the truth...the truth will be able to stand on its own. But nature helped us humans out with that hard decision - for nature does not discriminate, nor find the truth too controversial or provocative or opinionated to be true. And in the end, nature settled the dispute of population control with the even-handed justice of 75% of our population dying off, ever reminding us all that nature does not bow to man...it is always man that bows to nature. But people hold no grudges against nature; they are more in harmony with nature and enjoy a simpler life nowadays. People pick pine needles from trees to make their tea, since there is no jet fuel to import any Darjeeling tea or coffee.
Once in a while people are able to kill a bird, a rat or a cat to supplement their diet - so we can still find a place of gratitude in our life for such gifts. Of course, one problem still haunts the world: the last remaining buckets of crude will soon be gone, and they have still not found out how to make the tires for the solar-powered golf carts and scooters without that critical ingredient of crude oil.

Add it all together and you have 'America...a Democratic, Communist Nation Under God' as the 'best fit' equation. And for dessert, add 'politics as usual' and we can see nothing substantive will be done in the US to fix our energy woes until it is too late. (Really it can't be fixed; we can only slow down the inevitable at this point.)

BTW, do I like communism? No, I like things EXACTLY as they are. But what I like doesn't matter...neither does what you like matter. That's the point. The atheists can still be atheists and the Christians, Muslims and Jews can still worship as they like...that is why we would be a free democracy...of sorts. But the difference is, instead of the ego-based decisions that politicians and the titans of business get sucked into, they will put long-term viability as top priority over personal profit.

We must all pull together and stop pulling in counterproductive directions. The gov needs to cut the fat and stop all this foolish sickness that they are addicted to in Washington. Hire yourself some truth-based philosophers and futurists, as Socrates suggested in the Republic, as an oversight committee to keep you guys on track. One important thing would be to add an addendum to the constitution or bill of rights or whatever other documents that outlines what we are 'now' all about...something that is clear advice that we can all look to, and not the 1000-page BS that politicians use to hide their sickness. And yes...hiding behavior is a signpost of die-ease. And put it right up front in the addendum as to why things changed...we were energy whores and had no other choice. But realize this: throughout history, many great nations that once were are not around any longer. Hopefully the US will understand this and start accepting the truth that something has to give and it can't be business as usual...it doesn't matter what you like...it doesn't matter what you hope for...all that really matters is what is.

Take care,
V (Male)
Agnostic Freethinker
Practical Philosopher
Futurist
http://newsgroups.derkeiler.com/Archive/Alt/alt.politics/2007-12/msg00112.html
crawl-002
en
refinedweb
Paul Schafer and Tom Arnold

May 2004

Applies to: Microsoft Visual Studio® 2005 Team System

Summary: This article examines tools for testing software that are integrated into Visual Studio 2005 Team System.

Before an application or Web page can meet its quality and performance goals, it must undergo rigorous testing. Historically, Microsoft Visual Studio has been a product that focused squarely on the software developer, while providing light support for the testing aspects of development. The test engineers in an organization's Quality Assurance group have no doubt correctly perceived past versions of Microsoft Visual Studio as offering them little in their efforts to ensure the release of quality software.

As a developer or tester, you use Visual Studio to code your own tests. But to create certain specialized tests or to manage tests, you typically had to use other Microsoft products, purchase third-party tools, or create a tool from scratch. Your job became even more complex when you needed to model and publish data, organize supporting documents, track bugs, and create test suites such as build verification tests (BVT). The resulting toolset likely produced results that did not transfer among its various tools and storage mechanisms.

In one case, an IT group of a finance-sector company had accumulated a number of disparate testing tools that it used through the phases of the development life cycle. Each tool was a separate executable that came from a separate supplier. As a consequence, there was little interaction among the tools, and interaction among the tools' users was impeded. For example, entering a project's requirements in one tool and then copying them to another did not establish a link between the requirements in the databases of the two tools. Because no link was established, changing a requirement in the first tool did not update the data accessed by the other tools being used by the development and testing teams.

Visual Studio 2005 Team Test Edition introduces a suite of new test tools. These tools have been used internally at Microsoft and are now being integrated into Visual Studio 2005 Team Test Edition - the first time they have been available from Microsoft. The new tools are integrated tightly with Visual Studio, which means that they work not only in their own testing framework, but also within a larger framework that provides a complete software development life cycle solution.

With Visual Studio 2005 Team Test Edition, those who test software will be pleased to see their toolset approaching the level of value already enjoyed by developers' tools. The primary example is the capability to use the Visual Studio Integrated Development Environment (IDE) to create and run tests. A number of core test types - including unit, Web, load, and manual tests - as well as the measurement of code coverage, are now integrated into Visual Studio. (In fact, Visual Studio 2005 Team Test Edition introduces a new project type - the "test" project - that is displayed in Solution Explorer along with the traditional project types.) New testing tools are also integrated with the other parts of the Visual Studio 2005 Team System. This means that software testers will also be able to publish their results to a database, generate trend and historical reports, compare different kinds of data, see how many and which bugs were found as a result of testing, and identify which bugs are not linked to a test that could help reproduce them.
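Of the core test types just mentioned, the unit test is the most code-centric: it is an ordinary method marked with attributes from the Microsoft.VisualStudio.TestTools.UnitTesting namespace. The following minimal C# sketch is illustrative rather than drawn from the product documentation; the Calculator class and its Add method are hypothetical stand-ins for your own code under test.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical code under test.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void AddReturnsSumOfOperands()
    {
        // Arrange: create the code under test.
        Calculator target = new Calculator();

        // Act: exercise the method being verified.
        int actual = target.Add(2, 3);

        // Assert: the test fails if the result is wrong, and passes otherwise.
        Assert.AreEqual(5, actual);
    }
}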
The test types supported in Visual Studio 2005 Team Test Edition are the unit, Web, load, and manual tests introduced above. Visual Studio can also generate unit-test stubs from the code under test; the following generated Visual Basic stub includes TODO reminders for you to complete:

<TestMethod()> Public Sub OrderStatusCodesTest()
    Dim target As AdventureWorks.AdventureValues = New AdventureWorks.AdventureValues
    ' TODO: Assign to an appropriate value for the property
    Dim val As System.Data.SqlClient.SqlDataReader
    Assert.AreEqual(val, target.OrderStatusCodes)
    Assert.Inconclusive("Look at this code and make sure it does what you want")
End Sub

In addition, you can run any automated test (all tests other than manual), as well as groups of tests, from a command line.

The new capabilities of Visual Studio 2005 Team Test Edition are realized through several UI elements, described below.

The role of the Test View window is to let you navigate to edit (author) your tests. For example, to code unit tests that exercise the code of the project that you want tested (also known as the code under test, or CUT), you use the IDE of Visual Studio. After you author a unit test, it appears in the Test View window. If you then open it - by double-clicking, or by right-clicking and choosing Edit - the test opens in the Visual Studio IDE for further authoring. Similarly, opening a manual test opens the appropriate editor for manual tests, and opening a load or Web test opens a custom editor for those test types as well.

Figure 1. Test View window

Part of the authoring experience is verifying that what you have coded performs as you expect. Although not the main window for executing tests (see the following section, Test Explorer window), the Test View window makes it easy for you to run your tests and then do additional authoring and fine-tuning based on the results.

The Test Explorer window lets you manage your individual tests by placing them into "categories." For example, imagine you have 1000 tests but you consider fifty of them crucial tests to run every time you get a new build. You create a new category called "BVT" (for "build verification test") and place these fifty tests into that category. Then, you can run all fifty of those tests at the same time by running the BVT category.

Figure 2. Test Explorer window

The Test Explorer window also lets you filter the display of tests by owner, by test type, and so on, so that you can more easily find the tests you want to run, manage, or assign as work items. After you have assembled categories in the Test Explorer window, you can use the command-line test utility to run all the tests in a category. (You can also use it to run all the tests in an assembly.)

You use the Run Configuration dialog box to define exactly how tests will be run. It contains distinct pages for each test type. This means that, when you run tests of different types (unit, Web, and so on) at the same time, you can click a tab to display the settings page for that test type. This lets you apply distinct settings for all of the different tests in a test run before you start the test run.

Figure 3. Run Configuration dialog box

These run-configuration settings include, for example, whether tests run locally or on a remote server, and whether code-coverage data is gathered during the run.

Once you have organized tests into a category, you can run its tests from the Test Explorer or Test View windows. You can run individual tests from the same windows. Starting a test run displays the Run Configuration dialog box, which lets you choose the settings that affect the way tests will be run. When you click OK in the Run Configuration dialog box, the test runs and its results display in the Test Results window.
You can save run configurations to use later when you run tests or categories of tests, either through the Visual Studio IDE or from the command-line test utility.

The Test Results window displays the current status of every test in the test run, namely Pending (not yet started), In Progress, Inconclusive, Passed, and Failed. When a load test has run, the window displays the Completed status.

For example, manual tests are not deployed to a remote server but are run locally. When you start a test run that includes a manual test, it is displayed in the Test Results window with the status "Pending." It retains that status until the person who will be performing the manual test execution clicks the test in the Test Results window, which opens a list of steps that the person is to perform. The tester then clicks a button to indicate whether the test has passed or failed.

Figure 4. Test Results window

For tests that run automatically (all types other than manual), the Test Results window displays the test progress as it moves from Pending to a final result. Two states are possible for a final result: Passed or Failed. Various warnings can be associated with these two states.

Once you have test results, you can double-click a row showing the results for a test to bring up a "details view" of the test's results. This window displays a high-level summary, warnings, the methods that were called during the execution of the test, and other factors such as an inability to instrument a binary, deploy the test to the remote server, and so on.

If you enable code coverage in the Run Configuration dialog box for a test run, the Code Coverage window will show the source code modules that were exercised during that test run. It displays the names of the files, namespaces, classes, and methods of the source code, and the percentage of coverage achieved during that test run. (For example, 80% of a given method may have been exercised because only the IF statement was executed, not the ELSE statement.)

Figure 5. Code Coverage window, showing results by file

In addition, code not covered shows up in a different color than that of code that is covered. Double-clicking uncovered code opens the code editor and scrolls to the code that was missed. (Also, clicking a button in the Code Coverage window displays a key to the color-coding in your source file.)

Figure 6. Code Coverage window, showing covered and uncovered code, by color

Extensibility Possibilities

Visual Studio 2005 Team Test Edition will include an infrastructure that allows for two levels of extensibility. The first is designed for individual testers and the second is designed for those who need to extend Visual Studio. For more information on ways that individuals and partners can contribute to the capabilities of Visual Studio 2005, see Visual Studio 2005 Team System: Extending the Suite.

Conclusion

Within Visual Studio, testing is now treated as a top-tier activity that reduces the risks inherent in delivering complex Web and desktop applications, maximizes returns by reducing support costs, and is integrated in the overall software development life cycle.

Visual Studio 2005 Team Test Edition delivers an integrated set of tools across the product development life cycle. For example, with Visual Studio 2005 Team Test Edition, tools for testing software are now integrated into the Visual Studio IDE alongside tools for building software. But these are just two aspects of the software development life cycle.
Also integrated into Visual Studio 2005 Team Test Edition are tools for making code robust and responsive through static and dynamic analysis, and tools used in all phases of the software development life cycle, such as work-item tracking, enterprise-scale source-code control, schedule management, and project management. For more information on the other members of Visual Studio 2005 Team System, see the Visual Studio 2005 Team System documentation.
http://msdn.microsoft.com/en-us/library/aa302183.aspx
crawl-002
en
refinedweb
Embedding connection strings in your application's code can lead to security vulnerabilities and maintenance problems. For these reasons, we recommend storing connection strings in an application configuration file.

Application configuration files contain settings that are specific to a particular application. For example, an ASP.NET application can have one or more web.config files, and a Windows application can have an optional app.config file. Configuration files share common elements, although the name and location of a configuration file vary depending on the application's host.

Connection strings can be stored as key/value pairs in the connectionStrings section of the configuration element of an application configuration file. Child elements include add, clear, and remove. The following configuration file fragment demonstrates the schema and syntax for storing a connection string. The name attribute is a name that you provide to uniquely identify a connection string so that it can be retrieved at run time. The providerName is the invariant name of the .NET Framework data provider, which is registered in the machine.config file.

<?xml version='1.0' encoding='utf-8'?>
<configuration>
  <connectionStrings>
    <clear />
    <add name="Name"
         providerName="System.Data.ProviderName"
         connectionString="Valid Connection String;" />
  </connectionStrings>
</configuration>

You can save part of a connection string in a configuration file and use the DbConnectionStringBuilder class to complete it at run time. This is useful in scenarios where you do not know elements of the connection string ahead of time, or when you do not want to save sensitive information in a configuration file. For more information, see Connection String Builders (ADO.NET).

The .NET Framework 2.0 introduced new classes in the System.Configuration namespace to simplify retrieving connection strings from configuration files at run time. You can programmatically retrieve a connection string by name or by provider name.

The machine.config file also contains a connectionStrings section, which contains connection strings used by Visual Studio. When retrieving connection strings by provider name from the app.config file in a Windows application, the connection strings in machine.config get loaded first, and then the entries from app.config. Adding clear immediately after the connectionStrings element removes all inherited references from the data structure in memory, so that only the connection strings defined in the local app.config file are considered. For more information, see ASP.NET Configuration Files and Configuration File Schema for the .NET Framework.

Starting with the .NET Framework 2.0, ConfigurationManager is used when working with configuration files on the local computer, replacing the deprecated ConfigurationSettings. WebConfigurationManager is used to work with ASP.NET configuration files. It is designed to work with configuration files on a Web server, and allows programmatic access to configuration file sections such as system.web.

Accessing configuration files at run time requires granting permissions to the caller; the required permissions depend on the type of application, configuration file, and location. For more information, see Using the Configuration Classes and WebConfigurationManager for ASP.NET applications, and ConfigurationManager for Windows applications.

You can use the ConnectionStringSettingsCollection to retrieve connection strings from application configuration files. It contains a collection of ConnectionStringSettings objects, each of which represents a single entry in the connectionStrings section.
Its properties map to connection string attributes, allowing you to retrieve a connection string by specifying the name or the provider name.

Property            Description
Name                The name of the connection string. Maps to the name attribute.
ProviderName        The fully qualified provider name. Maps to the providerName attribute.
ConnectionString    The connection string. Maps to the connectionString attribute.

This example iterates through the ConnectionStringSettings collection and displays the Name, ProviderName, and ConnectionString properties in the console window. System.Configuration.dll is not included in all project types, and you may need to set a reference to it in order to use the configuration classes. The name and location of a particular application configuration file varies by the type of application and the hosting process.

Visual Basic:

Imports System.Configuration

Class Program

    Shared Sub Main()
        GetConnectionStrings()
        Console.ReadLine()
    End Sub

    Private Shared Sub GetConnectionStrings()
        Dim settings As ConnectionStringSettingsCollection = _
            ConfigurationManager.ConnectionStrings

        If Not settings Is Nothing Then
            For Each cs As ConnectionStringSettings In settings
                Console.WriteLine(cs.Name)
                Console.WriteLine(cs.ProviderName)
                Console.WriteLine(cs.ConnectionString)
            Next
        End If
    End Sub
End Class

C#:

using System;
using System.Configuration;

class Program
{
    static void Main()
    {
        GetConnectionStrings();
        Console.ReadLine();
    }

    static void GetConnectionStrings()
    {
        ConnectionStringSettingsCollection settings =
            ConfigurationManager.ConnectionStrings;

        if (settings != null)
        {
            foreach (ConnectionStringSettings cs in settings)
            {
                Console.WriteLine(cs.Name);
                Console.WriteLine(cs.ProviderName);
                Console.WriteLine(cs.ConnectionString);
            }
        }
    }
}

This example demonstrates how to retrieve a connection string from a configuration file by specifying its name. The code creates a ConnectionStringSettings object, matching the supplied input parameter to the ConnectionStrings name. If no matching name is found, the function returns null (Nothing in Visual Basic).

Visual Basic:

' Retrieves a connection string by name.
' Returns Nothing if the name is not found.
Private Shared Function GetConnectionStringByName( _
    ByVal name As String) As String

    ' Assume failure.
    Dim returnValue As String = Nothing

    ' Look for the name in the connectionStrings section.
    Dim settings As ConnectionStringSettings = _
        ConfigurationManager.ConnectionStrings(name)

    ' If found, return the connection string.
    If Not settings Is Nothing Then
        returnValue = settings.ConnectionString
    End If

    Return returnValue
End Function

C#:

// Retrieves a connection string by name.
// Returns null if the name is not found.
static string GetConnectionStringByName(string name)
{
    // Assume failure.
    string returnValue = null;

    // Look for the name in the connectionStrings section.
    ConnectionStringSettings settings =
        ConfigurationManager.ConnectionStrings[name];

    // If found, return the connection string.
    if (settings != null)
        returnValue = settings.ConnectionString;

    return returnValue;
}
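The retrieval examples above assume the full connection string lives in the configuration file. As noted earlier, you can instead store a partial string and complete it at run time with a connection string builder. The following minimal C# sketch is illustrative, not from the original page; the "Partial" entry name and the credential parameters are hypothetical.

using System.Configuration;
using System.Data.SqlClient;

class PartialConnectionStringDemo
{
    static SqlConnection OpenWithRuntimeCredentials(string userName, string password)
    {
        // Start from the incomplete connection string stored in the config file...
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(
            ConfigurationManager.ConnectionStrings["Partial"].ConnectionString);

        // ...and supply the sensitive pieces at run time instead of on disk.
        builder.UserID = userName;
        builder.Password = password;

        SqlConnection connection = new SqlConnection(builder.ConnectionString);
        connection.Open();
        return connection;
    }
}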
This example demonstrates how to retrieve a connection string by specifying the provider-invariant name in the format System.Data.ProviderName. The code iterates through the ConnectionStringSettingsCollection and returns the connection string for the first ProviderName found. If the provider name is not found, the function returns null (Nothing in Visual Basic).

Visual Basic:

' Retrieve a connection string by specifying the providerName.
' Assumes one connection string per provider in the config file.
Private Shared Function GetConnectionStringByProvider( _
    ByVal providerName As String) As String

    ' Return Nothing on failure.
    Dim returnValue As String = Nothing

    ' Get the collection of connection strings.
    Dim settings As ConnectionStringSettingsCollection = _
        ConfigurationManager.ConnectionStrings

    ' Walk through the collection and return the first
    ' connection string matching the providerName.
    If Not settings Is Nothing Then
        For Each cs As ConnectionStringSettings In settings
            If cs.ProviderName = providerName Then
                returnValue = cs.ConnectionString
                Exit For
            End If
        Next
    End If

    Return returnValue
End Function

C#:

// Retrieve a connection string by specifying the providerName.
// Assumes one connection string per provider in the config file.
static string GetConnectionStringByProvider(string providerName)
{
    // Return null on failure.
    string returnValue = null;

    // Get the collection of connection strings.
    ConnectionStringSettingsCollection settings =
        ConfigurationManager.ConnectionStrings;

    // Walk through the collection and return the first
    // connection string matching the providerName.
    if (settings != null)
    {
        foreach (ConnectionStringSettings cs in settings)
        {
            if (cs.ProviderName == providerName)
            {
                returnValue = cs.ConnectionString;
                break;
            }
        }
    }

    return returnValue;
}

ASP.NET 2.0 introduced a new feature, called protected configuration, that enables you to encrypt sensitive information in a configuration file. Although primarily designed for ASP.NET, protected configuration can also be used to encrypt configuration file sections in Windows applications. For a detailed description of the protected configuration capabilities, see Securing ASP.NET Web Sites and ASP.NET 2.0 Security Practices at a Glance on the ASP.NET Developer Center.
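As an illustrative sketch of protecting the connectionStrings section from code - not part of the original page - the following C# helper uses the SectionInformation.ProtectSection API with the Windows data protection (DPAPI) provider; error handling is omitted, and the provider name assumes the default providers registered in machine.config.

using System.Configuration;

static class ProtectedConfigDemo
{
    // Encrypt the connectionStrings section of the application's config file.
    static void ProtectConnectionStrings()
    {
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        ConfigurationSection section = config.GetSection("connectionStrings");

        // ProtectSection encrypts the section on disk when the file is saved;
        // ConfigurationManager decrypts it transparently when it is read back.
        // Pass "RsaProtectedConfigurationProvider" to use RSA instead of DPAPI.
        if (section != null && !section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection(
                "DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }
    }
}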
http://msdn.microsoft.com/en-us/library/ms254494.aspx
crawl-002
en
refinedweb
Namespaces are heavily used in C# programming in two ways. First, the .NET Framework uses namespaces to organize its many classes, as follows:

System.Console.WriteLine("Hello World!");

System is a namespace and Console is a class in that namespace. The using keyword can be used so that the complete name is not required, as in the following example:

using System;

Console.WriteLine("Hello");
Console.WriteLine("World!");

For more information, see the using directive.

Second, declaring your own namespaces can help you control the scope of class and method names in larger programming projects. Use the namespace keyword to declare a namespace, as in the following example:

namespace SampleNamespace
{
    class SampleClass
    {
        public void SampleMethod()
        {
            System.Console.WriteLine(
                "SampleMethod inside SampleNamespace");
        }
    }
}

For more information, see the following topics:

- Using Namespaces (C# Programming Guide)
- How to: Use the Namespace Alias Qualifier (C# Programming Guide)
- How to: Use the My Namespace (C# Programming Guide)

For more information, see the following sections in the C# Language Specification:

- 9 Namespaces
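As a quick illustration of the namespace alias qualifier referenced in the topics above, the global:: qualifier forces name lookup to begin at the global namespace, which is useful when a local member name hides a namespace. The TestApp class below is a hypothetical example, not drawn from the pages listed:

class TestApp
{
    // This nested type hides the System namespace within TestApp.
    public class System { }

    // This constant hides the Console class within TestApp.
    const int Console = 7;

    static void Main()
    {
        // global:: starts the lookup at the global namespace,
        // so this reliably reaches the real System.Console.
        global::System.Console.WriteLine("Hello from the global namespace");
    }
}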
http://msdn.microsoft.com/en-us/library/0d941h9d.aspx
crawl-002
en
refinedweb
School talk:Theology

documentary hypothesis

The wikisource file s:Bible, English, King James, According to the documentary hypothesis has been proposed to be moved here. Any comments?--Rayc 04:12, 2 September 2006 (UTC)

- I think this would be the best place to move it, maybe under a division of biblical/Christian studies (or something akin to it). It is not a source text (which is why WS won't accept it), and not exactly appropriate for WB. Instead, it's more of a set of teaching material which explains a particular theory of biblical studies that claims the Torah and deuteronomistical history is a collection of sources that had been compiled into one document. For any person who wants to study the Bible or the Old Testament, this set of pages provides a good germ for spurring such studies.—Zhaladshar (Talk) 16:37, 12 September 2006 (UTC)
- Absolutely it should be moved here; the Document Theory is an exegetical theory that would be at home under Christianity's Biblical Studies, I believe (even though it's wrong! ;) I'm glad you mentioned this course; I just did a lesson concerning the Document Theory, and the course will provide a great supplemental resource to link to. Btw, I should mention that the wikisource page was deleted, it seems.
- --Opensourcejunkie 00:32, 12 March 2008 (UTC)
- By Opensourcejunkie's request, I've imported it with history to Bible, English, King James, According to the documentary hypothesis. —Pathoschild 23:46:40, 03 April 2008 (UTC)

Department of Christian Theology?

Would it be possible to have a department which focuses purely on Christian theology rather than the various denominational biases which are implied in other departments? AlistairReece 14:50, 28 August 2007 (UTC)

- One would hope that this could be achieved. -- Dionysios (talk), Founder of the Wikiversity School of Theology, Department of Orthodox Christian Studies, Date: 2007-10-23 (October 23, 2007) Time: 1231 UTC

Organizational Changes

I've been in these nightmarish denominational nomenclature wars before, and I want to avoid that, but I also wanted to bring much-needed organization to what was a chaotic mess. You (Dionysios) have already seen the major changes I have made so far. I am wanting to conform the department names to what I created on the school intro page. Also, I am showing here some slight changes to what I have already put on the intro page that may be amenable to everyone.

- Department of Traditional Christian Theology — includes topics that are relevant primarily to particular traditions
  - Center for Orthodox Studies — includes topics like ...
  - Center for Catholic Studies — includes topics like ...
  - Center for Reformational Studies — includes topics like ...
  - Center for Restorational Studies — includes topics like ...

So, instead of "Department of Orthodox Christian Studies", it would be the "Center for Orthodox Studies" within the "Department of Traditional Christian Theology". BTW, it is common for university departments to have "Centers for" such and such in them.
This scheme avoids some things: (1) in linking each tradition family to the Department of Traditional Christian Theology, it doesn't make as overt a claim for each tradition family as being "Christian" as having the two words in the same phrase does (some in some traditions do not believe that others in other traditions are fully Christian -- again, avoiding conflict), but it does say they are nominally Christian (because they are in that department); (2) it avoids negative or unpalatable words as labels -- Orthodox is a wide enough term to cover that family of traditions (and it may not sound as parochial as "Eastern ..." may sound to some); Catholic is here not called "Roman", which can also sound parochial; Reformational is used instead of "Protestant" because many of us "Protestants" hate that word (it sounds attached to something else, while in rebellion; it was used as a curse word by foes; and it doesn't cover reformational movements that existed prior to Martin Luther); and Restorational is used by adherents as an umbrella term. I would have used the term "Evangelical" instead of "Reformational" to cover Protestantism, but some have a negative association with that term as well (not me, mind you - I love the term). I'm going to change the intro page for now (because I hate having to wait to do something that in my mind seems to be a great solution), but I am holding off changing the existing department pages themselves until I get some feedback. -- Guðsþegn 00:08, 24 October 2007 (UTC)

- Hey, I really like this Department - Center metaphor; I think it could help clarify a lot. I wanted to suggest applying it to the Department of Biblical Studies as well.
- --Opensourcejunkie 00:43, 12 March 2008 (UTC)

Comparative Religion

Hello All, I would like to see the creation of a school of comparative religion that can unify most of these schools of thought into a single curriculum that is better established and has a higher degree of community cooperation. It seems like something we could all work together on.

- I think this would be a great idea. We should start by creating the division of Comparative Religion and have certain subdivisions: (a) Mythology/metaphysics, (b) Ethics, (c) Practice. These three areas could yield many interesting and profound comparisons. IAO131 19:55, 28 February 2008 (UTC)
- There is a general feeling that the school/department metaphor may not be working well in Wikiversity. An alternative suggestion has been to create projects in the Topic namespace, i.e., Topic:Comparative Religion. See: Wikiversity_learning_model/Discussion_group#A_4-pronged_approach. Countrymike 20:22, 28 February 2008 (UTC)
- Okay, I added a department of comparative religion; it is very basic, so people can help by expanding it. Thank you all in advance. Magosgruss 21:36, 02.28.2008 (UTC)

Faculties

The faculties use too many odd naming conventions. Why separate them into 'dominant' & 'old,' 'historic' & 'tribal'? The 'dominant' ones are about as old as Zoroastrianism, but the latter is just as significant a monotheism, and so are some new ones such as Ayyavazhi and Baha'i. The 'historic' ones are not just historic, but still practiced, and similar to the 'tribal' ones, though focusing on tribes is not the main idea. Instead of 'dominant' & 'old' we should have 'the great religions' (i.e. monotheisms or henotheisms), and instead of 'historic' & 'tribal' we should have the pagan, ethnic, or cultural religions.
These two and perhaps 'new religions' are the only categories needed.--Dchmelik 03:07, 10 March 2008 (UTC)

- Just so everyone is aware, the Sikh religion is, alas, less than a thousand years old (though I agree that it ought to be grouped with the "majors"). Meanwhile Zoroastrianism is God knows how many thousands of years old, and the major influence on pre-Islamic Iranian culture. The Druze religion just barely qualifies in terms of time, and hardly at all in terms of influence, and outside of Israel is often viewed as a sectarian form of Islam. In other words, borderline in every possible way.
- I would suggest a category of "major religions" defined as follows: more than 1 million followers, and/or more than 1000 years of history as a literate culture; and which has continued to exist up to the present day. (Zoroastrianism, Judaism, Christianity, Islam, Druze, Baha'i, Taoism, Confucianism, Chinese folk religion, Shinto, Hinduism, Buddhism, Jainism, Sikhism, Caodaism, Yiguandao, Brazilian Spiritism.)
- With the numerical bar raised to 10 million (but retaining the 1000-year loophole) we lose only the Baha'is, Caodaists, and Yiguandao followers. --Dawud (The preceding unsigned comment was added by 210.60.55.9 (talk • contribs) 11:50, 16 June 2008.)

Meher Baba Dept.

Meher Baba is a person, not a religion. Also, do we need to make an entire department for every Spiritual Master that did not found one of the 'great' or pagan religions? If so, there would be thousands of Departments. IIRC Meher Baba taught Vedanta and Sufism, so he should be a topic in the Sikh or Sant Mat depts. (Sant Mat could also be a Sikh topic.)--Dchmelik 03:07, 10 March 2008 (UTC)

God and donating body parts

this comment was moved from here

I don't know if I am at the right place to ask. What does God say about donating our body parts???? (The preceding unsigned comment was added by 68.165.57.243 (talk • contribs) 15:19, 10 April 2008.)

- The answer depends on your religion, among other things. The Jade Knight 16:30, 3 September 2008 (UTC)

Wicca a new religion?

Is Wicca, strictly speaking, a new religion? I was under the impression that Wiccans trace their beliefs back to the Anglo-Saxon and Celtic world of the early centuries of the Common Era. AlistairReece 09:29, 13 August 2008 (UTC)

- It seems to be a new religion, though many Wiccans say it is old; some of its ideas can be traced back far, but it also seems there are new ones, or at least ones that cannot be verified as old. The word 'Wicca' is actually based on the Anglo-Saxon words 'Wicche' and 'Wiccha,' but those meant types of priests or witches, and not a religion itself: classical words for religion were different than the root words of 'Wicca.'--Dchmelik 07:57, 14 August 2008 (UTC)
- Wicca is indeed a new religion, founded about 60 years ago, though it makes claims to antiquity (its founder, Gardner, claimed to have gotten the original Wiccan teachings from a coven of witches which traced its religious heritage back thousands of years). Frequently Wicca is claimed to be ancient Celtic religion in a modern form, but (as some people have pointed out in no uncertain terms) Wicca is not a Celtic religion. The Jade Knight 16:33, 3 September 2008 (UTC)
http://en.wikiversity.org/wiki/School_talk:Theology
crawl-003
en
refinedweb
Flask-WTF offers simple integration with WTForms. This integration includes optional CSRF handling for greater security. Source code and issue tracking at GitHub.

Install with pip or easy_install:

pip install Flask-WTF

or download the latest version from version control:

git clone
cd flask-wtf
python setup.py develop

If you are using virtualenv, it is assumed that you are installing Flask-WTF in the same virtualenv as your Flask application(s).

The following settings are used with Flask-WTF:

- CSRF_ENABLED: default True

CSRF_ENABLED enables CSRF. You can disable it by passing in the csrf_enabled parameter to your form:

form = MyForm(csrf_enabled=False)

Generally speaking it's a good idea to enable CSRF. If you wish to disable checking in certain circumstances - for example, in unit tests - you can set CSRF_ENABLED to False in your configuration.

CSRF support is built using wtforms.ext.csrf; Form is a subclass of SessionSecureForm. Essentially, each form generates a CSRF token deterministically based on a secret key and a randomly generated value stored in the user's session. You can specify a secret key by passing a value to the secret_key parameter of the form constructor, setting a SECRET_KEY variable on a form class, or setting the config variable SECRET_KEY; if none of these are present, app.secret_key will be used (if this is also not present, then CSRF is impossible; creating a form with csrf_enabled = True will raise an exception).

NOTE: Previous to version 0.5.2, Flask-WTF automatically skipped CSRF validation in the case of AJAX POST requests, as AJAX toolkits added headers such as X-Requested-With when using the XMLHttpRequest and browsers enforced a strict same-origin policy. However it has since come to light that various browser plugins can circumvent these measures, rendering AJAX requests insecure by allowing forged requests to appear as AJAX requests. Therefore CSRF checking will now be applied to all POST requests, unless you disable CSRF at your own risk through the options described above.

You can pass in the CSRF field manually in your AJAX request by accessing the csrf field in your form directly:

var params = {'csrf' : '{{ form.csrf }}'};

A more complete description of the issue can be found here. In addition, there are additional configuration settings required for Recaptcha integration: see below.

Flask-WTF provides you with all the API features of WTForms. For example:

from flaskext.wtf import Form, TextField, Required

class MyForm(Form):
    name = TextField("name", validators=[Required()])

In addition, a CSRF token hidden field is created. You can print this in your template as any other field:

<form method="POST" action=".">
    {{ form.csrf }}
    {{ form.name.label }} {{ form.name(size=20) }}
    <input type="submit" value="Go">
</form>

However, in order to create valid XHTML/HTML the Form class has a method hidden_tag which renders any hidden fields, including the CSRF field, inside a hidden DIV tag:

<form method="POST" action=".">
    {{ form.hidden_tag() }}

The safe filter used to be required with WTForms in Jinja2 templates, otherwise your markup would be escaped. For example:

{{ form.name|safe }}

However, widgets in the latest version of WTForms return an HTML-safe string, so you shouldn't need to use safe. Ensure you are running the latest stable version of WTForms so that you don't need to use this filter everywhere.

Instances of the field type FileField automatically draw data from flask.request.files if the form is posted.
The data attribute will be an instance of Werkzeug FileStorage. For example:

from werkzeug import secure_filename

class PhotoForm(Form):
    photo = FileField("Your photo")

@app.route("/upload/", methods=("GET", "POST"))
def upload():
    form = PhotoForm()
    if form.validate_on_submit():
        filename = secure_filename(form.photo.data.filename)
    else:
        filename = None
    return render_template("upload.html", form=form, filename=filename)

It's recommended you use werkzeug.secure_filename on any uploaded files as shown in the example to prevent malicious attempts to access your filesystem. Remember to set the enctype of your HTML form to multipart/form-data to enable file uploads:

<form action="." method="POST" enctype="multipart/form-data">
....
</form>

Note: as of version 0.4 all FileField instances have access to the corresponding FileStorage object in request.files, including those embedded in FieldList instances.

Flask-WTF supports validation through the Flask Uploads extension. If you use this (highly recommended) extension you can use it to add validation to your file fields. For example:

from flaskext.uploads import UploadSet, IMAGES
from flaskext.wtf import Form, FileField, file_allowed, \
    file_required

images = UploadSet("images", IMAGES)

class UploadForm(Form):
    upload = FileField("Upload your image",
                       validators=[file_required(),
                                   file_allowed(images, "Images only!")])

In the above example, only image files (JPEGs, PNGs etc) can be uploaded. The file_required validator, which does not require Flask-Uploads, will raise a validation error if the field does not contain a FileStorage object.

Flask-WTF supports a number of HTML5 widgets. Of course, these widgets must be supported by your target browser(s) in order to be properly used. HTML5-specific widgets are available under the flaskext.wtf.html5 package:

from flaskext.wtf.html5 import URLField

class LinkForm(Form):
    url = URLField(validators=[url()])

See the API for more details.

Flask-WTF also provides Recaptcha support through a RecaptchaField:

from flaskext.wtf import Form, TextField, RecaptchaField

class SignupForm(Form):
    username = TextField("Username")
    recaptcha = RecaptchaField()

This field handles all the nitty-gritty details of Recaptcha validation and output. The following settings are required in order to use Recaptcha:

- RECAPTCHA_USE_SSL : default False
- RECAPTCHA_PUBLIC_KEY
- RECAPTCHA_PRIVATE_KEY
- RECAPTCHA_OPTIONS

RECAPTCHA_OPTIONS is an optional dict of configuration options. The public and private keys are required in order to authenticate your request with Recaptcha - see documentation for details on how to obtain your keys.

Under test conditions (i.e. Flask app testing is True) Recaptcha will always validate - this is because it's hard to know the correct Recaptcha image when running tests. Bear in mind that you need to pass the data to recaptcha_challenge_field and recaptcha_response_field, not recaptcha:

response = self.client.post("/someurl/", data={
    'recaptcha_challenge_field': 'test',
    'recaptcha_response_field': 'test'})

If flaskext-babel is installed then Recaptcha message strings can be localized.

The Form class provided by Flask-WTF is the same as for WTForms, but with a couple of changes.
Aside from CSRF validation, a convenience method validate_on_submit is added:

from flask import Flask, request, flash, redirect, url_for, \
    render_template
from flaskext.wtf import Form, TextField

app = Flask(__name__)

class MyForm(Form):
    name = TextField("Name")

@app.route("/submit/", methods=("GET", "POST"))
def submit():
    form = MyForm()
    if form.validate_on_submit():
        flash("Success")
        return redirect(url_for("index"))
    return render_template("index.html", form=form)

Note the difference from a pure WTForms solution:

from flask import Flask, request, flash, redirect, url_for, \
    render_template
from flaskext.wtf import Form, TextField

app = Flask(__name__)

class MyForm(Form):
    name = TextField("Name")

@app.route("/submit/", methods=("GET", "POST"))
def submit():
    form = MyForm(request.form)
    if request.method == "POST" and form.validate():
        flash("Success")
        return redirect(url_for("index"))
    return render_template("index.html", form=form)

validate_on_submit will automatically check if the request method is PUT or POST. You don't need to pass request.form into your form instance, as the Form automatically populates from request.form unless alternate data is specified. Pass in None to suppress this. Other arguments are as with wtforms.Form.

API notes:

Form -- Flask-specific subclass of the WTForms SessionSecureForm class. Flask-specific behaviors: if formdata is not specified, this will use flask.request.form (explicitly pass formdata=None to prevent this); if no secret key is set in any of the ways described above, an exception is raised. hidden_tag() wraps hidden fields in a hidden DIV tag, in order to keep XHTML compliance (new in version 0.3). is_submitted() checks if the form has been submitted; the default case is if the HTTP method is PUT or POST. validate_on_submit() checks if the form has been submitted and if so runs validate; this is a shortcut, equivalent to form.is_submitted() and form.validate().

RecaptchaField -- validates a ReCaptcha.

FileField -- Werkzeug-aware subclass of wtforms.FileField. Provides a has_file() method to check if its data is a FileStorage instance with an actual file; it returns True iff self.data is a FileStorage with file data.

file_allowed -- validates that the uploaded file is allowed by the given Flask-Uploads UploadSet. Arguments: upload_set (an instance of flaskext.uploads.UploadSet) and message (the error message). You can also use the synonym file_allowed.

file_required -- validates that the field has a file. Argument: message (the error message). You can also use the synonym file_required.

HTML5 fields (flaskext.wtf.html5):
- <input type=search> widget: TextField using SearchInput by default
- <input type=url> widget: TextField using URLInput by default
- <input type=email> widget: TextField using EmailInput by default
- <input type=tel> widget: TextField using TelInput by default
- <input type=number> widget: IntegerField or DecimalField using NumberInput by default
- <input type=range> widget: IntegerField or DecimalField using RangeInput by default
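Putting these pieces together, a minimal working application might look like the following sketch (the field names, template name, and secret key are illustrative, not part of the official documentation):

from flask import Flask, flash, redirect, render_template, url_for
from flaskext.wtf import Form, TextField, Required

app = Flask(__name__)
app.secret_key = "change-me"  # needed for CSRF token generation

class ContactForm(Form):
    name = TextField("Name", validators=[Required()])

@app.route("/contact/", methods=("GET", "POST"))
def contact():
    form = ContactForm()
    if form.validate_on_submit():  # True only for a valid POST/PUT
        flash("Thanks, %s" % form.name.data)
        return redirect(url_for("contact"))
    return render_template("contact.html", form=form)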
http://packages.python.org/Flask-WTF/
crawl-003
en
refinedweb
Class hierarchy:

java.lang.Object
  groovy.lang.GroovyObjectSupport
    groovy.sql.GroovyResultSetExtension

public class GroovyResultSetExtension extends GroovyObjectSupport

Method summary:

public GroovyResultSetExtension(ResultSet set)
    set - the result set
public void add(Map values)
    values - a map containing the mappings for column names and values
public void eachRow(Closure closure)
    closure - the closure to perform on each row
public Object getAt(int index)
    index - the number of the column to look at, starting at 1
public Object getProperty(String columnName)
    columnName - the SQL name of the column
protected ResultSet getResultSet()
public Object invokeMethod(String name, Object args)
public boolean next()
    Moves the getResultSet() cursor to the next row.
protected int normalizeIndex(int index)
    index - the raw requested index (may be negative)
public boolean previous()
    Moves the getResultSet() cursor to the previous row (not available for TYPE_FORWARD_ONLY result sets). Returns true if the cursor is on a valid row; false if it is off the result set.
public void putAt(int index, Object newValue)
    index - the number of the column to look at, starting at 1
    newValue - the updated value
public void setProperty(String columnName, Object newValue)
    columnName - the SQL name of the column
    newValue - the updated value
public String toString()
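These methods are rarely called directly; they back the row objects that groovy.sql.Sql hands to closures. A hedged sketch of typical use (the connection details and table are placeholders):

import groovy.sql.Sql

def sql = Sql.newInstance("jdbc:hsqldb:mem:demo", "sa", "",
                          "org.hsqldb.jdbcDriver")
sql.eachRow("select id, name from users") { row ->
    println row.name        // getProperty: column by its SQL name
    println row.toString()  // toString: a summary of the current row
}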
http://groovy.codehaus.org/gapi/groovy/sql/GroovyResultSetExtension.html
crawl-003
en
refinedweb
The selectType command is used to change the set of allowable types of objects that can be selected when using the select tool. It accepts no other arguments besides the flags. There are basically two different types of items that are selectable when interactively selecting objects in the 3D views. They are classified as objects (entire objects) or components (parts of objects). The object and component command flags control which class of objects is selectable. It is possible to select components while in object selection mode. To set the components which are selectable in object selection mode you must use the -ocm flag when specifying the component flags. Derived from mel command maya.cmds.selectType

Example:

import pymel.core as pm
pm.selectType( allObjects=True )
pm.selectType( q=True, cv=True )
# Result: True #
pm.selectType( allObjects=True, allComponents=False )
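Building on the example, the -ocm behaviour mentioned above can be combined with a component flag like this (an untested sketch; the long flag names follow maya.cmds):

import pymel.core as pm
# Allow CVs to be picked while staying in object selection mode,
# per the -ocm (objectComponent) note above.
pm.selectType(objectComponent=True, controlVertex=True)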
http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.general/pymel.core.general.selectType.html#pymel.core.general.selectType
crawl-003
en
refinedweb
The selectPriority command is used to change the selection priority of particular types of objects that can be selected when using the select tool. It accepts no other arguments besides the flags. These flags are the same as used by the 'selectType' command. Derived from mel command maya.cmds.selectPriority

Example:

import pymel.core as pm
pm.selectPriority( q=True, nurbsCurve=True )
# Result: 2 #
pm.selectPriority( nurbsCurve=10 )
pm.selectPriority( handle=9, ikHandle=8 )
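A further untested sketch -- higher numbers win when several object types lie under the cursor:

import pymel.core as pm
pm.selectPriority(joint=9, nurbsCurve=2)      # prefer joints over curves
print(pm.selectPriority(q=True, joint=True))  # query the current value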
http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.general/pymel.core.general.selectPriority.html#pymel.core.general.selectPriority
crawl-003
en
refinedweb
So far in our XML series, we've covered basic syntax and enforcing document structure through the use of DTDs and XML Schema. Now it's time to put on your programmer's hat and get acquainted with Document Object Model (DOM), which provides easy access to XML documents via a tree-like set of objects. Since there are DOM implementations in quite a few languages, I'll try to keep things as language-neutral as possible in the process of introducing you to the specification. That means, unfortunately, very little sample code.

Some things you should know about DOM

DOM is in reality nothing more than an abstract specification for accessing the content of a given document using a tree-like set of objects. The document doesn't necessarily have to be an XML document; keep that in mind as you read along. As with all things Web, the DOM specification is managed by the World Wide Web Consortium (W3C). Operating under a mandate to provide a uniform API for use with multiple platforms and languages, the W3C defines DOM as a set of abstract classes without an official implementation. So it's up to individual vendors to actually provide implementations of the specification's interfaces that are appropriate for a given platform and language. DOM's interface definitions were created using Object Management Group's Interface Definition Language (IDL). It can often be helpful to examine these definitions even if you have no formal knowledge of IDL, which is fairly self-explanatory. I've linked to the appropriate IDL definition for each interface I mention in this article so that you can refer to it and the accompanying documentation if necessary. DOM has three levels of functionality:
- Level 1 provides only the most basic support for parsing an XML document.
- Level 2 extends Level 1 by providing support for XML namespaces. This is the currently recommended level of functionality, and I'll be referring you to Level 2 versions of the DOM interfaces in this article.
- Level 3, which as of the day I'm writing this is still in the "working document" phase (meaning it's subject to change), adds additional support for XPath queries and loading and saving documents.
Because the W3C's specification is only a minimum recommendation, vendors can, and often do, provide proprietary extensions. This is why, for example, many of the available DOM implementations will already have XPath support built in. You should be wary of using these extensions, particularly ones that represent Level 3 functionality. The interfaces of those objects are still very much subject to change, and the final, official versions may be incompatible with code you've written for the working versions.

DOM's object model (Is that redundant?)

DOM expresses a document as a tree of Node objects. If you'll recall, a tree is defined as a set of interconnected objects, or nodes, with one node providing the root. Nodes are given names corresponding to their relative position to another node in the tree. For example, a node's parent node is the node one level up (closer to the root element) in the tree's hierarchy, while a child node is one level down; a sibling is to the immediate right or left of a node on the same level of the tree. Figure A gives a more graphical explanation of these terms, which you can refer to if you find any of this family business confusing.
Node objects not only represent XML elements in a document, but they also represent everything else found in a document, from the topmost document element itself to individual content pieces like attributes, comments, and data. Each node has a specialized interface that corresponds to the XML content it represents, but these are all still nodes at heart. Object-oriented folks would say that all DOM objects inherit from Node. The Node interface is the primary means you'll use to navigate a document's tree and modify the structure of a document by adding new nodes.

The node knows

Node exposes a few navigation elements that allow you to move about a document's tree. The parentNode property returns the parent of the current node, while the nextSibling and previousSibling properties return the right-hand and left-hand siblings of the current node. You can determine whether a given node has children by calling the hasChildNodes method. Assuming that a node has children, these children can be retrieved using the childNodes property. childNodes returns all of the direct (one level down) children of the current node in a NodeList structure. NodeLists represent a group of nodes as an ordered list (retrievable by index number), while their cousins NamedNodeMaps represent them as a dictionary (retrievable by name). Both of these objects are "live," meaning that changes made to a list are immediately reflected in the underlying tree. Node objects also expose a set of methods for adding and deleting nodes to their group of children. The insertBefore method inserts a new node immediately before (to the left of) another node in a list of child nodes, while appendChild appends a node to the end (the extreme right) of the current node's list of children. The replaceChild method directly replaces one child node with another, while removeChild effectively deletes a node from a group of child nodes.

Specialized node interfaces

As I've said, the Node interface provides a convenient way to navigate a document and modify it, but to do much meaningful work, you'll need to explore the less abstract DOM interfaces. In the remainder of this article, I'll examine a few of these interfaces.

Document is the root

The Document interface extends the Node interface to represent an entire XML document and provides the root element in a document's tree (the <XML> element). Most implementations hand you a Document object when you load an XML document. Document is sort of a catch-all for things that affect the document as a whole or that don't really fit anywhere else. Most of its methods serve as factory methods for creating other DOM objects. These "createX" methods provide a way to create Element, DocumentFragment, TextNode, CDATASection, ProcessingInstruction, attribute (Attr), EntityReference, and various namespace nodes for implementations in languages that don't support traditional constructors. Document also includes two useful methods for moving to particular locations in a document:
- getElementsByTagName returns a NodeList of all elements with a given tag name in the order they were encountered in the document. This is a handy method for retrieving all instances of a particular element in a document, and since the elements are returned as nodes, navigation around the document is possible.
- getElementById returns the element with an attribute of type ID that matches the specified ID. It's useful for quickly locating a single element in a document.
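To make this a little more concrete, here's what those navigation, retrieval, and factory methods look like in one implementation -- Python's xml.dom.minidom (a quick sketch; any DOM-compliant library reads much the same):

from xml.dom.minidom import parseString

doc = parseString("<catalog><book id='b1'>DOM Basics</book></catalog>")
book = doc.getElementsByTagName("book")[0]   # Document.getElementsByTagName
print(book.getAttribute("id"))               # -> 'b1'
print(book.firstChild.data)                  # the Text child holds the data
new_book = doc.createElement("book")         # Document as a factory
new_book.appendChild(doc.createTextNode("More DOM"))
doc.documentElement.appendChild(new_book)    # Node.appendChild
print(doc.documentElement.toxml())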
One final thing of interest about the Document interface is that the Node interface exposes an ownerDocument property that returns the node's parent Document object.

The elements of the tree

Okay, I got ahead of myself there and mentioned two ways of retrieving an element before talking about the Element interface. Element represents, as you'd expect, an XML element. The Element interface deals quite a bit with attributes (which incidentally are also available from the root Node interface), with 13 methods that provide some form of access to attributes. Of these, you'll likely use the getAttribute/setAttribute and getAttributeNode/setAttributeNode methods most often. The former allow you to read or write an attribute's value directly, assuming you can supply the attribute's name. The latter allow you to work with the actual Attr object that represents an attribute. Conspicuously missing from the Element interface is any method of retrieving the data associated with an element. This is because a given element's data is considered to be a child node of that element, which is retrievable via the childNodes property of the root Node interface. If it contains only simple character data, then an element's data node will simply implement the Text interface. However, in the case of complex data, a group of child nodes implementing appropriate Element, Attr, and/or Text interfaces, depending on the type of data, will be present as child nodes of the element. Figure B illustrates the complicated relationship between an element and its data node.

Fragments of a document

When working with XML, a common task is to create a new set of elements and append them to an existing document. The DocumentFragment interface minimally extends Node by changing the behavior of the insertion methods (insertBefore, appendChild, and replaceChild) so that when a DocumentFragment is inserted into a document, only its child nodes are inserted, not the DocumentFragment node itself. This makes DocumentFragment an ideal temporary attachment point for new nodes in an XML tree.

That's about it for our guided tour of DOM. Stay tuned for the next part in my increasingly inaccurately named Remedial XML trilogy (Douglas Adams would be proud), when I'll introduce the joys of the SAX API.
http://www.techrepublic.com/article/remedial-xml-say-hello-to-dom/5147519
crawl-003
en
refinedweb
How many times have you looked at a running application and wondered "What the heck is it doing, and why is it taking so long?" In these moments, you probably wish you had built more monitoring capabilities into your application. For example, in a server application, you might be interested in viewing the number and types of tasks queued for processing, the tasks currently in progress, throughput statistics over the past minute or hour, average task processing time, and so on. These statistics are easy to gather, but without an unintrusive means of retrieving the data when it is needed, they are not very useful. You can export operational data in lots of ways -- you could write periodic statistics snapshots to a log file, create a Swing GUI, use an embedded HTTP server that displays the statistics on a Web page, or publish a Web Service that can be used to query the application status. But in the absence of a monitoring and data publication infrastructure, most application developers do not go to these lengths, resulting in less visibility into the workings of an application than might be desired. As of Java 5.0, the class library and JVM provide a comprehensive management and monitoring infrastructure -- JMX. JMX is a standardized means for providing a remotely accessible management interface and is an easy way to add a flexible and powerful management interface to an application. JMX components, called managed beans (MBeans), are JavaBeans that provide accessors and business methods pertaining to the management of an entity. Each managed entity (which could be the entire application or a service within the application) instantiates an MBean and registers it using a human-readable name. A JMX-enabled application relies on an MBeanServer, which acts as a container for MBeans, providing remote access, namespace management, and security services. On the client side, the jconsole tool can act as a universal JMX client. Taken together, platform support for JMX dramatically reduces the effort required for an application to support an external management interface. In addition to providing an MBeanServer implementation, Java SE 5.0 also instruments the JVM to provide easy visibility into the state of memory management, class loading, active threads, logging, and platform configuration. Monitoring and management for most of the platform services are turned on by default (the performance impact is minimal), so it is just a matter of connecting to the application with a JMX client. Figure 1 shows the jconsole JMX client (part of the JDK) displaying one of the memory management views -- heap usage over time. The Perform GC button illustrates that JMX offers the capability to initiate operations in addition to viewing operating statistics. Figure 1. Using jconsole to view heap usage JMX specifies a protocol used to communicate between the MBeanServer and the JMX client, which can run over a variety of transports. Built-in transports for local connections, RMI, and SSL are provided, and it is possible to create new transports through the JMX Connector API. Authentication is enforced by the transport; the local transport allows you to connect to JVMs running on the local system under the same user ID, and the remote transports can authenticate with passwords or certificates. The local transport is enabled by default under Java 6; to enable it under Java 5.0, you need to define the system property com.sun.management.jmxremote when the JVM is launched. 
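Concretely, turning on a remote transport usually comes down to a few system properties on the java command line (the values here are illustrative; the document cited next covers the full set):

java -Dcom.sun.management.jmxremote.port=9010 \
     -Dcom.sun.management.jmxremote.authenticate=true \
     -Dcom.sun.management.jmxremote.ssl=true \
     -jar myapp.jar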
The document "Monitoring and Management using JMX" (see Resources) describes the configuration steps to enable and configure transports. Instrumenting a Web server Instrumenting an application to use JMX is easy. Like many other remote invocation frameworks (such as RMI, EJB, and JAX-RPC), JMX is interface-based. To create a managed service, you need to create an MBean interface specifying the management methods. You can then create an MBean that implements that interface, instantiates it, and registers it with the MBeanServer. Listing 1 shows an MBean interface for a network service such as a Web server. It providers getters to retrieve configuration information (such as the port number) and operational information (such as whether the service is started). It also contains getters and setters to view and change configurable parameters such as the current logging level and methods to invoke management operations such as start() and stop(). Listing 1. MBean interface for a Web server Implementing the MBean class is usually fairly straightforward, as the MBean interface is supposed to reflect the properties and management operations of an existing entity or service. For example, the getLogLevel() and setLogLevel() methods in the MBean will simply forward to the getLevel() and setLevel() methods on the Logger being used by the Web server. JMX imposes some naming restrictions; for instance, MBean interface names must end with MBean, and the MBean class for the FooMBean interface must be called Foo. (You can lift this restriction by using a more advanced JMX feature, dynamic MBeans.) Registering the MBean with the default MBeanServer is also easy, and is shown in Listing 2: Listing 2. Registering an MBean with the built-in JMX implementation The ObjectName passed to registerMBean() identifies the managed entity. Because it is anticipated that a given application may contain many managed entities, the name contains a domain ("myapp" in Listing 2) as well as a number of key-value pairs identifying the managed resource within the domain. The keys "name" and "type" are commonly used, and when present, the name should uniquely identify the managed entity across all MBeans of that type within that domain. Other key-value pairs can be specified as well, and the JMX API includes facilities for wildcard matching of object names. Having created and registered an MBean, you can immediately point jconsole at the application (type jconsole at the command line) and see its management attributes and operations in the "MBeans" view. Figure 2 shows the Attributes tab in jconsole for the new MBean, and Figure 3 shows the Operations tab. Using reflection, JMX figures out which properties are read-only ( Started, Port) and which are read-write ( LogLevel), and jconsole allows you to modify the read-write properties. If the setter for a read-write property throws an exception (such as IllegalArgumentException), JMX reports the exception back to the client. Figure 2. Attributes tab in jconsole for the MBean Figure 3. Operations tab in jconsole for the MBean Accessors and operations in MBeans can use any primitive type in their signatures, as well as String, Date, and other standard library classes. You can also use arrays and collections of these permitted types. MBean methods may also use other serializable data types, but doing so can create interoperability issues because the class files must be made available to the JMX client as well. 
(If you are using the RMI transport, you can use the automatic class downloading features of RMI to deliver required classes to the client.) If you wish to use structured data types in your management interface while avoiding the interoperability problems associated with class availability, you can use the Open MBeans feature of JMX to represent composite or tabular data. Instrumenting a server application When creating a management interface, certain parameters and operations suggest themselves as obvious candidates for inclusion, such as configuration parameters, operational statistics, debugging operations (such as changing the logging level or dumping application state to a file), and life cycle operations (start, stop). Retrofitting an application to support access to these attributes and operations is usually quite easy. However, to get the most value out of JMX, you may want to consider at design time what kinds of data would be useful to users and operators at run time. If you are using JMX to gain insight into what a server application is doing, you need a means of identifying and tracking units of work. If you use the standard Runnable and Callable interfaces to describe tasks, by making your task classes self-describing (such as by implementing a sensible toString() method), you can track tasks as they proceed through their life cycle and provide MBean methods to return lists of waiting, in-process, and finished tasks. TrackingThreadPool in Listing 3 illustrates a subclass of ThreadPoolExecutor that keeps track of which tasks are in progress, as well as timing statistics for tasks that have been completed. It accomplishes these tasks by overriding the beforeExecute() and afterExecute() hooks and providing getters to retrieve the collected data. Listing 3. Thread pool class that gathers task-in-progress and average task time statistics ThreadPoolStatusMBean in Listing 4 shows the MBean interface for a TrackingThreadPool, providing counts of active tasks, active threads, completed tasks, waiting tasks, and a list of the tasks currently waiting to execute and currently executing. Including the list of waiting and executing tasks in the management interface allows you to see not only how hard the application is working, but what it is working on right now. This feature can give you insight into not only the behavior of your application, but the nature of the data set it is working on as well. Listing 4. MBean interface for TrackingThreadPool If your tasks are heavyweight enough, it might even make sense to take it one step further and register an MBean for each task as it is submitted (and unregister it after it is finished). You could then use the management interface to query each task for what it is doing and how long it is taking or request that the task be canceled. ThreadPoolStatus in Listing 5 implements the ThreadPoolStatusMBean interface, providing the obvious implementations for each of the accessors. As is typical with MBean implementation classes, each of the operations is trivial to implement, delegating to the underlying managed object. In this example, the JMX code is entirely separate from the code for the managed entity. TrackingThreadPool knows nothing about JMX; it provides its own programmatic management interface by providing management methods and accessors for relevant attributes. 
You have a choice of implementing the management functionality directly in the implementation class (have TrackingThreadPool implement a TrackingThreadPoolMBean interface) or implementing it separately (as in Listings 4 and 5). Listing 5. MBean implementation for TrackingThreadPool To illustrate how these classes can provide visibility into what the application is working on, consider a Web crawler application that divides its work into two kinds of tasks: fetching remote pages and indexing the pages. Each task is described by either FetchTask or IndexTask, shown in Listing 6. You can create a ThreadPoolStatus MBean to provide the management interface for the thread pool used to process these tasks and register it with JMX. Listing 6. FetchTask class used in Web crawler application As each page is processed by the crawler, new tasks may be queued to fetch pages that are linked from that page, so at any given time there will likely be a mixture of fetching tasks and indexing tasks outstanding. Being able to identify exactly which pages are being processed or waiting to be processed allows you to understand not only your application's performance characteristics, but also the characteristics of the data set it is working on. Figure 4 shows a snapshot of a Web crawler that is processing the whitehouse.gov site. You can see that the home page has already been fetched and indexed, and the crawler is working on fetching and indexing the pages that have been linked directly from it. By hitting the Refresh button, you can sample the flow of tasks through the application, which can provide a lot of information about how an application is working without having to introduce extensive logging or run it in the debugger. Figure 4. Active and queued tasks in a Web crawler application The combination of JMX support in the platform and the jconsole JMX client offers a painless way to add management and monitoring capabilities to our applications. Even for applications that have no specific management requirements, building in these capabilities allows you to gain insight into how your programs behave and the nature of the data they process -- with very little effort. If your application exports a management interface that allows you to see what it is working on, you can learn more about what it is doing -- and be more confident that it is working as expected -- without resorting to intrusive mechanisms like adding logging code or using debuggers or profilers.

Resources

Learn
- "Monitoring and Managing Using JMX" (Sun Microsystems, Inc., 2004): Details the configuration and use of the built-in JMX agent.
- JMX Best Practices (Sun Developer Network, 1994-2006): A description of best practices for naming managed objects, selecting JMX features, and choosing data types for managed attributes.
- Manage Apache Geronimo with JMX (developerWorks, J. Jeffrey Hanson, August 2006): Learn how the Geronimo application server leverages JMX to facilitate application management.
- The Java technology zone: Hundreds of articles about every aspect of Java programming.
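As a compilable recap of the interface and registration described in Listings 1 and 2, the sketch below is an illustrative reconstruction from the prose above -- the names and the exact property set are assumptions, not the article's original code:

interface WebServerMBean {
    int getPort();                  // read-only configuration
    boolean getStarted();           // read-only operational state
    String getLogLevel();           // read-write: getter and setter
    void setLogLevel(String level);
    void start();                   // management operations
    void stop();
}

public class WebServer implements WebServerMBean {
    private volatile boolean started;
    private volatile String logLevel = "INFO";
    public int getPort() { return 8080; }   // illustrative fixed port
    public boolean getStarted() { return started; }
    public String getLogLevel() { return logLevel; }
    public void setLogLevel(String level) { logLevel = level; }
    public void start() { started = true; }
    public void stop() { started = false; }

    public static void main(String[] args) throws Exception {
        javax.management.MBeanServer mbs =
            java.lang.management.ManagementFactory.getPlatformMBeanServer();
        mbs.registerMBean(new WebServer(),
            new javax.management.ObjectName("myapp:type=WebServer,name=main"));
        Thread.sleep(Long.MAX_VALUE);  // keep the JVM alive for jconsole
    }
}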
http://www.ibm.com/developerworks/java/library/j-jtp09196/
crawl-003
en
refinedweb
Attributes in VB .NET

It's new in VB .NET!
----------------------------------
Attribute represents a new feature with really great potential. You'll see more and more of attributes as you learn advanced programming techniques. Here's an introduction to how attributes are used in VB .NET. If you have a background in XML or HTML, the word attribute is a modifier of an element. For example, in the HTML anchor tag <A HREF="...">link text</A>, HREF is an attribute of the element A. Visual Studio uses XML attributes 'behind the scenes' in a lot of ways and it's important for people just learning VB .NET to avoid confusing them with the 'other' attributes. We took advantage of XML attributes when we demonstrated one way to use the less expensive VB .NET Learning Edition and still be able to compile class libraries. (Learn more about that in the article, A Better Way To Inherit?.) But (just to confuse you?), Visual Studio includes a technique borrowed from C++ that is also identified by the term attributes. If you have programmed Microsoft's MTS or COM, you might have used this type of attribute before .NET arrived. Attributes were used a great deal in COM, but they had to be programmed using a language called IDL (Interface Definition Language) and were generally considered to be a more advanced technique. VB .NET brings the same idea into the mainstream and makes it both more powerful and a lot easier to use. For example, attributes were only keywords in COM programming. But in VB .NET, they have the advantages of being objects. (Trust Microsoft to make things harder than they have to be!) This is the type of attribute that will be introduced in this article.

---------------------
Attributes in VB .NET Page 2 - The Two Types of Attributes
---------------------
Any application can use the added information provided by an attribute. In general, there are two types of attributes in VB .NET: predefined and custom. A predefined attribute is used by the .NET Framework itself. You can use the same techniques in your own applications with custom attributes. Here's just a few of the predefined attributes taken from Microsoft's documentation. Notice that a recommended coding practice is to create an attribute name by dropping the trailing string "Attribute".

- COMClassAttribute Class: Indicates to the compiler that the class should be exposed as a COM object. Specific to Visual Basic .NET.
- VBFixedStringAttribute Class: Specifies the size of a fixed-length string in a structure for use with file input and output functions. Specific to Visual Basic .NET.
- VBFixedArrayAttribute Class: Specifies the size of a fixed array in a structure for use with file input and output functions. Specific to Visual Basic .NET.
- WebMethodAttribute Class: Makes a method callable using the SOAP protocol. Used in XML Web services.
- SerializableAttribute Class: Indicates that a class can be serialized.
- MarshalAsAttribute Class: Determines how a parameter should be marshaled between the managed code of Visual Basic .NET and unmanaged code such as a Windows API. Used by the common language runtime.
- AttributeUsageAttribute Class: Specifies how an attribute can be used.
- DllImportAttribute Class: Indicates that the attributed method is implemented as an export from an unmanaged DLL.

The three attributes that are specific to Visual Basic .NET are: COMClassAttribute, VBFixedStringAttribute, and VBFixedArrayAttribute.
------------------------
Page 3 - An Example: The Predefined VBFixedString Attribute
------------------------
To illustrate the general idea behind the use of attributes in VB.NET, let's look at how VBFixedString is used in a real program. The problem that this attribute solves is created by the fact that in VB.NET, a string has the characteristics of an array of Char instances. (An array of Chars, in other words.) When you create a file using a structure containing strings and VB.NET's FilePut, you don't get what you might expect: VB.NET puts in extra information because the string is an array of Chars. Consider this example program. (PrefixString and PostfixString are there just to provide a visual marker in the created file.)

Structure VariableType
    Public PrefixString As String
    Public myString As String
    Public PostfixString As String
End Structure

Private Sub Button1_Click( _
    ByVal sender As System.Object, _
    ByVal e As System.EventArgs) _
    Handles Button1.Click
    Dim myRecord As VariableType
    FileOpen(1, "F:\TESTFILE.TXT", _
        OpenMode.Binary)
    myRecord.PrefixString = "X"
    myRecord.myString = "AAAAA"
    myRecord.PostfixString = "X"
    FilePut(1, myRecord)
    FileClose(1)
End Sub

When the resulting program is executed, here's what the file looks like in Notepad. To solve this problem, the structure is changed by adding the VBFixedString attribute. (Dim myRecord As FixedType must also be changed.)

Structure FixedType
    <VBFixedString(1)> Public PrefixString As String
    <VBFixedString(5)> Public myString As String
    <VBFixedString(1)> Public PostfixString As String
End Structure

This results in what you need - an all-character file that you can use in other systems. In the example above, each element - such as Public PrefixString As String - is called the target of the attribute in front of it. All .NET programming elements (assemblies, classes, interfaces, delegates, events, methods, members, enum, struct, and so forth) can be targets of attributes. If you scan the list of attributes implemented in .NET, you will notice that one way to describe the way .NET uses attributes is that they simply change the way .NET works to solve some problem that is difficult to solve with regular language elements. This makes them seem to be sort of a programming 'fudge factor' and you might wonder why Microsoft didn't just make the functions solved with .NET attributes part of the Visual Basic .NET language. The reason is that attributes are stored as metadata in the assembly. This makes them available (using a technique called reflection) at design time in Visual Studio .NET, at compile time when the compiler can use attributes to customize the way the compiler works, and at runtime. This makes them pretty useful! Here's a partial list of the ways that predefined attributes are used in .NET from the Microsoft documentation:

- Marking methods using the WebMethod attribute in XML Web services to indicate that the method should be callable over the SOAP protocol.
- Describing your assembly in terms of title, version, description, or trademark.
- Describing which members of a class to serialize for persistence.
- Specifying characteristics used to enforce security.
- Controlling optimizations by the just-in-time (JIT) compiler so the code remains easy to debug.

---------------------
Page 4 - An Example: Custom Attributes
---------------------
Since Microsoft did all the coding for the .NET predefined attributes, using them is pretty simple. You just include them in front of the target as shown above.
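For instance, applying another of the predefined attributes is just a prefix on the target. A hedged sketch using VBFixedArray, the third VB-specific attribute listed on page 2 (the field names are made up for illustration):

Structure RecordType
    ' VBFixedArray takes the array's upper bound; this declares elements 0-4.
    <VBFixedArray(4)> Public Scores() As Integer
    <VBFixedString(8)> Public Name As String
End Structure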
Creating your own attributes is another thing entirely because you have to do all that coding. In fact, .NET makes it a lot easier than it ever was before, but it can still be tricky. We're going to show you how to create an attribute and then use it in another program. The program that will use this information is a very simple version of something that might evaluate customer accounts to determine, for example, suitability for a loan. Here's the form that will display the results. To create the attributes, we use a simple class. Note that the predefined .NET attribute AttributeUsage is required to tell .NET that this is an attribute and not just another class. There are three arguments that can be used but only the first (AttributeTargets) is required:

<AttributeUsage(AttributeTargets.target, Inherited := boolean, AllowMultiple := boolean)>

In our simple example, we only allow other classes to be a target of our custom attribute. Our class also has one property - Risk - and this will be the value that we use in the textboxes in our form. Notice that it's coded just like any other property.

<AttributeUsage(AttributeTargets.Class)> _
Public Class AccountRiskFactorAttribute
    Inherits System.Attribute

    Private _risk As String

    Public Sub New(ByVal Value As String)
        _risk = Value
    End Sub

    Public ReadOnly Property Risk() As String
        Get
            Return _risk
        End Get
    End Property
End Class

Since our new AccountRiskFactorAttribute can only be applied to classes, let's code a few with different values of the Risk property.

<AccountRiskFactor("01")> _
Public Class BlueChip
    Inherits RiskValue
End Class

Public Class Moderate
    Inherits RiskValue
End Class

<AccountRiskFactorAttribute("99")> _
Public Class PoisonPeople
    Inherits RiskValue
End Class

Notice that in this example, I deliberately coded one without the trailing qualifier 'Attribute' as part of the name; one with no attribute; and one with the trailing qualifier 'Attribute' just to try out several options that VB .NET makes available to you. In the next code example, notice that it works anyway. This is just a convenience that .NET provides. (Which, in my opinion, is not really a convenience since it just allows one more thing to confuse people who don't know about it. VB .NET has cleaned up a lot of the ambiguity that was in VB 6, but here Microsoft introduced a brand new ambiguity.) The Moderate class with no attribute will receive a default attribute value in the next code example.

Public Class RiskValue
    Public Overridable ReadOnly Property RiskVal() As String
        Get
            Dim t As Type = Me.GetType()
            Dim a As Attribute
            For Each a In t.GetCustomAttributes(True)
                Dim AcctRisk As AccountRiskFactorAttribute
                Try
                    AcctRisk = CType(a, AccountRiskFactorAttribute)
                    Return AcctRisk.Risk
                Catch e As Exception
                End Try
            Next
            ' Return the middle value
            ' if no other value is returned
            Return "50"
        End Get
    End Property
End Class

Finally, after all this preparation work, the code that actually uses these new custom attributes. It's pretty simple, since the hard part has already been done. To keep things simple, the textboxes in the form are simply updated during the form Load event. Remember to add a reference to CustomAttributes in the References section of Solution Explorer.
Imports CustomAttribute

Public Class CustomAttributes
    Inherits System.Windows.Forms.Form

#Region " Windows Form Designer generated code "
#End Region

    Private Sub CustomAttributes_Load( _
        ByVal sender As System.Object, _
        ByVal e As System.EventArgs) _
        Handles MyBase.Load
        Dim BlueChipRisk As New BlueChip
        Dim ModerateRisk As New Moderate
        Dim PoisonPeopleRisk As New PoisonPeople
        BlueChipRiskVal.Text = BlueChipRisk.RiskVal()
        ModerateRiskVal.Text = ModerateRisk.RiskVal()
        PoisonPeopleRiskVal.Text = PoisonPeopleRisk.RiskVal()
    End Sub
End Class

This results in the completed form as shown. This example is obviously not 'production quality' because the goal has been to show how custom attributes are created, managed, and used in a program. The Reflection class in VB .NET offers even more ways to use attributes and even gives you the ability (using the Reflection.Emit namespace) to use attributes to create and run new executable code at runtime. But that's a topic for another article!

** END OF ARTICLE **

~ vacuumtube
http://www.linuxhomenetworking.com/forums/showthread.php/16129-Serialize-Visual-Basic
crawl-003
en
refinedweb
int syslog(int type, char *bufp, int len);
/* No wrapper provided in glibc */

/* The glibc interface */
#include <sys/klog.h>

int klogctl(int type, char *bufp, int len);

On error, -1 is returned, and errno is set to indicate the error.
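A hedged sketch of the glibc interface in use (assumes Linux; type 3 reads the whole kernel ring buffer without clearing it, and usually needs sufficient privileges):

/* Read the kernel log ring buffer via klogctl(). */
#include <stdio.h>
#include <sys/klog.h>

int main(void)
{
    char buf[8192];
    int n = klogctl(3, buf, sizeof buf);  /* bytes read, or -1 on error */
    if (n < 0) {
        perror("klogctl");
        return 1;
    }
    fwrite(buf, 1, n, stdout);
    return 0;
}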
http://www.linuxhowtos.org/manpages/2/syslog.htm
crawl-003
en
refinedweb
image::rc_modification Class Reference

Recolor (RC/TC/PAL) modification.

#include <image_modifications.hpp>

It is used not only for color-range-based recoloring ("~RC(magenta>teal)") but also for team-color-based color range selection and recoloring ("~TC(3,magenta)") and palette switches ("~PAL(000000,005000 > FFFFFF,FF00FF)"). Definition at line 97 of file image_modifications.hpp.

Default constructor: definition at line 103 of file image_modifications.hpp.
RC-map based constructor: definition at line 110 of file image_modifications.hpp.

operator() -- Applies the image-path modification on the specified surface. Implements image::modification. Definition at line 122 of file image_modifications.cpp. References rc_map_, and recolor_image().

priority -- Specifies the priority of the modification. Reimplemented from image::modification. Definition at line 116 of file image_modifications.hpp.

rc_map_ -- Definition at line 124 of file image_modifications.hpp. Referenced by map(), no_op(), and operator()().
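In WML image paths these show up as suffix functions, e.g. (the file name is illustrative):

units/elves-wood/fighter.png~TC(1,magenta)   -- recolor the magenta range to side 1's team color
units/elves-wood/fighter.png~RC(magenta>red) -- map the magenta color range onto red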
http://www.wesnoth.org/devdocs/classimage_1_1rc__modification.html
crawl-003
en
refinedweb
image::o_modification Class Reference

Opacity (O) modification.

#include <image_modifications.hpp>

Definition at line 293 of file image_modifications.hpp.

Constructor: definition at line 296 of file image_modifications.hpp.

operator() -- Applies the image-path modification on the specified surface. Implements image::modification. Definition at line 334 of file image_modifications.cpp. References adjust_surface_alpha(), ftofxp, and opacity_.

opacity_ -- Definition at line 303 of file image_modifications.hpp. Referenced by get_opacity(), and operator()().

get_opacity -- Definition at line 339 of file image_modifications.cpp.
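In image paths this typically appears as, e.g., some/image.png~O(50%) to draw the image at half opacity (an illustrative path; the argument is a percentage).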
http://www.wesnoth.org/devdocs/classimage_1_1o__modification.html
crawl-003
en
refinedweb
void const f() vs void f() const
Could you please explain the difference between: void const f() {} and void f() const {...

Return Type
Yes, great helios, it conflict!

Return Type
can you explain me more detail. Buy why it is different from add function

Return Type
Here is my code:
#include <iostream>
using namespace std;
int max(int a, int b) { ...

cmath vs math.h
I've found some solutions:
#include <cmath>
. .
cout << pow(3,2); // this line error
cout <...
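The first question has a short answer: in a declaration like const int f(), the const binds to the return type (pointless for a by-value result, and invisible for void), while a trailing const makes a member function callable on const objects and forbids it from modifying *this. A sketch (int is used instead of void so the return-type const is observable; the member initializer needs C++11):

struct S {
    int x = 0;
    const int f() { return x + 1; }  // const qualifies the return type
    int g() const { return x; }      // const member function: no writes to *this
};

const S s{};
// s.f();  // error: f() is not a const member function
// s.g();  // fine: g() is callable on a const object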
http://www.cplusplus.com/user/y8qk4iN6/
crawl-003
en
refinedweb
iOSStatusBarStyle

iOS status bar style.

Note: This is an editor class. To use it you have to place your script in Assets/Editor inside your project folder. Editor classes are in the UnityEditor namespace, so for C# scripts you need to add "using UnityEditor;" at the beginning of the script.

Values:
- Default
- Black translucent
- Black opaque
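A hedged sketch of driving the setting from an editor script (the PlayerSettings.statusBarStyle property and the enum member spellings should be checked against your Unity version):

// Place this file under Assets/Editor.
using UnityEditor;

public static class StatusBarMenu
{
    [MenuItem("Tools/Status Bar/Black Opaque")]
    static void UseBlackOpaque()
    {
        PlayerSettings.statusBarStyle = iOSStatusBarStyle.BlackOpaque;
    }
}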
http://unity3d.com/support/documentation/ScriptReference/iOSStatusBarStyle.html
crawl-003
en
refinedweb
#include "gfx_types_vertex_ops.hh" #include "gfx_types_vertex_ops.hh" Go to the source code of this file. Vector/vertex operator() functions. * * External operator() functions cannot be implemented * in terms of the base Vector3 because doing so leads * to erosion of the distinctiveness of the Vertex types. * If they were, to be useful, they would require every * Vertex class to define a Vertex& operator(const Vector3&). * That method is trouble: it would make Vertex and Vector3 * interchangeable in some situations, and transformation problems * would go unnoticed since the code would compile. * The vertex types should be kept distinct because a vertex * must be mathematically transformed first before its type can change. * * LocalVertex lv; * WorldVertex wv; * wv = lv + lv; // WorldVertex.operator=( LocalVertex + LocalVertex ); * * Palomino 3D Engine documents generated by doxygen 1.5.3 on Fri Nov 23 11:26:15 2007
http://www.palomino3d.org/pal/doxygen_v2/gfx__types__vertex_8hh.html
crawl-003
en
refinedweb
ObitUVImager Struct Reference

#include <ObitUVImager.h>

Data members:
- ClassInfo pointer for class with name, base and function pointers.
- Image mosaic to be produced.
- Name of object [Optional].
- Recognition bit pattern to identify the start of an Obit object.
- Reference count for object (number of pointers attaching).
- Input UV data: selected/calibrated/edited/weighted UV data to be imaged.
http://www.cv.nrao.edu/~bcotton/Obit/ObitDoxygen/html/structObitUVImager.html
crawl-003
en
refinedweb
4-9 MINIMIZING FLOATING-POINT ERRORS
*************************************

A few remarks
-------------
o Some practical methods for checking the severity of floating-point errors can be found in the chapter: 'practical issues'.
o The chapter on 'FORTRAN pitfalls' discusses various programming practises that may amplify floating-point (and other) errors, and it is very important to avoid them.
o Note that an interval/stochastic arithmetic package is not just a diagnostic tool for FP errors; a result without an error estimation is not very useful, as errors can never be eliminated completely in experimental data and computation.

Carefully written programs
--------------------------
This term was probably coined by Sterbenz (see bibliography below), and means programs that are numerically correct. This is not easy to achieve, as the following example will show. Sterbenz discusses the implementation of a Fortran FUNCTION that returns the average of two REAL numbers; the specifications for the routine are:

o The sign must always be correct.
o The result should be as close as possible to (x+y)/2 and stay within a predefined bound.
o min(x,y) <= average(x,y) <= max(x,y)
o average(x,y) = average(y,x)
o average(x,y) = 0 if and only if x = -y, unless an underflow occurred.
o average(-x,-y) = -average(x,y)
o An overflow should never occur.
o An underflow should never occur, unless the mathematical average is strictly less than the smallest representable real number.

Even a simple task like this requires considerable knowledge to program in a good way; there are 4 (at least) possible average formulas:

1) (x + y) / 2
2) x/2 + y/2
3) x + ((y - x) / 2)
4) y + ((x - y) / 2)

Sterbenz has a very interesting discussion on choosing the most appropriate formulas; he also considers techniques like scaling up the input variables if they are small. Grossly oversimplifying, we have:

Formula #1 may raise an overflow if x,y have the same sign
        #2 may degrade accuracy, but is safe from overflows
        #3,4 may raise an overflow if x,y have opposite signs

We will use formulas #1,3,4 according to the signs of the input numbers:

      real function average (x, y)
      real x, y, zero, two, av1, av2, av3, av4
      logical samesign
      parameter (zero = 0.0e+00, two = 2.0e+00)
      av1(x,y) = (x + y) / two
      av2(x,y) = (x / two) + (y / two)
      av3(x,y) = x + ((y - x) / two)
      av4(x,y) = y + ((x - y) / two)
      if (x .ge. zero) then
          if (y .ge. zero) then
              samesign = .true.
          else
              samesign = .false.
          endif
      else
          if (y .ge. zero) then
              samesign = .false.
          else
              samesign = .true.
          endif
      endif
      if (samesign) then
          if (y .ge. x) then
              average = av3(x,y)
          else
              average = av4(x,y)
          endif
      else
          average = av1(x,y)
      endif
      return
      end

Programming using exception handling
------------------------------------
Computing the average of two numbers may serve as an example for the system-dependent technique of writing faster numerical code using exception handling. Most of the time the formula (x + y)/2 is quite adequate; the FUNCTION above is needed essentially to avoid overflow, so the following scheme may be used:

      call reset_overflow_flag
      result = (x + y) / 2.0
      call check_overflow_flag(status)
      if (status) result = average(x,y)

In this way the "expensive" call to the average routine may be eliminated in most cases. Of course, two system-dependent calls were added for resetting and checking the overflow flag, but this may still be worth it if the ratio between the two algorithms is large enough (which is not the case here).
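On compilers that provide the Fortran 2003 IEEE modules, the overflow-flag scheme above can be written portably; a sketch (average is the carefully written function from before):

      program fastavg
c     Sketch using the Fortran 2003 ieee_exceptions module.
      use ieee_exceptions
      logical flag
      real x, y, result, average
      x = 3.0e38
      y = 3.4e38
      call ieee_set_flag(ieee_overflow, .false.)
      result = (x + y) / 2.0
      call ieee_get_flag(ieee_overflow, flag)
      if (flag) result = average(x, y)
      write(*,*) result
      end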
Using REAL*16 (QUAD PRECISION)
------------------------------
This is the simplest solution on machines that support this data type. REAL*16 takes more CPU time than REAL*8/REAL*4, but introduces very small roundoff errors, and has a huge range. The performance cost of different size floats is VERY machine dependent (see the performance chapter). A crude example program:

      program reals
      real*4 x4, y4, z4
      real*8 x8, y8, z8
      real*16 x16, y16, z16
      x4 = 1.0e+00
      y4 = 0.9999999e+00
      z4 = x4 - y4
      write(*,*) sqrt(z4)
      x8 = 1.0d+00
      y8 = 0.9999999d+00
      z8 = x8 - y8
      write(*,*) sqrt(z8)
      x16 = 1.0q+00
      y16 = 0.9999999q+00
      z16 = x16 - y16
      write(*,*) sqrt(z16)
      end

Normalization of equations
--------------------------
Floating-point arithmetic is best when dealing with numbers with magnitudes of the order of 1.0; the 'representation density' is not maximal but we are in the 'middle' of the range. Usually you can decrease the range of numbers appearing in the computation by transforming the system of units, so that you get dimensionless equations. The diffusion equation will serve as an example (I apologize for the horrible notation):

    Ut = K * Uxx

where the solution is U(X,T), the lowercase letters denote the partial derivatives, and K is a constant. Let:

    L be a typical length in the problem
    U0 a typical value of U

Substitute:

    X' = X / L
    U' = U / U0

Then:

    Ux = Ux' / L
    Uxx = Ux'x' / (L*L)

Substitute in the original equation:

    (U' * U0)t = (K / (L*L)) * (U' * U0)x'x'
    ((L * L) / K) U't = U'x'x'

Substitute:

    T' = (K * T) / (L * L)

And you get:

    U't' = U'x'x'

With:

    X' = X / L
    U' = U / U0
    T' = (K * T) / (L * L)

Multi-precision arithmetic
--------------------------
That is a really bright idea: you can simulate floating-point numbers with very large sizes, using character strings (or other data types), and create routines for doing arithmetic on these giant numbers. Of course such software-simulated arithmetic will be slow. By the way, the function overloading feature of Fortran 90 makes using multi-precision arithmetic packages with existing programs easy. Two free packages are "mpfun" and "bmp" (Brent's multiple precision), which are available from Netlib.

Using special tricks
--------------------
A good example is the following set of tricks for summing a series. The first is sorting the numbers and adding them in ascending order. An example program:

      program rndof
      integer i
      real sum
      sum = 0.0
      do i = 1, 10000000, 1
          sum = sum + 1.0 / real(i)
      end do
      write (*,*) 'Decreasing order: ', sum
      sum = 0.0
      do i = 10000000, 1, -1
          sum = sum + 1.0 / real(i)
      end do
      write (*,*) 'Increasing order: ', sum
      end

There is no need here for sorting, as the series is monotonic. Executing 2 * 10**7 iterations will take some CPU seconds, but the result is very illuminating. Another way (though not as good as doubling the precision) is using the Kahan Summation Formula. Suppose the series is stored in an array X(1:N):

      SUM = X(1)
      C = 0.0
      DO J = 2, N
          Y = X(J) - C
          T = SUM + Y
          C = (T - SUM) - Y
          SUM = T
      ENDDO

Yet another method is using Knuth's formula. The recommended method is sorting and adding. Another example is using the standard formulae for solving the quadratic equation (real numbers are written without mantissa to enhance readability):

    a*(x**2) + b*x + c = 0    (a .ne. 0)

When b**2 is much larger than abs(4*a*c), the discriminant is nearly equal to abs(b), and we may get "catastrophic cancellation".
Another example is using the standard formulae for solving the
quadratic equation (real numbers are written without a mantissa to
enhance readability):

      a*(x**2) + b*x + c = 0        (a .ne. 0)

When b**2 is much larger than abs(4*a*c), the discriminant is nearly
equal to abs(b), and we may get "catastrophic cancellation".
Multiplying and dividing by the same number, we get alternative
formulae:

           -b + (b**2 - 4*a*c)**0.5              -2 * c
      x1 = ------------------------  =  ------------------------
                     2*a                 b + (b**2 - 4*a*c)**0.5

           -b - (b**2 - 4*a*c)**0.5               2 * c
      x2 = ------------------------  =  ------------------------
                     2*a                -b + (b**2 - 4*a*c)**0.5

If "b" is much larger than "a*c", use one of the standard and one of
the alternative formulae. The first alternative formula is suitable
when "b" is positive, the other when it is negative.

Using integers instead of floats
--------------------------------
See the chapter: "The world of integers".

Manual safeguarding
-------------------
You can check manually every dangerous arithmetic operation; special
routines may be constructed to perform arithmetic operations in a
safer way, or issue an error message if this cannot be done.

Hardware support
----------------
IEEE-conforming FPUs can raise an exception whenever an arithmetic
result was rounded. You can write an exception handler that will
report the exceptions, but as the result of most operations may have
to be rounded, your program will be slowed down and you will get huge
log files.

Rational arithmetic
-------------------
Every number can be represented (possibly with an error) as a quotient
of two integers; the dividend and divisor can be kept along the
computation, without actually performing the division. See Knuth for
technical details. It seems this method is not used in practice.

Bibliography
------------
An excellent article on floating-point arithmetic:

      David Goldberg
      What Every Computer Scientist Should Know about
        Floating-Point Arithmetic
      ACM Computing Surveys, Vol. 23, #1, March 1991, pp. 5-48

An old but still useful book:

      Sterbenz, Pat H.
      Floating-Point Computation
      Prentice-Hall, 1974
      ISBN 0-13-322495-3

An old classic, presented in a mathematically rigorous way (ouch!):

      Donald E. Knuth
      The Art of Computer Programming
      Volume II, sections 4.2.1 - 4.2.3
      Addison-Wesley, 1969

The Silicon Graphics implementation of the IEEE standard, republished
later in another issue of Pipeline:

      How a Floating Point Number is represented on an IRIS-4D
      Pipeline, July/August 1990

The homepage of Prof. William Kahan, the well-known expert on
floating-point arithmetic:

A short, nice summary of floating-point arithmetic:

      CS267: Supplementary Notes on Floating Point

Return to contents page
http://www.ibiblio.org/pub/languages/fortran/ch4-9.html
CC-MAIN-2017-47
en
refinedweb
Note: This is Python 3.5

Given some project thing:

    thing/
        top.py            (imports bar)
        utils/
            __init__.py   (empty)
            bar.py        (imports foo; has some functions and variables)
            foo.py        (has some functions and variables)

top.py does `import utils.bar`, and bar.py in turn needs foo. If
bar.py uses `import foo` it works stand-alone but not as part of the
package; if it uses `import utils.foo` it works from top.py but not
stand-alone.

It's typical to have all your entry points outside the package, so
that all imports can be relative. For example:

    thing/
        app.py
        thing/
            __init__.py
            top.py
            utils/
                __init__.py
                bar.py
                foo.py

And app.py can look like:

    from thing import app

    if __name__ == '__main__':
        app.main()

Then in top.py you have:

    from .utils import bar

And in bar.py you have:

    from . import foo

or:

    from thing.utils import foo

or:

    from ..utils import foo

So if you need two entry points, you can either make app.py take a
command line argument for the second entry point, or make another file
like app.py that imports bar.
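Following that last suggestion, a second entry-point file could look
like this (bar_app.py and bar.main() are hypothetical names, assuming
bar exposes a main-style function):

    # bar_app.py -- second entry point, sitting next to app.py
    from thing.utils import bar

    if __name__ == '__main__':
        bar.main()

Alternatively, with the package layout above you can run a module
directly in package context with `python -m thing.utils.bar`, which
keeps the relative imports working without any wrapper file.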
https://codedump.io/share/3GwlLnBCXKoJ/1/importing-a-module-in-python-as-part-of-a-package-as-well-as-in-a-way-that-works-stand-alone
CC-MAIN-2017-47
en
refinedweb
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include "blockpool.h"

These functions manage the block chain that makes up the memory pool
and allocate memory from the individual blocks. The entire pool is
freed at once.

The pool allocator was originally written as a low-level allocation
package. The original package queries the system to find the page size
and allocation block size. This version has been simplified, but some
of the low-level nature remains.

The documentation in this file is formatted for doxygen (see).

blockpool.cpp
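The file itself is C++, but the idea the page describes (carve small
allocations out of chained blocks, release the whole chain at once) is
easy to sketch. Here it is in Python, purely as an illustration and
not a rendering of the actual blockpool.cpp code:

    class BlockPool:
        """Toy arena allocator: allocations are carved out of
        fixed-size blocks kept in a chain; the pool is freed at once."""

        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = []            # the block chain
            self.offset = block_size    # forces a block on first alloc

        def alloc(self, size):
            assert size <= self.block_size
            if self.offset + size > self.block_size:
                self.blocks.append(bytearray(self.block_size))  # grow chain
                self.offset = 0
            start, self.offset = self.offset, self.offset + size
            return memoryview(self.blocks[-1])[start:start + size]

        def free_all(self):
            self.blocks = []            # whole pool released in one step
            self.offset = self.block_size

    pool = BlockPool()
    buf = pool.alloc(16)   # a 16-byte slice of the current block
    pool.free_all()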
http://www.bearcave.com/misl/misl_tech/wavelets/packet/doc/blockpool_8cpp.html
CC-MAIN-2017-47
en
refinedweb
Deploying a Hybrid Messaging Infrastructure Using Office 365: Exchange Online Enterprise Messaging Combining On-Premises and Cloud-Based Technologies

Technical White Paper
Published: June 2012

The following content may no longer reflect Microsoft's current position or infrastructure. This content should be viewed as reference documentation only, to inform IT business decisions within your own company or organization.

A hybrid messaging deployment offers the possibility to have some mailboxes located on-premises while others reside in the cloud. Microsoft IT shares its planning and deployment experiences of a hybrid Microsoft Office 365 messaging solution.

Contents

Microsoft Messaging Infrastructure At-A-Glance
Designing for Hybrid Messaging
Migrating Mailboxes to Exchange Online
Lessons Learned and Best Practices

Executive Summary

Although hosted solutions for e-mail messaging have been available for many years, recent improvements have made it possible to deploy and operate a hybrid environment that makes the most of both on-premises and hosted services. Microsoft began offering Exchange Online as its multi-tenant enterprise messaging service in the cloud to customers at the end of 2008, based on Exchange Server 2007 technology, with the goal of helping customers and its own workers realize the benefits of cloud computing. After onboarding millions of mailboxes from companies of all sizes, building out a scalable and highly available infrastructure, and upgrading Exchange Online to run Exchange Server 2010, Microsoft IT pursued an initiative to transition from operating its own on-premises Exchange environment to operating a hybrid environment. With a hybrid approach, Microsoft IT benefits from continuing to use previous investments in the existing on-premises infrastructure, with the ability to accommodate business growth by using Exchange Online.

To overcome the engineering and business challenges in transitioning to a hybrid environment, Microsoft IT focused on ensuring user satisfaction by engaging all teams involved in the deployment effort. One key objective was to provide users with a seamless transition and automatic Outlook profile update to Exchange Online while retaining the same features and functionality of the on-premises service. To ensure the best user experience, the hybrid architecture incorporates design elements that include the following:

- Single sign-on (SSO) using existing Active Directory credentials and Active Directory Federation Services (ADFS)
- Shared address book for a unified global address list (GAL)
- One microsoft.com domain namespace for both on-premises and Exchange Online
- Centralized administration of mailboxes and mail flow
- Synchronized calendar and free/busy scheduling
- Landing page to inform Exchange Online users who log in to the on-premises Outlook Web App URL about the appropriate Exchange Online Outlook Web App URL

The Exchange Server architecture enables Microsoft IT to deploy messaging in a hybrid environment according to the needs of the business and the desired project schedule. Microsoft IT moved mailboxes to Exchange Online after preparing the environmental dependencies such as identity management, security, and synchronization. In this way, Microsoft IT controls accounts, retention, e-discovery and other features in a unified way to ensure a centrally managed, homogeneous environment.
This white paper contains information for business and technical decision makers who operate an on-premises messaging solution and are evaluating the possibility of transitioning to a hybrid environment that incorporates Exchange Online. The paper assumes basic familiarity with concepts relevant to messaging technologies, such as Active Directory, Exchange Server, TCP/IP, and DNS. A high-level understanding of the capabilities of Exchange Online and Office 365 is also helpful. For more information about Exchange Online, see.

Note: For security reasons, the sample names of forests, domains, internal resources, organizations, and internally developed security file names used in this paper do not represent real resource names used within Microsoft and are for illustration purposes only.

Hybrid Advantages

The hybrid architecture results in many benefits for Microsoft IT, not only in overall cost savings, but also in greater flexibility to accommodate business growth while saving time and money by not having to do capacity planning, update software, maintain servers, or manage hardware.

Cost Savings Due to Cloud Efficiencies

As a cloud service, Exchange Online provides opportunities to reduce costs by eliminating the typical on-premises requirements of purchasing, deploying, and managing servers. These savings are possible due to Exchange Online features such as the following:

- Large 25 GB mailbox size. With Exchange Server 2010, Microsoft IT eliminated backups and relies on cost-effective, Just a Bunch of Disks (JBOD)-based storage. This solution offers cost savings over the previous Storage Area Network (SAN) approach, yet the storage subsystem is still expensive to deploy and operate. Exchange Online frees Microsoft IT from the need to manage any storage hardware.

  Quota management. During the initial phases of using Exchange Online in a hybrid environment, it is important to manage quotas in case mailboxes need to move back on-premises. Microsoft IT uses the same quotas for both environments to prevent the possibility of having to increase on-premises quotas for specific users, or asking them to reduce their mailbox size.

- Included technical support. Exchange Online includes 24/7 phone support for the internal Microsoft IT support team, which helps to ensure timely responses and reliability.

- Automatic failover. Similar to the on-premises solution, Exchange Online also provides automatic failover for resiliency.

- Highly available design. Exchange Online includes mailbox resiliency technology, such as the ability to switch between database copies when disks fail, and automatic, database-level recovery from failures through database availability groups.

- Flexible growth and expansion. As Microsoft grows and changes, Exchange Online makes it straightforward to add mailboxes by simply buying additional licenses. This requires no capacity planning, server purchasing, or deployment.

Flexible Deployment and Management

Exchange Online and on-premises Exchange overlap in terms of management functionality. Both use Role-Based Access Control (RBAC) for task delegation and administration via the Exchange Control Panel web-based console or through Windows PowerShell using the Exchange Management Shell. Microsoft IT uses the remote PowerShell capability for managing Exchange Online from within the on-premises network.
Exchange Online gives Microsoft IT management capabilities relevant to messaging-as-a-service, including recipient policies and groups, whereas Exchange on-premises provides all management capabilities. A hybrid approach achieves the best of both worlds by enabling Microsoft IT to accomplish the following:

- Deployment on Microsoft IT's terms. A hybrid approach offers Microsoft the flexibility to migrate mailboxes as needed to and from Exchange Online. As a way of validating the hybrid approach, Microsoft IT started by migrating the mailboxes of a small number of volunteer early adopters, and sets the pace of migration according to its needs.

- Infrastructure ownership and control. Approaching messaging as both a hosted service and an on-premises service gives Microsoft IT the flexibility to own the entire messaging continuum. Due to business needs, some mailboxes may remain on-premises, and others may be migrated to Exchange Online. If requirements change, Microsoft IT may move mailboxes from one environment to the other without affecting users.

- Centralized management. Both Exchange Online and on-premises Exchange share a unified approach to managing mailboxes, policies, recipients, and other Exchange objects. In the hybrid implementation, Microsoft IT manages all messaging details in a unified and centralized way.

- Customer validation and dogfooding. Validating hybrid performance and functionality as part of dogfooding efforts is one of Microsoft IT's key goals. Part of the design and deployment entailed working through many types of possible scenarios to work out any issues and fine-tune best practices. This goal went beyond implementing quick fixes and resolving bugs, to validating administrative and support paths to ensure the hybrid architecture was suitable for enterprise needs.

- Single namespace and unified experience. Microsoft IT's hybrid design relies on auth headers in Exchange data, making communication appear internal to both on-premises and Exchange Online. As a result, Exchange features such as MailTips and out-of-office (OOF) messages function and appear as expected to users and recipients.

Microsoft Messaging Infrastructure At-A-Glance

The Microsoft internal messaging infrastructure supports more than 200,000 mailboxes for employees, contractors, and business partners across three core divisions involving hundreds of products and services. As a company, Microsoft operates in more than 100 countries, with the majority of employees working at its Redmond, Washington headquarters. To support the corporate messaging environment, Microsoft IT manages multiple regional data centers connected by high-speed WAN links. The network dependencies have been refined and improved over time to the point where routing, DNS infrastructure, bandwidth, and other similar considerations are stable with high levels of redundancy and availability.

You can find out more about the Microsoft Exchange Server 2010 architecture at.

Although the technological capabilities of Exchange Server 2010 have enabled Microsoft IT to reduce costs and increase efficiencies by taking advantage of server consolidation and more flexible and larger storage, additional opportunities exist with a hybrid approach that incorporates Exchange Online.

On-Premises Messaging Architecture

The Exchange Server 2010 topology and architecture continues a tradition of following best practices, incorporating product group recommendations, and meeting business needs based on real-world performance data.
Figure 1 shows a high-level overview of the Microsoft on-premises architecture before implementing a hybrid infrastructure.

Figure 1. On-premises messaging infrastructure

The Exchange Server roles facilitate and separate the necessary functions of e-mail into servers that handle message filtering, transport, client access, mailbox storage, and unified messaging. As a best practice, Microsoft IT suggests deploying multi-role Exchange servers to support a hybrid infrastructure. For more information, including capacity planning, see.

Hybrid Messaging Architecture

As a cloud-based offering, Exchange Online provides messaging-as-a-service with an architecture that abstracts dependencies such as message filtering into additional services. The Exchange Online architecture uses a similar role-based approach as on-premises, but driven by the following services instead of roles:

- Forefront Online Protection for Exchange (FOPE): as a first-tier message handler for Exchange Online, FOPE provides protection from viruses and spam. Microsoft IT has used FOPE as a service since 2007 as its message filtering solution.

- Office 365 directory: Exchange Online uses its own directory service for user data. To handle authentication, the directory service relies on Microsoft Online ID.

- Exchange Online messaging: as the core service that handles messaging, Exchange Online includes transport and storage functionality to house mailboxes and facilitate mail flow.

Figure 2 shows Exchange Online in a hybrid architecture with Exchange on-premises.

Figure 2. Hybrid architecture

In a hybrid infrastructure, Exchange Online relies on the following additional services to enable cross-premises mail flow, synchronization, and unified management:

- Microsoft Federation Gateway: as an intermediary between Office 365 and on-premises services, the Microsoft Federation Gateway provides an identity service that connects users to the hosted services they want to use.

- ADFS: to enable single sign-on and communicate with the Microsoft Federation Gateway, Microsoft IT relies on ADFS.

- Microsoft Online Services Directory Synchronization tool: to synchronize mailboxes, the global address list, and other data, Microsoft IT relies on the Directory Synchronization tool.

Designing for Hybrid Messaging

The successful deployment of a hybrid messaging infrastructure for Microsoft IT requires that all the dependent on-premises services, such as ADFS, operate reliably and meet business needs. These services perform the intermediary data handling between on-premises and Exchange Online that makes account and messaging synchronization possible, as well as enabling workers to continue using the Outlook client, Outlook Web App, and mobile devices. At a high level, Microsoft IT fulfilled the following requirements in the hybrid design:

- Service domain to facilitate a single domain namespace. To forward e-mail from on-premises to Exchange Online, Microsoft IT configured a new DNS service domain for coexistence named messaging.microsoft.com. Upon sign-up, new companies are automatically given a customizable coexistence domain with the format .mail.onmicrosoft.com.

- On-premises federation through ADFS. The ADFS infrastructure is the on-premises service that provides a trust relationship between on-premises and Exchange Online to make single sign-on possible.
- Exchange federation through the Microsoft Federation Gateway. The Microsoft Federation Gateway is the trust broker that enabled Microsoft IT to establish a federation trust from Exchange Online to the on-premises Exchange environment. This enables synchronization and sharing of Exchange information, such as free/busy data. For more information, see.

The Microsoft IT environment is specifically designed for Microsoft business needs, yet the technical requirements and steps for deploying a hybrid environment are the same for all companies. For a guided lab that shows the steps of configuring on-premises and Exchange Online components, see.

Identity Management

To make the experience seamless for administrators and workers, the messaging environment must support a single authoritative source of user identity, with associated authentication, authorization, and permissions management. In a hybrid approach, the technical solution for a single authoritative source is to populate the Exchange Online directory with on-premises users, and then keep the two directories synchronized. As shown in Figure 3, Microsoft IT uses three technologies to make synchronization take place:

- ADFS 2.0. To communicate between the on-premises Active Directory environment and Exchange Online, Microsoft IT relied on the established ADFS infrastructure and created a relying party trust relationship between the ADFS federation server farm and Exchange Online. This relying party trust is a conduit for authentication tokens to facilitate single sign-on.

- Microsoft Federation Gateway. As an intermediary between Office 365 and on-premises services, Microsoft provides an identity service that connects users to the hosted services they want to use.

- Directory synchronization tool. Exchange Online begins using the on-premises identities the first time the directory synchronization tool is run. The directory synchronization tool synchronizes key data every three hours, including mail-enabled contacts and groups, the global address list (GAL), on-premises-based safe and blocked senders, and delegation details.

Figure 3. Identity, synchronization, and single sign-on technologies

ADFS Architecture

The ADFS infrastructure at Microsoft supports single sign-on for over 300 line-of-business applications hosted in the cloud or by partners and vendors outside of the internal corporate network. ADFS handles claim requests to verify identities and returns tokens to the requesting party, enabling applications to verify the identity of a user with Microsoft Active Directory credentials. ADFS relies on federation servers that authenticate users against Active Directory and issue claims, as well as federation proxy servers that reside in the perimeter network in front of the federation servers. Clustered SQL servers store configuration data, as shown in Figure 4.

Figure 4. ADFS architecture

To accommodate the additional traffic to the ADFS infrastructure due to Office 365, Microsoft IT more than doubled the size of its ADFS infrastructure. In July 2011, when mailbox migration to Office 365 first began, Microsoft IT operated 12 proxy servers and 12 federation servers. As onboarding accelerated, Microsoft IT added more servers. In March 2012, after increasing server numbers, Microsoft IT operated 56 servers, divided evenly between proxy and federation roles. The key metrics Microsoft IT uses for capacity planning come from the product group recommendations shown in Table 1.
Table 1. ADFS performance metrics

The number of authentication requests per second is the major threshold; Microsoft IT tries to keep it at an even load of 10-12. During March 2012, Microsoft IT migrated over 14,000 mailboxes, with a plan to monitor performance of the existing ADFS environment and then add additional capacity. Figure 5 shows the average requests per second for March.

Figure 5. Average requests per second

As Figure 5 shows, after deployment the average request rate decreased by roughly half, from about 20 per second to 14 per second. The number of authentication requests per user depends on the location of the user when making requests to Exchange Online. Microsoft IT modeled three types of users, as shown in Table 2, to understand projected server load and plan for ADFS capacity.

Table 2. User patterns for messaging-related ADFS load considerations

These models served as a starting point to determine how many more servers to add in order to accommodate the additional traffic expected from migrating mailboxes to Exchange Online. For more information about designing and capacity planning for ADFS proxy and federation servers, see and.

Usage Patterns and Bandwidth

An important aspect of the hybrid design is modeling user patterns and behaviors to understand how they affect bandwidth requirements and the user experience. The models Microsoft IT used for ADFS capacity planning do not necessarily address bandwidth and client-experience needs related to messaging, calendaring, and other Exchange traffic. To address these needs, Microsoft IT abstracted several user types, as shown in Table 3.

Table 3. Microsoft usage models for sizing considerations

Microsoft IT's considerations for bandwidth requirements based on the user models followed the established best practices of evaluating the connectivity at each gateway and monitoring performance. As migrations increase, Microsoft IT continues to monitor latency, jitter, collisions, utilization, and other network metrics to spot gateways and locations that need improvement. For more information about bandwidth planning, see.

One more performance consideration is the location of users relative to the Exchange Online data center, and the latency and bandwidth available between users and the data center. This is relevant both for the initial onboarding migration, due to the gigabytes of data transferred, and for ongoing needs, especially as Microsoft workers increasingly rely on mobile devices and work from home and on the road. Because Exchange Online relies on Internet infrastructure for mail traffic between office locations and the Exchange Online data center, performance and SLAs cannot be guaranteed; it is important to gather performance statistics from your environment. Two tools Microsoft IT uses for validating connectivity are and.
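To make the capacity arithmetic concrete, here is a back-of-the-envelope sketch in Python. The user counts and per-user request rates are invented for illustration (the paper does not publish them), and it treats the 10-12 requests/second band mentioned above as a per-server target, which is an assumption:

    import math

    # (users, auth requests per user per hour) -- illustrative only
    profiles = {
        "corpnet": (120000, 0.2),
        "remote":  (60000,  0.4),
        "mobile":  (40000,  0.6),
    }

    total_rps = sum(n * r for n, r in profiles.values()) / 3600.0
    per_server_target = 11.0   # middle of the 10-12 req/s band

    print("aggregate auth requests/s: %.1f" % total_rps)
    print("federation servers needed: %d"
          % math.ceil(total_rps / per_server_target))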
Client Performance

Microsoft users are accustomed to high performance levels with messaging, expecting all message delivery to complete in less than 90 seconds, availability of 99.99% or higher, and fast e-mail operations to read and manage schedules and e-mail items. In an on-premises deployment, Microsoft IT controls the messaging infrastructure and its dependencies because all traffic flows internally within the corporate network, or between users accessing internal Exchange servers over the Internet.

A hybrid deployment introduces additional variables that affect performance, because users accessing Exchange Online from within the corporate production environment do so over the Internet, the same as mobile and remote workers. The differences among gateways, client devices, and connectivity in Microsoft locations mean that the user experience at times may not be consistent among all sites.

Microsoft IT looks at two factors when considering client performance: MAPI RPC latency, and overall client system indicators such as CPU, disk, and file fragmentation. RPC latency includes round-trip latency to the mailbox server and server-side RPC processing. A helpful tool for determining these values is the connection status dialog, accessible by holding down the CTRL key, right-clicking the Outlook icon, and selecting Connection Status from the Outlook context menu. Microsoft IT uses the following thresholds when analyzing latency:

- Max Avg Proc Time (Exchange RPC latency) = 25 ms
- Max Network RTT Time (network ping time) = 300 ms
- Max Avg Resp Time (Exchange RPC latency + network latency) = 325 ms

For more information about client performance, see.

Service Dependencies

At its core, Exchange Server has always delivered, and continues to deliver, e-mail messaging and calendaring capabilities. Yet Exchange Server 2010 integrates with other services and applications such as SharePoint, the Office suite, and Lync Server, both on-premises and through Exchange Online. This integration, along with ADFS and directory synchronization, helps to facilitate the following hybrid Exchange capabilities:

- Delegate permissions for administrators. To maintain the delegate permissions that administrators need to support managers and executives, Microsoft IT migrates manager and delegate mailboxes together. Delegate permissions do not persist in Exchange Online unless all affected mailboxes are migrated at the same time.

- Free/busy sharing and synchronized calendaring. As part of federated delegation, free/busy information is shared between on-premises and cloud-based users. After Microsoft IT establishes a trust through the Microsoft Federation Gateway and configures a sharing relationship between on-premises and Exchange Online, it is possible to share free/busy data. The user experience is transparent because the Outlook client communicates with the local CAS server, which requests a delegation token from the Microsoft Federation Gateway, impersonates the user, and makes free/busy requests on each user's behalf.

- Public folders. Exchange Online does not support public folders. This is not an issue for Microsoft IT because users whose mailboxes are identified for migration do not rely on public folder functionality. For more information about public folder best practices in a hybrid deployment, see.

- Unified messaging. Exchange Online supports unified messaging features for Exchange, including voicemail, automated attendant, Outlook Voice Access, speech-to-text voicemail preview in seven languages, and inline playback.

- Outlook Web App redirection. In the initial hybrid implementation, Microsoft IT created a landing page for users who access the on-premises Outlook Web App URL that directs them to the Exchange Online URL. If a user accesses it from within the corporate network, only one login is required, whereas from the Internet, users need to authenticate twice. While working through the challenges, Microsoft IT collaborated with the Exchange Server product group to suggest improvements to streamline the experience.
Exchange Server 2010 SP2 incorporates the latest changes, with improvements to the Outlook Web App experience for hybrid deployments. For more information, see.

Mail Flow

Over the course of planning for and deploying the hybrid environment, Microsoft IT validated possible mail flow scenarios and developed best practices to streamline hybrid deployment for clients. Many of these configuration options are included in the Exchange Server Deployment Assistant and as improvements in on-premises Exchange Server 2010 SP2.

The routing configuration in a hybrid deployment is relatively straightforward. It comes down to having either on-premises or Exchange Online be the authoritative environment, and then relaying e-mail to the secondary environment. In a hybrid configuration, the on-premises and Exchange Online environments see each other as internal, trusted environments. Figure 6 illustrates the configuration and mail flow.

Figure 6. Message flow overview

To enable mail flow, Microsoft IT configured a dedicated send connector on Hub Transport servers secured by Transport Layer Security (TLS). That traffic traverses the Internet and enjoys the following protection measures:

- Channel privacy. Exchange 2010 forces TLS encryption for all messages by requiring that a SAN or fully qualified domain name (FQDN) on the associated Secure Sockets Layer (SSL) certificate for the sending server is configured as authorized on the receiving server.

- Receiver and sender authentication. To protect against impersonation, Exchange Server 2010 uses an encrypted auth header and domain validation, including validating the certificate of the receiving server against a revocation list with the certification authority (CA).

Exchange Server appends the auth header to messages to mark internal messages as trusted and authenticated, making messages and MailTips appear as internal in both Exchange Online and on-premises. The header works together with the certificates and the send connector to ensure mail flows smoothly between Exchange Online and on-premises. Figure 7 illustrates the role of the auth header. Because Exchange Server appends the auth header to all internal communication, features such as OOF notifications and MailTips work seamlessly for users.

Figure 7. Auth header

The auth header is relevant in the following mail flow scenarios for Microsoft IT:

- E-mail flow between Exchange Online and on-premises. When an on-premises user sends an e-mail to a user whose mailbox resides in Exchange Online, the on-premises Hub Transport server verifies that the SAN or FQDN of the SSL certificate matches the configured value. If the certificate subject is valid, Exchange appends the internal header to the e-mail and sends it to Exchange Online. The message bypasses the on-premises Edge server. The reverse direction follows a similar path, where the DNS and SSL configuration, along with the send connector on the Hub Transport server, enable encrypted mail to flow. The built-in features of Exchange Server give Microsoft IT the functionality needed to configure mail flow.

- E-mails between Exchange and Internet hosts. For other e-mail communication to and from Internet hosts, Exchange Online and on-premises use the standard Simple Mail Transfer Protocol (SMTP) mail flow, as detailed in.

Forefront Online Protection for Exchange (FOPE)

In the classic on-premises architecture, an Edge server running in a perimeter network provides initial mail filtering for anti-virus and anti-spam protection as well as SMTP relaying.
For Exchange Online, FOPE provides a similar service. FOPE includes high-accuracy spam filtering, with over 98% of spam filtered and 100% of viruses filtered by using multiple virus-scanning engines. FOPE also gives Microsoft IT a control center for advanced policy rules and reporting. Although it is possible to use an on-premises Edge server for mail filtering and SMTP relay in a hybrid architecture, Microsoft IT uses FOPE. The first contact point for handling e-mail messages is very important in the overall architecture, especially in the dependencies required when not using FOPE. The Exchange Deployment Assistant addresses this in the guidance it provides and accommodates both scenarios for initial mail handling. For more information about FOPE, see.

Migrating Mailboxes to Exchange Online

The exact process that Microsoft IT followed in migrating to Exchange Online entailed many months of planning, validating scenarios, and working to improve the client and user experience. Because its mission includes real-world validation and early adoption of Microsoft technologies, the deployment process did not follow a typical path. These efforts support customer needs: for example, before the changes introduced in on-premises Exchange Server 2010 SP2, the configuration requirements entailed over 50 distinct steps, which SP2 reduces to just six. As Microsoft IT migrates more mailboxes to Exchange Online, the migration velocity can increase from 5,000 to 15,000 mailboxes per month. At a high level, the deployment entailed making the following changes:

- Configure single sign-on. As a recommended prerequisite to a hybrid Exchange deployment, the on-premises credentials and user data should be used to authenticate with Exchange Online. Microsoft IT already operated an ADFS infrastructure, and configured it to support Exchange Online. On the Exchange Online side, after signing up for Exchange Online and verifying domain ownership, Microsoft IT configured the Microsoft Federation Gateway to work with its ADFS infrastructure through a trust relationship.

- Synchronize directories and data. In order to onboard user mailboxes, users must exist in Exchange Online. Microsoft IT configured directory synchronization to populate Exchange Online with users from the Active Directory environment.

- Configure DNS and certificates. Exchange relies on DNS entries for Autodiscover, which is necessary for a seamless online migration with no user interruptions. After migration, Outlook uses Autodiscover to detect the mailbox move and, upon restart, uses the Exchange Online service. Microsoft IT configured the MX records to point to FOPE.

- Deploy and configure the necessary on-premises Exchange dependencies. To enable the full range of Exchange features and services, such as mailbox search, Outlook Web App redirection, MailTips, free/busy sharing, message tracking, and archiving, Microsoft IT made the necessary on-premises configuration updates to work with Exchange Online. The Exchange Hybrid Configuration Wizard in Exchange 2010 SP2 automates many of the configuration steps.

- Verify mail flow. The auth header is crucial to bypass filters and mark internal messages as originating from trusted sources. Microsoft IT configured and verified mail flow between Exchange Online and on-premises, as well as with Internet hosts.

For deployment steps and instructions to deploy a hybrid environment, the best practice is to use the Exchange Deployment Assistant, which includes the latest steps.
To access the Exchange Deployment Assistant, see.

Migration Approach and Process

One of the advantages of a hybrid infrastructure is that it enables Microsoft IT to move mailboxes to and from Exchange Online without affecting availability, performance, or the user experience. The same core messaging and calendaring functionality remains available to users during the move, without service interruption. The migrations are made as online mailbox moves, so users do not need to synchronize data after migration. In practical terms, this means Microsoft IT may schedule mailbox moves at any time if all the dependencies and prerequisites are met for preparing and configuring settings, as well as informing users. The overall process is as follows:

- Test usage scenarios. Before Microsoft IT migrates mailboxes, it performs end-to-end system testing that includes all possible usage scenarios. This testing helps to discover and remedy system configuration and integration issues. Although the engineering staff audits configurations, real-world issues sometimes arise, especially with new changes. Thorough testing also enables Microsoft IT to better understand the environment and build a decision matrix to identify the users who can move their mailboxes to Exchange Online.

- Scope mailbox migration. Microsoft IT creates a list of potential users to be moved, gathers statistics about their mailboxes, and makes decisions about the migration order based on a decision matrix. This decision matrix depends on the business and IT needs. For example, Microsoft IT decided to simplify infrastructure and operational support by adopting the default configuration and reducing customization as much as possible. This may mean not introducing some features and functionality. One example is not migrating the mailboxes of users on a legacy telephone system, and only migrating mailboxes with Lync 2010 Enterprise Voice to Exchange Online. This decision saves Microsoft IT third-party gateway costs and associated support overhead. It also simplifies the Exchange Unified Messaging configuration, and enables Microsoft IT to focus its efforts on driving Lync 2010 Enterprise Voice as the default telephony and collaboration platform. Microsoft IT is working to transition the majority of mailboxes to Exchange Online to reduce costs while still offering users the best experience.

- Verify configuration. This includes ensuring that Exchange Online is prepared with the appropriate objects, that directory synchronization functions, and that mail flows between Exchange on-premises and Exchange Online. This step also serves as a safeguard to verify that there are no scheduled service windows or current outages in dependent services.

- Update user computers. To ensure that users have the latest Outlook client version and required software such as the Microsoft Online Services Sign-in Assistant, Microsoft IT uses System Center Configuration Manager (SCCM) to package the required software and deploy it on user computers.

- Migrate mailboxes. After notifying users of the migration schedule, Microsoft IT migrates mailboxes and sends notices upon successful completion.

For more information about determining how many mailboxes to migrate, the anticipated migration timeframe, and other migration performance details, see the migration performance guide at.

Phases

The rate at which Microsoft IT migrates mailboxes is closely tied to the rate at which improvements and change requests from previous phases are implemented as features.
Between phases, Microsoft IT allowed a period of one to two weeks to implement changes and continually improve the user experience and migration process. The phases were as follows:

- Phase 1: Environmental validation. The purpose of this phase is to discover and fix any system configuration errors and integration issues by creating test accounts and performing usage scenarios.

- Phase 2: Early adopter validation. The early adopter volunteers troubleshoot, gather logs, and provide constructive feedback to the project teams. In this phase, Microsoft IT migrated 10 to 20 mailboxes per week, stopping at approximately 100 mailboxes.

- Phase 3: Expanded early adoption. During the expanded early adoption phase, Microsoft IT migrated the accounts of 1,000 additional volunteers who were eager to explore new options in technology. The migration proceeded in stages, stopping when major issues were discovered and resuming upon resolution.

- Phase 4: Executive opt-in. To stress-test the approaches developed, Microsoft IT reached out to executives to migrate entire teams and reach the number of mailboxes necessary for larger-scale performance testing. In this phase, Microsoft IT also introduced a stabilization period of 21 days in which no changes were made and statistics were gathered to gauge availability and stability.

- Phase 5: Company-wide signup. Following team migration, Microsoft IT opened up signups to volunteers company-wide, having resolved the underlying high-severity issues.

- Phase 6: Company-wide adoption. Once the hybrid infrastructure meets the shared goals of Microsoft IT, product developers, and other infrastructure team members, Microsoft IT plans to migrate all mailboxes to Exchange Online, unless there is a business need for them to remain on-premises.

Supporting Users

During the transition to a hybrid infrastructure, Microsoft IT minimizes support tickets by informing users and designing the architecture with the goal of least user impact. The typical process for any Microsoft IT improvement project includes a focus on user education. This entails a broad, multimedia approach of making help available to users on their own terms, including the following:

- Online help. Microsoft IT developed online help to answer frequently asked questions, provide user self-help capabilities, and inform users about working with Exchange Online by suggesting best practices.

- E-mails detailing project schedule and status. As a best practice, Microsoft IT informs users personally when a scheduled task affects them, and follows up after completing the task with status details.

- Updated knowledge for front-line operators. The support and escalation path remains the same for users due to the centralized controls that a hybrid infrastructure offers. However, as part of preparing for mailbox migration, Microsoft IT collects incident details and transfers the resolution specifics to internal front-line operators as well as to the support team for Exchange Online to aid in issue resolution. To help facilitate this knowledge sharing, Microsoft IT established a supportability team to do deep analysis of each ticket and identify trends, in order to support and prioritize change requests made to the Exchange product group.

- Validation team. Due to the need to validate many possible customer scenarios and features, Microsoft IT created a dedicated validation team. This team has oversight to validate possible customer configurations, record findings, recommend improvements, and create best practices.
Exchange Server 2010 SP2 on-premises incorporates some of the findings of this team as product improvements to simplify customer hybrid deployments. This team also validates features and functionality for Microsoft users to ensure a smooth transition process.

- Feedback loop. Microsoft IT migrated the earliest mailboxes with the intention of obtaining migration and usability feedback. The early volunteer users relied on a feedback portal to give real-time feedback as a smile, frown, improvement idea, or issue. This feedback loop complemented the one-week and one-month post-migration surveys users filled out to help Microsoft IT gauge the overall user experience, such as the migration experience and usage performance. This helped Microsoft IT to identify improvement areas for infrastructure, configuration, and product design changes.

- Self-help tool. Microsoft IT treats both on-premises and Exchange Online as a single service, and the helpdesk supports both groups of users. It is important to be able to identify the environment that hosts a mailbox, so Microsoft IT created a web portal that provides information about the mailbox location, the Outlook Web App link, ActiveSync, and other information pertaining to that user.

Lessons Learned and Best Practices

Over the course of designing, deploying, and operating a hybrid Exchange infrastructure, Microsoft IT learned many lessons about the best approaches to running a hybrid environment. Although some of these are applicable specifically to the Microsoft production environment, the following best practices apply to hybrid Exchange deployments in general:

- Use available migration tools and wizards. Many of the findings that engineers, architects, and implementers made are implemented in the configuration wizard and supporting tools that Microsoft makes available to anyone using Office 365. Whenever a configuration step could be automated or implemented as a product change, Microsoft IT worked to transfer its knowledge into a standard for all customers.

- Focus on core architectural elements. At first glance, a hybrid infrastructure takes a potentially complex Exchange architecture and topology and introduces additional configuration requirements and management overhead. Once the underlying dependencies, such as ADFS, Internet ingress and egress, and network latency, are established and configured with adequate performance, Microsoft IT found that a hybrid deployment still maintains centralized administration and introduces little architectural complexity while preserving a unified user experience.

- Adopt a services-based perspective. With Exchange Online, every aspect (spam/virus protection, the Microsoft Federation Gateway, messaging, and so on) is provided as a service and not as a feature or component. In understanding a hybrid architecture, it is helpful to consider some aspects of on-premises as service counterparts, to abstract the architectural elements and understand their dependencies and relationships. For example, as a counterpart to the Office 365 directory, there is Active Directory. Wherever overlap happens between services, it is important to remember that there must be a way to achieve a single, synchronized version that is transparent to the user. Enabling technologies such as ADFS and the Microsoft Federation Gateway facilitate this seamless integration.
- Audit common sources of misconfiguration. Microsoft IT investigates possible upstream and downstream causes to isolate root causes and remedy them. When troubleshooting typical on-premises components, there is not always a corresponding cloud counterpart, which makes it challenging to do direct comparisons and remedy issues. Microsoft IT proactively audits the most common possible issues as a preventative measure to reduce the troubleshooting necessary. One useful tool already mentioned for common troubleshooting and auditing tasks is the remote connectivity analyzer, located at.

- Identify send-as relationships. Whereas users may specify delegate rights, administrators assign send-as rights to grant someone control over a specific mailbox. These send-as rights do not synchronize automatically as you migrate mailboxes. Microsoft IT determines on-premises send-as permissions through PowerShell before migrating mailboxes, and uses PowerShell scripting to apply the same permissions after migrating mailboxes to Exchange Online.

- Engage the infrastructure team early. Mailbox migration to Exchange Online results in e-mail traffic traversing the Internet across provider backbone routers instead of internal WAN networks and internal routers. This change may require increasing the capacity and sizing of the Internet proxy egress infrastructure, ADFS, bandwidth, gateway IP configuration if doing Network Address Translation (NAT), the on-premises Exchange Hub Transport configuration, and so on. It is important to engage the various teams responsible for all these services early and carry out capacity re-engineering to ensure project success. For example, Microsoft IT discovered that intermittent client connectivity was caused by a flood threshold on the proxy array, and quickly reached out to the associated team to engineer and implement a new solution.

- Support the support department. With a new service, support personnel must be trained on possible issues and on how to isolate and troubleshoot root causes. Having tools that identify mailboxes as on-premises or in the cloud helps when isolating root causes.

- Practice change management. With new technology adoption, users generally want to start using the new and exciting features. Yet with messaging, there is a high expectation that the service be reliable with high availability, which may not be possible at very early deployment stages. Microsoft IT mitigates this by ensuring users have all possible collaboration tools, so that when one service is not available, workers may continue to carry out their tasks. For example, when e-mail service is unavailable, users can continue to collaborate with colleagues through Lync 2010 via instant messaging or voice calls. They may also work on documents via SharePoint or send documents via Lync. At Microsoft, many early innovators are keen to be early adopters because service outages do not severely affect their ability to work. After Microsoft IT achieves stability with a new service, it migrates the rest of the company. This methodology satisfies user needs, creates high satisfaction, gives Microsoft IT the ability to support the developers in testing, and creates a better product.

- Communicate with users. Active communication to users via a web portal, newsletters, and e-mails keeps users excited about the program and informs them about new features or issues.
Microsoft IT rewards and recognizes the users who provide the most constructive feedback and support, which maintains user motivation and commitment to dogfooding additional products and services.

- Audit gateway configuration. Microsoft IT audited two configuration details for gateways: flood thresholds for TMG gateways, and Outlook client port exhaustion when using NAT. For more information about TMG configuration, see.

Microsoft, Forefront, Lync, Outlook, SharePoint, SQL Server, Windows, Windows PowerShell, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are property of their respective owners.
https://technet.microsoft.com/en-us/library/jj218963
CC-MAIN-2017-47
en
refinedweb
Hopefully this is rather a simple C++ question (not a language-lawyer one). How is one supposed to use the GNU extension dladdr?

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <dlfcn.h>

    static void where_am_i() {}

    int main()
    {
        Dl_info info;
        dladdr( (void*)&where_am_i, &info );
        return 0;
    }

    $ clang --version
    Debian clang version 3.6.2-3 (tags/RELEASE_362/final) (based on LLVM 3.6.2)
    Target: x86_64-pc-linux-gnu
    Thread model: posix
    $ clang -Wpedantic -o foo foo.cpp -ldl
    foo.cpp:11:11: warning: cast between pointer-to-function and
    pointer-to-object is an extension [-Wpedantic]
        dladdr( (void*)&where_am_i, &info );
                ^~~~~~~~~~~~~~~~~~
    1 warning generated.

There is no standard way to portably convert a function pointer to void*. As such, there is no standard way to portably use dladdr.

Prior to C++11, such a conversion was ill-formed (I don't have the document available, but the warning by clang suggests it). Since C++11, however, the conversion is conditionally supported:

[expr.reinterpret.cast]/8 (standard draft)

    Converting a function pointer to an object pointer type or vice
    versa is conditionally-supported. The meaning of such a conversion
    is implementation-defined, except that if an implementation
    supports conversions in both directions, converting a prvalue of
    one type to the other type and back, possibly with different
    cv-qualification, shall yield the original pointer value.

Since you are already relying on the C library extension that provides dladdr, you might as well rely on the language extension that lets you cast a function pointer to void*. In that case, you may want to ask the compiler not to warn about using language extensions, by compiling without the -Wpedantic option - or use a standard version where the conversion is at least conditionally supported. If the conversion isn't supported, then neither is dladdr usable.
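As a side note, the same lookup can be poked at interactively from Python via ctypes, which performs exactly the function-pointer-to-void* punning that the standard calls conditionally-supported. This sketch assumes Linux/glibc and looks up printf as an example:

    import ctypes
    import ctypes.util

    class Dl_info(ctypes.Structure):
        _fields_ = [("dli_fname", ctypes.c_char_p),
                    ("dli_fbase", ctypes.c_void_p),
                    ("dli_sname", ctypes.c_char_p),
                    ("dli_saddr", ctypes.c_void_p)]

    libdl = ctypes.CDLL(ctypes.util.find_library("dl"))
    libdl.dladdr.argtypes = [ctypes.c_void_p, ctypes.POINTER(Dl_info)]

    libc = ctypes.CDLL(None)
    addr = ctypes.cast(libc.printf, ctypes.c_void_p)  # function pointer as void*
    info = Dl_info()
    if libdl.dladdr(addr, ctypes.byref(info)):
        print(info.dli_fname, info.dli_sname)  # e.g. b'.../libc.so.6' b'printf'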
https://codedump.io/share/dhTzct3mVGVM/1/dladdr-pointer-to-function-vs-pointer-to-object
CC-MAIN-2017-47
en
refinedweb
Python unittest helpers adapted from Testify

All the assertions from Testify, but cleaned up a bit & with added py3k support. Should work with Python 2.5-3.3 and pypy 1.9. To make sure it will work for you: python setup.py test.

Installation

There are no dependencies. Simply:

    pip install testy

Example Usage

    import re
    import unittest

    from testy.assertions import (assert_dict_subset, assert_raises,
                                  assert_match_regex)


    class MyTestCase(unittest.TestCase):

        def setUp(self):
            self.x = dict(a=1, b=2)

        def test_x(self):
            assert_dict_subset(dict(b=2), self.x)

        def test_exception(self):
            with assert_raises(TypeError):
                raise TypeError("Call some code you expect to fail here.")

        def test_pattern(self):
            pattern = re.compile('\w')
            assert_match_regex(pattern, 'abc')

        def tearDown(self):
            self.x = None


    if __name__ == "__main__":
        unittest.main()
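Since the assertion helpers are plain functions, nothing stops you from using them outside a TestCase as well; a quick sketch, reusing only names from the example above:

    from testy.assertions import assert_dict_subset, assert_raises

    assert_dict_subset(dict(b=2), dict(a=1, b=2))

    with assert_raises(ZeroDivisionError):
        1 / 0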
https://pypi.org/project/testy/
CC-MAIN-2017-47
en
refinedweb
#include <stdinc.h>
#include <getparam.h>

string defv[] = {
    "key1=val1\n    Example help for key1",
    "key2=val2\n    Example help for key2",
    "VERSION=1.0\n   30-apr-2002 pjt",
    NULL,
};

string usage="Example usage for program";
string cvsid="$Id: $";

void initparam(string *argv, string *defv)
void finiparam(void)

void stop(int level)

string getparam(string name)
int getiparam(string name)
long getlparam(string name)
double getdparam(string name)
bool getbparam(string name)

string *getargv(int *argc)

int indexparam(string basename, int idx)
string getparam_idx(string basename,int idx)
int getiparam_idx(string basename,int idx)
long getlparam_idx(string basename,int idx)
double getdparam_idx(string basename,int idx)
bool getbparam_idx(string basename,int idx)

void putparam(string name,string value)
void promptparam(string name,string prompt)

bool hasvalue(string name)
bool isaparam(string name)
bool updparam(string name)
int getparamstat(string);

extern int error_level;
extern int debug_level;
extern int yapp_level;
extern int help_level;

undocumented:
_bell_level
_history
_review_flag
_yapp_dev
_yapp_string
_help_string
_argv_string

initparam is called with two arguments: the argument vector argv which
is passed to main, and a similar vector defv which determines the set
of legal keyword names and their default values. More precisely, defv
is of the form

    string defv[] = { "name=value\nhelp", . . ., "VERSION=x.y", NULL };

initparam determines a value for each name in defv by scanning the
command line arguments in argv, any values supplied in a keyword file
(see below), and finally adopting the value supplied by defv if no
other value can be found. Note the recommended (but not enforced)
usage of the VERSION keyword; this version ID will be used to tag the
history of datafiles which were created by programs, and will also
warn users when outdated keyword files are used (see below).

Arguments specified in argv are matched with names specified in defv
in either of two ways. Those appearing to the left of the first
argument containing an embedded "=" sign are matched by position: the
first such argument is associated with the first name listed in defv,
and so on; it is an error if more arguments are supplied in argv than
names are supplied in defv. The remaining arguments must all be of the
form "name=value", and are matched by keyword: name must appear in
defv, and value is associated with name. It is an error to specify
more than one value for a given name. If getparam is compiled with
MINMATCH, keyword names are also matched in minimum-match mode.

An exception to command-line parsing is made for any arguments that
follow the standalone command line argument "--". The function
getargv() can be used to get access to any arguments that follow (and
include) "--". The returned array simply points into the original
char **argv, but starts at the location of "--". The returned argc
therefore only counts the number of arguments after (and including)
"--", thus making argv[0] be "--".

Depending on a user-specified help_level (see below), parameters may
also be set by reading them from a keyword file. The keyword file is
unique for each program, and has the name "program.def". Although
there is some control over the directory in which these keyword files
should be located ($NEMODEF, but more on that later), it is dangerous
to use keyword files during multiple sessions since NEMO does not use
a file locking mechanism. Command line keyword values are always
favored over values from a keyword file.
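The positional-then-keyword matching rule is compact enough to sketch
outside of NEMO; this is not NEMO code, just the rule above restated
in Python for clarity (help strings after the \n are ignored, and
Python 3.7+ dict ordering is assumed for the positional match):

    def match_args(argv, defv):
        params = dict(kv.split("\n")[0].split("=", 1) for kv in defv if kv)
        names = list(params)
        pos, seen_kw = 0, False
        for arg in argv[1:]:
            if "=" in arg:
                seen_kw = True
                name, value = arg.split("=", 1)
                if name not in params:
                    raise KeyError('Parameter "%s" unknown' % name)
                params[name] = value
            elif seen_kw:
                raise ValueError("positional arg after name=value: " + arg)
            else:
                params[names[pos]] = arg      # matched by position
                pos += 1
        return params

    print(match_args(["p", "1", "b=2"],
                     ["a=0\nhelp a", "b=0\nhelp b", "VERSION=1.0"]))
    # -> {'a': '1', 'b': '2', 'VERSION': '1.0'}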
Once initparam has returned, the value associated with a name may be
obtained (as a string) by getparam, or directly converted to an int,
long, double, or bool by the functions getiparam, getlparam,
getdparam, and getbparam respectively. The latter recognizes the digit
"1" or any string starting with one of "tTyY" as TRUE, and "0" or any
string starting with one of "fFnN" as FALSE. All the getXparam
functions can do simple parsing of expressions; see nemoinp(3NEMO) for
some extended rules. Also note that getparam returns a string in
private space that should not be modified or freed! A single integer
parameter that starts with the 0x prefix can also be parsed as a
hexadecimal value (see strtol(3)).

getparamstat is currently only available to provide ZENO
compatibility; it currently triggers a call to error(3NEMO) when
called.

As a special case, the contents of argv[0], which is the name used to
invoke the process, are associated with the keyword name argv0. This
is useful when reporting errors from library routines which may have
no other way of determining what program called them; for example,

    printf("sqrt in %s: negative arg\n", getparam("argv0"));

could be used to print an error message from a square-root procedure,
giving the name of the program in which the error occurred. The macro
getargv0(), defined in getparam.h, can also be used for this, instead
of calling getparam(). This technique is used by the routine
error(3NEMO); it reports the program name in square brackets before
the string is output.

The optional usage and cvsid strings, which need to be defined by the
user, can be queried with the help=u and help=I options. Not only does
this enable the running task to warn users if outdated keyword files
are used, it also provides an automated way to label the data history
with the version of the program used to generate that data. A minor
version number conflict will result in a warning message, but a major
one will result in a fatal error message. If your program has changed
data format, or keywords have changed meaning or name, it is advised
to change the major version number. The external VERSION id is the id
stored in some external keyword database (such as the command line or
a keyword file) that is supplied to the running task. This would make
it possible for programs to refuse execution if the internal and
external VERSION ids do not match. We do not currently employ this
technique. Most NEMO programs have a section labeled UPDATE HISTORY in
which old version IDs are labeled by time, comment and author.

indexparam(basename,-1) returns the largest index that was found,
indexparam(basename,-2) returns 0 if the keyword is not an indexed
keyword, and indexparam(basename,idx), for idx >= 0, will check
existence (1=true) for a specific index.

help=option,option,...
If this argument, which must be specified by name, appears in argv,
initparam will generate some helpful information before returning.
Possible options include ... These options must be abbreviated to one
character. For example, help=d,q will print defaults and then quit
(actually, the comma is not needed). This feature may be disabled by
including an entry for help in defv, in which case help processing is
left to the applications program (not recommended). An environment
variable HELP or the system keyword help= can be set to a non-zero
number to change to various levels of interactive input, if
implemented.
The NEMO Users Guide is often more complete.

A key-less parameter that contains an '=' sign confuses the parser, which will most likely complain about an unknown parameter. E.g. "i%%128==0" will return: Parameter "i%128" unknown.

% p help=
p a=1 b=2 c#= VERSION=0.1
% p help=h
a        : keyword a [1]
b        : keyword b [2]
c#       : indexed keyword c []
VERSION  : PJT [0.1]
% p a=2
% p c2=1 c0=1.2
% mkplummer . 1000000 help=c
CPU_USAGE mkplummer : 7.84    6.99 0.42    0.00 0.00   6202936

xx-nov-86   created                                          Joshua Barnes
16-oct-87   add system keyword host=                         Peter Teuben
9-mar-88    add system keyword debug=                        PJT
21-apr-88   interactive input                                PJT
24-nov-88   editor mode in help=                             PJT
6-mar-89    added nemoinp parsing of getXparam               PJT
28-nov-94   V3 rewrite, many new features, deleted some others   PJT
12-feb-95   added updparam
20-jan-02   re-implemented indexed keywords                  PJT
12-jul-03   added getargv()                                  PJT
13-may-04   added help=c                                     PJT
29-dec-04   added help=I and documented CVSID                PJT
http://bima.astro.umd.edu/nemo/man_html/getparam.3.html
CC-MAIN-2017-47
en
refinedweb
Hello, a customer of mine purchased a pretty bad hosting plan and I don't have SSH access. Their ticket support only clarified "yes, we support Python on our servers", however I can't run any .cgi, .py or application.wsgi files. Is there a sure method to know whether the server supports Python? I only have FTP and the DirectAdmin interface, and I'd like to learn more before complaining to their support system again, otherwise they're not going to pay attention. The host is neubox.net.

This is what I already tried. The tutorial www.howtoforge.com/embedding-python-in-apache2-with-mod_python-debian-etch worked on my dev machine, but it says that I have to add a virtual host in the Apache2 sites-available directory, and clearly I don't have access to that folder on the hosting.

I also tried putting this script at the root of my host, called application.wsgi; it didn't work:

import os
import sys

os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

I also tried this file, application.py, at the root:

#!/usr/bin/env python
# -*- coding: UTF-8 -*-

# enable debugging
import cgitb
cgitb.enable()

print "Content-Type: text/plain;charset=utf-8"
print
print "Hello World!"

Those files were shown as plain text. I tried the exact same code named application.cgi, and strangely enough it returned a 404 error; the file, of course, is there. I saw in the DirectAdmin interface, in the site summary, that CGI-Bin is OFF, so I guess that's the reason for the 404. On the same page I can see the name servers; the first states "Apache is functioning normally".

If only I could know what OS they are running. I'm almost sure it's Linux, because at my root there is a folder .htpasswd, and dot-prefixed names are hidden on Linux, but I'm not sure if that's a reliable way to tell.

They gave me this URL; it's for phpinfo() (FastCGI). But all the tutorials about this talk about doing things like changing the Apache configuration, which I clearly can't do. Is this the end of my search? Do they simply not support Python?

On your server they say they have PHP, so maybe you can use PHP to retrieve more information by executing a Python script:

# hacking.py
import sys
print sys.version_info

And then you make something like this:

<?php
// echo $path = exec('pwd');
// exec python script
echo exec('python hacking.py');
?>

Bear in mind the file permissions.
http://codeblow.com/questions/how-you-can-test-for-python-support/
CC-MAIN-2017-47
en
refinedweb
Is there an easy way to check that the gradient flow in the network is proper? Or whether it is broken somewhere in the network? Will this gradcheck be useful? How do I use it? Example?

Gradcheck checks a single function (or a composition) for correctness, e.g. when you are implementing new functions and derivatives. For your application, which sounds more like "I have a network, where does funny business occur", Adam Paszke's script to find bad gradients in the computational graph might be a better starting point. Check out the thread below.

Best regards

Thomas

But when I use it in my network:

...
import bad_grad_viz as bad_grad
...
loss_G, loss_D = model(data_1, data_2)
get_dot = bad_grad.register_hooks(loss_G)
model.optimizer_network.zero_grad()
loss_G.backward()
loss_D.backward()
dot = get_dot()
dot.save('/path/to/dir/tmp.dot')

I get this error from bad_grad_viz.py:

grad_output = grad_output.data
AttributeError: 'NoneType' object has no attribute 'data'

More info: model is a GAN network.

print(type(loss_G))
>>> <class 'torch.autograd.variable.Variable'>
print(type(loss_D))
>>> <class 'torch.autograd.variable.Variable'>
print(loss_G.requires_grad)
>>> True
print(loss_D.requires_grad)
>>> True

Any suggestion @apaszke?

What if you test that condition and return False?

Try 1:

def is_bad_grad(grad_output):
    if grad_output.requires_grad == True:
        print('grad_output has grad')
    grad_output = grad_output.data

Error:

if grad_output.requires_grad == True:
AttributeError: 'NoneType' object has no attribute 'requires_grad'

Try 2:

def is_bad_grad(grad_output):
    if grad_output.requires_grad == False:
        print('grad_output doesnt have grad')
    grad_output = grad_output.data

Traceback (most recent call last):
grad_output doesnt have grad
grad_output doesnt have grad
if grad_output.requires_grad == False:
AttributeError: 'NoneType' object has no attribute 'requires_grad'

Please give more details, so that I can debug this issue.

It seems that something makes your output not require grads as much as one would expect. This could happen due to networks being in .eval() instead of .train(), setting requires_grad = False manually, volatile, or something entirely different... If you had a minimal demo of how it happens, it would be easier to find out why it is not working.

Best regards

Thomas

I use a simple trick. I record the average gradients per layer in every training iteration and then plot them at the end. If the average gradients are zero in the initial layers of the network, then the network is probably too deep for the gradient to flow. This is how I do it:

def plot_grad_flow(named_parameters):
    ave_grads = []
    layers = []
    for n, p in named_parameters:
        if (p.requires_grad) and ("bias" not in n):
            layers.append(n)
            ave_grads.append(p.grad.abs().mean())
    plt.plot(ave_grads, alpha=0.3, color="b")
    plt.hlines(0, 0, len(ave_grads)+1, linewidth=1, color="k")
    plt.xticks(range(0, len(ave_grads), 1), layers, rotation="vertical")
    plt.xlim(xmin=0, xmax=len(ave_grads))
    plt.xlabel("Layers")
    plt.ylabel("average gradient")
    plt.title("Gradient flow")
    plt.grid(True)

loss = self.criterion(outputs, labels)
loss.backward()
plot_grad_flow(model.named_parameters())

This is my training iteration. I need to check the gradient. I incorporated what you suggested. I am getting this error:
grad_output = grad_output.data
AttributeError: 'NoneType' object has no attribute 'data'

The code snippet:

for i, batch in enumerate(train_loader):
    if (i % 100) == 0:
        print(epoch, i, len(train_loader))

    # Clear the gradient
    optimizer.zero_grad()

    # Input
    imgs, ques, ans = batch
    # imgs, ques, ans = imgs.cuda(), ques.cuda(), ans.cuda()
    imgs, ques, ans = imgs.to(device), ques.to(device), ans.to(device)

    # Forward pass / output
    ansout, queryout = net(imgs, ques, ans)

    # Loss calculated
    # Here answer is my target, thus ".data" is used
    queryloss = loss(queryout, ansout.data)
    # Here query is my target
    ansloss = loss(ansout, queryout.data)

    get_dot = check_grad.register_hooks(ansloss)

    # MSE added
    # print(ansloss)
    sum_ans_train += ansloss.item()
    queryloss.backward(retain_graph=True)

    # Backpropagation
    ansloss.backward()

    dot = get_dot()
    dot.save('/home/Abhishek/plots/tmp.dot')

A much better implementation of the function:

def plot_grad_flow(named_parameters):
    '''Plots the gradients flowing through different layers in the net during training.
    Can be used for checking for possible gradient vanishing / exploding problems.

    Usage: Plug this function in the Trainer class after loss.backwards() as
    "plot_grad_flow(self.model.named_parameters())" to visualize the gradient flow'''
    ave_grads = []
    max_grads = []
    layers = []
    for n, p in named_parameters:
        if (p.requires_grad) and ("bias" not in n):
            layers.append(n)
            ave_grads.append(p.grad.abs().mean())
            max_grads.append(p.grad.abs().max())
    plt.bar(np.arange(len(max_grads)), max_grads, alpha=0.1, lw=1, color="c")
    plt.bar(np.arange(len(max_grads)), ave_grads, alpha=0.1, lw=1, color="b")
    plt.hlines(0, 0, len(ave_grads)+1, lw=2, color="k")
    plt.xticks(range(0, len(ave_grads), 1), layers, rotation="vertical")
    plt.xlim(left=0, right=len(ave_grads))
    plt.ylim(bottom=-0.001, top=0.02)  # zoom in on the lower gradient regions
    plt.xlabel("Layers")
    plt.ylabel("average gradient")
    plt.title("Gradient flow")
    plt.grid(True)
    plt.legend([Line2D([0], [0], color="c", lw=4),
                Line2D([0], [0], color="b", lw=4),
                Line2D([0], [0], color="k", lw=4)],
               ['max-gradient', 'mean-gradient', 'zero-gradient'])

simply wanted to comment that this^ is a wonderfully written function -> highly recommend.

I have a peculiar problem. Thanks to the function provided above I was able to see the gradient flow, but to my dismay, other people's graphs show the gradient decreasing from the right side to the left side, which is as God intended. In my case, the graphs show the gradient decreasing from the left side to the right side, which is clearly wrong. I will be highly grateful if somebody can tell me what's going on with the network. It has a convolutional block followed by an encoder and decoder; the network is fully convolutional. I will be highly grateful for any help provided.

I have a class of VGG16 and I wonder if named_parameters in your function refers to model.parameters()? model is an instance of the class VGG16, by the way. If your response is 'yes', then I receive the error 'too many values to unpack (expected 2)' for the command 'for n, p in model.parameters():'. Do you see the reason?

For any nn.Module instance, m.named_parameters() returns an iterator over pairs of (name, parameter), while m.parameters() just returns one for the parameters. You should be able to use m.named_parameters().

Best regards

Thomas

This helped a lot. Thank you very much.

Best Regards

There is a place in heaven for people like you! Any chance pytorch is integrating something alike soon?
@RoshanRane I used your code for plotting the gradient flow (thank you!), and obtained the output below for a single-layer GRU. I was surprised to see the gradient of the hidden state stay so small. The only thing I can think of as to why this would be the case is that the hidden state is re-initialized with each training example (and thus stays small), while the other gradients accumulate as a result of being connected to learned parameters. Does that seem correct? Is this what a plot of the gradient flow in a single-layer GRU should typically look like?

Alternatively, for a 4-layer LSTM, I get the following output. Does that seem correct? Is this what a plot of the gradient flow in a multi-layer LSTM should typically look like? The larger gradient values are from the initial epochs; I am not sure why they are so much larger to start with. Thoughts?

not sure what Line2D is in your code???

It's a class from the matplotlib library; just add this line to the top of your script:

from matplotlib.lines import Line2D
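Pulling the pieces of this thread together, a minimal end-to-end sketch of how the plotting helper is typically wired into a training loop looks like this; model, criterion, optimizer and train_loader are assumed to be defined elsewhere, and plot_grad_flow is the helper posted above.

import matplotlib.pyplot as plt
import numpy as np
from matplotlib.lines import Line2D  # required by plot_grad_flow's legend

for inputs, labels in train_loader:
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    # Inspect gradients after backward() and before optimizer.step().
    plot_grad_flow(model.named_parameters())
    optimizer.step()

plt.show()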
https://discuss.pytorch.org/t/check-gradient-flow-in-network/15063
CC-MAIN-2019-30
en
refinedweb
Wow, I was just looking for this, this exact moment... What did you need these definitions for? I'm trying to build nodejs (have already got libv8 built) and I was missing the definition for loadavg (and then __fixpt_t). On 19/04/12 19:25, Robert Millan wrote: > Modified: trunk/glibc-ports/kfreebsd/bits/resource.h > =================================================================== > --- trunk/glibc-ports/kfreebsd/bits/resource.h 2012-04-19 17:36:23 UTC (rev 4208) > +++ trunk/glibc-ports/kfreebsd/bits/resource.h 2012-04-19 18:25:30 UTC (rev 4209) > @@ -22,6 +22,7 @@ > #endif > > #include <bits/types.h> > +#include <sys/_types.h> > > /* Transmute defines to enumerations. The macro re-definitions are > necessary because some programs want to test for operating system > @@ -118,6 +119,16 @@ > }; > #endif > > +struct orlimit { > + __int32_t rlim_cur; /* current (soft) limit */ > + __int32_t rlim_max; /* maximum value for rlim_cur */ > +}; > + > +struct loadavg { > + __fixpt_t ldavg[3]; > + long fscale; > +}; > + > #define CP_USER 0 > #define CP_NICE 1 > #define CP_SYS 2 -- Steven Chamberlain steven@pyro.eu.org
https://lists.debian.org/debian-bsd/2012/04/msg00285.html
CC-MAIN-2019-30
en
refinedweb
SwitchableSiteMapProvider class

Provides a way for a site's navigation settings to determine the SiteMapProvider instance that should be used when rendering a page.

Inheritance hierarchy
System.Object
  System.Configuration.Provider.ProviderBase
    System.Web.SiteMapProvider
      Microsoft.SharePoint.Publishing.Navigation.SwitchableSiteMapProvider

Namespace: Microsoft.SharePoint.Publishing.Navigation
Assembly: Microsoft.SharePoint.Publishing (in Microsoft.SharePoint.Publishing.dll)

Syntax

'Declaration
<SharePointPermissionAttribute(SecurityAction.LinkDemand, ObjectModel := True)> _
Public NotInheritable Class SwitchableSiteMapProvider _
    Inherits SiteMapProvider _
    Implements IEditableSiteMapProvider

'Usage
Dim instance As SwitchableSiteMapProvider

[SharePointPermissionAttribute(SecurityAction.LinkDemand, ObjectModel = true)]
public sealed class SwitchableSiteMapProvider : SiteMapProvider, IEditableSiteMapProvider

Remarks

The SwitchableSiteMapProvider class enables the SiteMapProvider class to be redirected at runtime according to the WebNavigationSettings data that is stored in the properties of the SPWeb object. The SwitchableSiteMapProvider provides a way for a single master page to support several different possible navigation configurations.

When you are designing a master page for a specific web site, you may know which navigation provider will be used, and you can avoid a lot of complexity by specifying it directly in your master page. For example, your AspMenu control can bind to a data source such as the PortalSiteMapDataSource control, which specifies a name such as "GlobalNavigation" that references a SiteMapProvider instance from the web.config file. This chain is represented in the master page markup and involves only two ASP.NET controls.

By contrast, the standard system master page relies on two different switching criteria to support three different navigation models. If the publishing navigation feature is active, the master page contains a DelegateControl object that by default points to the navigation provider that is used for basic sites. If the feature is enabled, this DelegateControl object replaces its data source with the PortalSiteMapDataSource control that is defined by the feature XML in the TEMPLATE\FEATURES\Navigation\NavigationSiteSettings.xml file. In previous versions of SharePoint Server, this new data source was bound to the GlobalNavigationProvider, which is an instance of the PortalSiteMapProvider object. In this release, a second switching criterion was introduced: each site can choose between a TaxonomySiteMapProvider and a PortalSiteMapProvider. This is accomplished by binding the data source to the SwitchableSiteMapProvider object, which acts as a wrapper that reads the active settings and then passes the calls through to the appropriate provider. Since the provider types have very different behaviors and usage scenarios, they require different settings for properties such as SiteMapDataSource.StartFromCurrentNode and SiteMapDataSource.ShowStartingNode. The SwitchableProperty XML tag provides a way for the PortalSiteMapDataSource control to embed these alternate property values within the PortalSiteMapDataSource markup, by means of the ASP.NET ParseChildrenAttribute mechanism. A slightly different syntax is used to accomplish this in the limited feature XML language used by the DelegateControl object, but the concept is the same.

The active provider can be determined by calling the GetCurrentWrappedProvider() method.
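As an illustration, server-side code that needs to know which concrete provider is currently active could query the switchable provider and call GetCurrentWrappedProvider(). This is only a sketch: the provider name used for the lookup and the exact signature of GetCurrentWrappedProvider() are assumptions based on this page, not verified against the assembly.

using System.Web;
using Microsoft.SharePoint.Publishing.Navigation;

public static class NavigationDiagnostics
{
    public static string ActiveProviderName()
    {
        // The provider name is an assumption; check web.config for the real one.
        var switchable = SiteMap.Providers["CurrentNavigationSwitchableProvider"]
                             as SwitchableSiteMapProvider;
        if (switchable == null)
            return "switchable provider not registered";

        // Assumed to return the PortalSiteMapProvider or TaxonomySiteMapProvider
        // instance actually in use for the current WebNavigationSettings.
        var active = switchable.GetCurrentWrappedProvider();
        return active.GetType().Name;
    }
}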
Thread safety

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.

See also

Reference: SwitchableSiteMapProvider members
Microsoft.SharePoint.Publishing.Navigation namespace
https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-server/jj254454%28v%3Doffice.15%29
CC-MAIN-2019-30
en
refinedweb
Debugging Latency in Go 1.11

It is complicated to diagnose and debug complex systems. It often takes multiple levels of diagnostic data to understand the possible causes of latency issues.

A distributed system is made of many servers that depend on each other to serve a user request. At any time,
- A process in the system might be handling a large number of requests.
- In highly concurrent servers, there is no easy way to isolate the events that happened in the lifetime of a request.
- In highly concurrent servers, we have no good visibility into the events that happened to serve a request.

As Go became a popular language for writing servers in recent years, we realized the need to understand what's going on in a Go process during the lifetime of a request. Many runtime activities occur during program execution: scheduling, memory allocation, garbage collection and more. But it was impossible to associate user code with runtime events and to help users analyze how runtime events affected their performance.

SREs would ask for help diagnosing a latency issue, wondering if anyone could help them understand and optimize a specific server. Even for Go experts, it was quite complicated to estimate how the runtime might be impacting their specific situation.

"There is no easy way to tell why latency is high for certain requests. Distributed traces allow us to target which request handler to look at, but we need to dig down. Could it be due to GC, scheduler or I/O that we waited so much to serve a response?" -- SRE

From the perspective of someone external to your system, when a latency issue is experienced, they don't know much apart from the fact that they waited longer than expected (356 ms) for a response.

From the perspective of engineers who have access to distributed traces, it is possible to see the breakdown and how each service contributed to the overall latency. With distributed traces, we have more visibility into the situation. In this case, in order to serve /messages, we made three internal RPCs: auth.AccessToken, cache.Lookup and spanner.Query. We can see how each RPC contributes to the latency. At this point, we can spot that auth.AccessToken took longer than usual. We have successfully scoped the problem down to a specific service. We can find the source of the auth.AccessToken call by associating it with a specific process, or we can randomly try to see if the issue is reproducible on any auth service instance.

With Go 1.11, we will have additional support in the execution tracer to be able to target runtime events by RPC calls. With the new capabilities, users will be able to gather more information about what has happened in the lifetime of a call.

In this case, we are focusing on a section in the auth.AccessToken span. We spent 30+18 µs in total on networking, 5 µs on a blocking syscall, 8 µs on garbage collection and 123 µs on actually executing the handler, where we mostly spent time on serialization and deserialization.

By looking at this level of detail, we can finally say we were unfortunate that this RPC overlapped with GC, and we spent an unexpectedly large amount of time on serialization/deserialization. Engineers can then point to recent changes in the auth.AccessToken message and improve the performance issue. They can also see whether GC often affects this request on the critical path, and might optimize memory allocation to see if they can improve this pattern.
Go 1.11

With 1.11, the Go execution tracer will introduce a few new concepts, APIs and tracing capabilities:
- User events and user annotations, see runtime/trace.
- Association between user code and runtime.
- Possibility to associate execution tracer data with distributed traces.

The execution tracer introduces two high-level concepts for users to instrument their code: regions and tasks.

Regions are sections in the code you want to collect tracing data for. A region starts and ends in the same goroutine. A task, on the other hand, is more of a logical group to categorize related regions together. A task can end in a different goroutine than the goroutine it started in.

We expect users to start an execution tracer task for each distributed trace span they have, instrument their RPC frameworks comprehensively by creating regions, enable the execution tracer momentarily when there is a problem, record some data, and analyze the output.

Try it yourself

Even though the new capabilities will only be available in Go 1.11, you can give them a try by installing Go from tip. I also recommend you give them a try with your distributed traces. I recently added execution tracer task support for the spans created by Census.

import (
    "runtime/trace"

    octrace "go.opencensus.io/trace"
)

ctx, span := octrace.StartSpan(ctx, "/messages")
defer span.End()

trace.WithRegion(ctx, "connection.init", conn.init)

If you are using the gRPC and HTTP integrations, you don't have to manually create any spans because they are automatically created for you. In your handlers, you can simply use runtime/trace with the incoming context.

Register the pprof.Trace handler and collect data when you need execution tracer data for diagnosis:

import _ "net/http/pprof"

go func() {
    log.Println(http.ListenAndServe("localhost:6060", nil))
}()

Then, momentarily record data when you want execution tracer data, and start the tool to visualize (the trace endpoint below is the standard one registered by net/http/pprof):

$ curl -o trace.out http://localhost:6060/debug/pprof/trace?seconds=5
$ go tool trace trace.out
2018/05/04 10:39:59 Parsing trace...
2018/05/04 10:39:59 Splitting trace...
2018/05/04 10:39:59 Opening browser. Trace viewer is listening on

Then you can see a distribution of the execution tracer tasks created for helloworld.Greeter.SayHello at /usertasks. You can click on the outlier latency bucket, the 3981µs one, to further analyze what happened in the lifetime of that specific RPC.

/userregions also allows you to list the collected regions. You can see the connection.init region and the number of records. (Note that connection.init is manually added to the gRPC framework code for demoing purposes; instrumenting gRPC more comprehensively is still work in progress.)

If you click on any of the links, it will give you more details about the regions that fall under that latency bucket. In the following case, we see one region collected in the 1000µs bucket. You can then see the fine-grained breakdown of latency: a 1309µs region overlapped with garbage collection, which added more overhead on the critical path in terms of GC work and scheduling. Otherwise, it took roughly the same time to execute the handler and to handle the blocking syscalls.

Limitations

Even though the new execution tracer capabilities are powerful, there are limitations.

- Regions can only start and stop in the same goroutine. The execution tracer currently cannot automatically record data that crosses multiple goroutines. This requires us to instrument regions manually. The next big step will be adding more fine-grained instrumentation in RPC frameworks and standard packages such as net/http.
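If you are not using a tracing library that creates execution tracer tasks for you, the runtime/trace primitives can be used directly. Below is a minimal sketch of instrumenting an HTTP handler with a task and a region; the handler path and region name are illustrative, not taken from the article.

package main

import (
	"net/http"
	"runtime/trace"
)

func messagesHandler(w http.ResponseWriter, r *http.Request) {
	// Group all work done for this request under one execution tracer task;
	// it will show up under /usertasks in go tool trace.
	ctx, task := trace.NewTask(r.Context(), "/messages")
	defer task.End()

	// Mark an interesting section; it will show up under /userregions.
	trace.WithRegion(ctx, "loadMessages", func() {
		// ... fetch and serialize messages ...
	})

	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/messages", messagesHandler)
	http.ListenAndServe("localhost:8080", nil)
}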
- The output format of the execution tracer is hard to parse, and go tool trace is the only canonical tool that can understand this format. There is no easy way of automatically attaching execution tracer data to distributed traces, hence we collect them separately and correlate them later.

Conclusion

Go is trying to be a great runtime for production services. With the data available from the execution tracer, we got one step closer to having high visibility into our production servers, and we can offer more actionable data when there is a problem.
https://medium.com/observability/debugging-latency-in-go-1-11-9f97a7910d68?source=---------2------------------
CC-MAIN-2019-30
en
refinedweb
I submitted today another build of ffsend and bat and they were actually built for F28. Since F28 is EOL, it should not do that anymore.

/cc @mprahl @jkaluza

Metadata Update from @smooge:
- Issue assigned to mprahl
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: mbs

This is because the F28 platform module is still in the "ready" state. @mohanboddu do you have a process for retiring modules in the MBS database? If not, I can help you with that.

Edit: Here is the platform:f28 module:

@mohanboddu did you see my last comment?

I am expecting that @mohanboddu would see this if this were a releng ticket versus an infrastructure one.

OK, turns out mohan doesn't have the ability to do this, so I am fixing that. In the meantime:

[root@mbs-backend01 ~][PROD]# mbs-manager retire platform:f28
2019-06-17 14:27:13,711 - MainThread - root - INFO - Found 1 module builds:
2019-06-17 14:27:13,711 - MainThread - root - INFO - platform:f28:4:00000000
Retire 1 module builds? [n] y
2019-06-17 14:27:16,778 - MainThread - MBS.models - INFO - <ModuleBuild platform, id=1602, stream=f28, version=4, scratch=None, state 'garbage', batch 0, state_reason 'Module build retired'>, state 5->6
2019-06-17 14:27:16,991 - MainThread - fedmsg.core - DEBUG - Trying to bind to tcp://mbs-backend01.phx2.fedoraproject.org:3000
2019-06-17 14:27:16,992 - MainThread - fedmsg.core - DEBUG - Trying to bind to tcp://mbs-backend01.phx2.fedoraproject.org:3001
2019-06-17 14:27:18,504 - MainThread - root - INFO - Module builds retired.

Needed to run a script from mprahl which got mbooth unstuck.

[2019-06-17-12:39] <mprahl> smooge: Yeah, unfortunately, users will be able to submit f28 module builds again. There's nothing to do now. We need to fix the behavior in MBS for a specific use-case before retiring it again. jkaluza has a PR merged that should address most of this, so it shouldn't be too long before we are unblocked on that ticket to retire platform:f28.
https://pagure.io/fedora-infrastructure/issue/7862
CC-MAIN-2019-30
en
refinedweb
CloudWatch Metrics for Rekognition This section contains information about the Amazon CloudWatch metrics and the Operation dimension available for Amazon Rekognition. You can also see an aggregate view of Rekognition metrics from the Rekognition console. For more information, see Exercise 4: See Aggregated Metrics (Console). CloudWatch Metrics for Rekognition The following table summarizes the Rekognition metrics. CloudWatch Dimension for Rekognition To retrieve operation-specific metrics, use the Rekognition namespace and provide an operation dimension. For more information about dimensions, see Dimensions in the Amazon CloudWatch User Guide.
https://docs.aws.amazon.com/rekognition/latest/dg/cloudwatch-metricsdim.html
CC-MAIN-2019-30
en
refinedweb
Dialogs in the Bot Framework SDK for .NET Note This topic applies to SDK v3 release. You can find the documentation for the latest version of the SDK v4 here. When you create a bot using the Bot Framework SDK for .NET, you can use dialogs. A conversation that comprises dialogs is portable across computers, which makes it possible for your bot implementation to scale. When you use dialogs in the Bot Framework SDK for .NET, conversation state (the dialog stack and the state of each dialog in the stack) is automatically stored to your choice of state data storage. This enables your bot's service code to be stateless, much like a web application that does not need to store session state in web server memory. Echo bot example Consider this echo bot example, which describes how to change the bot that's created in the Quickstart tutorial so that it uses dialogs to exchange messages with the user. Tip To follow along with this example, use the instructions in the Quickstart tutorial to create a bot, and then update its MessagesController.cs file as described below. MessagesController.cs In the Bot Framework SDK for .NET, the Builder library enables you to implement dialogs. To access the relevant classes, import the Dialogs namespace. using Microsoft.Bot.Builder.Dialogs; Next, add this EchoDialog class to MessagesController.cs to represent the conversation. ); } } Then, wire the EchoDialog class to the Post method by calling the Conversation.SendAsync method. public virtual async Task<HttpResponseMessage> Post([FromBody] Activity activity) { // Check if activity is of type message if (activity != null && activity.GetActivityType() == ActivityTypes.Message) { await Conversation.SendAsync(activity, () => new EchoDialog()); } else { HandleSystemMessage(activity); } return new HttpResponseMessage(System.Net.HttpStatusCode.Accepted); } Implementation details The Post method is marked async because Bot Builder uses the C# facilities for handling asynchronous communication. It returns a Task object, which represents the task that is responsible for sending replies to the passed-in message. If there is an exception, the Task that is returned by the method will contain the exception information. The Conversation.SendAsync method is key to implementing dialogs with the Bot Framework SDK for .NET. It follows the dependency inversion principle and performs these steps: - Instantiates the required components - Deserializes the conversation state (the dialog stack and the state of each dialog in the stack) from IBotDataStore - Resumes the conversation process where the bot suspended and waits for a message - Sends the replies - Serializes the updated conversation state and saves it back to IBotDataStore When the conversation first starts, the dialog does not contain state, so Conversation.SendAsync constructs EchoDialog and calls its StartAsync method. The StartAsync method calls IDialogContext.Wait with the continuation delegate to specify the method that should be called when a new message is received ( MessageReceivedAsync). The MessageReceivedAsync method waits for a message, posts a response, and waits for the next message. Every time IDialogContext.Wait is called, the bot enters a suspended state and can be restarted on any computer that receives the message. A bot that's created by using the code samples above will reply to each message that the user sends by simply echoing back the user's message prefixed with the text 'You said: '. 
Because the bot is created using dialogs, it can evolve to support more complex conversations without having to explicitly manage state. Echo bot with state example This next example builds upon the one above by adding the ability to track dialog state. When the EchoDialog class is updated as shown in the code sample below, the bot will reply to each message that the user sends by echoing back the user's message prefixed with a number ( count) followed by the text 'You said: '. The bot will continue to increment count with each reply, until the user elects to reset the count. MessagesController.cs [Serializable] public class EchoDialog : IDialog<object> { protected int count = 1; public async Task StartAsync(IDialogContext context) { context.Wait(MessageReceivedAsync); } public virtual.None); } else { await context.PostAsync($"{this.count++}: You said {message.Text}"); context.Wait(MessageReceivedAsync); } } public async Task AfterResetAsync(IDialogContext context, IAwaitable<bool> argument) { var confirm = await argument; if (confirm) { this.count = 1; await context.PostAsync("Reset count."); } else { await context.PostAsync("Did not reset count."); } context.Wait(MessageReceivedAsync); } } Implementation details As in the first example, the MessageReceivedAsync method is called when a new message is received. This time though, the MessageReceivedAsync method evaluates the user's message before responding. If the user's message is "reset", the built-in PromptDialog.Confirm prompt spawns a sub-dialog that asks the user to confirm the count reset. The sub-dialog has its own private state that does not interfere with the parent dialog's state. When the user responds to the prompt, the result of the sub-dialog is passed to the AfterResetAsync method, which sends a message to the user to indicate whether or not the count was reset and then calls IDialogContext.Wait with a continuation back to MessageReceivedAsync on the next message. Dialog context The IDialogContext interface that is passed into each dialog method provides access to the services that a dialog requires to save state and communicate with the channel. The IDialogContext interface comprises three interfaces: Internals.IBotData, Internals.IBotToUser, and Internals.IDialogStack. Internals.IBotData The Internals.IBotData interface provides access to the per-user, per-conversation, and private conversation state data that's maintained by Connector. Per-user state data is useful for storing user data that is not related to a specific conversation, while per-conversation data is useful for storing general data about a conversation, and private conversation data is useful for storing user data that is related to a specific conversation. Internals.IBotToUser Internals.IBotToUser provides methods to send a message from bot to user. Messages may be sent inline with the response to the web API method call or directly by using the Connector client. Sending and receiving messages through the dialog context ensures that the Internals.IBotData state is passed through the Connector. Internals.IDialogStack Internals.IDialogStack provides methods to manage the dialog stack. Most of the time, the dialog stack will automatically be managed for you. However, there may be cases where you want to explictly manage the stack. 
For example, you might want to call a child dialog and add it to the top of the dialog stack, mark the current dialog as complete (thereby removing it from the dialog stack and returning the result to the prior dialog in the stack), suspend the current dialog until a message from the user arrives, or even reset the dialog stack altogether.

Serialization

The dialog stack and the state of all active dialogs are serialized to the per-user, per-conversation IBotDataBag. The serialized blob is persisted in the messages that the bot sends to and receives from the Connector. To be serialized, a Dialog class must include the [Serializable] attribute. All IDialog implementations in the Builder library are marked as serializable.

The Chain methods provide a fluent interface to dialogs that is usable in LINQ query syntax. The compiled form of LINQ query syntax often uses anonymous methods. If these anonymous methods do not reference the environment of local variables, then these anonymous methods have no state and are trivially serializable. However, if the anonymous method captures any local variable in the environment, the resulting closure object (generated by the compiler) is not marked as serializable. In this situation, Bot Builder will throw a ClosureCaptureException to identify the issue.

To use reflection to serialize classes that are not marked as serializable, the Builder library includes a reflection-based serialization surrogate that you can register with Autofac.

var builder = new ContainerBuilder();
builder.RegisterModule(new DialogModule());
builder.RegisterModule(new ReflectionSurrogateModule());

Dialog chains

While you can explicitly manage the stack of active dialogs by using IDialogStack.Call<R> and IDialogStack.Done<R>, you can also implicitly manage the stack of active dialogs by using these fluent Chain methods.

Examples

The LINQ query syntax uses the Chain.Select<T, R> method.

var query = from x in new PromptDialog.PromptString(Prompt, Prompt, attempts: 1)
            let w = new string(x.Reverse().ToArray())
            select w;

Or the Chain.SelectMany<T, C, R> method.

var query = from x in new PromptDialog.PromptString("p1", "p1", 1)
            from y in new PromptDialog.PromptString("p2", "p2", 1)
            select string.Join(" ", x, y);

The Chain.PostToUser<T> and Chain.WaitToBot<T> methods post messages from the bot to the user and vice versa.

query = query.PostToUser();

The Chain.Switch<T, R> method branches the conversation dialog flow.

var logic = toBot
    .Switch
    (
        new RegexCase<string>(new Regex("^hello"), (context, text) => { return "world!"; }),
        new Case<string, string>((txt) => txt == "world", (context, text) => { return "!"; }),
        new DefaultCase<string, string>((context, text) => { return text; })
    );

If Chain.Switch<T, R> returns a nested IDialog<IDialog<T>>, then the inner IDialog<T> can be unwrapped with Chain.Unwrap<T>. This allows branching conversations to different paths of chained dialogs, possibly of unequal length. The following shows a more complete example of branching dialogs written in the fluent chain style with implicit stack management.

var joke = Chain
    .PostToChain()
    .Select(m => m.Text)
    .Switch
    (
        Chain.Case
        (
            new Regex("^chicken"),
            (context, text) =>
                Chain
                    .Return("why did the chicken cross the road?")
                    .PostToUser()
                    .WaitToBot()
                    .Select(ignoreUser => "to get to the other side")
        ),
        Chain.Default<string, IDialog<string>>(
            (context, text) =>
                Chain
                    .Return("why don't you like chicken jokes?")
        )
    )
    .Unwrap()
    .PostToUser()
    .Loop();

Next steps

Dialogs manage conversation flow between a bot and a user. A dialog defines how to interact with a user. A bot can use many dialogs organized in stacks to guide the conversation with the user. In the next section, see how the dialog stack grows and shrinks as you create and dismiss dialogs in the stack.
https://docs.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-dialogs?view=azure-bot-service-3.0
CC-MAIN-2019-30
en
refinedweb
This tutorial will introduce you to the concept of object detection in Python using the OpenCV library and how you can utilize it to perform tasks like face detection. Hands-on knowledge of Numpy and Matplotlib is required before working on the concepts of OpenCV. Make sure that the prerequisites are installed and running before you start. The two parts covered are:

1. OpenCV-Python
2. Face Detection

1. OpenCV-Python

OpenCV-Python supports all the leading platforms like Mac OS, Linux, and Windows. It can be installed in either of the following ways:

1. From the official builds: please refer to the detailed documentation here for Windows and here for Mac.

2. Unofficial pre-built OpenCV packages for Python, packaged for standard desktop environments (Windows, macOS, almost any GNU/Linux distribution):

pip install opencv-python
pip install opencv-contrib-python

You can either use Jupyter notebooks or any Python IDE of your choice for writing the scripts.

2. Face Detection

Each feature is a single value obtained by subtracting the sum of pixels under a white rectangle from the sum of pixels under a black rectangle. Now all possible sizes and locations of each kernel are used to calculate plenty of features. (Just imagine how much computation it needs: even a 24x24 window results in over 160000 features.) For each feature calculation, we need to find the sum of pixels under the white and black rectangles. To solve this, they introduced the integral image, which simplifies the calculation of the sum of pixels, however large the number of pixels may be, to an operation involving just four pixels.

Each feature is applied to all the training images, and the features with the minimum error rate are selected, which means they are the features that best classify the face and non-face images. (The process is not as simple as this. Each image is given an equal weight in the beginning. After each classification, the weights of misclassified images are increased. Then the same process is repeated: new error rates are calculated, along with new weights. The process is continued until the required accuracy or error rate is achieved.)

The resulting classifier has to check every window of an image to decide whether it contains a face or not. Wow... Isn't it a little inefficient and time-consuming? Yes, it is. The authors have a good solution for that: instead of applying all 6000+ features on a window at once, the features are grouped into stages of classifiers that are applied one by one, and a window that fails a stage is discarded immediately, so most windows are rejected within the first five stages. (The two features in the above image are actually obtained as the best two features from Adaboost.) According to the authors, on average 10 features out of 6000+ are evaluated per sub-window.

OpenCV comes with a trainer as well as a detector. If you want to train your own classifier for any object like cars, planes etc., you can use OpenCV to create one. There are also many pre-trained models which can find your eyes, nose, cars, planes and so on. You can download those models from GitHub.

First, we need to load the required XML classifiers. Then load our input image (or video) in grayscale mode. (The snippet below is a minimal reconstruction of the garbled original; file names are placeholders.)

import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')

img = cv2.imread('input.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

The result looks like below.

First, we have to import these libraries:

import numpy as np
from PIL import Image
import cv2

Then, after importing those libraries, we write the code below, which detects a face using the webcam; once a face has been detected for 50 consecutive frames (tracked by the c_var counter), the program closes.

base = 700  # sets the image width and height automatically

face_cascade = cv2.CascadeClassifier(r'C:\Users\avish\Downloads\faace.xml')  # haar cascade file for face detection
eye_cascade = cv2.CascadeClassifier(r'C:\Users\avish\Downloads\eye1.xml')    # haar cascade file for eye detection

font = cv2.FONT_HERSHEY_SIMPLEX  # font style for text overlay
bottomLeftCornerOfText = (10, 50)
fontScale = 1
fontColor = (255, 255, 255)
lineType = 2

# capture frames from a camera
cap = cv2.VideoCapture(0)
c_var = 0

# loop runs if capturing has been initialized.
while 1:
    # reads frames from a camera
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    # detectMultiScale returns an empty tuple when no face is found
    if type(faces) == type((2, 3)):
        c_var -= 1

    for (x, y, w, h) in faces:
        # now, we process each detected face in a loop
        img = cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        if c_var < 0:
            c_var = 0
        else:
            c_var += 1
        cv2.putText(img, 'Face Detected', bottomLeftCornerOfText, font, fontScale, fontColor, lineType)
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    cv2.putText(img, 'c_var : ' + str(c_var), (10, 100), font, fontScale, fontColor, lineType)
    cv2.imshow('img', img)

    if c_var == 50:
        break

    # Wait for Esc key to stop
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

# Close the window
cap.release()
cv2.destroyAllWindows()
https://pytholabs.com/blogs/face_detection_haarCascade.html
CC-MAIN-2019-30
en
refinedweb
In this tutorial, you will learn about the C programming switch case, which is used when a decision is to be made among many alternatives.

Structure of switch statement

switch (expression)
{
    case exp1:  /* note that instead of a semicolon, a colon is used */
        code block1;
        break;
    case exp2:
        code block2;
        break;
    .......
    default:
        default code block;
}

How does the C programming switch case work?

switch tests the value of expression against a list of case values successively. For example, in the above structure, the expression is compared with all the case values. When exp1 matches the expression, code block 1 is executed, and the break statement causes an exit from the switch statement so that no further comparison is done. However, if exp1 doesn't match the expression, then exp2 is compared, and comparisons continue until a matching case is found. If no case matches the expression, the default code block is executed.

The following block diagram (switch..case) explains the concept of switch case in C programming more accurately.

The case and the default statements can occur in any order, but it is good programming practice to write default at the end.

Example: C program to print the day of the week according to the number entered

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int n;
    printf("Enter the number of the day: ");
    scanf("%d", &n);
    switch (n)
    {
        case 1:
            printf("Sunday");
            break;
        case 2:
            printf("Monday");
            break;
        case 3:
            printf("Tuesday");
            break;
        case 4:
            printf("Wednesday");
            break;
        case 5:
            printf("Thursday");
            break;
        case 6:
            printf("Friday");
            break;
        case 7:
            printf("Saturday");
            break;
        default:
            printf("You entered a wrong day");
            exit(0);
    }
    return 0;
}

Output

Enter the number of day : 5
Thursday

Explanation: The program asks the user to enter a value, which is stored in variable n. The program then checks for the matching case and prints the corresponding day. Finally, the break statement exits the switch statement.

Note: omitting a break statement in a switch statement results in a logic error, because control falls through to the next case.
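To illustrate the note about missing break statements, here is a small sketch showing fall-through when break is omitted:

#include <stdio.h>

int main(void)
{
    int n = 2;
    switch (n)
    {
        case 1:
            printf("one\n");
        case 2:
            printf("two\n");   /* matched here */
        case 3:
            printf("three\n"); /* also runs, because there is no break above */
            break;
        default:
            printf("other\n");
    }
    return 0;   /* prints "two" then "three" */
}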
http://www.trytoprogram.com/c-programming/c-programming-switch-case/
CC-MAIN-2019-30
en
refinedweb
Class InputContext

java.lang.Object
  java.awt.im.InputContext

public class InputContext extends Object

Provides methods to control text input facilities such as input methods and keyboard layouts. The platform supports input methods that have been developed in the Java programming language, using the interfaces in the java.awt.im.spi package, which can be made available by adding them to the application's class path.

Since: 1.2
See Also: Component.getInputContext(), Component.enableInputMethods(boolean)

Constructor Detail

InputContext

protected InputContext()

Constructs an InputContext. This method is protected so clients cannot instantiate InputContext directly. Input contexts are obtained by calling getInstance().

Method Detail

getInstance

public static InputContext getInstance()

Returns a new InputContext instance.

Returns: a new InputContext instance

selectInputMethod

public boolean selectInputMethod(Locale locale)

Attempts to select an input method or keyboard layout that supports the given locale, and returns a value indicating whether such an input method or keyboard layout has been successfully selected. The following steps are taken until an input method has been selected:

- If the currently selected input method or keyboard layout supports the requested locale, it remains selected.
- If there is no input method or keyboard layout available that supports the requested locale, the current input method or keyboard layout remains selected.
- If the user has previously selected an input method or keyboard layout for the requested locale from the user interface, then the most recently selected such input method or keyboard layout is reselected.
- Otherwise, an input method or keyboard layout that supports the requested locale is selected in an implementation dependent way.

Not all host operating systems provide API to determine the locale of the currently selected native input method or keyboard layout, and to select a native input method or keyboard layout by locale. For host operating systems that don't provide such API, selectInputMethod assumes that native input methods or keyboard layouts provided by the host operating system support only the system's default locale.

Parameters: locale - The desired new locale.
Returns: true if the input method or keyboard layout that's active after this call supports the desired locale.
Throws: NullPointerException - if locale is null

getLocale

public Locale getLocale()

Returns the current locale of the current input method or keyboard layout. Returns null if the input context does not have a current input method or keyboard layout, or if the current input method's InputMethod.getLocale() method returns null.

Not all host operating systems provide API to determine the locale of the currently selected native input method or keyboard layout. For host operating systems that don't provide such API, getLocale assumes that the current locale of all native input methods or keyboard layouts provided by the host operating system is the system's default locale.

Returns: the current locale of the current input method or keyboard layout
Since: 1.3

setCharacterSubsets

public void setCharacterSubsets(Character.Subset[] subsets)

Sets the subsets of the Unicode character set that input methods of this input context should be allowed to input. Null may be passed in to indicate that all characters are allowed. The initial value is null. The setting applies to the current input method as well as input methods selected after this call is made. However, applications cannot rely on this call having the desired effect, since this setting cannot be passed on to all host input methods; applications still need to apply their own character validation.
If no input methods are available, then this method has no effect.

Parameters: subsets - The subsets of the Unicode character set from which characters may be input

setCompositionEnabled

public void setCompositionEnabled(boolean enable)

Enables or disables the current input method for composition, depending on the value of the parameter.

Parameters: enable - whether to enable the current input method for composition
Throws: UnsupportedOperationException - if there is no current input method available or the current input method does not support the enabling/disabling operation
Since: 1.3
See Also: isCompositionEnabled()

isCompositionEnabled

public boolean isCompositionEnabled()

Determines whether the current input method is enabled for composition. An input method that is enabled for composition interprets incoming events for both composition and control purposes, while a disabled input method does not interpret events for composition.

Returns: true if the current input method is enabled for composition; false otherwise
Throws: UnsupportedOperationException - if there is no current input method available or the current input method does not support checking whether it is enabled for composition
Since: 1.3
See Also: setCompositionEnabled(boolean)

reconvert

public void reconvert()

Asks the current input method to reconvert text from the current client component. The input method obtains the text to be reconverted from the client component using the InputMethodRequests.getSelectedText method.

Throws: UnsupportedOperationException - if there is no current input method available or the current input method does not support the reconversion operation.
Since: 1.3

dispatchEvent

public void dispatchEvent(AWTEvent event)

Dispatches an event to the active input method. Called by AWT. If no input method is available, then the event will never be consumed.

Parameters: event - The event
Throws: NullPointerException - if event is null

removeNotify

public void removeNotify(Component client)

Notifies the input context that a client component has been removed from its containment hierarchy, or that input method support has been disabled for the component. This method is usually called from the client component's Component.removeNotify method. Potentially pending input from input methods for this component is discarded. If no input methods are available, then this method has no effect.

Parameters: client - Client component
Throws: NullPointerException - if client is null

endComposition

public void endComposition()

Ends any input composition that may currently be going on in this context. Depending on the platform and possibly user preferences, this may commit or delete uncommitted text. Any changes to the text are communicated to the active component using an input method event. If no input methods are available, then this method has no effect.

A text editing component may call this in a variety of situations, for example, when the user moves the insertion point within the text (but outside the composed text), or when the component's text is saved to a file or copied to the clipboard.

dispose

public void dispose()

Releases the resources used by this input context. Called by AWT for the default input context of each Window. If no input methods are available, then this method has no effect.

getInputMethodControlObject

public Object getInputMethodControlObject()

Returns a control object from the current input method. If no input methods are available or the current input method does not provide an input method control object, then null is returned.
- Returns: - A control object from the current input method, or null.
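As a brief usage sketch of the two most common methods above, the snippet below obtains the input context shared by a component's window and tries to switch to a Japanese input method; component is any visible AWT/Swing component supplied by the caller.

import java.awt.Component;
import java.awt.im.InputContext;
import java.util.Locale;

public class InputContextDemo {
    static void switchToJapanese(Component component) {
        InputContext ic = component.getInputContext();
        if (ic != null && ic.selectInputMethod(Locale.JAPANESE)) {
            System.out.println("Active input locale: " + ic.getLocale());
        }
    }
}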
https://docs.oracle.com/en/java/javase/11/docs/api/java.desktop/java/awt/im/InputContext.html
CC-MAIN-2019-30
en
refinedweb
Here is the documentation of the SMWFermionsPOWHEGDecayer class. More... #include <SMWFermionsPOWHEGDecayer.h> Here is the documentation of the SMWFermionsPOWHEGDecayer class. Definition at line 21 of file SMWFermionsPOWHEGDecayer.h. Make a simple clone of this object. Reimplemented from Herwig::SMWDecayer. Initialize this object after the setup phase before saving an EventGenerator to disk. Reimplemented from Herwig::SMWDecayer. Make a clone of this object, possibly modifying the cloned object to make it sane. Reimplemented from Herwig::SMWDecayer. Virtual members to be overridden by inheriting classes which implement hard corrections. Has a POWHEG style correction Reimplemented from Herwig::HwDecayerBase. Definition at line 38 of file SMWFermionsPOWHEGDecayer.h. The standard Init function used to initialize the interfaces. Called exactly once for each class by the class description system before the main function starts or when this class is dynamically loaded. Return the matrix element squared for a given mode and phase-space channel. Reimplemented from Herwig::SMWDecayer. The assignment operator is private and must never be called. In fact, it should not even be implemented. Function used to read in object persistently. Function used to write out object persistently. Stuff for the POWHEG correction. ParticleData object for the gluon Definition at line 235 of file SMWFermionsPOWHEGDecayer.h. The static object used to initialize the description of this class. Indicates that this is a concrete class with persistent data. Definition at line 111 of file SMWFermionsPOWHEGDecayer.h.
https://herwig.hepforge.org/doxygen/classHerwig_1_1SMWFermionsPOWHEGDecayer.html
CC-MAIN-2019-30
en
refinedweb
I use a Neopixel 6x6 LED matrix (WS2812) and an Arduino Uno.

I wrote a small sketch that uses the Kinect to track movement in Processing. It displays everything between the depth of 300 and 500 in pixels. I want this to be displayed on the LED matrix. (I haven't gotten more out of the matrix than displaying a rainbow.) It will stay connected to my computer throughout, so the video processing wouldn't have to be done on the Arduino itself.

I am new to Arduino, but I think for this I need to solve both these problems:

- I want to know how I can translate everything between the depth of 300 and 500 to values I can use in Arduino, to make the LEDs light up when these values are met.
- And how would I make it so the right LEDs light up, so it basically displays what Processing captures?

Basically the LEDs should display something like this, but how would I go about doing this? Does anybody have any tips/articles/tutorials I could look at? If someone can get me on the right track that would be greatly appreciated! Do I need extra hardware or is this just code?

Here is my Processing code:

import org.openkinect.processing.*;

Kinect kinect;
PImage img;

void setup() {
  size(6, 6);
  kinect = new Kinect(this);
  kinect.initDepth();
  img = createImage(kinect.width, kinect.height, RGB);
}

void draw() {
  background(0);
  img.loadPixels();
  int[] depth = kinect.getRawDepth();
  int skip = 60;
  for (int x = 0; x < kinect.width; x += skip) {
    for (int y = 0; y < kinect.height; y += skip) {
      int offset = x + y * kinect.width;
      int d = depth[offset];
      if (d > 300 && d < 500) {
        img.pixels[offset] = color(255, 255, 255);
      } else {
        img.pixels[offset] = color(0);
      }
    }
  }
  img.updatePixels();
  image(img, 0, 0);
}

Thank you greatly for any tips or leads!
https://discourse.processing.org/t/kinect-interacting-with-leds-processing-arduino/10799
CC-MAIN-2022-27
en
refinedweb
Updating HAProxy Configurations in OpenShift

A quick tutorial on how to fine-tune the OpenShift router to make it more secure.

For security reasons we need to tune up the OpenShift router. The OpenShift router uses HAProxy, and HAProxy supports settings like these:

acl network_allowed src 20.30.40.50 20.30.40.40
http-request deny if !network_allowed

If there were a way to update the HAProxy configuration, this would work. But how do we update the configuration? The solution is to update the Docker image containing the router.

Download the OpenShift router source and update the file conf/haproxy-config.template like this (find the backend section and add these 4 lines):

backend be_edge_http_{{$cfgIdx}}
  ...
  {{ if (index $cfg.Annotations "rohlik.cz/acl") }}
  acl network_allowed src {{ index $cfg.Annotations "rohlik.cz/acl" }}
  http-request deny if !network_allowed
  {{ end }}
  ...

After updating the file you have to build the Docker image and update the router definition.

docker build -t rohlik-haproxy:v1.3.1 .

And update the router definition (dc/router in namespace default):

spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 0
      updatePercent: -25
    resources:
  triggers:
    - type: ConfigChange
  replicas: 1
  test: false
  selector:
    router: router
  template:
    metadata:
      creationTimestamp: null
      labels:
        router: router
    spec:
      volumes:
        - name: server-certificate
          secret:
            secretName: router-certs
      containers:
        - name: router
          image: 'rohlik-haproxy:v1.3.1'

Now you are able to use the annotations section in your route definition:

apiVersion: v1
kind: Route
metadata:
  name: rohlik
  namespace: rohlik
  selfLink: /oapi/v1/namespaces/rohlik/routes/rohlik
  labels:
    app: rohlik
  annotations:
    rohlik.cz/acl: 22.22.22.22

Finished :) I hope this article helps you.
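If you prefer not to edit the route YAML by hand, the same annotation can be set from the CLI with oc annotate; the route name, namespace, and IP below are the illustrative ones used in this article:

oc annotate route rohlik -n rohlik rohlik.cz/acl="22.22.22.22" --overwrite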
https://dzone.com/articles/updating-haproxy-configurations-openshift
CC-MAIN-2022-27
en
refinedweb
Abstract class for dispatching Event. More...

#include <EventDispatcher.hpp>

Abstract class for dispatching Event.

EventDispatcher is the base class for all classes that dispatch events. All events are sent asynchronously. To avoid creating an enormous number of threads for dispatching events, every event-sending process first acquires a semaphore. The default permitted size of the semaphore is 2.

EventDispatcher is a candidate for deprecation. Since C++11, the old pattern of calling a member method of a class from a new thread via a static method and a void pointer is best avoided; using std::thread and std::bind is the better approach.

Definition at line 50 of file EventDispatcher.hpp.

Default Constructor.
Definition at line 74 of file EventDispatcher.hpp.

Copy Constructor.
Copying an EventDispatcher instance does not copy the event listeners attached to it. (If your newly created node needs an event listener, you must attach the listener after creating the node.)
Definition at line 87 of file EventDispatcher.hpp.

Move Constructor.
Definition at line 97 of file EventDispatcher.hpp.
References samchon::library::UniqueWriteLock::unlock().

Default Destructor.
Definition at line 120 of file EventDispatcher.hpp.

Register an event listener.
Registers an event listener object with an EventDispatcher object so that the listener receives notification of an event.
Definition at line 154 of file EventDispatcher.hpp.

Remove a registered event listener.
Removes a listener from the EventDispatcher object. If there is no matching listener registered with the EventDispatcher object, a call to this method has no effect.
Definition at line 172 of file EventDispatcher.hpp.

Dispatches an event to all listeners.
Dispatches an event into the event flow in the background. The Event::source is the EventDispatcher object upon which dispatchEvent was called.
Definition at line 205 of file EventDispatcher.hpp.
References THREAD_SIZE(), and samchon::library::UniqueReadLock::unlock().

Number of threads for the background.
Definition at line 317 of file EventDispatcher.hpp.
Referenced by dispatch().

A container storing listeners.
Definition at line 60 of file EventDispatcher.hpp.

A rw_mutex for concurrency.
Definition at line 65 of file EventDispatcher.hpp.
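The semaphore-bounded background dispatch described above can be sketched in standalone modern C++. This is the migration pattern the docs themselves recommend, not the samchon API; the permit count of 2 mirrors the documented default:

#include <functional>
#include <semaphore>
#include <thread>
#include <vector>

// Minimal sketch of semaphore-bounded asynchronous dispatch (C++20).
class Dispatcher {
public:
    void addListener(std::function<void(int)> f) { listeners.push_back(std::move(f)); }

    void dispatch(int event) {
        permits.acquire();                      // at most 2 dispatch threads at once
        std::thread([this, event] {
            for (auto &f : listeners) f(event); // notify every listener
            permits.release();
        }).detach();
    }

private:
    // Real code must also guard `listeners` with a rw-mutex (cf. the member
    // list above) and track thread lifetimes instead of detaching.
    std::vector<std::function<void(int)>> listeners;
    std::counting_semaphore<2> permits{2};      // default permitted size: 2
};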
http://samchon.github.io/framework/api/cpp/d3/d9b/classsamchon_1_1library_1_1EventDispatcher.html
CC-MAIN-2022-27
en
refinedweb
Algorithm

Step 1: Initialize Length = 0
Step 2: foreach char chr in str
Step 3:     Length = Length + 1
Step 4: Return Length
STOP

C# Program to Find the Length of a String

using System;

public class LFC
{
    public static void Main()
    {
        string str1 = "Letsfindcourse"; /* The string whose length we will count */
        int l = 0;
        foreach (char chr in str1)
        {
            l += 1;
        }
        Console.Write("Length of the string = {0}\n\n", l);
    }
}

Output

Length of the string = 14
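For reference, the loop is only needed as an exercise: C# strings already expose their length, so the same result can be printed directly by replacing the loop with this one-liner (using the same str1 as above):

// Idiomatic alternative: use the built-in Length property.
Console.Write("Length of the string = {0}\n\n", str1.Length);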
https://letsfindcourse.com/csharp-coding-questions/csharp-program-to-find-length-of-string
CC-MAIN-2022-27
en
refinedweb
#include <ClientDriver.hpp>

A communicator with a remote client.

The ClientDriver class is a type of Communicator, specialized for communication with a remote client that has connected to a Server object following Samchon Framework's own protocol. The ClientDriver takes full charge of network communication with the remote client.

A ClientDriver object is always created by the Server class. When you get a ClientDriver object from Server.addClient(), specify the listener with the ClientDriver.listen() method. (A sketch of specifying and managing the listener objects follows below.)

Definition at line 37 of file ClientDriver.hpp.

Listen for messages from the newly connected client.

Starts listening for messages from the newly connected client. A replied message from the connected client will be converted to an Invoke object and shifted to the listener's replyData() method.

Definition at line 58 of file ClientDriver.hpp.
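A minimal sketch of the flow the description implies. The exact signatures (addClient taking a shared_ptr, listen taking a pointer to the listener, an IProtocol-style listener interface) are assumptions inferred from the text above, not verified against the header:

#include <memory>
#include <vector>

// Sketch only: names/signatures inferred from the documentation above.
class MyListener /* : public IProtocol (assumed listener interface) */ {
public:
    void replyData(std::shared_ptr<Invoke> invoke) {
        // messages replied by the remote client arrive here
    }
};

class MyServer : public Server {
protected:
    void addClient(std::shared_ptr<ClientDriver> driver) /* override */ {
        drivers.push_back(driver);  // keep the driver alive
        driver->listen(&listener);  // start listening; replies go to replyData()
    }
private:
    MyListener listener;
    std::vector<std::shared_ptr<ClientDriver>> drivers;
};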
http://samchon.github.io/framework/api/cpp/d7/da5/classsamchon_1_1protocol_1_1ClientDriver.html
CC-MAIN-2022-27
en
refinedweb
Address Standardization and Correction using SequenceToSequence model

Address Standardization is the process of changing addresses to adhere to USPS standards. In this notebook, we will aim at abbreviating the addresses as per standard USPS abbreviations. Address Correction will aim at correcting misspelled place names.

We will train a model using the SequenceToSequence class of the arcgis.learn.text module to translate non-standard and erroneous addresses to their standard and correct form. The dataset consists of pairs of non-standard, incorrect (synthetic errors) house addresses and the corresponding correct, standard house addresses from the United States. The correct addresses are taken from OpenAddresses data.

Disclaimer: The correct addresses were synthetically corrupted to prepare the training dataset. This could have led to some unexpected corruptions in addresses, which will affect the translation learned by the model.

A note on the dataset

- The data was collected around 2020-04-29 by OpenAddresses.
- The data licenses can be found in data/address_standardization_correction_data/LICENSE.txt.

Labeled data: For the SequenceToSequence model to learn, it needs to see documents/texts that have been assigned a label. Labeled data for this sample notebook is located at data/address_standardization_correction_data/address_standardization_correction.csv

To learn more about how SequenceToSequence works, please see the guide on How SequenceToSequence works.

Data preparation and model training workflows using arcgis.learn have a dependency on transformers. Refer to the section "Install deep learning dependencies of arcgis.learn module" on this page for detailed documentation on the installation of the dependencies.

!pip install transformers==3.3.0

Successfully installed filelock-3.0.12 sacremoses-0.0.45 sentencepiece-0.1.95 tokenizers-0.8.1rc2 transformers-3.3.0

Note: Please restart the kernel before running the cells below.

import os
import zipfile
from pathlib import Path
from arcgis.gis import GIS
from arcgis.learn import prepare_textdata
from arcgis.learn.text import SequenceToSequence

gis = GIS('home')

training_data = gis.content.get('06200bcbf46a4f58b2036c02b0bff41e')
training_data

Note: This address dataset is a subset (~15%) of the dataset available at the "ea94e88b5a56412995fd1ffcb85d60e9" item id.

filepath = training_data.download(file_name=training_data.name)

with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)

data_root = Path(os.path.join(os.path.splitext(filepath)[0]))

data = prepare_textdata(path=data_root, batch_size=16, task='sequence_translation',
                        text_columns='non-std-address', label_columns='std-address',
                        train_file='address_standardization_correction_data_small.csv')

The show_batch() method can be used to see the training samples, along with labels.

data.show_batch()

The SequenceToSequence model in arcgis.learn.text is built on top of the Hugging Face Transformers library. The model training and inferencing workflows are similar to computer vision models in arcgis.learn. Run the command below to see which backbones are supported for the sequence translation task.

SequenceToSequence.supported_backbones
['T5', 'Bart', 'Marian']

Call the model's available_backbone_models() method with the backbone name to get the available models for that backbone. The call to the available_backbone_models method will list only a few of the available models for each backbone. Visit this link to get a complete list of models for each backbone.

SequenceToSequence.available_backbone_models("T5")
['t5-small', 't5-base', 't5-large', 't5-3b', 't5-11b', 'See all T5 models at ']

Invoke the SequenceToSequence class by passing the data and the backbone you have chosen. Since the dataset consists of house addresses in non-standard format with synthetic errors, we will fine-tune a t5-base pretrained model. The model will attempt to learn how to standardize and correct the input addresses.

model = SequenceToSequence(data, backbone='t5-base')

The learning rate [1] is a tuning parameter that determines the step size at each iteration while moving toward a minimum of a loss function; it represents the speed at which a machine learning model "learns". arcgis.learn includes a learning rate finder, accessible through the model's lr_find() method, which can automatically select an optimum learning rate without requiring repeated experiments.

lr = model.lr_find()
lr
0.1445439770745928

Training the model is an iterative process. We can train the model using its fit() method as long as the validation loss (or error rate) continues to go down with each training pass, also known as an epoch. This is indicative of the model learning the task.

model.fit(1, lr=lr)

By default, the earlier layers of the model (i.e. the backbone) are frozen. Once the later layers have been sufficiently trained, the earlier layers are unfrozen (by calling the unfreeze() method of the class) to further fine-tune the model.

model.unfreeze()
lr = model.lr_find()
lr
model.fit(5, lr)
model.fit(3, lr)

Once we have the trained model, we can look at the results to see how it performs.

model.show_results()

model.get_model_metrics()
{'seq2seq_acc': 0.9914, 'bleu': 0.9798}

model.save('seq2seq_unfrozen8E_bleu_98', publish=True)
Published DLPK Item Id: ed79aa1b34dd406aae4eed0123bc4608
WindowsPath('models/seq2seq_unfrozen8E_bleu_98')

(A sketch of reloading the saved model appears at the end of this notebook.)

txt = ['940, north pennsylvania avneue, mason icty, iowa, 50401, us',
       '220, soyth rhodeisland aveune, mason city, iowa, 50401, us']

model.predict(txt, num_beams=6, max_length=50)
[('940, north pennsylvania avneue, mason icty, iowa, 50401, us',
  '940, n pennsylvania ave, mason city, ia, 50401, us'),
 ('220, soyth rhodeisland aveune, mason city, iowa, 50401, us',
  '220, s rhode island ave, mason city, ia, 50401, us')]

In this notebook we built an address standardization and correction model using the SequenceToSequence class of the arcgis.learn.text module. The dataset consisted of pairs of non-standard, incorrect (synthetic errors) house addresses and the corresponding correct, standard house addresses from the United States. To achieve this we used a t5-base pretrained transformer to build a SequenceToSequence model to standardize and correct the input house addresses. Below are the results on sample inputs.

Non-Standard → Standard, Error → Correction

- 940, north pennsylvania avneue, mason icty, iowa, 50401, us → 940, n pennsylvania ave, mason city, ia, 50401, us
- 220, soyth rhodeisland aveune, mason city, iowa, 50401, us → 220, s rhode island ave, mason city, ia, 50401, us
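To reuse the published model later without retraining, arcgis.learn models can generally be reloaded from their saved definition. Assuming SequenceToSequence follows that common from_model pattern (the exact path below is hypothetical), this would look like:

from arcgis.learn.text import SequenceToSequence

# Hypothetical path to the model definition saved above.
model = SequenceToSequence.from_model(r'models/seq2seq_unfrozen8E_bleu_98/seq2seq_unfrozen8E_bleu_98.emd')
model.predict(['940, north pennsylvania avneue, mason icty, iowa, 50401, us'])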
https://developers.arcgis.com/python/samples/address-standardization-and-correction-with-sequencetosequence/
CC-MAIN-2022-27
en
refinedweb
#include <CGAL/Compact_container.h>

The traits class Compact_container_traits provides the way to access the internal pointer required for T to be used in a Compact_container<T, Allocator>. Note that this pointer needs to be accessible even when the object is not constructed, which means it has to reside in the same memory place as T.

You can specialize this class for your own type T if the default template is not suitable. You can also use Compact_container_base as a base class for your own types T to make them usable with the default Compact_container_traits.

Parameters

T is any type providing the following member functions:

void * t.for_compact_container() const;
void   t.for_compact_container(void *);

Returns the pointer held by t. The template version defines this function as: return t.for_compact_container();

Sets the pointer held by t to p. The template version defines this function as: t.for_compact_container(p);
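A minimal sketch of a type satisfying the default traits. The two member names match the requirements above; the payload field is just for illustration:

#include <CGAL/Compact_container.h>

// Reserves one pointer the container can read and write even before the
// object is constructed, as the traits require.
struct Node {
    void* cc_ptr = nullptr;   // storage for the container's internal pointer
    int   payload = 0;        // illustrative user data

    void* for_compact_container() const  { return cc_ptr; }
    void  for_compact_container(void* p) { cc_ptr = p; }
};

int main() {
    CGAL::Compact_container<Node> c;
    c.emplace();              // Node is now usable inside the container
}

Alternatively, deriving Node from Compact_container_base provides these two members ready-made, as noted above.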
https://doc.cgal.org/5.1.3/STL_Extension/structCGAL_1_1Compact__container__traits.html
CC-MAIN-2022-27
en
refinedweb
Pulsar — to be or not to be? Real project experience

Introduction

Our team needed to create a microservice that will be shared between a lot of projects. Its main purpose is to handle tons of user messages and store them in ElasticSearch (yeah, not an obvious choice as a DB, but we were diving into the existing architecture!). So, potentially there was a huge stream of messages, and the service had to handle all of them and handle them right.

In this case, our team couldn't use websockets, because it would overload the server due to the quite big quantity of involved channels and data exchanging. In addition, if we went with websockets, our developers couldn't guarantee that a message was received on the target side. On the other hand, we could use fallbacks that notify a user that a message can't be sent because of a bad connection, but users usually ignore such notifications. If our devs were to take extra care of the entire messaging in client applications, the team should use a message broker.

To clarify — a message broker is software that enables applications, systems, and services to communicate with each other and exchange information in a very prompt manner and in the long run. To provide reliable message storage and guaranteed delivery, message brokers often rely on a substructure or component called a message queue that stores and orders the messages until the consuming applications can process them. In a message queue, messages are stored in the exact order in which they were transmitted and remain in the queue until receipt is confirmed.

So, we had a huge problem of choice between existing queue solutions such as RabbitMQ, Kafka, Pulsar and others. The final decision on the stack included the following technologies (with tips about the less common ones):

- Docker,
- Node.JS,
- ElasticSearch (ES, a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents),
- Kibana (a perfect GUI for ES),
- Pulsar (a multimedia broker of messages).

Research

Before coding madly, our team did R&D. The situation was as is — we had to decide which message broker would be more skyrocketing and effective within the new project. Moreover, our dev engineers had an idea to use this broker across our future projects. A bunch of different comparisons are available on the internet, but none shows an implementation that solves our main "pains". Let's quote some figures from this source:

The total primacy of Pulsar latency in front of the other ones (Figure 1), and taking second place in throughput (Figure 2). Furthermore, consumers in Kafka cannot accept a message from another flow and don't have the ability to use multi-user structures (multi-tenancy — security, isolation).

Multi-Tenancy: Pulsar's decoupled architecture reduces architectural complexity and puts millions of topics into a single cluster. Pulsar's distributed storage system design is segment-based. This hierarchical topic namespace enables Pulsar users to maintain millions of topics in a single cluster. Pulsar can be more scalable and has geo-replication.

So, according to the research, our team decided to move with Apache Pulsar.

What is the Pulsar?

Apache Pulsar is an open-source distributed messaging system created by Yahoo but now under the stewardship of the Apache Software Foundation (published in 2016).

Hint: the model of message exchanging is flexible. It unites both concepts: queue management and the producer/subscriber concept, which includes the ability to choose the approach of consuming messages under the same subscription. In our case that includes add/update/delete functions for chatting, which is so good.

Terminology in the Pulsar environment

- topic — a topic which will be used like a channel that you listen to
- producer — a part that is an emitter of messages
- client — a Pulsar client as an entry point of connection to this url: pulsar://your_server:6650
- consumer — a part that will be listening to a topic and receive a message

For more details of terms related to Apache Pulsar, pop into this:

There are 4 subscription types (following Figure 3):

- exclusive — only a single consumer is allowed to attach to the subscription,
- shared — messages are delivered in a round-robin distribution across consumers, and any given message is delivered to only one consumer,
- failover — a master consumer is picked for a non-partitioned topic or each partition of a partitioned topic and receives messages,
- key_shared — messages are delivered in distribution across consumers, and messages with the same key or same ordering key are delivered to only one consumer.

Apache Pulsar is a relatively fresh technology that pulled us into the risk and adventure of learning it deeply and using it. Find out here what the size of the user community is, how the documentation is made, and what learning materials are provided. It could bring us to obstacles in how to deal with it. It's part of our nature to try such new things. The team admits that it wasn't clear and convenient at first touch. And complex as well. To have it working, any developer had to read the sources, and only after that was our team able to start working with Pulsar; the documentation was not enough at that time. Also, we noticed that the Pulsar documentation sometimes has typos (that's bad actually, as you just cannot follow it blindly!) which can't be seen at first glance, because of trust, when someone copy-pastes their examples.

How to use the Pulsar on a Node.JS project?

Our developers used Docker to containerize some parts of the app.

1. Initialize the npm project:

npm init

2. Install the Pulsar library. Pulsar developers produced the library only for these platforms: Linux and macOS.

For macOS:

brew install libpulsar

For Linux:

a. Install the 2 required packages into your Linux OS. For more details read here. If it doesn't work, please use this tutorial.

b. Install pulsar-client:

npm install pulsar-client

3. Connect Pulsar to ElasticSearch (ES). There are 2 ways to save data to ElasticSearch:

a. Using the ElasticSearch connector
b. Manually, by using the @elastic/elasticsearch library

The team decided to connect Pulsar to ElasticSearch through a connector. All connectors are available to download here. Details about all connectors can be checked here.

1. Download the ElasticSearch sink connector by using CURL:

curl -o pulsar-io-elastic-search-2.6.1.nar

and save this file to the "connectors" folder.
2. Then create the docker-compose.yml file with ES, Kibana, and Pulsar, with the following configuration:

# ./docker-compose.yml
version: "3"

volumes:
  pulsardata:
  pulsarconf:
  elasticdata:

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.1
    environment:
      - discovery.type=single-node
      - cluster.name=es-cluster
      - node.name=es-node-1
      - path.data=/usr/share/elasticsearch/data
      - http.port=9200
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticdata:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
  kibana:
    image: "docker.elastic.co/kibana/kibana:7.7.1"
    environment:
      - server.port=127.0.0.1:5601
      - elasticsearch.url=""
      - server.name="kibana"
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  pulsar:
    image: apachepulsar/pulsar:2.6.0
    ports:
      - 8080:8080
      - 6650:6650
    environment:
      PULSAR_MEM: " -Xms512m -Xmx512m -XX:MaxDirectMemorySize=1g"
    volumes:
      - pulsardata:/pulsar/data
      - pulsarconf:/pulsar/conf
      - ./connectors:/pulsar/connectors
    command: >
      /bin/bash -c "bin/apply-config-from-env.py conf/standalone.conf && bin/pulsar standalone"
# ... other parameters

3. Run the project:

docker-compose up

Note: there is only one thing that can irritate you — it takes about one or two minutes to run Pulsar on your local machine when you start docker-compose.

4. Create an init.sh file with the following code:

curl --header "Content-Type: application/json" \
  --request DELETE

curl --header "Content-Type: application/json" \
  --request PUT \
  --data '{ "allowedClusters": ["standalone"]}'

curl --header "Content-Type: application/json" \
  --request PUT \
  --data '{}'

curl --header "Content-Type: multipart/form-data" \
  --request POST \
  -F url=';type=text/plain' \
  -F sinkConfig='{
    "className": "org.apache.pulsar.io.elasticsearch.ElasticSearchSink",
    "archive": "/pulsar/connectors/pulsar-io-elastic-search-2.6.1.nar",
    "inputs": ["persistent://tenant-1/ns-1/elastic-test"],
    "processingGuarantees": "EFFECTIVELY_ONCE",
    "parallelism": 1,
    "configs": { "elasticSearchUrl": "", "indexName": "test_index" }
  };type=application/json'

curl --request POST

curl --request POST

5. When Pulsar is ready, run init.sh to turn on the connection to ES. For ES, the Pulsar connector has actions such as CREATE and READ. ES was developed for full-text searching, so it was enough to use it just for saving logging data. It gives fast searching in this case.

6. Then create your client for connecting to Pulsar and a producer for sending a message.

Here is a small test example:

test_pulsar_elastic_connection.js

const Pulsar = require("pulsar-client");

(async () => {
  // Create a client
  const client = new Pulsar.Client({
    serviceUrl: "pulsar://localhost:6650",
  });

  // Create a producer
  const producer = await client.createProducer({
    topic: "persistent://tenant-1/ns-1/elastic-test",
  });

  // Send messages
  const dataset = [];
  for (let i = 0; i < 10; i += 1) {
    const id = Math.floor(Math.random() * 100);
    const msg = `{ "test-pulsar": ${id} }`;
    dataset.push(msg);
    producer.send({
      data: Buffer.from(msg),
      properties: { ACTION: "UPSERT", ID: id },
    });
    console.log(`Sent message: ${msg}`);
  }
  await producer.flush();
  await producer.close();
  await client.close();
})();

consumer.js

const Pulsar = require("pulsar-client");

(async () => {
  // Create a client
  const client = new Pulsar.Client({
    serviceUrl: "pulsar://localhost:6650",
  });

  const consumer = await client.subscribe({
    topic: "persistent://tenant-1/ns-1/elastic-test",
    subscription: "my-subscription",
  });

  const msg = await consumer.receive();
  const str = msg.getData().toString();
  console.log("RECEIVED!!!!!!!!!!", str);
  consumer.acknowledge(msg);

  await consumer.close();
  await client.close();
})();

7. For now, it is enough. Run the example above via Node.JS:

node ./test_pulsar_elastic_connection.js

It will create a message in ES. Find the tutorial here. Sent messages:

8. Run the consumer for receiving:

node ./consumer.js

9. Now some messages were sent by the producer (they were already saved in ES immediately), the consumer received and acknowledged a message, and after that all instances were closed.

10. To check results in ES, refresh the data before searching:

curl -s http://localhost:9200/test_index/_refresh
curl -s http://localhost:9200/test_index/_search

11. Results of checking the ElasticSearch data by CURL:

12. Use the Kibana GUI for ES for checking saved messages:

- go to http://localhost:5601
- connect to the index on the dashboard and check the results

Woohoo! That's enough for testing the whole flow.

Issue

According to the requirements from the client, our team had to use ElasticSearch as the main database. Here is the issue our team faced: Pulsar had a connector that didn't work as well as expected. The project needed UPDATE and DELETE functionality. Because of the issue above, our ninja-developer dug into the Java sources of Pulsar and implemented these CRUD abilities, custom-made of course.

The decision was to create a new feature in the Pulsar ElasticSearch connector and publish this idea to the official repository:

- added an ID property for tracking records by ID
- implemented an "ACTION" property which can act as UPSERT or DELETE
- created a Pull Request (PR) to the Apache Pulsar GitHub
- The whole description and discussion about our PR is here.

Despite creating this PR, our team decided to change the development approach, having noticed that they couldn't catch an exception. A message was just sent, and our team couldn't be sure whether it was saved in ES or not. So, the team decided to use the first approach — manual saving to ES by using the @elastic/elasticsearch library (a sketch of this approach follows below). Please find the tutorial for Node.JS here.

The next step was developing the subscribe feature and testing the flow. Of course, you can choose any approach that would be more suitable for your project (coz nobody knows exactly the precise requirements!). For example, all processes of messaging can happen only on your backend side.
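A minimal sketch of that manual-saving approach, combining the consumer above with the @elastic/elasticsearch client (the subscription name and error handling are illustrative assumptions):

const Pulsar = require("pulsar-client");
const { Client } = require("@elastic/elasticsearch");

(async () => {
  const es = new Client({ node: "http://localhost:9200" });
  const pulsar = new Pulsar.Client({ serviceUrl: "pulsar://localhost:6650" });
  const consumer = await pulsar.subscribe({
    topic: "persistent://tenant-1/ns-1/elastic-test",
    subscription: "es-writer",
  });

  const msg = await consumer.receive();
  try {
    // Unlike the sink connector, a failed save surfaces here as an exception.
    await es.index({
      index: "test_index",
      id: msg.getProperties().ID,
      body: JSON.parse(msg.getData().toString()),
    });
    consumer.acknowledge(msg); // only acknowledge once ES confirmed the write
  } catch (err) {
    consumer.negativeAcknowledge(msg); // let Pulsar redeliver the message
  }
})();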
One of our ideas was to implement Server-Sent Events (SSE, a server push technology enabling a client to receive automatic updates from a server via an HTTP connection) with "Content-Type": "text/event-stream" and subscribe to the endpoint from the frontend. All users who subscribed to the Pulsar topic via SSE will receive new messages from the Pulsar message broker.

Take a look at some parts of the code

Node.JS

- route

app.get("/subscribe/:sessionId", sse.middleware());

- SSE

const HEADERS = {
  "Content-Type": "text/event-stream",
  Connection: "keep-alive",
  "Cache-Control": "no-cache"
};

const subscriptions = new Map();

const subscribe = (id, handler) => {
  if (!subscriptions.has(id)) {
    subscriptions.set(id, handler);
  } else {
    console.warn("Trying to resubscribe with already created subscription");
  }
  return { unsubscribe: () => subscriptions.delete(id) };
};

const sse = ({ onClose } = {}) => {
  const middleware = (req, res) => {
    res.writeHead(200, HEADERS);
    res.write(`id: ${nanoid()}\n`);
    res.write("retry: 1\n");
    res.write(`data: ${JSON.stringify({ success: true })}\n\n`);
    // ... your logic
  };
  return middleware;
};

const sendEvents = (type, message) => {
  for (const onMessage of subscriptions.values()) {
    onMessage(type, message);
  }
};

const newComment = message => {
  sendEvents(pulsarTopics.NEW_COMMENT, message);
};

const updatedComment = message => {
  sendEvents(pulsarTopics.UPDATED_COMMENT, message);
};

const deletedComment = message => {
  sendEvents(pulsarTopics.DELETED_COMMENT, message);
};

// ...

Angular 2+

- sse.service.ts

// ...
export class SseService {
  getEventSource(sessionId: string): EventSource {
    return new EventSource(environment.serverUrl + `/subscribe/${sessionId}`);
  }
}

getServerSentEvent() {
  return new Observable(observer => {
    let eventSource = this.sseService.getEventSource(this.currentSession);
    setUp();
    const self = this;
    function setUp() {
      eventSource.onmessage = event => {
        self.zone.run(() => {
          observer.next(event);
        });
      };
      eventSource.addEventListener('new_comment', (event: any) => {
        self.zone.run(() => {
          const objEvent = JSON.parse(event.data);
          // ... your logic
        });
      });
      eventSource.addEventListener('updated_comment', (event: any) => {
        self.zone.run(() => {
          const objEvent = JSON.parse(event.data);
          // ... your logic
        });
      });
      eventSource.addEventListener('deleted_comment', (event: any) => {
        self.zone.run(() => {
          const objEvent = JSON.parse(event.data);
          // ... your logic
        });
      });
    }
  });
}

There were 3 event listeners for add/update/delete messages. That gives us the whole logic for chatting.

Conclusions

Setting aside the point that working within a distributed team has side effects, the technical task was to R&D and implement a solution to handle a lot of user messages, queueing and storing them in ElasticSearch. As an extra, it was of core importance to be 100% sure that the messages would be received on the target side. We investigated, collaborated with professionals (frontenders, backenders and even devops), discussed a lot, and finally found that Pulsar was quite a suitable solution to cover the client requirements.

Long story short — hard work, teamwork, coffee and some magic of course brought the result.
https://intspirit.medium.com/pulsar-to-be-or-not-to-be-real-project-experience-42aedd23a688
CC-MAIN-2022-27
en
refinedweb
Q. Java program to find whether a number is a power of two or not. Here you will find an algorithm and a program in Java.

Java Program to Find Whether a Given Number is a Power of 2

class LFC {
    static boolean find_power_of_two(int num) {
        if (num == 0)
            return false;
        return (int) (Math.ceil((Math.log(num) / Math.log(2))))
            == (int) (Math.floor(((Math.log(num) / Math.log(2)))));
    }

    public static void main(String[] args) {
        int num = 256;
        if (find_power_of_two(num))
            System.out.printf("%d is the power of two\n", num);
        else
            System.out.printf("%d is not the power of two\n", num);
    }
}

Output

256 is the power of two
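The logarithm-based check can misbehave for large values because of floating-point rounding. A common bitwise alternative (not part of the original program) avoids floating point entirely: a positive power of two has exactly one bit set, so clearing its lowest set bit yields zero.

static boolean isPowerOfTwo(int num) {
    // num & (num - 1) clears the lowest set bit; the result is 0
    // only when num had a single set bit, i.e. num is 1, 2, 4, 8, ...
    return num > 0 && (num & (num - 1)) == 0;
}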
https://letsfindcourse.com/java-coding-questions/java-program-to-find-whether-a-no-is-power-of-two
CC-MAIN-2022-27
en
refinedweb
Hi,

In the Carol project (used by the JOnAS project), the RMI context factory is resolved by a call to the javax.naming.spi.NamingManager class [1]. With JVMs using GNU Classpath, the context returned by calling the NamingManager.getURLContext(,) method is null, while with the Sun/Bea/IBM JDK it is non-null.

When looking at the GNU Classpath class source code, there are some issues. For example, there is the following code:

if (prefixes == null)
  {
    // Specified as the default in the docs. Unclear if this is
    // right for us.
    prefixes = "com.sun.jndi.url";
  }

By default (see the docs), being the default doesn't mean it should not be added to the existing prefixes, and it is strange to add a Sun prefix in GNU Classpath.

When using rmi as the scheme with a Sun JVM, the expected class is "com.sun.jndi.url.rmi.rmiURLContextFactory" (as written in the guide). With GNU Classpath, if we have a prefix (the JOnAS case), it will then try to instantiate a class rmiURLContextFactory with the different packages. But none of them will be found (and it doesn't search in the com.sun.jndi.url package, as there is already a package: prefixes != null).

I tried to find a rmiURLContextFactory class in GNU Classpath, but I didn't find one (with this name).

So, I think that a rmiURLContextFactory class and an iiopURLContextFactory class should be added in a GNU Classpath package, and that the default prefix to use should be the package containing the 2 previous classes. Moreover, this prefix should be added in all cases (and not only when prefixes is null).

Without these classes, JOnAS can't run with RMI/JRMP, RMI/IRMI or RMI/IIOP (only jeremie should be working, as the context factory is in the jeremie package. But jeremie is deprecated and I want to run JOnAS with IRMI).

[1] example of code reproducing a NULL context by using GNU Classpath (and a non-null context with a proprietary JVM like Sun/Bea/IBM):

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.spi.InitialContextFactory;
import javax.naming.spi.NamingManager;

public class TestJNDI {

    public static void main(String[] args) throws Exception {
        Hashtable env = new Hashtable();
        env.put(Context.PROVIDER_URL, "rmi://localhost:1099");
        env.put(Context.INITIAL_CONTEXT_FACTORY, URLInitialContextFactory.class.getName());
        env.put(Context.URL_PKG_PREFIXES, "test.jndi");

        Context ctx = NamingManager.getURLContext("rmi", env);
        System.out.println("Ctx = " + ctx);
    }

    public static class URLInitialContextFactory implements InitialContextFactory {
        public Context getInitialContext(Hashtable environment) throws NamingException {
            return null;
        }
    }
}

Regards,
Florent

--
         Summary: Missing rmi URL context factory
         Product: classpath
         Version: 0.90
          Status: UNCONFIRMED
        Severity: normal
        Priority: P3
       Component: classpath
      AssignedTo: audriusa at bluewin dot ch
      ReportedBy: audriusa at bluewin dot ch
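For illustration, the factory this report asks for would follow the standard javax.naming.spi.ObjectFactory shape, roughly like the sketch below. The package name is hypothetical, and the body is a placeholder (a real implementation would return an RMI-registry-backed Context):

package gnu.javax.naming.jndi.url.rmi; // hypothetical location

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.spi.ObjectFactory;

// URL context factories implement ObjectFactory; NamingManager.getURLContext
// instantiates them by appending ".rmi.rmiURLContextFactory" to each prefix.
public class rmiURLContextFactory implements ObjectFactory {
    public Object getObjectInstance(Object refObj, Name name, Context nameCtx,
                                    Hashtable<?, ?> environment) throws Exception {
        // Placeholder: return a Context able to resolve rmi://host:port/name
        // URLs; returning null tells NamingManager the context can't be made.
        return null;
    }
}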
https://lists.gnu.org/archive/html/bug-classpath/2006-05/msg00006.html
CC-MAIN-2022-27
en
refinedweb
PDF.js Express Version
7.1.0

Detailed description of issue
After updating to 7.1.0 I am getting these issues; I was using 7.0.1 before.

Expected behaviour
To load the same way as before 7.1.0. I reverted back to 7.0.1 and it is now successfully loading.

Code snippet
I'm just using the default getting-started snippet:

WebViewer({
  path: 'Content/Imports/PDFExpress/lib', // path to the PDF.js Express 'lib' folder on your server
  licenseKey: 'Insert commercial license key here after purchase',
  initialDoc: '',
  // initialDoc: '/path/to/my/file.pdf', // You can also use documents on your server
}, document.getElementById('viewer'))
  .then(instance => {
    const docViewer = instance.docViewer;
    const annotManager = instance.annotManager;
    // call methods from instance, docViewer and annotManager as needed

    // you can also access major namespaces from the instance as follows:
    // const Tools = instance.Tools;
    // const Annotations = instance.Annotations;

    docViewer.on('documentLoaded', () => {
      // call methods relating to the loaded document
    });
  });
https://pdfjs.community/t/7-1-0-error-cannot-read-property-call-of-undefined/287
CC-MAIN-2022-27
en
refinedweb
The Google Play In-App Review API and the App Store rating API let you prompt users to submit Play Store or App Store ratings and reviews without the inconvenience of leaving your app or game (react-native in-app review, to rate on the Play Store and App Store).

Generally, the in-app review flow (see figure 1 for the Play Store, figure 2 for iOS) can be triggered at any time throughout the user journey of your app. During the flow, the user has the ability to rate your app using the 1-to-5-star system and to add an optional comment (for the Play Store only). Once submitted, the review is sent to the Play Store or App Store and eventually displayed.

Getting Started

Installation

If you use Expo to create a project you'll just need to "eject".

expo eject

Install the React Native In-App Review package:

$ npm install react-native-in-apps-reviews

OR

$ yarn add react-native-in-apps-reviews

Standard Method

React Native 0.60 and above

Linking is not required in React Native 0.60 and above.

Don't forget to run npx pod-install after that!

- If you do not have CocoaPods already installed on your machine, run sudo gem install cocoapods to set it up the first time; after that, run npx pod-install.

React Native 0.59 and below

Run react-native link react-native-in-apps-reviews to link the react-native-in-apps-reviews library, after following the instructions for your platform to link react-native-in-apps-reviews into your project:

Manual Linking

iOS installation

Using CocoaPods

Add the following to your Podfile and run pod install:

pod 'react-native-in-apps-reviews', :path => '../node_modules/react-native-in-apps-reviews'

Android installation

Run react-native link react-native-in-apps-reviews to link the react-native-in-apps-reviews library.

android/settings.gradle

include ':react-native-in-apps-reviews'
project(':react-native-in-apps-reviews').projectDir = new File(rootProject.projectDir, '../node_modules/react-native-in-apps-reviews/android')

android/app/build.gradle

From version >= 5.0.0, you have to apply these changes:

dependencies {
  ...
+ implementation project(':react-native-in-apps-reviews')
}

android/gradle.properties

Migrating to AndroidX (needs version >= 5.0.0):

android.useAndroidX=true
android.enableJetifier=true

Then, in android/app/src/main/java/your/package/MainApplication.java:

On top, where the imports are:

import com.winplaybox.inappsreviews.AppReviewPackage;

@Override
protected List<ReactPackage> getPackages() {
  return Arrays.asList(
    new MainReactPackage(),
    new AppReviewPackage()
  );
}

Usage

import InAppReview from 'react-native-in-apps-reviews';

// This package is only available on Android version >= 21 and iOS >= 10.3.
// Tells you whether the device version supports rating the app.
InAppReview.isAvailable();

// trigger the in-app review UI
InAppReview.RequestInAppReview()
  .then((hasFlowFinishedSuccessfully) => {
    // when it returns true on Android, it means the user finished or closed the review flow
    console.log('InAppReview in android', hasFlowFinishedSuccessfully);

    // when it returns true on iOS, it means the review flow was launched to the user
    console.log(
      'InAppReview in ios has launched successfully',
      hasFlowFinishedSuccessfully,
    );

    // 1- you have the option to do something, ex: (navigate to the Home page) (on Android).
    // 2- you have the option to do something,
    // ex: (save today's date to launch InAppReview again after 15 days) (on Android and iOS).
    // 3- another option:
    if (hasFlowFinishedSuccessfully) {
      // do something for ios
      // do something for android
    }

    // for android:
    // The flow has finished. The API does not indicate whether the user
    // reviewed or not, or even whether the review dialog was shown. Thus, no
    // matter the result, we continue our app flow.

    // for ios:
    // the flow launched successfully. The API does not indicate whether the user
    // reviewed or not, or whether he/she closed the flow, just as on Android. Thus, no
    // matter the result, we continue our app flow.
  })
  .catch((error) => {
    // we continue our app flow.
    // some error could happen while launching InAppReview;
    // check the table of errors and code numbers that can be returned in catch.
    console.log(error);
  });

Errors that could happen, and their code numbers

+ Android Notes:

When to request an in-app review

The general guidance is to trigger the flow after the user has experienced enough of your app or game to provide useful feedback, and not to ask the user any question before or while presenting the rating button or card (such as "Do you like the app?").

Quotas

To provide a great user experience, Google Play enforces a quota on how often a user can be shown the review dialog. Because of this, calling the launchReviewFlow method might not always display a dialog. For example, you should not have a call-to-action option (such as a button) to trigger a review, as a user might have already hit their quota and the flow won't be shown, presenting a broken experience to the user.

Device requirements

In-app reviews only work on the following devices:

- Android devices (phones and tablets) running Android 5.0 (API level 21) or higher that have the Google Play Store installed.
- Chrome OS devices that have the Google Play Store installed.

Please note: to test your integration using the Google Play Store

- In-app reviews require your app to be published in the Play Store. However, you can test your integration without publishing your app to production using either internal test tracks or internal app sharing.

Troubleshooting:

As you integrate and test in-app reviews, you might run into some issues. The following table outlines the most common issues that can prevent the in-app review dialog from displaying in your app:

+ iOS Notes:

System Rating and Review Prompts

(In Settings, the user can also opt out of receiving these rating prompts for all apps they have installed.) The system automatically limits the display of the prompt to three occurrences per app within a 365-day period.

When to request an in-app review

Don't use buttons or other controls to request feedback. Since the system limits how often rating prompts occur, attempting to request feedback in response to a control may result in no rating prompt being displayed.

Please note: to test your integration using the App Store

- During development the prompt is displayed every time it is requested, but it is not shown at all in builds distributed through TestFlight.

How to test your code

Because it's a native module, you might need to mock this package to run your tests. Here is an example for Jest; adapt it to your needs:

// __mocks__/react-native-in-apps-reviews.js
module.exports = {
  RequestInAppReview: jest.fn(),
  isAvailable: jest.fn(),
  // add more methods as needed
};
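Tying the two calls together: a small hedged sketch (the "meaningful moment" trigger and the persistence step are application-specific assumptions, not part of the library):

import InAppReview from 'react-native-in-apps-reviews';

// Call this at a natural, meaningful moment (e.g. after finishing a task),
// never from a visible "rate us" button, per the store guidelines above.
async function maybeAskForReview() {
  if (!InAppReview.isAvailable()) return; // unsupported OS version or device

  try {
    const finishedOrLaunched = await InAppReview.RequestInAppReview();
    if (finishedOrLaunched) {
      // e.g. persist today's date so you don't re-trigger the flow too soon
    }
  } catch (error) {
    // quota reached or a store error: silently continue the app flow
  }
}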
https://www.npmjs.com/package/react-native-in-apps-reviews
CC-MAIN-2022-27
en
refinedweb